pose2sim/Pose2Sim/Utilities/tests.py

#! /usr/bin/env python
# -*- coding: utf-8 -*-
'''
########################################
## Pose2Sim tests ##
########################################
- SINGLE PERSON:
    - calibration conversion from .qca.txt
    - pose estimation
    - synchronization
    - person association
    - triangulation
    - filtering
    - marker augmentation
    - opensim scaling and inverse kinematics
    - run all
- MULTI PERSON:
    - calibration conversion from .qca.txt
    - pose estimation
    - NO synchronization
    - person association
    - triangulation
    - filtering
    - marker augmentation
    - opensim scaling and inverse kinematics
    - run all
- BATCH SESSION, RUN ALL:
    - Calibration conversion from .qca.txt
    - Single person:
        - pose estimation
        - NO synchronization
        - person association
        - triangulation
        - filtering
        - marker augmentation
        - opensim scaling and inverse kinematics
    - Multi-person:
        - pose estimation
        - NO synchronization
        - person association
        - triangulation
        - filtering
        - marker augmentation
        - opensim scaling and inverse kinematics

N.B.:
1. Calibration from scene dimensions is not tested, as it requires the user to click points on the image.
2. Not all possible configuration parameters are extensively tested.

Usage:
    cd Pose2Sim/Utilities
    python tests.py
'''
## INIT
import os
import toml
from unittest.mock import patch
import unittest
from Pose2Sim import Pose2Sim
## AUTHORSHIP INFORMATION
__author__ = "David Pagnon"
__copyright__ = "Copyright 2021, Pose2Sim"
__credits__ = ["David Pagnon"]
__license__ = "BSD 3-Clause License"
__version__ = "0.9.4"
__maintainer__ = "David Pagnon"
__email__ = "contact@david-pagnon.com"
__status__ = "Development"
## FUNCTIONS
class TestWorkflow(unittest.TestCase):
    @patch('builtins.input', return_value='no')  # Mock input() to return 'no'
    def test_workflow(self, mock_input):
        '''
        SINGLE-PERSON, MULTI-PERSON, BATCH PROCESSING:
        - calibration
        - pose estimation
        - synchronization
        - person association
        - triangulation
        - filtering
        - marker augmentation
        - opensim scaling and inverse kinematics
        - run all

        N.B.: Calibration from scene dimensions is not tested, as it requires the user to click points on the image.
        Not all possible configuration parameters are extensively tested.

        Usage:
            from Pose2Sim import tests; tests.test_workflow()
            python tests.py
        '''

        ###################
        # SINGLE-PERSON #
        ###################
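        # Load the demo configuration, use the lightweight pose model, and disable all interactive displays so the test can run headless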
        project_dir = '../Demo_SinglePerson'
        config_dict = toml.load(os.path.join(project_dir, 'Config.toml'))
        os.chdir(project_dir)
        config_dict.get("project").update({"project_dir":project_dir})
        config_dict.get("pose").update({"mode":'lightweight'})
        config_dict.get("pose").update({"display_detection":False})
        config_dict.get("synchronization").update({"display_sync_plots":False})
        config_dict['filtering']['display_figures'] = False
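
        # Step by step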
        Pose2Sim.calibration(config_dict)
        Pose2Sim.poseEstimation(config_dict)
        Pose2Sim.synchronization(config_dict)
        Pose2Sim.personAssociation(config_dict)
        Pose2Sim.triangulation(config_dict)
        Pose2Sim.filtering(config_dict)
        Pose2Sim.markerAugmentation(config_dict)
        Pose2Sim.kinematics(config_dict)
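
        # Run all at once; overwrite_pose is False so the pose estimation results from above are reused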
config_dict.get("pose").update({"overwrite_pose":False})
Pose2Sim.runAll(config_dict)

        ####################
        # MULTI-PERSON #
        ####################
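        # Same settings as the single-person case, applied to the multi-person demo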
        project_dir = '../Demo_MultiPerson'
        config_dict = toml.load(os.path.join(project_dir, 'Config.toml'))
        os.chdir(project_dir)
        config_dict.get("project").update({"project_dir":project_dir})
        config_dict.get("pose").update({"mode":'lightweight'})
        config_dict.get("pose").update({"display_detection":False})
        config_dict.get("synchronization").update({"display_sync_plots":False})
        config_dict['filtering']['display_figures'] = False

        # Step by step
        Pose2Sim.calibration(config_dict)
        Pose2Sim.poseEstimation(config_dict)
        # Pose2Sim.synchronization(config_dict) # No synchronization for multi-person for now
        Pose2Sim.personAssociation(config_dict)
        Pose2Sim.triangulation(config_dict)
        Pose2Sim.filtering(config_dict)
        Pose2Sim.markerAugmentation(config_dict)
        Pose2Sim.kinematics(config_dict)

        # Run all
config_dict.get("pose").update({"overwrite_pose":False})
Pose2Sim.runAll(config_dict, do_synchronization=False)

        ####################
        # BATCH PROCESSING #
        ####################
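        # runAll() is called without a config_dict here, so settings are read from the Config.toml files of the batch session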
        project_dir = '../Demo_Batch'
        os.chdir(project_dir)
        Pose2Sim.runAll(do_synchronization=False)

if __name__ == '__main__':
    unittest.main()