pose2sim/Pose2Sim/OpenSim_Setup/Scaling_Setup_Pose2Sim_LSTM.xml

<?xml version="1.0" encoding="UTF-8" ?>
<OpenSimDocument Version="40500">
<ScaleTool name="Pose2Sim_scaled">
<!--Mass of the subject in kg. Subject-specific model generated by scaling step will have this total mass.-->
<mass>69</mass>
<!--Height of the subject in mm. For informational purposes only (not used by scaling).-->
<height>-1</height>
<!--Age of the subject in years. For informational purposes only (not used by scaling).-->
<age>-1</age>
<!--Notes for the subject.-->
<notes>Unassigned</notes>
<!--Specifies the name of the unscaled model (.osim) and the marker set.-->
<GenericModelMaker>
<!--Model file (.osim) for the unscaled model.-->
<model_file>Unassigned</model_file>
<!--Set of model markers used to scale the model. Scaling is done based on distances between model markers compared to the same distances between the corresponding experimental markers.-->
<marker_set_file>Unassigned</marker_set_file>
</GenericModelMaker>
<!--Specifies parameters for scaling the model.-->
<ModelScaler>
<!--Whether or not to use the model scaler during scale-->
<apply>true</apply>
<!--Specifies the scaling method and order. Valid options are 'measurements', 'manualScale', singly or both in any sequence.-->
<scaling_order> measurements</scaling_order>
<!--Specifies the measurements by which body segments are to be scaled.-->
<MeasurementSet>
<objects>
<Measurement name="foot">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_calc_study r_toe_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_calc_study r_5meta_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_calc_study L_toe_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_calc_study L_5meta_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="talus_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="calcn_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="toes_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="talus_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="calcn_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="toes_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="tibia_r">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_knee_study r_ankle_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_mknee_study r_mankle_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="tibia_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="femur_r">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> RHJC_study r_knee_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="femur_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="patella_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="tibia_l">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_knee_study L_ankle_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_mknee_study L_mankle_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="tibia_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="femur_l">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> LHJC_study L_knee_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="femur_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="patella_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="pelvisXY">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r.ASIS_study r.PSIS_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L.ASIS_study L.PSIS_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="pelvis">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="pelvisZ">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> RHJC_study LHJC_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="pelvis">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="torsoXY">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_shoulder_study r.PSIS_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_shoulder_study L.PSIS_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> C7_study r.PSIS_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> C7_study L.PSIS_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="torso">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y</axes>
</BodyScale>
<BodyScale name="scapulaPhantom_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y</axes>
</BodyScale>
<BodyScale name="scapulaPhantom_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="torsoZ">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_shoulder_study L_shoulder_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="torso">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> Z</axes>
</BodyScale>
<BodyScale name="scapulaPhantom_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> Z</axes>
</BodyScale>
<BodyScale name="scapulaPhantom_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="humerus">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_shoulder_study r_lelbow_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_shoulder_study r_melbow_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_shoulder_study L_lelbow_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_shoulder_study L_melbow_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="humerus_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="humerus_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
<Measurement name="radius">
<!--Flag to turn on and off scaling for this measurement.-->
<apply>true</apply>
<!--Set of marker pairs used to determine the scale factors.-->
<MarkerPairSet>
<objects>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_lwrist_study r_lelbow_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> r_mwrist_study r_melbow_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_lwrist_study L_lelbow_study</markers>
</MarkerPair>
<MarkerPair>
<!--Names of two markers, the distance between which is used to compute a body scale factor.-->
<markers> L_mwrist_study L_melbow_study</markers>
</MarkerPair>
</objects>
<groups />
</MarkerPairSet>
<!--Set of bodies to be scaled by this measurement.-->
<BodyScaleSet>
<objects>
<BodyScale name="ulna_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="radius_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="hand_r">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="ulna_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="radius_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
<BodyScale name="hand_l">
<!--Axes (X Y Z) along which to scale a body. For example, 'X Y Z' scales along all three axes, and 'Y' scales just along the Y axis.-->
<axes> X Y Z</axes>
</BodyScale>
</objects>
<groups />
</BodyScaleSet>
</Measurement>
</objects>
<groups />
</MeasurementSet>
<!--Scale factors to be used for manual scaling.-->
<ScaleSet>
<objects />
<groups />
</ScaleSet>
<!--TRC file (.trc) containing the marker positions used for measurement-based scaling. This is usually a static trial, but doesn't need to be. The marker-pair distances are computed for each time step in the TRC file and averaged across the time range.-->
<marker_file>../Demo_SinglePerson/pose-3d/S01_Demo_SingleTrial_0-99_filt_butterworth_LSTM.trc</marker_file>
<!--Time range over which to average marker-pair distances in the marker file (.trc) for measurement-based scaling.-->
<time_range> 0 1.633333300000000099</time_range>
<!--Flag (true or false) indicating whether or not to preserve relative mass between segments.-->
<preserve_mass_distribution>true</preserve_mass_distribution>
<!--Name of OpenSim model file (.osim) to write when done scaling.-->
<output_model_file>Unassigned</output_model_file>
<!--Name of file to write containing the scale factors that were applied to the unscaled model (optional).-->
<output_scale_file>Unassigned</output_scale_file>
</ModelScaler>
<!--Specifies parameters for placing markers on the model once a model is scaled. -->
<MarkerPlacer>
<!--Whether or not to use the marker placer during scale-->
<apply>false</apply>
<!--Task set used to specify weights used in the IK computation of the static pose.-->
<IKTaskSet>
<objects>
<IKMarkerTask name="r.ASIS_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L.ASIS_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r.PSIS_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L.PSIS_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_knee_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_mknee_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_ankle_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_mankle_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_toe_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_5meta_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_calc_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L_knee_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L_mknee_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L_ankle_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L_mankle_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L_toe_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L_calc_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="L_5meta_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="r_shoulder_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>5</weight>
</IKMarkerTask>
<IKMarkerTask name="L_shoulder_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>5</weight>
</IKMarkerTask>
<IKMarkerTask name="C7_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>20</weight>
</IKMarkerTask>
<IKMarkerTask name="r_lelbow_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
</IKMarkerTask>
<IKMarkerTask name="r_lwrist_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>5</weight>
</IKMarkerTask>
<IKMarkerTask name="r_mwrist_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>5</weight>
</IKMarkerTask>
<IKMarkerTask name="L_lelbow_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
</IKMarkerTask>
<IKMarkerTask name="L_lwrist_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>5</weight>
</IKMarkerTask>
<IKMarkerTask name="L_mwrist_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>5</weight>
</IKMarkerTask>
<IKMarkerTask name="RHJC_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKMarkerTask name="LHJC_study">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>10</weight>
</IKMarkerTask>
<IKCoordinateTask name="pelvis_tilt">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
<!--Indicates the source of the coordinate value for this task. Possible values are default_value (use default value of coordinate, as specified in the model file, as the fixed target value), manual_value (use the value specified in the value property of this task as the fixed target value), or from_file (use the coordinate values from the coordinate data specified by the coordinates_file property).-->
<value_type>default_value</value_type>
<!--This value will be used as the desired (or prescribed) coordinate value if value_type is set to manual_value.-->
<value>0</value>
</IKCoordinateTask>
<IKCoordinateTask name="pelvis_list">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
<!--Indicates the source of the coordinate value for this task. Possible values are default_value (use default value of coordinate, as specified in the model file, as the fixed target value), manual_value (use the value specified in the value property of this task as the fixed target value), or from_file (use the coordinate values from the coordinate data specified by the coordinates_file property).-->
<value_type>default_value</value_type>
<!--This value will be used as the desired (or prescribed) coordinate value if value_type is set to manual_value.-->
<value>0</value>
</IKCoordinateTask>
<IKCoordinateTask name="ankle_angle_r">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
<!--Indicates the source of the coordinate value for this task. Possible values are default_value (use default value of coordinate, as specified in the model file, as the fixed target value), manual_value (use the value specified in the value property of this task as the fixed target value), or from_file (use the coordinate values from the coordinate data specified by the coordinates_file property).-->
<value_type>manual_value</value_type>
<!--This value will be used as the desired (or prescribed) coordinate value if value_type is set to manual_value.-->
<value>0</value>
</IKCoordinateTask>
<IKCoordinateTask name="ankle_angle_l">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
<!--Indicates the source of the coordinate value for this task. Possible values are default_value (use default value of coordinate, as specified in the model file, as the fixed target value), manual_value (use the value specified in the value property of this task as the fixed target value), or from_file (use the coordinate values from the coordinate data specified by the coordinates_file property).-->
<value_type>manual_value</value_type>
<!--This value will be used as the desired (or prescribed) coordinate value if value_type is set to manual_value.-->
<value>0</value>
</IKCoordinateTask>
<IKCoordinateTask name="L5_S1_Flex_Ext">
<!--Whether or not this task will be used during inverse kinematics solve, default is true.-->
<apply>true</apply>
<!--Weight given to the task when solving inverse kinematics problems, default is 0.-->
<weight>1</weight>
<!--Indicates the source of the coordinate value for this task. Possible values are default_value (use default value of coordinate, as specified in the model file, as the fixed target value), manual_value (use the value specified in the value property of this task as the fixed target value), or from_file (use the coordinate values from the coordinate data specified by the coordinates_file property).-->
<value_type>default_value</value_type>
<!--This value will be used as the desired (or prescribed) coordinate value if value_type is set to manual_value.-->
<value>0</value>
</IKCoordinateTask>
</objects>
<groups />
</IKTaskSet>
<!--TRC file (.trc) containing the time history of experimental marker positions (usually a static trial).-->
<marker_file>../Demo_SinglePerson/pose-3d/S01_Demo_SingleTrial_0-99_filt_butterworth_LSTM.trc</marker_file>
            <!--Name of file containing the joint angles used to set the initial configuration of the model for the purpose of placing the markers. These coordinate values can also be included in the optimization problem used to place the markers. Before the model markers are placed, a single frame of an inverse kinematics (IK) problem is solved. The IK problem can be solved simply by matching marker positions, but if the model markers are not in the correct locations, the IK solution will not be very good and neither will marker placement. Alternatively, coordinate values (specified in this file) can be specified and used to influence the IK solution. This is valuable particularly if you have high confidence in the coordinate values. For example, you know that for the static trial the subject was standing with all joint angles close to zero. If the coordinate set (see the CoordinateSet property) contains non-zero weights for coordinates, the IK solution will try to match not only the marker positions, but also the coordinates in this file. Least-squared error is used to solve the IK problem. -->
<coordinate_file>Unassigned</coordinate_file>
<!--Time range over which the marker positions are averaged.-->
<time_range> 0.016666666666666666435 0.41666666666666668517</time_range>
<!--Name of the motion file (.mot) written after marker relocation (optional).-->
<output_motion_file>Unassigned</output_motion_file>
            <!--Output OpenSim model file (.osim) after scaling and marker placement.-->
<output_model_file>D:\softs\github_david\pose2sim\Pose2Sim\OpenSim_Setup\..\S00_Demo_Session\S00_P00_SingleParticipant\S00_P00_OpenSim\Model_Pose2Sim_S00_P00_LSTM_scaled.osim</output_model_file>
<!--Output marker set containing the new marker locations after markers have been placed.-->
<output_marker_file>Unassigned</output_marker_file>
            <!--Maximum amount of movement allowed in marker data when averaging frames of the static trial. A negative value means there is no limit.-->
<max_marker_movement>-1</max_marker_movement>
</MarkerPlacer>
</ScaleTool>
</OpenSimDocument>