joss editor modifs

This commit is contained in:
davidpagnon 2022-08-18 13:17:39 -07:00
parent 8aa0b99af8
commit 0134815eb4
4 changed files with 120 additions and 52 deletions

Binary image file changed (109 KiB); not shown.

@ -22,6 +22,17 @@
publisher={IEEE}
}
@article{Colyer_2018,
title={A review of the evolution of vision-based motion analysis and the integration of advanced computer vision methods towards developing a markerless system},
author={Colyer, Steffi L and Evans, Murray and Cosker, Darren P and Salo, Aki IT},
journal={Sports medicine-open},
volume={4},
number={1},
pages={1--15},
year={2018},
publisher={SpringerOpen}
}
@article{Delp_2007,
title={OpenSim: open-source software to create and analyze dynamic simulations of movement},
author={Delp, Scott L and Anderson, Frank C and Arnold, Allison S and Loan, Peter and Habib, Ayman and John, Chand T and Guendelman, Eran and Thelen, Darryl G},
@ -44,6 +55,35 @@
DOI = {10.1109/ICCV.2017.256}
}
@article{Hartley_1997,
title={Triangulation},
author={Hartley, Richard I and Sturm, Peter},
journal={Computer vision and image understanding},
volume={68},
number={2},
pages={146--157},
year={1997},
publisher={Elsevier}
}
@misc{Hidalgo_2019,
author = {Hidalgo, Ginés},
title = {OpenPose Experimental Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
url = {https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/tree/master/experimental_models#body_25b-model---option-2-recommended}
}
@misc{Hidalgo_2021,
author = {Hidalgo, Ginés},
title = {OpenPose 3D reconstruction module},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
url = {https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/advanced/3d_reconstruction_module.md}
}
@article{Kanko_2021,
title={Concurrent assessment of gait kinematics using marker-based and markerless motion capture},
author={Kanko, Robert M and Laende, Elise K and Davis, Elysia M and Selbie, W Scott and Deluzio, Kevin J},
@ -82,6 +122,17 @@
publisher={Nature Publishing Group}
}
@article{Needham_2021,
title={The accuracy of several pose estimation methods for 3D joint centre localisation},
author={Needham, Laurie and Evans, Murray and Cosker, Darren P and Wade, Logan and McGuigan, Polly M and Bilzon, James L and Colyer, Steffi L},
journal={Scientific reports},
volume={11},
number={1},
pages={1--11},
year={2021},
publisher={Nature Publishing Group}
}
@article{Pagnon_2021,
title={Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 1: Robustness},
author={Pagnon, David and Domalain, Mathieu and Reveret, Lionel},
@ -134,6 +185,17 @@
publisher={Elsevier}
}
@article{Zhang_2000,
title={A flexible new technique for camera calibration},
author={Zhang, Zhengyou},
journal={IEEE Transactions on pattern analysis and machine intelligence},
volume={22},
number={11},
pages={1330--1334},
year={2000},
publisher={IEEE}
}
@article{Zheng_2022,
title={Deep learning-based human pose estimation: A survey},
author={Zheng, Ce and Wu, Wenhan and Yang, Taojiannan and Zhu, Sijie and Chen, Chen and Liu, Ruixu and Shen, Ju and Kehtarnavaz, Nasser and Shah, Mubarak},


@ -29,14 +29,13 @@ bibliography: paper.bib
---
# Summary
`Pose2Sim` provides a workflow for 3D markerless kinematics, as an alternative to the more usual marker-based motion capture methods.\
`Pose2Sim` stands for "OpenPose to OpenSim", as it uses OpenPose inputs (2D coordinates obtained from multiple videos) and leads to an OpenSim result (full-body 3D joint angles).
The repository presents a framework for: \
• Detecting 2D joint coordinates from videos, e.g. via OpenPose [@Cao_2019], \
• Calibrating cameras, \
• Tracking of the person viewed by the most cameras, \
• Tracking the person viewed by the most cameras, \
• Triangulating 2D joint coordinates and storing them as 3D positions in a .trc file, \
• Filtering these calculated 3D positions, \
• Scaling and running inverse kinematics via OpenSim [@Delp_2007; @Seth_2018], in order to obtain full-body 3D joint angles.
@ -44,10 +43,9 @@ The repository presents a framework for: \
Each task is easily customizable, and requires only moderate Python skills. Pose2Sim is accessible at [https://github.com/perfanalytics/pose2sim](https://github.com/perfanalytics/pose2sim).
# Statement of need
For the last few decades, marker-based kinematics has been considered the best choice for the analysis of human movement, when regarding the trade-off between ease of use and accuracy. However, a marker-based system is hard to set up outdoors or in context, and it requires placing markers on the body, which can hinder natural movement [Colyer_2018].
For the last few decades, marker-based kinematics has been considered the best choice for the analysis of human movement, in terms of the trade-off between ease of use and accuracy. However, a marker-based system is hard to set up outdoors or in context, and it requires placing markers on the body, which can hinder natural movement.
The emergence of markerless kinematics opens up new possibilities. Indeed, the interest in deep-learning pose estimation neural networks has been growing fast since 2015 [@Zheng_2022], which makes it now possible to collect accurate and reliable kinematic data without the use of physical markers. OpenPose, for example, is a widespread open-source software which provides 2D joint coordinate estimations from videos. These coordinates can then be triangulated in order to produce 3D positions. Yet, when it comes to the biomechanical analysis of human motion, it is often more useful to obtain joint angles than their XYZ positions in space. Joint angles allow for better comparison among trials and individuals, and they represent the first step for other analysis such as inverse dynamics.
The emergence of markerless kinematics opens up new possibilities. Indeed, the interest in deep-learning pose estimation neural networks has been growing fast since 2015 [@Zheng_2022], which makes it now possible to collect accurate and reliable kinematic data without the use of physical markers. OpenPose, for example, is a widespread open-source software which provides 2D joint coordinate estimations from videos. These coordinates can then be triangulated in order to produce 3D positions. Yet, when it comes to the biomechanical analysis of human motion, it is often more useful to obtain joint angles than joint center positions in space. Joint angles allow for better comparison among trials and individuals, and they represent the first step for other analyses such as inverse dynamics.
OpenSim is another widespread open-source software which helps compute 3D joint angles, usually from marker coordinates. It lets scientists define a detailed musculoskeletal model, scale it to individual subjects, and perform inverse kinematics with customizable biomechanical constraints. It provides other features such as the computation of net joint moments or the estimation of individual muscle forces, although these are out of the scope of our contribution.
@ -58,7 +56,6 @@ So far, little work has been done towards obtaining 3D angles from multiple view
# Features
## Pose2Sim workflow
`Pose2Sim` connects two of the most widely recognized (and open-source) pieces of software in their respective fields:\
• OpenPose [@Cao_2019], a 2D human pose estimation neural network\
• OpenSim [@Delp_2007], a 3D biomechanics analysis software
@ -68,65 +65,73 @@ So far, little work has been done towards obtaining 3D angles from multiple view
The workflow is organized as follows (\autoref{fig:pipeline}):\
1. Preliminary OpenPose [@Cao_2019] 2D keypoints detection.\
2. Pose2Sim core, including 4 customizable steps:\
    2.i. Camera calibration \
    2.ii. Tracking of the person viewed by the most cameras\
    2.iii. 2D keypoints triangulation\
    2.iv. 3D coordinates filtering\
    2.i. Camera calibration. \
    2.ii. Tracking the person of interest.\
    2.iii. 3D keypoints triangulation.\
    2.iv. 3D coordinates filtering.\
3. A full-body OpenSim [@Delp_2007] skeletal model with OpenPose keypoints is provided, as well as scaling and inverse kinematics setup files. As the positions of the triangulated keypoints depend on neither the operator nor the subject, these setup files can be used as is.
OpenPose, OpenSim, and the whole `Pose2Sim` workflow run with videos from any cameras, on any computer and any operating system. However, on Linux, OpenSim has to be compiled from source.
## Pose2Sim core
Each step of the Pose2Sim core is easily customizable through the 'User/Config.toml' file. Among other things, users can edit:\
• The project hierarchy, the video framerate, the range of analyzed frames,\
• The OpenPose model they wish to use. They can also use AlphaPose [@Fang_2017], or even create their own model (e.g. with DeepLabCut [@Mathis_2018]),\
• Whether they are going to calibrate their cameras with a checkerboard, or to simply convert a calibration file provided by a Qualisys system,\
• Which keypoint they want to track in order to automatically single out the person of interest,\
• The thresholds in confidence and reprojection errors for using or not using a camera while triangulating a keypoint,\
• The minimum number of cameras below which the keypoint won't be triangulated at this frame,\
• The interpolation and filter types and parameters.
Pose2Sim is meant to be as fully and as easily configurable as possible, by editing the 'User/Config.toml' file. Among other settings, the following parameters can be adjusted.
### Project
Users can change the project path and folder names, the video framerate, and the range of analyzed frames.
### Pose 2D
Users can specify the 2D pose estimation model they use.\
The OpenPose BODY_25B experimental model is recommended, as it is as fast as the standard BODY_25 model while being more accurate [@Hidalgo_2019]. Non-OpenPose models can also be chosen, whether they target humans, such as AlphaPose [@Fang_2017], or animals, such as any DeepLabCut model trained by the user [@Mathis_2018].
### Calibration
Users can choose whether cameras are calibrated with a checkerboard, or whether the calibration is simply converted from a file provided by a Qualisys system.\
If checkerboard calibration is chosen, corners are detected and refined with OpenCV. This detection can optionally be displayed for verification. Each camera is then calibrated using OpenCV with an algorithm based on [@Zhang_2000]. The user can choose which image should be used for extrinsic calibration (usually the first or the last one).
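Below is a minimal sketch of what such a checkerboard calibration typically involves with OpenCV; the board dimensions, square size, and image folder are placeholder assumptions, not Pose2Sim's actual parameters:

```python
# Hedged sketch of checkerboard intrinsic calibration with OpenCV, in the spirit
# of Zhang (2000). Board size, square size, and the image folder are placeholders.
import glob
import cv2
import numpy as np

pattern = (9, 6)        # inner corners per row and column (example values)
square_size = 0.035     # square edge length in meters (example value)

# 3D positions of the board corners in the board's own coordinate system
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob('calib_cam1/*.png'):          # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Sub-pixel corner refinement, as mentioned above
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics (camera matrix, distortion) and per-image extrinsics (rvecs, tvecs)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 gray.shape[::-1], None, None)
print(f'RMS reprojection error: {rms:.3f} px')
```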
### Tracking
Users can choose which body keypoint will be tracked in order to automatically single out the person of interest. We recommend the neck point or one of the hip points, as in most cases they are the least likely to move out of the camera views. \
This is important when other people are visible in the background of one or several cameras. Tracking is done by trying out all available triangulations of the chosen keypoint across all detected persons. The triangulation with the smallest reprojection error is considered to correspond to the person of interest.
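For illustration only (this is not Pose2Sim's actual implementation), the selection can be sketched as follows, where `triangulate` stands for any DLT-style triangulation function taking the projection matrices and one 2D point per camera:

```python
# Illustrative sketch: single out the person of interest by triangulating the
# chosen keypoint (e.g. the neck) for every combination of detected persons
# across cameras, and keeping the combination with the smallest reprojection error.
import itertools
import numpy as np

def mean_reprojection_error(proj_matrices, X, pts_2d):
    """Mean pixel distance between observed 2D points and the reprojected 3D point X."""
    X_h = np.append(X, 1.0)
    errors = []
    for P, uv in zip(proj_matrices, pts_2d):
        reproj = P @ X_h
        errors.append(np.linalg.norm(reproj[:2] / reproj[2] - uv))
    return float(np.mean(errors))

def track_person_of_interest(proj_matrices, keypoint_candidates, triangulate):
    """keypoint_candidates: one list per camera of candidate 2D points (one per detected person).
    Returns the per-camera detections whose triangulation reprojects best, and the error."""
    best_error, best_combo = np.inf, None
    for combo in itertools.product(*keypoint_candidates):
        X = triangulate(proj_matrices, combo)
        error = mean_reprojection_error(proj_matrices, X, combo)
        if error < best_error:
            best_error, best_combo = error, combo
    return best_combo, best_error
```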
### Triangulation
It should be noted that OpenPose natively provides a module for reconstructing 3D keypoint coordinates [@Hidalgo_2021]. However, it is not developed nor supported anymore, and is acknowledged to be rudimentary. It also needs to be compiled from source, which can constitute an obstacle to non-programmer biomechanicians. Triangulation, on the other hand, is more robust in Pose2Sim. This is made possible largely because, instead of using the classic Direct Linear Transform (DLT) [@Hartley_1997], we propose a weighted DLT, i.e., a triangulation procedure where each 2D OpenPose coordinate is weighted with the confidence score given by its camera [@Pagnon_2021] (a minimal sketch is given after the list below).
\
    i. The likelihood threshold below which a camera's 2D point will not be taken into account for triangulation.\
    ii. The reprojection error threshold above which a triangulation result will not be accepted. This can happen if OpenPose provides a bad 2D keypoint estimate, or if the person of interest leaves the field of view of a camera. Triangulation is then attempted again with one camera less.\
    iii. The minimum number of "good" cameras (remaining after the last two steps) required for triangulating a keypoint. If there are not enough, the 3D keypoint is interpolated from other frames. The interpolation method can also be chosen.
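A minimal sketch of such a confidence-weighted DLT is given below; the exact weighting used in Pose2Sim may differ, and here each camera's two DLT equations are simply scaled by that camera's confidence score:

```python
# Hedged sketch of a confidence-weighted DLT: the two classic DLT rows of each
# camera are scaled by that camera's OpenPose confidence score before solving
# the homogeneous system by SVD. The exact weighting used in Pose2Sim may differ.
import numpy as np

def weighted_dlt(proj_matrices, pts_2d, confidences):
    """proj_matrices: 3x4 projection matrices; pts_2d: one (u, v) per camera;
    confidences: per-camera OpenPose confidence scores in [0, 1]."""
    A = []
    for P, (u, v), c in zip(proj_matrices, pts_2d, confidences):
        A.append(c * (u * P[2] - P[0]))   # weighted DLT row for u
        A.append(c * (v * P[2] - P[1]))   # weighted DLT row for v
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]                   # homogeneous -> Euclidean 3D coordinates
```

Setting a camera's confidence score to zero then amounts to excluding that camera from the triangulation of the keypoint.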
### Filtering
Users can choose the filter type and its parameters. Waveforms before and after filtering can be displayed and compared.
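As an example, a zero-phase Butterworth filter, one of the available filter types, could be applied to the 3D coordinates along the time axis; the framerate, cutoff frequency, and order below are placeholder values:

```python
# Illustrative sketch only: zero-phase low-pass Butterworth filtering of 3D
# coordinates, one of the filter types listed above. Parameters are placeholders.
from scipy.signal import butter, filtfilt

def butterworth_filter(coords, framerate=60.0, cutoff=6.0, order=4):
    """coords: (n_frames, n_keypoints * 3) array of triangulated coordinates."""
    b, a = butter(order, cutoff / (framerate / 2), btype='low')
    return filtfilt(b, a, coords, axis=0)   # forward-backward pass: no phase lag
```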
### OpenSim
The main contribution of this software is to build a bridge between OpenPose and OpenSim. The latter allows for much more accurate and robust results [@Pagnon_2022], as it constrains kinematics to an individually scaled and physically consistent skeletal model. This model also takes into account systematic labelling errors in OpenPose [@Needham_2022]. Since these errors are considered similar regardless of the subject, neither the model nor the scaling and inverse kinematics setup files necessarily need to be modified when changing the operator or the participant.\
The OpenSim model, the scaling setup file, and the inverse kinematics setup file are not adjusted through the 'User\Config.toml' file, but rather edited directly or in the OpenSim GUI, in the same way as one would do with a standard marker-based experiment.
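For instance, the scaling and inverse kinematics steps can be scripted with the OpenSim Python bindings, assuming they are installed; the file names below are placeholders for the setup files shipped with Pose2Sim, and the same can be achieved in the OpenSim GUI or with `opensim-cmd`:

```python
# Hedged sketch, assuming the OpenSim 4.x Python bindings are installed.
# Setup file names are placeholders for the files shipped with Pose2Sim.
import opensim

# Scale the provided full-body model to the participant's dimensions
opensim.ScaleTool('Setup_Scaling.xml').run()

# Run inverse kinematics on the triangulated and filtered .trc file
opensim.InverseKinematicsTool('Setup_IK.xml').run()
```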
## Pose2Sim utilities
Some standalone Python tools are also provided.
A large part of the Pose2Sim functions are also provided as standalone Python scripts. Other tools are provided to extend its usage, such as the ones presented below (\autoref{fig:utilities}).
**Conversion to and from Pose2Sim**
### 2D pose
`json_display_with_img.py`:
Overlays 2D detected .json coordinates on original raw images. High confidence keypoints are green, low confidence ones are red.\
`json_display_without_img.py`:
Plots an animation of 2D detected .json coordinates.\
`DLC_to_OpenPose.py`:
Converts a DeepLabCut [@Mathis_2018] .h5 2D pose estimation file into OpenPose [@Cao_2019] .json files.
`DLC_to_OpenPose.py`
Converts a DeepLabCut [@Mathis_2018] (h5) 2D pose estimation file into OpenPose [@Cao_2019] (json) files (a sketch of this kind of conversion is given after this list).\
`calib_qca_to_toml.py`
Converts a Qualisys .qca.txt calibration file to the Pose2Sim .toml calibration file.\
`calib_toml_to_qca.py`
Converts a Pose2Sim .toml calibration file (e.g., from a checkerboard) to a Qualisys .qca.txt calibration file.\
`calib_from_checkerboard.py`
Calibrates cameras with images or a video of a checkerboard, saves calibration in a Pose2Sim .toml calibration file.\
`c3d_to_trc.py`
Converts 3D point data of a .c3d file to a .trc file compatible with OpenSim. Neither analog data (force plates, EMG) nor computed data (angles, powers, etc.) are retrieved.
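The following illustrative sketch gives an idea of what such a DeepLabCut-to-OpenPose conversion entails; it is not the actual `DLC_to_OpenPose.py` script, and the DeepLabCut column layout and output file names are assumptions:

```python
# Illustrative sketch: read a DeepLabCut .h5 file (columns assumed to be
# (scorer, bodypart, {x, y, likelihood})) and write one OpenPose-style .json
# file per frame, with keypoints flattened as [x1, y1, conf1, x2, y2, conf2, ...].
import json
import pandas as pd

df = pd.read_hdf('dlc_output.h5')                 # hypothetical DLC 2D pose file
scorer = df.columns.get_level_values(0)[0]        # drop the scorer level
df = df[scorer]

for frame_idx, row in df.iterrows():
    keypoints = []
    for bodypart in row.index.get_level_values(0).unique():
        keypoints += [float(row[(bodypart, 'x')]),
                      float(row[(bodypart, 'y')]),
                      float(row[(bodypart, 'likelihood')])]
    frame = {'version': 1.3, 'people': [{'pose_keypoints_2d': keypoints}]}
    with open(f'frame_{frame_idx:06d}_keypoints.json', 'w') as f:
        json.dump(frame, f)
```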
**Plotting tools**
`json_display_with_img.py`
Overlays 2D detected json coordinates on original raw images. High confidence keypoints are green, low confidence ones are red.\
`json_display_without_img.py`
Plots an animation of 2D detected json coordinates.\
`trc_plot.py`
Displays X, Y, Z coordinates of each 3D keypoint of a TRC file in a different matplotlib tab.\
**Other trc tools**
`trc_desample.py`
Undersamples a trc file.
`trc_Zup_to_Yup.py`
### 3D pose
`trc_plot.py`:
Displays X, Y, Z coordinates of a .trc file, each keypoint represented in its own tab.\
`trc_desample.py`:
Undersamples a .trc file.\
`trc_Zup_to_Yup.py`:
Changes Z-up system coordinates to Y-up system coordinates (a sketch is given after this list).\
`trc_filter.py`
`trc_filter.py`:
Filters trc files. Available filters: Butterworth, Butterworth on speed, Gaussian, LOESS, Median.\
`trc_gaitevents.py`
Detects gait events from point coordinates according to [@Zeni_2008].\
`trc_gaitevents.py`:
Detects gait events from point coordinates according to [@Zeni_2008].
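Below is a minimal sketch of the axis change performed by `trc_Zup_to_Yup.py`, assuming one common convention (X unchanged, new Y = old Z, new Z = -old Y); the actual utility operates directly on .trc files:

```python
# Minimal sketch of a Z-up to Y-up axis change, under one common convention
# (X unchanged, new Y = old Z, new Z = -old Y). The actual utility reads and
# writes .trc files rather than bare arrays.
import numpy as np

def zup_to_yup(points_zup):
    """points_zup: (n_frames, n_keypoints, 3) array expressed in a Z-up frame."""
    x, y, z = points_zup[..., 0], points_zup[..., 1], points_zup[..., 2]
    return np.stack([x, z, -y], axis=-1)
```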
![Pose2Sim provides a few additional utilities to extend its capabilities.\label{fig:utilities}](Pose2Sim_workflow_utilities.jpg)
# Acknowledgements
We acknowledge the dedicated people involved in the many major software programs and packages used by Pose2Sim, such as Python, OpenPose, OpenSim, OpenCV [@Bradski_2000], among others.
# References


@ -132,6 +132,7 @@ bin\OpenPoseDemo.exe --model_pose BODY_25B --image_dir <PATH_TO_PROJECT_DIR>\raw
* *N.B.:* The [BODY_25B model](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train/tree/master/experimental_models) has more accurate results; however, feel free to use any OpenPose model (BODY_25B, BODY_25, COCO, with face and/or hands, etc), and to work with videos instead of image files.
* *N.B.:* You can also use [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut), or other 2D pose estimators instead. \
If you decide to do so, you'll have to (1) translate the format to json files (with `DLC_to_OpenPose.py` script, see [Utilities](#utilities)); (2) report the model keypoints in the 'skeleton.py' file; (3) create an OpenSim model if you need 3D joint angles.
* *N.B.:* Use one of the scripts `json_display_with_img.py` or `json_display_without_img.py` if you want to display 2D pose detections.
<img src="Content/Pose2D.png" width="760">
@ -478,7 +479,7 @@ opensim-cmd run-tool <PATH_TO_POSE2SIM>/OpenSim/Setup/<YOUR SCALING OR IK SETUP
</details>
## Utilities
A list of standalone tools, which can be both run as scripts or imported as functions. Check usage in the docstrings of each python file.\
A list of standalone tools, which can be run either as scripts or imported as functions. Check usage in the docstrings of each Python file.
<details>
<summary><b>Converting files and Calibrating</b> (CLICK TO SHOW)</summary>