Markerless kinematics with any cameras — From 2D Pose estimation to 3D OpenSim motion

Pose2Sim

Pose2Sim provides a workflow for 3D markerless kinematics, as an alternative to the more usual marker-based motion capture methods.
Pose2Sim stands for "OpenPose to OpenSim", as it uses OpenPose inputs (2D keypoint coordinates obtained from multiple videos) and leads to an OpenSim result (full-body 3D joint angles).

Pose2Sim has been tested on various challenging tasks and conditions.

Contents

  1. Installation and Demonstration
    1. Installation
    2. Demonstration Part-1: Build 3D TRC file in Python
    3. Demonstration Part-2: Obtain 3D joint angles with OpenSim
  2. Use on your own data
    1. Prepare for running on your own data
    2. 2D pose estimation
    3. Cameras calibration
    4. 2D Tracking of person
    5. 3D triangulation
    6. 3D filtering
    7. OpenSim kinematics
  3. Utilities
  4. How to cite and how to contribute
    1. How to cite
    2. How to contribute

Installation and Demonstration

Installation

  1. Install OpenPose (instructions there).
    The Windows portable demo is enough.

  2. Install OpenSim 4.x (there).
    Tested up to v4.4-beta on Windows. It has to be compiled from source on Linux (see there).

  3. Optional. Install Anaconda or Miniconda.
    Open an Anaconda terminal and create a virtual environment by typing:

    conda create -n Pose2Sim python=3.7 
    conda activate Pose2Sim
  4. Install Pose2Sim:
    If you don't use Anaconda, type python -V in a terminal to make sure python>=3.6 is installed.

    • OPTION 1: Quick install: Open a terminal.

      pip install pose2sim
      
    • OPTION 2: Build from source and test the latest changes: Open a terminal in the directory of your choice and clone the Pose2Sim repository.

      git clone https://github.com/perfanalytics/pose2sim.git
      cd pose2sim
      pip install .
      

Demonstration Part-1: Build 3D TRC file in Python

This demonstration provides an example experiment of a person balancing on a beam, filmed with 4 calibrated cameras and processed with OpenPose.

Open a terminal and enter pip show pose2sim to find the package location.
Copy this path and go to the Demo folder with cd <path>\pose2sim\Demo.
Type python, and test the following code:

from Pose2Sim import Pose2Sim
Pose2Sim.calibrateCams()
Pose2Sim.track2D()
Pose2Sim.triangulate3D()
Pose2Sim.filter3D()

You should obtain a plot of all the 3D coordinate trajectories. You can check the logs in Demo\Users\logs.txt.
Results are stored as .trc files in the Demo/pose-3d directory.

N.B.: Default parameters have been provided in Demo\Users\Config.toml but can be edited.
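The resulting .trc files are plain tab-separated text. As an illustration (this reader is a hedged sketch, not part of Pose2Sim, and assumes the usual 5-line TRC header), they can be loaded for custom analysis:

```python
def read_trc(path):
    """Minimal TRC reader: returns (marker_names, frames), where frames is a
    list of (time, [(x, y, z), ...]).  Assumes the standard 5-line TRC header:
    marker names on the 4th line, data starting after the 5th."""
    with open(path) as f:
        lines = f.read().splitlines()
    # The 4th header line lists marker names from the 3rd column on.
    marker_names = [m for m in lines[3].split("\t")[2:] if m]
    frames = []
    for line in lines[5:]:
        if not line.strip():  # skip the blank line some TRC files include
            continue
        fields = line.split("\t")
        time = float(fields[1])
        coords = [float(v) for v in fields[2:] if v.strip()]
        # Group the flat X1 Y1 Z1 X2 Y2 Z2 ... values into (x, y, z) triplets.
        frames.append((time, list(zip(coords[0::3], coords[1::3], coords[2::3]))))
    return marker_names, frames
```

For example, `read_trc("pose-3d/Pose-3d.trc")` would return the marker names and per-frame 3D coordinates.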

Demonstration Part-2: Obtain 3D joint angles with OpenSim

As you would with marker-based kinematics, start by scaling your model, then perform inverse kinematics.

Scaling

  1. Open OpenSim.
  2. Open the provided Model_Pose2Sim_Body25b.osim model from pose2sim/Demo/opensim. (File -> Open Model)
  3. Load the provided Scaling_Setup_Pose2Sim_Body25b.xml scaling file from pose2sim/Demo/opensim. (Tools -> Scale model -> Load)
  4. Run. You should see your skeletal model take the static pose.

Inverse kinematics

  1. Load the provided IK_Setup_Pose2Sim_Body25b.xml setup file from pose2sim/Demo/opensim. (Tools -> Inverse kinematics -> Load)
  2. Run. You should see your skeletal model move in the Visualizer window.

Use on your own data

Deeper explanations and instructions are given below.

Prepare for running on your own data

Get ready.

  1. Find the Pose2Sim\Empty_project folder, copy it wherever you like, and rename it as you wish.

  2. Edit the User\Config.toml file as needed, especially regarding the path to your project.

  3. Populate the raw-2d folder with your camera images or videos.

      Project
      │
      ├──opensim
      │    ├──Geometry
      │    ├──Model_Pose2Sim_Body25b.osim
      │    ├──Scaling_Setup_Pose2Sim_Body25b.xml
      │    └──IK_Setup_Pose2Sim_Body25b.xml
      │        
      ├── raw-2d
      │    ├──raw_cam1_img
      │    ├──...
      │    └──raw_camN_img
      │
      └──User
          └──Config.toml
      
    
    

2D pose estimation

Estimate 2D poses from images with OpenPose.

Open a command prompt in your OpenPose directory.
Launch OpenPose for each raw image folder:

bin\OpenPoseDemo.exe --model_pose BODY_25B --image_dir <PATH_TO_PROJECT_DIR>\raw-2d\raw_cam1_img --write_json <PATH_TO_PROJECT_DIR>\pose-2d\pose_cam1_json
  • N.B.: The BODY_25B model gives more accurate results; however, feel free to use any OpenPose model (BODY_25B, BODY_25, COCO, with face and/or hands, etc.), and to work with videos instead of image files.
  • N.B.: You can also use DeepLabCut, or other 2D pose estimators instead.
    If you decide to do so, you'll have to (1) translate the format to json files (with DLC_to_OpenPose.py script, see Utilities); (2) report the model keypoints in the 'skeleton.py' file; (3) create an OpenSim model if you need 3D joint angles.
  • N.B.: Use one of the scripts json_display_with_img.py or json_display_without_img.py if you want to display 2D pose detections.

N.B.: Markers are not needed; they are used only for validation.
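With many cameras, the per-camera OpenPose calls can be scripted. The sketch below builds one command per raw_cam*_img folder, following the folder layout and flags shown above (the OpenPose and project paths are placeholders you must adapt; OpenPoseDemo.exe is not bundled with Pose2Sim):

```python
import subprocess
from pathlib import Path

def openpose_commands(openpose_dir, project_dir, model="BODY_25B"):
    """Build one OpenPoseDemo command per raw-2d/raw_cam*_img folder,
    writing json files to pose-2d/pose_cam*_json as in the README layout."""
    project = Path(project_dir)
    cmds = []
    for img_dir in sorted(project.glob("raw-2d/raw_cam*_img")):
        cam = img_dir.name.replace("raw_", "").replace("_img", "")  # e.g. 'cam1'
        json_dir = project / "pose-2d" / f"pose_{cam}_json"
        cmds.append([
            str(Path(openpose_dir) / "bin" / "OpenPoseDemo.exe"),
            "--model_pose", model,
            "--image_dir", str(img_dir),
            "--write_json", str(json_dir),
        ])
    return cmds

# To actually run them (OpenPose must be launched from its own directory):
# for cmd in openpose_commands(r"C:\openpose", r"C:\my_project"):
#     subprocess.run(cmd, cwd=r"C:\openpose", check=True)
```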

The project hierarchy becomes: (CLICK TO SHOW)
   Project
   │
   ├──opensim
   │    ├──Geometry
   │    ├──Model_Pose2Sim_Body25b.osim
   │    ├──Scaling_Setup_Pose2Sim_Body25b.xml
   │    └──IK_Setup_Pose2Sim_Body25b.xml
   │
   ├──pose-2d
   │    ├──pose_cam1_json
   │    ├──...
   │    └──pose_camN_json
   │        
   ├── raw-2d
   │   ├──raw_cam1_img
   │   ├──...
   │   └──raw_camN_img
   │
   └──User
       └──Config.toml
   

Cameras calibration

Calibrate your cameras.

  1. If you already have a calibration file (.qca.txt from Qualisys for example):
  • copy it into the calib-2d folder
  • set [calibration] type to 'qca' in your Config.toml file.

or

  1. If you have taken pictures or videos of a checkerboard with your cameras:
  • create a folder for each camera in your calib-2d folder,
  • copy the images or videos of the checkerboard there
  • set [calibration] type to 'checkerboard' in your Config.toml file, and adjust other parameters.

Open an Anaconda prompt or a terminal.
By default, calibrateCams() will look for Config.toml in the User folder of your current directory. If you want to store it somewhere else (e.g. in your data directory), specify this path as an argument: Pose2Sim.calibrateCams(r'path_to_config.toml').

from Pose2Sim import Pose2Sim
Pose2Sim.calibrateCams()

Output:

The project hierarchy becomes: (CLICK TO SHOW)
   Project
   │
   ├──calib-2d
   │   ├──calib_cam1_img
   │   ├──...
   │   ├──calib_camN_img
   │   └──Calib.toml
   │
   ├──opensim
   │    ├──Geometry
   │    ├──Model_Pose2Sim_Body25b.osim
   │    ├──Scaling_Setup_Pose2Sim_Body25b.xml
   │    └──IK_Setup_Pose2Sim_Body25b.xml
   │
   ├──pose-2d
   │    ├──pose_cam1_json
   │    ├──...
   │    └──pose_camN_json
   │        
   ├── raw-2d
   │   ├──raw_cam1_img
   │   ├──...
   │   └──raw_camN_img
   │
   └──User
       └──Config.toml
   

2D tracking of person

Track the person viewed by the most cameras, in case of several detections by OpenPose.
N.B.: Skip this step if only one person is in the field of view.

Open an Anaconda terminal. By default, track2D() will look for Config.toml in the User folder of your current directory. If you want to store it somewhere else (e.g. in your data directory), specify this path as an argument: Pose2Sim.track2D(r'path_to_config.toml').

from Pose2Sim import Pose2Sim
Pose2Sim.track2D()

Check the printed output. If results are not satisfying, try relaxing the constraints in the Config.toml file.

Output:

The project hierarchy becomes: (CLICK TO SHOW)
   Project
   │
   ├──calib-2d
   │   ├──calib_cam1_img
   │   ├──...
   │   ├──calib_camN_img
   │   └──Calib.toml
   │
   ├──opensim
   │    ├──Geometry
   │    ├──Model_Pose2Sim_Body25b.osim
   │    ├──Scaling_Setup_Pose2Sim_Body25b.xml
   │    └──IK_Setup_Pose2Sim_Body25b.xml
   │
   ├──pose-2d
   │   ├──pose_cam1_json
   │   ├──...
   │   └──pose_camN_json
   │
   ├──pose-2d-tracked
   │   ├──tracked_cam1_json
   │   ├──...
   │   └──tracked_camN_json
   │        
   ├── raw-2d
   │   ├──raw_cam1_img
   │   ├──...
   │   └──raw_camN_img
   │
   └──User
       └──Config.toml
   

3D triangulation

Triangulate your 2D coordinates in a robust way.

Open an Anaconda terminal. By default, triangulate3D() will look for Config.toml in the User folder of your current directory. If you want to store it somewhere else (e.g. in your data directory), specify this path as an argument: Pose2Sim.triangulate3D(r'path_to_config.toml').

from Pose2Sim import Pose2Sim
Pose2Sim.triangulate3D()

Check the printed output, and visualize your trc in OpenSim.
If your triangulation is not satisfying, try relaxing the constraints in the Config.toml file.
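For intuition, the core of multi-view triangulation is the direct linear transform (DLT): each camera view contributes two linear constraints on the 3D point, and the least-squares solution is read off an SVD. The sketch below is a minimal illustration of that principle, not Pose2Sim's actual (more robust) implementation:

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """DLT triangulation: least-squares 3D point from >= 2 views.
    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (u, v) pixel coordinates, one per view."""
    A = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view gives two rows of the homogeneous linear system A X = 0.
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]              # null-space vector = homogeneous 3D point
    return X[:3] / X[3]     # dehomogenize
```

A real pipeline would additionally weight views by detection confidence and discard outlier cameras, which is what "robust" refers to above.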

Output:

The project hierarchy becomes: (CLICK TO SHOW)
   Project
   │
   ├──calib-2d
   │   ├──calib_cam1_img
   │   ├──...
   │   ├──calib_camN_img
   │   └──Calib.toml
   │
   ├──opensim
   │    ├──Geometry
   │    ├──Model_Pose2Sim_Body25b.osim
   │    ├──Scaling_Setup_Pose2Sim_Body25b.xml
   │    └──IK_Setup_Pose2Sim_Body25b.xml
   │
   ├──pose-2d
   │   ├──pose_cam1_json
   │   ├──...
   │   └──pose_camN_json
   │
   ├──pose-2d-tracked
   │   ├──tracked_cam1_json
   │   ├──...
   │   └──tracked_camN_json
   │
   ├──pose-3d
   │   └──Pose-3d.trc
   │        
   ├── raw-2d
   │   ├──raw_cam1_img
   │   ├──...
   │   └──raw_camN_img
   │
   └──User
       └──Config.toml
   

3D Filtering

Filter your 3D coordinates.

Open an Anaconda terminal. By default, filter3D() will look for Config.toml in the User folder of your current directory. If you want to store it somewhere else (e.g. in your data directory), specify this path as an argument: Pose2Sim.filter3D(r'path_to_config.toml').

from Pose2Sim import Pose2Sim
Pose2Sim.filter3D()

Check your filtering with the displayed figures, and visualize your trc in OpenSim. If the filtering is not satisfying, try changing the parameters in the Config.toml file.
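As an illustration of the kind of filtering involved, here is a zero-phase low-pass Butterworth filter (the classic choice for motion capture data) written with scipy. This is a hedged sketch, not Pose2Sim's internal code; the actual filters and their parameters are set in Config.toml:

```python
import numpy as np
from scipy import signal

def butterworth_lowpass(coords, fs=60.0, cutoff=6.0, order=4):
    """Zero-phase low-pass Butterworth filter for marker trajectories.
    coords: (n_frames, n_coords) array; fs: capture rate in Hz;
    cutoff: cutoff frequency in Hz; order: effective filter order."""
    # filtfilt applies the filter forward and backward (zero phase lag),
    # which doubles the order, hence order // 2 in the design.
    b, a = signal.butter(order // 2, cutoff / (fs / 2), btype="low")
    return signal.filtfilt(b, a, np.asarray(coords), axis=0)
```

Running a noisy trajectory through it attenuates content above the cutoff while leaving the low-frequency motion (and its timing) intact.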

Output:

The project hierarchy becomes: (CLICK TO SHOW)
   Project
   │
   ├──calib-2d
   │   ├──calib_cam1_img
   │   ├──...
   │   ├──calib_camN_img
   │   └──Calib.toml
   │
   ├──opensim
   │    ├──Geometry
   │    ├──Model_Pose2Sim_Body25b.osim
   │    ├──Scaling_Setup_Pose2Sim_Body25b.xml
   │    └──IK_Setup_Pose2Sim_Body25b.xml
   │
   ├──pose-2d
   │   ├──pose_cam1_json
   │   ├──...
   │   └──pose_camN_json
   │
   ├──pose-2d-tracked
   │   ├──tracked_cam1_json
   │   ├──...
   │   └──tracked_camN_json
   │
   ├──pose-3d
   │   ├──Pose-3d.trc
   │   └──Pose-3d-filtered.trc
   │        
   ├── raw-2d
   │   ├──raw_cam1_img
   │   ├──...
   │   └──raw_camN_img
   │
   └──User
       └──Config.toml
   

OpenSim kinematics

Obtain 3D joint angles.

Scaling

  1. Use the previous steps to capture a static pose, typically an A-pose or a T-pose.
  2. Open OpenSim.
  3. Open the provided Model_Pose2Sim_Body25b.osim model from pose2sim/Empty_project/opensim. (File -> Open Model)
  4. Load the provided Scaling_Setup_Pose2Sim_Body25b.xml scaling file from pose2sim/Empty_project/opensim. (Tools -> Scale model -> Load)
  5. Replace the example static .trc file with your own data.
  6. Run
  7. Save the new scaled OpenSim model.

Inverse kinematics

  1. Use Pose2Sim to generate 3D trajectories.
  2. Open OpenSim.
  3. Load the provided IK_Setup_Pose2Sim_Body25b.xml setup file from pose2sim/Empty_project/opensim. (Tools -> Inverse kinematics -> Load)
  4. Replace the example .trc file with your own data, and specify the path to your angle kinematics output file.
  5. Run
  6. Motion results will appear as .mot file in the pose2sim/Empty_project/opensim directory (automatically saved).

Command line

Alternatively, you can use command-line tools:

  • Open an Anaconda terminal in your OpenSim/bin directory, typically C:\OpenSim <Version>\bin.
    You'll need to adjust the time_range, output_motion_file, and enter the full paths to the input and output .osim, .trc, and .mot files in your setup file.

    opensim-cmd run-tool <PATH TO YOUR SCALING OR IK SETUP FILE>.xml
    
  • You can also run OpenSim directly in Python:

    import subprocess
    subprocess.call(["opensim-cmd", "run-tool", "<PATH TO YOUR SCALING OR IK SETUP FILE>.xml"])
    
  • Or take advantage of the full OpenSim Python API. See there for installation instructions.
    Note that it is easier to install on Python 3.7 and with OpenSim 4.2.

The project hierarchy becomes: (CLICK TO SHOW)
   Project
   │
   ├──calib-2d
   │   ├──calib_cam1_img
   │   ├──...
   │   ├──calib_camN_img
   │   └──Calib.toml
   │
   ├──opensim  
   │    ├──Geometry
   │    ├──Model_Pose2Sim_Body25b.osim
   │    ├──Scaling_Setup_Pose2Sim_Body25b.xml
   │    ├──Model_Pose2Sim_Body25b_Scaled.osim  
   │    ├──IK_Setup_Pose2Sim_Body25b.xml
   │    └──IK_result.mot   
   │
   ├──pose-2d
   │   ├──pose_cam1_json
   │   ├──...
   │   └──pose_camN_json
   │
   ├──pose-2d-tracked
   │   ├──tracked_cam1_json
   │   ├──...
   │   └──tracked_camN_json
   │
   ├──pose-3d
   │   ├──Pose-3d.trc
   │   └──Pose-3d-filtered.trc
   │        
   ├── raw-2d
   │   ├──raw_cam1_img
   │   ├──...
   │   └──raw_camN_img
   │
   └──User
       └──Config.toml
   

Utilities

A list of standalone tools, which can be either run as scripts, or imported as functions. Check usage in the docstrings of each Python file. The figure below shows how some of these tools can be used to further extend Pose2Sim usage.

Converting files and Calibrating (CLICK TO SHOW)

DLC_to_OpenPose.py Converts a DeepLabCut (h5) 2D pose estimation file into OpenPose (json) files.
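For reference, OpenPose json files store each detected person's keypoints as a flat [x, y, confidence, ...] list under pose_keypoints_2d. The writer below illustrates that target format with a minimal sketch (not the utility's actual code):

```python
import json

def keypoints_to_openpose_json(keypoints, path):
    """Write one frame of 2D keypoints as an OpenPose-style json file.
    keypoints: list of (x, y, confidence) triplets for a single person."""
    flat = [v for kp in keypoints for v in kp]  # flatten to x, y, c, x, y, c, ...
    data = {
        "version": 1.3,
        "people": [{"person_id": [-1], "pose_keypoints_2d": flat}],
    }
    with open(path, "w") as f:
        json.dump(data, f)
```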

c3d_to_trc.py Converts 3D point data of a .c3d file to a .trc file compatible with OpenSim. No analog data (force plates, emg) nor computed data (angles, powers, etc) are retrieved.

calib_from_checkerboard.py Calibrates cameras with images or a video of a checkerboard, saves calibration in a Pose2Sim .toml calibration file.

calib_qca_to_toml.py Converts a Qualisys .qca.txt calibration file to the Pose2Sim .toml calibration file (similar to what is used in AniPose).

calib_toml_to_qca.py Converts a Pose2Sim .toml calibration file (e.g., from a checkerboard) to a Qualisys .qca.txt calibration file.

calib_yml_to_toml.py Converts OpenCV intrinsic and extrinsic .yml calibration files to an OpenCV .toml calibration file.

calib_toml_to_yml.py Converts an OpenCV .toml calibration file to OpenCV intrinsic and extrinsic .yml calibration files.

Plotting tools (CLICK TO SHOW)

json_display_with_img.py Overlays 2D detected json coordinates on original raw images. High confidence keypoints are green, low confidence ones are red.

json_display_without_img.py Plots an animation of 2D detected json coordinates.

trc_plot.py Displays X, Y, Z coordinates of each 3D keypoint of a TRC file in a different matplotlib tab.

Other trc tools (CLICK TO SHOW)

trc_desample.py Undersamples a trc file.

trc_Zup_to_Yup.py Changes Z-up system coordinates to Y-up system coordinates.
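The underlying operation is a -90° rotation about the X axis, mapping (x, y, z) to (x, z, -y) so that the old Z axis becomes the new Y (up) axis. A minimal numpy sketch of that mapping (one common convention; check your own lab's axis definitions, and note this is not the utility's actual code):

```python
import numpy as np

def zup_to_yup(points):
    """Rotate -90 deg about X: Z-up coordinates (x, y, z) become
    Y-up coordinates (x, z, -y).  points: array-like of shape (..., 3)."""
    points = np.asarray(points, dtype=float)
    return np.stack(
        [points[..., 0], points[..., 2], -points[..., 1]], axis=-1
    )
```

Being a proper rotation, this preserves handedness and distances, so marker-to-marker geometry is unchanged.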

trc_filter.py Filters trc files. Available filters: Butterworth, Butterworth on speed, Gaussian, LOESS, Median.

trc_gaitevents.py Detects gait events from point coordinates according to Zeni et al. (2008).
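Zeni et al.'s coordinate-based method places heel strikes at maxima of the heel-sacrum antero-posterior distance and toe-offs at minima of the toe-sacrum distance. A minimal sketch of that idea (not the utility's actual code; real data would first need filtering and a walking-direction convention):

```python
import numpy as np
from scipy.signal import find_peaks

def zeni_gait_events(heel_ap, toe_ap, sacrum_ap):
    """Gait events after Zeni et al. (2008), from antero-posterior (AP)
    coordinates of the heel, toe, and sacrum markers (1-D arrays).
    Returns (heel_strike_frames, toe_off_frames)."""
    heel_ap, toe_ap, sacrum_ap = map(np.asarray, (heel_ap, toe_ap, sacrum_ap))
    # Heel strike: heel is furthest ahead of the sacrum.
    heel_strikes, _ = find_peaks(heel_ap - sacrum_ap)
    # Toe off: toe is furthest behind the sacrum (minima = peaks of negation).
    toe_offs, _ = find_peaks(-(toe_ap - sacrum_ap))
    return heel_strikes, toe_offs
```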

trc_combine.py Combines two trc files, for example a triangulated DeepLabCut trc file and a triangulated OpenPose trc file.

How to cite and how to contribute

How to cite

If you use this code or data, please cite Pagnon et al., 2022b, Pagnon et al., 2022a, or Pagnon et al., 2021.

@Article{Pagnon_2022_JOSS, 
  AUTHOR = {Pagnon, David and Domalain, Mathieu and Reveret, Lionel}, 
  TITLE = {Pose2Sim: An open-source Python package for multiview markerless kinematics}, 
  JOURNAL = {Journal of Open Source Software}, 
  YEAR = {2022},
  DOI = {10.21105/joss.04362}, 
  URL = {https://joss.theoj.org/papers/10.21105/joss.04362}
}

@Article{Pagnon_2022_Accuracy,
  AUTHOR = {Pagnon, David and Domalain, Mathieu and Reveret, Lionel},
  TITLE = {Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 2: Accuracy},
  JOURNAL = {Sensors},
  YEAR = {2022},
  DOI = {10.3390/s22072712},
  URL = {https://www.mdpi.com/1424-8220/22/7/2712}
}

@Article{Pagnon_2021_Robustness,
  AUTHOR = {Pagnon, David and Domalain, Mathieu and Reveret, Lionel},
  TITLE = {Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 1: Robustness},
  JOURNAL = {Sensors},
  YEAR = {2021},
  DOI = {10.3390/s21196530},
  URL = {https://www.mdpi.com/1424-8220/21/19/6530}
}

How to contribute

I would happily welcome any proposal for new features, code improvement, and more!
If you want to contribute to Pose2Sim, please follow this guide on how to fork, modify and push code, and submit a pull request. I would appreciate it if you provided as much useful information as possible about how you modified the code, and a rationale for why you're making this pull request. Please also specify on which operating system and on which python version you have tested the code.

Here is a to-do list, for general guidance purposes only:

  • Integrate as a Blender and / or Maya add-on. See Maya-Mocap and BlendOSim
  • Multiple persons kinematics (triangulating multiple persons, and sorting them in time)
  • People association (tracking) with a neural network instead of brute force
  • Use aniposelib for better calibration, and/or wand calibration cf Argus (conversion script from Argus/EasyWand/DLTdv8 here), autocalibration based on a person's dimensions
  • Copy-paste muscles from OpenSim lifting full-body model for inverse dynamics and more
  • Finish deploying OpenPose body_135, AlphaPose HALPE_26, AlphaPose HALPE_136, AlphaPose COCO-WholeBody, MediaPipe BlazePose, COCO, MPII (skeleton.py and OpenSim models). Write SLEAP converter.

  • Conda package and Docker image
  • Outlier rejection (sliding z-score?); also solve limb swapping
  • Implement normalized DLT and RANSAC triangulation, as well as a triangulation refinement step (cf DOI:10.1109/TMM.2022.3171102)
  • Implement optimal fixed-interval Kalman smoothing for inverse kinematics (Biorbd or OpenSim fork)
  • Utilities: convert Vicon xcp calibration file to toml
  • Run from command line via click or typer
  • Catch errors
  • Make GUI