diff --git a/README.md b/README.md
index c2c6145..620ad86 100644
--- a/README.md
+++ b/README.md
@@ -159,7 +159,7 @@ Make sure you modify the `User\Config.toml` file accordingly.
 However, it is less robust and accurate than OpenPose, and can only detect a single person.
 * Use the script `Blazepose_runsave.py` (see [Utilities](#utilities)) to run BlazePose under Python, and store the detected coordinates in OpenPose (json) or DeepLabCut (h5 or csv) format:
   ```
-  python -m Blazepose_runsave -i "" -dJs
+  python -m Blazepose_runsave -i r"" -dJs
   ```
   Type in `python -m Blazepose_runsave -h` for an explanation of the parameters, and for additional ones.
 * Make sure you change the `pose_model` and the `tracked_keypoint` in the `User\Config.toml` file.
@@ -169,7 +169,7 @@ If you need to detect specific points on a human being, an animal, or an object,
 1. Train your DeepLabCut model and run it on your images or videos (more instructions on their repository)
 2. Translate the h5 2D coordinates to json files (with the `DLC_to_OpenPose.py` script, see [Utilities](#utilities)):
   ```
-  python -m DLC_to_OpenPose -i ""
+  python -m DLC_to_OpenPose -i r""
   ```
 3. Report the model keypoints in the `skeleton.py` file, and make sure you change the `pose_model` and the `tracked_keypoint` in the `User\Config.toml` file.
 4. Create an OpenSim model if you need 3D joint angles.
@@ -179,7 +179,7 @@ If you need to detect specific points on a human being, an animal, or an object,
 * Install and run AlphaPose on your videos (more instructions on their repository)
 * Translate the AlphaPose single json file to OpenPose frame-by-frame files (with the `AlphaPose_to_OpenPose.py` script, see [Utilities](#utilities)):
  ```
-  python -m AlphaPose_to_OpenPose -i ""
+  python -m AlphaPose_to_OpenPose -i r""
  ```
 * Make sure you change the `pose_model` and the `tracked_keypoint` in the `User\Config.toml` file.
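The `-i ""` → `-i r""` change in each command above makes the quoted path a Python raw string. A minimal illustration of why that matters for Windows-style paths (the path below is hypothetical; how each script parses its `-i` argument is defined in the script itself):

```python
# In a plain Python string, backslash pairs such as \t and \n are parsed
# as escape characters, silently corrupting Windows-style paths.
# A raw string (r"...") keeps every backslash literal.
plain = "C:\test\new_videos"   # \t -> tab, \n -> newline: 16 characters
raw = r"C:\test\new_videos"    # backslashes preserved as typed: 18 characters
print(repr(plain))
print(repr(raw))
```

This is why an unmodified Windows path pasted between the quotes only survives reliably inside an `r"…"` literal.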
@@ -486,7 +486,7 @@ Alternatively, you can use command-line tools:
 - You can also run OpenSim directly in Python:
   ```
   import subprocess
-  subprocess.call(["opensim-cmd", "run-tool", ".xml"])
+  subprocess.call(["opensim-cmd", "run-tool", r".xml"])
   ```
 - Or take advantage of the full OpenSim Python API. See [there](https://simtk-confluence.stanford.edu:8443/display/OpenSim/Scripting+in+Python) for installation instructions.
@@ -643,21 +643,21 @@ I would happily welcome any proposal for new features, code improvement, and more
 If you want to contribute to Pose2Sim, please follow [this guide](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) on how to fork, modify and push code, and submit a pull request. I would appreciate it if you provided as much useful information as possible about how you modified the code, and a rationale for why you're making this pull request. Please also specify on which operating system and on which Python version you have tested the code.
 
 *Here is a to-do list, for general guidance purposes only:*
-> Integrate as a Blender and / or Maya add-on. See Maya-Mocap and BlendOSim
-> Multiple persons kinematics (triangulating multiple persons, and sorting them in time)
-> People association (tracking) with a neural network instead of brute force
-> Use aniposelib for better calibration, and/or wand calibration cf Argus (conversion script from Argus/EasyWand/DLTdv8 here), autocalibration based on a person's dimensions
-> Add a camera synchronization script in the Utilities.
-> Copy-paste muscles from OpenSim lifting full-body model for inverse dynamics and more
-> Implement SLEAP as another 2D pose estimation solution (converter, skeleton.py, OpenSim model and setup files).
+> calibrateCams: (1) Intrinsic with checkerboard, extrinsic with object or ChArUco board, or (2) SBA calibration with wand (cf Argus, see converter here), or (3) autocalibration based on a person's dimensions. Also see aniposelib for calibration with ChArUco.
+> synchronizeCams: Synchronize cameras on 2D keypoint speeds.
+> track2D: Multiple persons association (rename to peopleAssociation and ensure backward compatibility). With a neural network instead of brute force?
+> triangulate3D: Multiple persons kinematics (output multiple .trc coordinates files).
+> GUI: Blender add-on, or webapp. See Maya-Mocap and BlendOSim.
 >
+> Catch errors
 > Conda package and Docker image
+> Copy-paste muscles from OpenSim lifting full-body model for inverse dynamics and more
+> Implement optimal fixed-interval Kalman smoothing for inverse kinematics (Biorbd or OpenSim fork)
+>
+> Implement SLEAP as another 2D pose estimation solution (converter, skeleton.py, OpenSim model and setup files).
 > Outlier rejection (sliding z-score?) Also solve limb swapping
 > Implement normalized DLT and RANSAC triangulation, as well as a triangulation refinement step (cf DOI:10.1109/TMM.2022.3171102)
-> Implement optimal fixed-interval Kalman smoothing for inverse kinematics (Biorbd or OpenSim fork)
 > Utilities: convert Vicon xcp calibration file to toml
 > Run from command line via click or typer
-> Catch errors
-> Make GUI
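The "Outlier rejection (sliding z-score?)" item in the to-do list could look roughly like the following sketch (the window size, threshold, and function name are hypothetical choices for illustration, not Pose2Sim code):

```python
import statistics

def sliding_zscore_outliers(series, window=5, threshold=1.5):
    """Flag values whose z-score within a centered sliding window exceeds
    `threshold`. Note: in a window of n points, the largest attainable
    z-score is bounded (2.0 for n=5), so the threshold must stay modest."""
    half = window // 2
    flags = []
    for i, value in enumerate(series):
        win = series[max(0, i - half):min(len(series), i + half + 1)]
        mean = statistics.fmean(win)
        std = statistics.pstdev(win)
        flags.append(std > 0 and abs(value - mean) / std > threshold)
    return flags

# A lone spike in an otherwise smooth coordinate trace is flagged:
trace = [1.0, 1.1, 1.2, 9.0, 1.3, 1.4, 1.5]
print(sliding_zscore_outliers(trace))
# -> [False, False, False, True, False, False, False]
```

The limb-swapping problem mentioned alongside it would need more than per-coordinate statistics, e.g. checks on inter-keypoint distances across frames.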