Edit readme
parent 4b8bf88c6f
commit fcae1ca02e

README.md (15 changes)
@@ -415,10 +415,9 @@ If you already have a calibration file, set `calibration_type` to `convert`

### Associate persons across cameras

- > _**Track the person viewed by the most cameras, in case of several detections by OpenPose.**_ \
+ > _**If `multi_person` is set to `false`, the algorithm chooses the person for whom the reprojection error is smallest.\
+ If `multi_person` is set to `true`, it selects all persons with a reprojection error smaller than a threshold, and then associates them across time frames by minimizing the displacement speed.**_ \
***N.B.:** Skip this step if only one person is in the field of view.*\
- > [Want to contribute?](#how-to-contribute) _**Allow for multiple person analysis.**_


Open an Anaconda prompt or a terminal in a `Session`, `Participant`, or `Trial` folder.\
Type `ipython`.
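For readers following the workflow itself rather than the diff, a minimal sketch of how this step is typically launched from `ipython` is shown below, mirroring the `markerAugmentation` example further down. It is not part of this commit: the `Pose2Sim.personAssociation()` entry point and the `multi_person` key are assumptions drawn from the rest of the Pose2Sim documentation.

``` python
# Minimal sketch (not part of this commit): run the person-association step from
# ipython, in the same way as the markerAugmentation example further down.
# The function name and the Config.toml key are assumptions drawn from the rest
# of the Pose2Sim documentation.
from Pose2Sim import Pose2Sim

# multi_person = false in Config.toml -> keep the person with the smallest
# reprojection error; multi_person = true -> keep every person whose reprojection
# error is under the threshold and associate them across frames.
Pose2Sim.personAssociation()
```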
@@ -480,6 +479,8 @@ Output:\
> _**Use the Stanford LSTM model to estimate the position of 47 virtual markers.**_\
_**Note that inverse kinematic results are not necessarily better after marker augmentation.**_ Skip if results are not convincing.

+ *N.B.:* Marker augmentation tends to give a more stable, but less precise output. In practice, it is mostly beneficial when using less than 4 cameras.

**Make sure that `participant_height` is correct in your `Config.toml` file.** `participant_mass` is mostly optional for IK.\
Only works with models estimating at least the following keypoints (e.g., not COCO):
``` python
@@ -498,9 +499,6 @@ from Pose2Sim import Pose2Sim
Pose2Sim.markerAugmentation()
```

- *N.B.:* Again, use marker augmentation with good care, as results are worse than without in about half of the cases.\
- Marker augmentation tends to give a more stable, but less precise output. In practice, it is mostly beneficial when using less than 4 cameras.

</br>

## OpenSim kinematics
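As a complement to the `participant_height` note above, here is a small, hypothetical sanity check one might run before calling `Pose2Sim.markerAugmentation()`. It is not part of this commit; the `[project]` section and key names are assumptions about the `Config.toml` layout, and the third-party `toml` package is assumed to be available.

``` python
# Hypothetical sanity check (not part of this commit): confirm that the subject's
# height and mass are filled in before running marker augmentation.
# The [project] section and key names are assumptions; adapt them to your Config.toml.
import toml

config = toml.load("Config.toml")
height = config["project"]["participant_height"]  # must match the subject's real height (m)
mass = config["project"].get("participant_mass")  # mostly optional for IK
print(f"participant_height = {height} m, participant_mass = {mass} kg")
```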
@@ -696,7 +694,6 @@ You will be proposed a to-do list, but please feel absolutely free to propose yo

**Main to-do list**
- Graphical User Interface
- - Multiple person triangulation
- Synchronization
- Self-calibration based on keypoint detection

@@ -774,9 +771,9 @@ You will be proposed a to-do list, but please feel absolutely free to propose yo
▢ **Tutorials:** Make video tutorials.
▢ **Doc:** Use [Sphinx](https://www.sphinx-doc.org/en/master), [MkDocs](https://www.mkdocs.org), or (maybe better) [github.io](https://docs.github.com/fr/pages/quickstart) for clearer documentation.

- ▢ **Catch errors**
✔ **Pip package**
- ▢ **Batch processing**
+ ✔ **Batch processing**
+ ✔ **Catch errors**
▢ **Conda package**
▢ **Docker image**
▢ Run pose estimation and OpenSim from within Pose2Sim
@@ -1,6 +1,6 @@
[metadata]
name = pose2sim
- version = 0.7.0
+ version = 0.7.1
author = David Pagnon
author_email = contact@david-pagnon.com
description = Perform a markerless kinematic analysis from multiple calibrated views as a unified workflow from an OpenPose input to an OpenSim result.
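The `[metadata]` hunk above, presumably from the package metadata file (e.g. `setup.cfg`), bumps the release from 0.7.0 to 0.7.1. An illustrative way to confirm which release is installed after upgrading (for instance with `pip install --upgrade pose2sim`), using the standard-library `importlib.metadata`:

``` python
# Illustrative check (not part of this commit): print the installed Pose2Sim
# release, e.g. after `pip install --upgrade pose2sim`.
from importlib.metadata import version

print(version("pose2sim"))  # expected to show 0.7.1 for this release
```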