diff --git a/Content/website/index.md b/Content/website/index.md
index 184809e..f56ca22 100644
--- a/Content/website/index.md
+++ b/Content/website/index.md
@@ -37,11 +37,9 @@ and lens distortions are better taken into account.\ -->
Pose2Sim stands for "OpenPose to OpenSim", as it originally used *OpenPose* inputs (2D keypoint coordinates) and led to an OpenSim result (full-body 3D joint angles). Pose estimation is now performed with more recent models from [RTMPose](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose), and custom models (from [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) for example) can also be used.
+
-
-
-
-
+
@@ -163,7 +161,7 @@ If you don't use Anaconda, type `python -V` in terminal to make sure python>=3.9
> **Note on storage use:**\
A full installation takes up to 11 GB of storage space. However, GPU support is not mandatory and accounts for about 6 GB of that total. Moreover, [marker augmentation](#marker-augmentation) requires Tensorflow and does not necessarily yield better results. You can save an additional 1.3 GB by uninstalling it: `pip uninstall tensorflow`.\
A minimal installation with carefully chosen pose models and without GPU support, Tensorflow, or PyQt5 **would take less than 3 GB**.\
-
+
@@ -217,7 +215,7 @@ All of them are clearly documented: feel free to play with them!
- Go to File > Load Motion, and load the joint angle .mot file in the `kinematics` folder.
- If you want to see the 3D marker locations, go to File > Preview Experimental Data, and load the .trc file in the `pose-3d` folder.
-
+
@@ -342,7 +340,7 @@ from Pose2Sim import Pose2Sim
Pose2Sim.poseEstimation()
```
-
+
@@ -351,7 +349,7 @@ Pose2Sim.poseEstimation()
*N.B.:* Pose estimation can be dramatically sped up by increasing the value of `det_frequency`. In that case, person detection is only run every `det_frequency` frames, and bounding boxes are tracked in between (keypoint estimation is still performed on all frames).\
*N.B.:* Activating `tracking` will attempt to give consistent IDs to the same people across frames, which might facilitate synchronization if other people are in the background.
-
+
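The two options above might look like this in `Config.toml`. This is a hypothetical fragment: the `[pose]` section name and the example values are assumptions; only `det_frequency` and `tracking` are the parameters quoted above, so check the keys against your own `Config.toml`.

```toml
[pose]                 # section name is an assumption
det_frequency = 4      # run person detection every 4 frames, track bounding boxes in between
tracking = true        # try to keep consistent person IDs across frames
```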
@@ -438,12 +436,12 @@ Pose2Sim.calibration()
```
-
+
Output file:
-
+
### Convert from Qualisys, Optitrack, Vicon, OpenCap, EasyMocap, or bioCV
@@ -497,7 +495,7 @@ If you already have a calibration file, set `calibration_type` to `convert`
- is flat, without reflections, surrounded by a wide white border, and is not rotationally invariant (Nrows ≠ Ncols, and Nrows odd if Ncols even). Go to [calib.io](https://calib.io/pages/camera-calibration-pattern-generator) to generate a suitable checkerboard.
- A common error is to specify the external instead of the internal number of corners, which is one less per dimension than you might intuitively count (e.g., a board with 11×8 squares has 10×7 internal corners).
-
+
***Intrinsic calibration error should be below 0.5 px.***
@@ -521,7 +519,7 @@ If you already have a calibration file, set `calibration_type` to `convert`
For a more automatic procedure, OpenPose keypoints could also be used for calibration.\
**COMING SOON!**
-
+
***Extrinsic calibration error should be below 1 cm; depending on your application, results may still be acceptable up to 2.5 cm.***
@@ -543,14 +541,14 @@ from Pose2Sim import Pose2Sim
Pose2Sim.synchronization()
```
-
+
For each camera, this computes the mean vertical speed of the chosen keypoints, and finds the time offset for which their correlation is highest.\
All keypoints can be taken into account, or a subset of them. The user can also specify a time for each camera when only one participant is in the scene, preferably performing a clear vertical motion.
-
+
*N.B.:* Works best when:
- only one participant is in the scene (set `approx_time_maxspeed` and `time_range_around_maxspeed` accordingly)
@@ -577,7 +575,7 @@ from Pose2Sim import Pose2Sim
Pose2Sim.personAssociation()
```
-
+
@@ -598,7 +596,7 @@ from Pose2Sim import Pose2Sim
Pose2Sim.triangulation()
```
-
+
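Multi-view triangulation of this kind is commonly done with a direct linear transform (DLT). The sketch below is a generic textbook version, not Pose2Sim's actual code, which also handles weighting and outlier cameras.

```python
import numpy as np

def triangulate_dlt(projection_matrices, points_2d):
    """Triangulate one 3D point from its 2D projections in several cameras.
    Each 3x4 projection matrix P maps homogeneous world points to image points;
    each (u, v) observation contributes two linear constraints on the point."""
    A = []
    for P, (u, v) in zip(projection_matrices, points_2d):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    # The homogeneous 3D point is the null vector of A: last row of V from the SVD.
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]   # back to Euclidean coordinates
```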
@@ -619,14 +617,14 @@ from Pose2Sim import Pose2Sim
Pose2Sim.filtering()
```
-
+
Check your filtering with the displayed figures, and visualize your .trc file in OpenSim. If the result is not satisfactory, try changing the parameters in the `Config.toml` file.
Output:\
-
+
@@ -654,7 +652,7 @@ from Pose2Sim import Pose2Sim
Pose2Sim.markerAugmentation()
```
-
+
@@ -680,11 +678,11 @@ from Pose2Sim import Pose2Sim
Pose2Sim.kinematics()
```
-
+
-
+
-
+
Once you have the scaled model and the joint angles, you are free to go further: inverse dynamics, muscle analysis, etc. (make sure to first add muscles from [the Pose2Sim model with muscles](Pose2Sim\OpenSim_Setup\Model_Pose2Sim_Body25b_contacts_muscles.osim)).
@@ -853,7 +851,7 @@ Reprojects 3D coordinates of a trc file to the image planes defined by a calibra
-
+
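The reprojection step can be sketched with a plain pinhole model. This is a simplified illustration assuming no lens distortion, not Pose2Sim's actual implementation, which reads the camera parameters from the calibration file.

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """Project Nx3 world points to pixel coordinates with a pinhole camera:
    K is the 3x3 intrinsic matrix, R and t the world-to-camera rotation and
    translation (distortion ignored for brevity)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])                       # 3x4 projection matrix
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # Nx4 homogeneous points
    uvw = (P @ homog.T).T                                         # Nx3 image-plane coords
    return uvw[:, :2] / uvw[:, 2:3]                               # perspective divide
```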
diff --git a/README.md b/README.md
index 184809e..ddf148d 100644
--- a/README.md
+++ b/README.md
@@ -37,8 +37,6 @@ and lens distortions are better taken into account.\ -->
Pose2Sim stands for "OpenPose to OpenSim", as it originally used *OpenPose* inputs (2D keypoint coordinates) and led to an OpenSim result (full-body 3D joint angles). Pose estimation is now performed with more recent models from [RTMPose](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose), and custom models (from [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) for example) can also be used.
-
-