From 3261ab71a5c0f8895281378a0bfb537e93da626a Mon Sep 17 00:00:00 2001
From: David PAGNON
Date: Sat, 17 Feb 2024 22:53:18 +0100
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 270e68f..6b6c673 100644
--- a/README.md
+++ b/README.md
@@ -336,7 +336,7 @@ However, it is less robust and accurate than OpenPose, and can only detect a sin
 * Make sure you changed the `pose_model` and the `tracked_keypoint` in the [Config.toml](https://github.com/perfanalytics/pose2sim/blob/main/Pose2Sim/Demo/S01_Empty_Session/Config.toml) file.
 
 ### With DeepLabCut:
-If you need to detect specific points on a human being, an animal, or an object, you can also train your own model with [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut).
+If you need to detect specific points on a human being, an animal, or an object, you can also train your own model with [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut). In this case, Pose2Sim is used as an alternative to [AniPose](https://github.com/lambdaloop/anipose), but it may yield better results since 3D reconstruction takes confidence into account (see [this article](https://doi.org/10.1080/21681163.2023.2292067)).
 1. Train your DeepLabCut model and run it on your images or videos (more instruction on their repository)
 2. Translate the h5 2D coordinates to json files (with `DLC_to_OpenPose.py` script, see [Utilities](#utilities)):
 ``` cmd