From 1f6f9443857d5e69f84085e2be50387e6a1f28d8 Mon Sep 17 00:00:00 2001
From: David PAGNON
Date: Wed, 1 Nov 2023 17:23:33 +0100
Subject: [PATCH] Update README.md

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 163b8cd..68c4834 100644
--- a/README.md
+++ b/README.md
@@ -203,7 +203,7 @@ If you need to detect specific points on a human being, an animal, or an object,
 
 ### With AlphaPose:
 [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) is one of the main competitors of OpenPose, and its accuracy is comparable. As a top-down approach (unlike OpenPose which is bottom-up), it is faster on single-person detection, but slower on multi-person detection.\
-All AlphaPose models are supported (HALPE_26, HALPE_68, HALPE_136, COCO_133, COCO, MPII).
+All AlphaPose models are supported (HALPE_26, HALPE_68, HALPE_136, COCO_133, COCO, MPII). For COCO and MPII, AlphaPose must be run with the flag "--format cmu".
 * Install and run AlphaPose on your videos (more instruction on their repository)
 * Translate the AlphaPose single json file to OpenPose frame-by-frame files (with `AlphaPose_to_OpenPose.py` script, see [Utilities](#utilities)):
 ``` cmd
@@ -849,6 +849,7 @@ If you want to contribute to Pose2Sim, please follow [this guide](https://docs.g
 
 ✔ **Pose:** Support [BlazePose](https://developers.google.com/mediapipe/solutions/vision/pose_landmarker) for faster inference (on mobile device).
 ✔ **Pose:** Support [DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) for training on custom datasets.
 ✔ **Pose:** Support [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) as an alternative to OpenPose.
+✔ **Pose:** Define custom model in config.toml rather than in skeletons.py.
 ▢ **Pose:** Support [MMPose](https://github.com/open-mmlab/mmpose), [SLEAP](https://sleap.ai/), etc.
 ▢ **Pose:** Access skeletons more easily by storing them in skeletons.toml.