diff --git a/Readme.md b/Readme.md
index 55b00c2..3c0beeb 100644
--- a/Readme.md
+++ b/Readme.md
@@ -8,6 +8,10 @@
# EasyMocap
+
+![EasyMocap logo](logo.png)
+
**EasyMocap** is an open-source toolbox for **markerless human motion capture** from RGB videos. In this project, we provide a lot of motion capture demos in different settings.
![python](https://img.shields.io/github/languages/top/zju3dv/EasyMocap)
@@ -19,7 +23,7 @@
### Multiple views of a single person
-[![report](https://img.shields.io/badge/quickstart-green)](./doc/quickstart.md)
+[![report](https://img.shields.io/badge/quickstart-green)](./doc/quickstart.md) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Cyvu_lPFUajr2RKt6yJIfS3HQIIYl6QU?usp=sharing)
This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3]/MANO[2] model to capture body+hand+face poses from multiple views.
@@ -132,7 +136,7 @@ With our proposed method, we release two large dataset of human motion: LightSta
- [Exporting of multiple data formats(bvh, asf/amc, ...)](./doc/02_output.md)
## Updates
-
+- 08/09/2021: Add a Colab demo [here](https://colab.research.google.com/drive/1Cyvu_lPFUajr2RKt6yJIfS3HQIIYl6QU?usp=sharing).
- 06/28/2021: The **Multi-view Multi-person** part is released!
- 06/10/2021: The **real-time 3D visualization** part is released!
- 04/11/2021: The calibration tool and the annotator are released.
@@ -160,7 +164,7 @@ Please open an issue if you have any questions. We appreciate all contributions
## Contributor
-EasyMocap is **authored by** [**Qing Shuai**](https://chingswy.github.io/), [**Qi Fang**](https://raypine.github.io/), [**Junting Dong**](https://jtdong.com/), [**Sida Peng**](https://pengsida.net/), **Di Huang**, **Hujun Bao**, **and** [**Xiaowei Zhou**](https://xzhou.me/).
+EasyMocap is **built by** researchers from the 3D vision group of Zhejiang University: [**Qing Shuai**](https://chingswy.github.io/), [**Qi Fang**](https://raypine.github.io/), [**Junting Dong**](https://jtdong.com/), [**Sida Peng**](https://pengsida.net/), **Di Huang**, [**Hujun Bao**](http://www.cad.zju.edu.cn/home/bao/), **and** [**Xiaowei Zhou**](https://xzhou.me/).
We would like to thank Wenduo Feng, Di Huang, Yuji Chen, Hao Xu, Qing Shuai, Qi Fang, Ting Xie, Junting Dong, Sida Peng and Xiaopeng Ji who are the performers in the sample data. We would also like to thank all the people who have helped EasyMocap [in any way](https://github.com/zju3dv/EasyMocap/graphs/contributors).
diff --git a/apps/calibration/Readme.md b/apps/calibration/Readme.md
index 6102182..2eb9ef7 100644
--- a/apps/calibration/Readme.md
+++ b/apps/calibration/Readme.md
@@ -27,6 +27,15 @@ First, you should record a video with your chessboard for each camera separately
└── xx.mp4
```
+In this tutorial, we use our sample dataset as an example. In that dataset, the intrinsic data is organized as shown below.
+
+![Example Intrinsic Dataset](./assets/intri_sample.png)
+
For the extrinsic parameters, you should place the chessboard pattern where it will be visible to all the cameras (on the floor for example) and then take a picture or a short video on all of the cameras.
```bash
@@ -38,10 +47,20 @@ For the extrinsic parameters, you should place the chessboard pattern where it w
└── xx.mp4
```
+The sample extrinsic data is shown below.
+
+![Example Extrinsic Dataset](./assets/extri_sample.png)
+
## 2. Detect the chessboard
For both the intrinsic and the extrinsic parameters, we need to detect the corners of the chessboard. So in this step, we first extract images from the videos and then detect and write the corners.
```bash
-# extrac 2d
+# extract 2d
python3 scripts/preprocess/extract_video.py ${data} --no2d
# detect chessboard
python3 apps/calibration/detect_chessboard.py ${data} --out ${data}/output/calibration --pattern 9,6 --grid 0.1
@@ -52,31 +71,162 @@ To specify your chessboard, add the option `--pattern`, `--grid`.
Repeat this step for both the intrinsic data and the extrinsic data.
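+
+For example, if your chessboard differs from the default, pass your own values to `--pattern` and `--grid` (the numbers below are only illustrative; use the ones that match your board):
+
+```bash
+# illustrative values: a board with 11x8 inner corners and a 0.05 grid size
+# (the grid size uses the same unit as the 0.1 in the command above)
+python3 apps/calibration/detect_chessboard.py ${data} --out ${data}/output/calibration --pattern 11,8 --grid 0.05
+```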
+After this step, you should get results like the pictures below.
+
+![Result of Detecting Extrinsic Dataset](./assets/extri_chessboard.jpg)
+
+![Result of Detecting Intrinsic Dataset](./assets/intri_chessboard.jpg)
+
+## 2.5 Finetune the Chessboard Detection Result
+
+Detecting the chessboard keypoints correctly is vital for calibration. **Thus we highly recommend that you carefully inspect the visualization results in ${data}/output.** If you find that some detections are wrong, we provide a tool to correct them.
+
+```bash
+python apps/annotation/annot_calib.py ${data} --mode chessboard --pattern 9,6 --annot chessboard
+```
+
+After running the script above, an OpenCV GUI window will appear, as shown below:
+
+![Calibration Annotation Toolkit GUI Interface](./assets/ft1.png)
+
+> This tool is a component of our annotation toolkit, so the key mappings are similar. To learn more about our annotation tools, please check [the document](../annotation/Readme.md).
+
+At the same time, the CLI prints some auxiliary information:
+
+![CLI Prompt of the Annotation Tool](./assets/ft2.png)
+
+From the CLI prompt you can see the current status and which point you are labeling.
+
+In the GUI, the corner currently being edited is highlighted by a red circle. To correct it, click the right position with the mouse; a white "+" anchor will appear there.
+
+![Use mouse to specify the correct position](./assets/ft3.png)
+
+If the newly specified coordinate (marked by the white anchor) is the correct position for this corner, press `Space` to confirm, and the corner will be moved there.
+
+![The result after modifying the position of the point](./assets/ft4.png)
+
+After you finish modifying this point, press `Space` to move on to the next point.
+
+![Press Space to move on to the next point](./assets/ft5.png)
+
+> Currently we only support moving to the next point. If you want to go back to a previous point, press `Space` repeatedly until the selection wraps around to the start.
+
+If you're satisfied with this frame, press `D` to move on to the next frame.
+
+![Press D to move on to the next frame](./assets/ft6.png)
+
+Press `A` to move back to the previous frame.
+
+After annotating every frame, press `q` to quit. The CLI will then ask whether you want to save the annotation: press `Y` to save it and `N` to discard it.
+
+![CLI prompt to save the result. Press Y to save and N to discard](./assets/ft7.png)
+
+> If your data is on a remote server, the OpenCV GUI may be too slow to operate when you run the script through SSH X forwarding. We recommend mounting the remote data directory with `sshfs` and running this script locally.
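+> For example, a typical `sshfs` workflow looks like this (the user name, host, and paths below are placeholders; replace them with your own):
+> ```bash
+> # mount the remote data directory locally via sshfs
+> mkdir -p ~/remote_data
+> sshfs user@server:/path/to/remote/data ~/remote_data
+> # run the annotation tool on the mounted directory
+> python apps/annotation/annot_calib.py ~/remote_data --mode chessboard --pattern 9,6 --annot chessboard
+> # unmount when you are done (on macOS, use `umount ~/remote_data` instead)
+> fusermount -u ~/remote_data
+> ```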
+
+
## 3. Intrinsic Parameter Calibration
+After detecting the chessboard, you can calibrate the intrinsic parameters.
+
```bash
python3 apps/calibration/calib_intri.py ${data} --step 5
```
+After the script finishes, you'll get `intri.yml` under `${data}/output`.
+
+> This step may take a long time, so please be patient. :-)
+
## 4. Extrinsic Parameter Calibration
+
+
+Then you can calibrate the extrinsic parameters.
+
```bash
python3 apps/calibration/calib_extri.py ${extri} --intri ${intri}/output/intri.yml
```
+After the script finishes, you'll get `extri.yml` under `${intri}/output`.
+
## 5. (Optional) Bundle Adjustment
Coming soon
## 6. Check the calibration
-1. Check the calibration results with chessboard:
+To check whether your camera parameters are correct, we provide several ways to verify them.
+
+1. **Check the calibration results with chessboard:**
```bash
python3 apps/calibration/check_calib.py ${extri} --out ${intri}/output --vis --show
```
-Check the results with a cube.
+A visualization window will be shown for checking:
+
+![Use chessboard to check results](./assets/vis_check.png)
+
+**Check the results with a cube.**
```bash
python3 apps/calibration/check_calib.py ${extri} --out ${extri}/output --cube
```
+You'll get the results in `${data}/output/cube`.
+
+![Use cube to check results](./assets/cube.jpg)
+
2. (TODO) Check the calibration results with people.
\ No newline at end of file
diff --git a/apps/calibration/assets/cube.jpg b/apps/calibration/assets/cube.jpg
new file mode 100644
index 0000000..d3c6674
Binary files /dev/null and b/apps/calibration/assets/cube.jpg differ
diff --git a/apps/calibration/assets/extri_chessboard.jpg b/apps/calibration/assets/extri_chessboard.jpg
new file mode 100644
index 0000000..4d60338
Binary files /dev/null and b/apps/calibration/assets/extri_chessboard.jpg differ
diff --git a/apps/calibration/assets/extri_sample.png b/apps/calibration/assets/extri_sample.png
new file mode 100644
index 0000000..1321dd4
Binary files /dev/null and b/apps/calibration/assets/extri_sample.png differ
diff --git a/apps/calibration/assets/ft1.png b/apps/calibration/assets/ft1.png
new file mode 100644
index 0000000..7868b48
Binary files /dev/null and b/apps/calibration/assets/ft1.png differ
diff --git a/apps/calibration/assets/ft2.png b/apps/calibration/assets/ft2.png
new file mode 100644
index 0000000..9dcea58
Binary files /dev/null and b/apps/calibration/assets/ft2.png differ
diff --git a/apps/calibration/assets/ft3.png b/apps/calibration/assets/ft3.png
new file mode 100644
index 0000000..6e352cf
Binary files /dev/null and b/apps/calibration/assets/ft3.png differ
diff --git a/apps/calibration/assets/ft4.png b/apps/calibration/assets/ft4.png
new file mode 100644
index 0000000..06fc9ad
Binary files /dev/null and b/apps/calibration/assets/ft4.png differ
diff --git a/apps/calibration/assets/ft5.png b/apps/calibration/assets/ft5.png
new file mode 100644
index 0000000..7d084da
Binary files /dev/null and b/apps/calibration/assets/ft5.png differ
diff --git a/apps/calibration/assets/ft6.png b/apps/calibration/assets/ft6.png
new file mode 100644
index 0000000..cbe64d2
Binary files /dev/null and b/apps/calibration/assets/ft6.png differ
diff --git a/apps/calibration/assets/ft7.png b/apps/calibration/assets/ft7.png
new file mode 100644
index 0000000..7079a90
Binary files /dev/null and b/apps/calibration/assets/ft7.png differ
diff --git a/apps/calibration/assets/intri_chessboard.jpg b/apps/calibration/assets/intri_chessboard.jpg
new file mode 100644
index 0000000..d28d6f9
Binary files /dev/null and b/apps/calibration/assets/intri_chessboard.jpg differ
diff --git a/apps/calibration/assets/intri_sample.png b/apps/calibration/assets/intri_sample.png
new file mode 100644
index 0000000..076dbf0
Binary files /dev/null and b/apps/calibration/assets/intri_sample.png differ
diff --git a/apps/calibration/assets/vis_check.png b/apps/calibration/assets/vis_check.png
new file mode 100644
index 0000000..831f2ae
Binary files /dev/null and b/apps/calibration/assets/vis_check.png differ
diff --git a/logo.png b/logo.png
new file mode 100644
index 0000000..c6471fe
Binary files /dev/null and b/logo.png differ
diff --git a/scripts/preprocess/extract_video.py b/scripts/preprocess/extract_video.py
index 21ae7af..4cb912d 100644
--- a/scripts/preprocess/extract_video.py
+++ b/scripts/preprocess/extract_video.py
@@ -44,18 +44,26 @@ def extract_2d(openpose, image, keypoints, render, args):
skip = True
if not skip:
os.makedirs(keypoints, exist_ok=True)
- cmd = './build/examples/openpose/openpose.bin --image_dir {} --write_json {} --display 0'.format(image, keypoints)
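+        # pick the OpenPose binary per platform: the Linux/macOS build on non-Windows, OpenPoseDemo.exe (with absolute paths) on Windows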
+ if os.name != 'nt':
+ cmd = './build/examples/openpose/openpose.bin --image_dir {} --write_json {} --display 0'.format(image, keypoints)
+ else:
+ cmd = 'bin\\OpenPoseDemo.exe --image_dir {} --write_json {} --display 0'.format(join(os.getcwd(),image), join(os.getcwd(),keypoints))
if args.highres!=1:
cmd = cmd + ' --net_resolution -1x{}'.format(int(16*((368*args.highres)//16)))
if args.handface:
cmd = cmd + ' --hand --face'
if args.render:
- cmd = cmd + ' --write_images {}'.format(render)
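+            # make sure the render output directory exists before passing its absolute path to OpenPose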
+            os.makedirs(join(os.getcwd(), render), exist_ok=True)
+            cmd = cmd + ' --write_images {}'.format(join(os.getcwd(), render))
else:
cmd = cmd + ' --render_pose 0'
os.chdir(openpose)
os.system(cmd)
+
import json
def read_json(path):
with open(path) as f:
@@ -124,8 +132,9 @@ def load_openpose(opname):
out.append(annot)
return out
-def convert_from_openpose(src, dst, annotdir):
+def convert_from_openpose(path_orig, src, dst, annotdir):
# convert the 2d pose from openpose
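+    # extract_2d() may have changed the working directory to the openpose folder, so restore the original one first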
+ os.chdir(path_orig)
inputlist = sorted(os.listdir(src))
for inp in tqdm(inputlist, desc='{:10s}'.format(os.path.basename(dst))):
annots = load_openpose(join(src, inp))
@@ -212,7 +221,7 @@ def extract_yolo_hrnet(image_root, annot_root, ext='jpg', use_low=False):
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
- parser.add_argument('path', type=str, default=None, help="the path of data")
+ parser.add_argument('path', type=str, help="the path of data")
parser.add_argument('--mode', type=str, default='openpose', choices=['openpose', 'yolo-hrnet'], help="model to extract joints from image")
parser.add_argument('--ext', type=str, default='jpg', choices=['jpg', 'png'], help="image file extension")
parser.add_argument('--annot', type=str, default='annots', help="sub directory name to store the generated annotation files, default to be annots")
@@ -235,6 +244,7 @@ if __name__ == "__main__":
parser.add_argument('--gtbbox', action='store_true',
help='use the ground-truth bounding box, and hrnet to estimate human pose')
parser.add_argument('--debug', action='store_true')
+    parser.add_argument('--path_origin', default=os.getcwd(), help="the working directory to restore before converting the openpose results")
args = parser.parse_args()
mode = args.mode
@@ -266,6 +276,7 @@ if __name__ == "__main__":
join(args.path, 'openpose', sub),
join(args.path, 'openpose_render', sub), args)
convert_from_openpose(
+ path_orig=args.path_origin,
src=join(args.path, 'openpose', sub),
dst=annot_root,
annotdir=args.annot