Merge branch 'master' of github.com:zju3dv/EasyMocap

This commit is contained in:
shuaiqing 2021-09-06 13:29:44 +08:00
commit 2544c970ef
17 changed files with 175 additions and 10 deletions

View File

@ -8,6 +8,10 @@
# EasyMocap
<div align="left">
<img src="logo.png" width="20%">
</div>
**EasyMocap** is an open-source toolbox for **markerless human motion capture** from RGB videos. In this project, we provide a lot of motion capture demos in different settings.
![python](https://img.shields.io/github/languages/top/zju3dv/EasyMocap)
@ -19,7 +23,7 @@
### Multiple views of a single person
[![report](https://img.shields.io/badge/quickstart-green)](./doc/quickstart.md)
[![report](https://img.shields.io/badge/quickstart-green)](./doc/quickstart.md) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Cyvu_lPFUajr2RKt6yJIfS3HQIIYl6QU?usp=sharing)
This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3]/MANO[2] model to capture body+hand+face poses from multiple views.
@ -132,7 +136,7 @@ With our proposed method, we release two large dataset of human motion: LightSta
- [Exporting of multiple data formats(bvh, asf/amc, ...)](./doc/02_output.md)
## Updates
- 08/09/2021: Add a colab demo [here](https://colab.research.google.com/drive/1Cyvu_lPFUajr2RKt6yJIfS3HQIIYl6QU?usp=sharing).
- 06/28/2021: The **Multi-view Multi-person** part is released!
- 06/10/2021: The **real-time 3D visualization** part is released!
- 04/11/2021: The calibration tool and the annotator are released.
@ -160,7 +164,7 @@ Please open an issue if you have any questions. We appreciate all contributions
## Contributor
EasyMocap is **authored by** [**Qing Shuai**](https://chingswy.github.io/), [**Qi Fang**](https://raypine.github.io/), [**Junting Dong**](https://jtdong.com/), [**Sida Peng**](https://pengsida.net/), **Di Huang**, **Hujun Bao**, **and** [**Xiaowei Zhou**](https://xzhou.me/).
EasyMocap is **built by** researchers from the 3D vision group of Zhejiang University: [**Qing Shuai**](https://chingswy.github.io/), [**Qi Fang**](https://raypine.github.io/), [**Junting Dong**](https://jtdong.com/), [**Sida Peng**](https://pengsida.net/), **Di Huang**, [**Hujun Bao**](http://www.cad.zju.edu.cn/home/bao/), **and** [**Xiaowei Zhou**](https://xzhou.me/).
We would like to thank Wenduo Feng, Di Huang, Yuji Chen, Hao Xu, Qing Shuai, Qi Fang, Ting Xie, Junting Dong, Sida Peng and Xiaopeng Ji, who are the performers in the sample data. We would also like to thank all the people who have helped EasyMocap [in any way](https://github.com/zju3dv/EasyMocap/graphs/contributors).

View File

@ -27,6 +27,15 @@ First, you should record a video with your chessboard for each camera separately
└── xx.mp4
```
In this tutorial, we use our sample dataset as an example. In that dataset, the intrinsic data looks like the picture below.
<div align="center">
<img src="assets/intri_sample.png" width="60%">
<br>
<sup>Example Intrinsic Dataset</sup>
</div>
For the extrinsic parameters, you should place the chessboard pattern where it will be visible to all the cameras (on the floor, for example) and then take a picture or a short video with all of the cameras.
```bash
@ -38,10 +47,20 @@ For the extrinsic parameters, you should place the chessboard pattern where it w
└── xx.mp4
```
The sample extrinsic data looks like the picture below.
<div align="center">
<img src="assets/extri_sample.png" width="60%">
<br>
<sup>Example Extrinsic Dataset</sup>
</div>
## 2. Detect the chessboard
For both the intrinsic and extrinsic parameters, we need to detect the corners of the chessboard. In this step, we first extract images from the videos and then detect and save the corners.
```bash
# extrac 2d
# extract 2d
python3 scripts/preprocess/extract_video.py ${data} --no2d
# detect chessboard
python3 apps/calibration/detect_chessboard.py ${data} --out ${data}/output/calibration --pattern 9,6 --grid 0.1
```
@ -52,31 +71,162 @@ To specify your chessboard, add the option `--pattern`, `--grid`.
Repeat this step for `<intri_data>` and `<extri_data>`.
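Under the hood, this kind of corner detection can be done with OpenCV's chessboard routines. A minimal sketch of the idea (not the actual `detect_chessboard.py` implementation; the frame path is made up) looks like this:

```python
import cv2

def detect_corners(image_path, pattern=(9, 6)):
    # Load one extracted frame and convert it to grayscale
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the inner chessboard corners; `pattern` matches the --pattern option
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    # Refine the corner locations to sub-pixel accuracy
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

corners = detect_corners('extri/images/1/000000.jpg')  # hypothetical frame path
```

The `--grid` option (the physical square size in meters) only matters later, when these 2D corners are matched against 3D board coordinates during calibration.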
After this step, you should get results like the pictures below.
<div align="center">
<img src="assets/extri_chessboard.jpg" width="60%">
<br>
<sup>Result of Detecting Extrinsic Dataset</sup>
</div>
<div align="center">
<img src="assets/intri_chessboard.jpg" width="60%">
<br>
<sup>Result of Detecting Intrinsic Dataset</sup>
</div>
## 2.5 Finetune the Chessboard Detection Result
Correctly detecting the chessboard keypoints is vital for calibration. **We therefore highly recommend that you carefully inspect the visualization results in ${data}/output.** If you find that some detection results are wrong, we provide a tool for correcting them.
```bash
python apps/annotation/annot_calib.py $data --mode chessboard --pattern 9,6 --annot chessboard
```
After running the script above, an OpenCV GUI window will appear, as shown below:
<div align="center">
<img src="assets/ft1.png" width="60%">
<br>
<sup>Calibration Annotation Toolkit GUI Interface</sup>
</div>
> This tool is a component of our annotation toolkit, so the key mappings are similar. To learn more about our annotation tools, please check [the document](../annotation/Readme.md).
At the same time, the CLI presents some auxiliary information.
<div align="center">
<img src="assets/ft2.png" width="60%">
<br>
<sup>CLI Prompt of the Annotation Tool</sup>
</div>
The CLI prompt shows this information and tells you which point you are currently labeling.
In the GUI, the corner currently being edited is highlighted with a red circle. To modify it, click the correct location with the mouse; a white "+" anchor will appear there.
<div align="center">
<img src="assets/ft3.png" width="60%">
<br>
<sup>Use the mouse to specify the correct position</sup>
</div>
If you think the newly specified coordinate (marked by the white anchor) is the correct position for this corner, rather than the old one, press `Space` to confirm. The corner position will then be updated.
<div align="center">
<img src="assets/ft4.png" width="60%">
<br>
<sup>The result after modifying the position of the point</sup>
</div>
After you finish modifying this point, press `Space` to move on to the next point.
<div align="center">
<img src="assets/ft5.png" width="60%">
<br>
<sup>Press Space to move on to the next point</sup>
</div>
> Currently, only moving forward to the next point is supported. If you want to return to a previous point, press `Space` repeatedly until the selection wraps back around to it.
If you're satisfied with this frame, press `D` to move on to the next frame.
<div align="center">
<img src="assets/ft6.png" width="60%">
<br>
<sup>Press D to move on to the next frame</sup>
</div>
Press `A` to move back to the previous frame.
After you finish annotating all frames, press `q` to quit.
<div align="center">
<img src="assets/ft7.png" width="40%">
<br>
<sup>CLI prompt to save the result. Press Y to save and N to discard</sup>
</div>
Then you can choose whether to save this annotation.
> If your data is on a remote server, the OpenCV GUI may be too slow to operate when the script is run directly over SSH X forwarding. We recommend mounting the remote data directory with `sshfs` and running this script locally.
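For the curious, the annotation workflow described above is essentially an OpenCV mouse callback plus a key-handling loop. The following is a heavily simplified, self-contained sketch of that interaction pattern (dummy corners and an invented window name; it is not the actual `annot_calib.py` code):

```python
import cv2
import numpy as np

corners = [(100, 100), (200, 100), (300, 100)]  # dummy corner list for illustration
idx = 0          # index of the corner currently being edited
clicked = None   # last mouse click, shown as the white "+" anchor

def on_mouse(event, x, y, flags, param):
    global clicked
    if event == cv2.EVENT_LBUTTONDOWN:
        clicked = (x, y)

cv2.namedWindow('annot')
cv2.setMouseCallback('annot', on_mouse)

while True:
    canvas = np.zeros((480, 640, 3), dtype=np.uint8)
    for i, pt in enumerate(corners):
        color = (0, 0, 255) if i == idx else (0, 255, 0)  # red = current corner
        cv2.circle(canvas, pt, 8, color, 2)
    if clicked is not None:
        cv2.drawMarker(canvas, clicked, (255, 255, 255), cv2.MARKER_CROSS, 12)
    cv2.imshow('annot', canvas)
    key = cv2.waitKey(30) & 0xFF
    if key == ord(' '):          # confirm the clicked position, then move to the next corner
        if clicked is not None:
            corners[idx] = clicked
            clicked = None
        idx = (idx + 1) % len(corners)
    elif key in (ord('d'), ord('q')):  # next frame / quit (saving omitted here)
        break

cv2.destroyAllWindows()
```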
## 3. Intrinsic Parameter Calibration
After the chessboard corners are extracted, you can calibrate the intrinsic parameters.
```bash
python3 apps/calibration/calib_intri.py ${data} --step 5
```
After the script finishes, you'll get `intri.yml` under `${data}/output`.
> This step may take a long time, so please be patient. :-)
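Conceptually, this step runs standard OpenCV intrinsic calibration on the corners detected for each camera. A minimal sketch of that idea (not the exact `calib_intri.py` code; the per-frame corner list is assumed to come from the previous step):

```python
import numpy as np
import cv2

def calibrate_intrinsics(corners_per_frame, image_size, pattern=(9, 6), grid=0.1):
    # 3D positions of the board corners in the board frame (z = 0),
    # spaced by the physical square size in meters (the --grid option)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * grid
    objpoints = [objp] * len(corners_per_frame)
    # Standard pinhole calibration: K is the camera matrix, dist the distortion
    # coefficients, and rms the reprojection error (a quick quality check)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, corners_per_frame, image_size, None, None)
    return K, dist, rms
```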
## 4. Extrinsic Parameter Calibration
Then you can calibrate the extrinsic parameters.
```bash
python3 apps/calibration/calib_extri.py ${extri} --intri ${intri}/output/intri.yml
```
After the script finishes, you'll get `extri.yml` under `${intri}/output`.
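Conceptually, the extrinsic step solves a PnP problem per camera: given the 3D positions of the chessboard corners in the world (e.g. on the floor) and their detected 2D locations, it recovers that camera's rotation and translation. A minimal sketch (not the exact `calib_extri.py` code; the inputs are assumed):

```python
import cv2

def calibrate_extrinsics(objp, corners, K, dist):
    # objp: (N, 3) corner positions in world coordinates (e.g. on the floor plane)
    # corners: (N, 1, 2) detected 2D corners in this camera's extrinsic image
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec               # world-to-camera rotation and translation
```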
## 5. (Optional) Bundle Adjustment
Coming soon
## 6. Check the calibration
1. Check the calibration results with chessboard:
To check whether your camera parameters are correct, we provide several ways to verify them; conceptually, each check reprojects known 3D points into every view (see the sketch after this list).
1. **Check the calibration results with the chessboard:**
```bash
python3 apps/calibration/check_calib.py ${extri} --out ${intri}/output --vis --show
```
Check the results with a cube.
A window will be shown for checking.
<div align="center">
<img src="assets/vis_check.png" width="60%">
<br>
<sup>Use chessboard to check results</sup>
</div>
**Check the results with a cube.**
```bash
python3 apps/calibration/check_calib.py ${extri} --out ${extri}/output --cube
```
You'll get results in `$data/output/cube`.
<div align="center">
<img src="assets/cube.jpg" width="60%">
<br>
<sup>Use cube to check results</sup>
</div>
2. (TODO) Check the calibration results with people.
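For reference, both checks above amount to projecting known 3D points (chessboard corners or cube vertices) into every view with the calibrated parameters and drawing them on the images; large offsets indicate a poor calibration. A minimal reprojection sketch (assumed inputs, not the actual `check_calib.py` code):

```python
import numpy as np
import cv2

def reproject(points3d, K, dist, R, t):
    # Project world-space 3D points into the image with the calibrated camera
    rvec, _ = cv2.Rodrigues(R)
    pts2d, _ = cv2.projectPoints(points3d.astype(np.float32), rvec, t, K, dist)
    return pts2d.reshape(-1, 2)

def draw_check(image, points3d, K, dist, R, t):
    # Draw the reprojected points; large offsets from the detected corners
    # (or a skewed cube) indicate an inaccurate calibration
    for x, y in reproject(points3d, K, dist, R, t):
        cv2.circle(image, (int(round(x)), int(round(y))), 4, (0, 0, 255), -1)
    return image
```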

14 binary image files added (new documentation images and logo.png); contents not shown.

View File

@ -44,18 +44,26 @@ def extract_2d(openpose, image, keypoints, render, args):
            skip = True
    if not skip:
        os.makedirs(keypoints, exist_ok=True)
        cmd = './build/examples/openpose/openpose.bin --image_dir {} --write_json {} --display 0'.format(image, keypoints)
        # Build the OpenPose command; the Windows demo binary needs absolute paths
        if os.name != 'nt':
            cmd = './build/examples/openpose/openpose.bin --image_dir {} --write_json {} --display 0'.format(image, keypoints)
        else:
            cmd = 'bin\\OpenPoseDemo.exe --image_dir {} --write_json {} --display 0'.format(join(os.getcwd(), image), join(os.getcwd(), keypoints))
        if args.highres != 1:
            # Scale the default 368px net height and round down to a multiple of 16
            cmd = cmd + ' --net_resolution -1x{}'.format(int(16*((368*args.highres)//16)))
        if args.handface:
            cmd = cmd + ' --hand --face'
        if args.render:
            cmd = cmd + ' --write_images {}'.format(render)
            # Ensure the render output directory exists before passing it to OpenPose
            if os.path.exists(join(os.getcwd(), render)):
                cmd = cmd + ' --write_images {}'.format(join(os.getcwd(), render))
            else:
                os.makedirs(join(os.getcwd(), render), exist_ok=True)
                cmd = cmd + ' --write_images {}'.format(join(os.getcwd(), render))
        else:
            cmd = cmd + ' --render_pose 0'
        os.chdir(openpose)
        os.system(cmd)
import json
def read_json(path):
    with open(path) as f:
@ -124,8 +132,9 @@ def load_openpose(opname):
        out.append(annot)
    return out
def convert_from_openpose(src, dst, annotdir):
def convert_from_openpose(path_orig, src, dst, annotdir):
    # convert the 2d pose from openpose
    # switch back to the original working directory (extract_2d may have chdir'd into the OpenPose folder)
    os.chdir(path_orig)
    inputlist = sorted(os.listdir(src))
    for inp in tqdm(inputlist, desc='{:10s}'.format(os.path.basename(dst))):
        annots = load_openpose(join(src, inp))
@ -212,7 +221,7 @@ def extract_yolo_hrnet(image_root, annot_root, ext='jpg', use_low=False):
if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('path', type=str, default=None, help="the path of data")
    parser.add_argument('path', type=str, help="the path of data")
    parser.add_argument('--mode', type=str, default='openpose', choices=['openpose', 'yolo-hrnet'], help="model to extract joints from image")
    parser.add_argument('--ext', type=str, default='jpg', choices=['jpg', 'png'], help="image file extension")
    parser.add_argument('--annot', type=str, default='annots', help="sub directory name to store the generated annotation files, default to be annots")
@ -235,6 +244,7 @@ if __name__ == "__main__":
    parser.add_argument('--gtbbox', action='store_true',
        help='use the ground-truth bounding box, and hrnet to estimate human pose')
    parser.add_argument('--debug', action='store_true')
    # remember the directory the script was launched from, so paths still resolve after os.chdir
    parser.add_argument('--path_origin', default=os.getcwd())
    args = parser.parse_args()
    mode = args.mode
@ -266,6 +276,7 @@ if __name__ == "__main__":
                join(args.path, 'openpose', sub),
                join(args.path, 'openpose_render', sub), args)
            convert_from_openpose(
                path_orig=args.path_origin,
                src=join(args.path, 'openpose', sub),
                dst=annot_root,
                annotdir=args.annot