# EasyMocap

**EasyMocap** is an open-source toolbox for **markerless human motion capture** from RGB videos.

## Features

- [x] multi-view, single person => 3D body keypoints
- [x] multi-view, single person => SMPL parameters

|:heavy_check_mark: Skeleton|:heavy_check_mark: SMPL|
|----|----|
|![repro](doc/feng/repro_512.gif)|![smpl](doc/feng/smpl_512.gif)|

> The following features are not released yet. We are working hard on them. Please stay tuned!

|Input|Output|
|----|----|
|multi-view, single person|whole-body 3D keypoints|
|multi-view, single person|SMPL-H/SMPLX/MANO parameters|
|sparse view, single person|dense reconstruction and view synthesis: [NeuralBody](https://zju3dv.github.io/neuralbody/)|

|:black_square_button: Whole Body|:black_square_button: [Detailed Mesh](https://zju3dv.github.io/neuralbody/)|
|----|----|
| | |
## Installation

### 1. Download SMPL models

To download the *SMPL* model, go to [this](http://smpl.is.tue.mpg.de) (male and female models, version 1.0.0, 10 shape PCs) and [this](http://smplify.is.tue.mpg.de) (gender-neutral model) project website and register to get access to the downloads section. Prepare the models as described in [smplx](https://github.com/vchoutas/smplx#model-loading). **Place them as follows:**

```bash
data
└── smplx
    ├── J_regressor_body25.npy
    └── smpl
        ├── SMPL_FEMALE.pkl
        ├── SMPL_MALE.pkl
        └── SMPL_NEUTRAL.pkl
```

### 2. Requirements

- torch==1.4.0
- torchvision==0.5.0
- opencv-python
- [pyrender](https://pyrender.readthedocs.io/en/latest/install/index.html#python-installation): for visualization
- chumpy: for loading the SMPL model

Some of the Python libraries are listed in `requirements.txt`. Other versions of PyTorch may also work.

## Quick Start

We provide an example multi-view dataset [[dropbox](https://www.dropbox.com/s/24mb7r921b1g9a7/zju-ls-feng.zip?dl=0)][[BaiduDisk](https://pan.baidu.com/s/1lvAopzYGCic3nauoQXjbPw) (code: `vg1z`)]. After downloading the dataset, you can run the following example scripts:

```bash
data=path/to/data
out=path/to/output
# 0. extract the videos to images
python3 scripts/preprocess/extract_video.py ${data}
# 1. example of skeleton reconstruction
python3 code/demo_mv1pmf_skel.py ${data} --out ${out} --vis_det --vis_repro --undis --sub_vis 1 7 13 19
# 2. example of SMPL reconstruction
python3 code/demo_mv1pmf_smpl.py ${data} --out ${out} --end 300 --vis_smpl --undis --sub_vis 1 7 13 19
```

## Not Quick Start

### 0. Prepare Your Own Dataset

```bash
zju-ls-feng
├── extri.yml
├── intri.yml
└── videos
    ├── 1.mp4
    ├── 2.mp4
    ├── ...
    ├── 8.mp4
    └── 9.mp4
```

The input videos are placed in `videos/`. `intri.yml` and `extri.yml` store the camera intrinsic and extrinsic parameters. For example, if a video is named `1.mp4`, then `intri.yml` must contain `K_1` ((3, 3) intrinsic matrix) and `dist_1` (distortion coefficients), and `extri.yml` must contain `R_1` ((3, 1) rotation vector of the camera) and `T_1` ((3, 1) translation vector). The files follow the [OpenCV file format](https://docs.opencv.org/master/dd/d74/tutorial_file_input_output_with_xml_yml.html); a minimal reading sketch is included at the end of this README.

### 1. Run [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose)

```bash
data=path/to/data
out=path/to/output
python3 scripts/preprocess/extract_video.py ${data} --openpose
```
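When run with its standard `--write_json` option, OpenPose writes one JSON file per frame. The snippet below is a minimal, hypothetical sketch of parsing BODY_25 keypoints from one such file; the filename is a placeholder, and how this repo names and stores the results may differ:

```python
import json

import numpy as np

# Parse one frame of OpenPose's standard JSON output (--write_json).
with open('000000000000_keypoints.json') as f:  # placeholder filename
    frame = json.load(f)

for person in frame['people']:
    # The flat list [x0, y0, c0, x1, y1, c1, ...] holds 25 BODY_25 keypoints.
    keypoints = np.array(person['pose_keypoints_2d']).reshape(-1, 3)
    xy, conf = keypoints[:, :2], keypoints[:, 2]
    print(xy.shape, float(conf.mean()))  # (25, 2) pixel coords, mean confidence
```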
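Referring back to step 0: since `intri.yml` and `extri.yml` use the OpenCV file format, they can be read with OpenCV's `FileStorage` API. This is a minimal sketch (not part of this repo) for camera `1`, assuming the key names described above:

```python
import cv2

# Read camera "1" from the OpenCV-format YAML files described in step 0.
intri = cv2.FileStorage('path/to/data/intri.yml', cv2.FILE_STORAGE_READ)
extri = cv2.FileStorage('path/to/data/extri.yml', cv2.FILE_STORAGE_READ)

K = intri.getNode('K_1').mat()        # (3, 3) intrinsic matrix
dist = intri.getNode('dist_1').mat()  # distortion coefficients
rvec = extri.getNode('R_1').mat()     # (3, 1) rotation vector
T = extri.getNode('T_1').mat()        # (3, 1) translation vector
R, _ = cv2.Rodrigues(rvec)            # rotation vector -> (3, 3) matrix

intri.release()
extri.release()
```

Under the usual OpenCV pinhole convention, a world point `X` then projects to the image as `x ~ K (R X + T)` (before applying distortion), which gives a quick consistency check for your calibration.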
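Finally, if the SMPL demos fail, a quick way to sanity-check step 1 of the installation is to load the models directly with [smplx](https://github.com/vchoutas/smplx). A sketch, assuming the `data/smplx` layout shown above (`chumpy` must be installed to unpickle the official `.pkl` files):

```python
import torch
import smplx

# With these arguments smplx looks for data/smplx/smpl/SMPL_NEUTRAL.pkl.
model = smplx.create('data/smplx', model_type='smpl', gender='neutral')
output = model(betas=torch.zeros(1, 10))  # zero pose, zero shape
print(output.vertices.shape)  # expected: torch.Size([1, 6890, 3])
```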