<!--
 * @Date: 2021-01-13 20:32:12
 * @Author: Qing Shuai
 * @LastEditors: Qing Shuai
 * @LastEditTime: 2021-04-02 12:26:56
 * @FilePath: /EasyMocapRelease/Readme.md
-->

# EasyMocap

**EasyMocap** is an open-source toolbox for **markerless human motion capture** from RGB videos. In this project, we provide motion capture demos in a variety of settings.

![python](https://img.shields.io/github/languages/top/zju3dv/EasyMocap)
![star](https://img.shields.io/github/stars/zju3dv/EasyMocap?style=social)

----
2021-01-14 21:17:40 +08:00
2021-04-02 12:28:46 +08:00
## Core features
2021-01-17 21:08:07 +08:00
2021-04-02 12:28:46 +08:00
### Multiple views of single person
2021-01-14 21:17:40 +08:00
2021-04-02 12:28:46 +08:00
This is the basic code for fitting the SMPL[1]/SMPL+H[2]/SMPL-X[3] models to capture body, hand, and face poses from multiple views.
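Roughly, the pipeline first triangulates the detected 2D keypoints from the calibrated views into a 3D skeleton and then fits the body model to it (similar in spirit to TotalCapture, as noted in the acknowledgements). Below is a minimal, illustrative sketch of the linear (DLT) triangulation step for a single joint; the function and variable names are placeholders, not EasyMocap's actual API; see [doc/quickstart](doc/quickstart.md) for the real commands.

```python
# Illustrative sketch only: weighted linear (DLT) triangulation of one joint
# from multiple calibrated views. All names here are placeholders and are not
# part of the EasyMocap API.
import numpy as np

def triangulate_joint(projections, keypoints2d, confidences):
    """projections: (nViews, 3, 4) camera matrices P = K[R|t];
    keypoints2d: (nViews, 2) detected pixel coordinates;
    confidences: (nViews,) detection scores used as weights."""
    rows = []
    for P, (u, v), c in zip(projections, keypoints2d, confidences):
        # Each view contributes two homogeneous linear equations in X:
        #   u * (P[2] @ X) = P[0] @ X  and  v * (P[2] @ X) = P[1] @ X
        rows.append(c * (u * P[2] - P[0]))
        rows.append(c * (v * P[2] - P[1]))
    A = np.stack(rows)            # (2 * nViews, 4)
    _, _, vh = np.linalg.svd(A)   # least-squares solution: right singular
    X = vh[-1]                    # vector with the smallest singular value
    return X[:3] / X[3]           # dehomogenize to a 3D point
```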
<div align="center">
    <img src="doc/feng/mv1pmf-smplx.gif" width="80%">
    <br>
    <sup>Videos are from ZJU-MoCap, with 23 calibrated and synchronized cameras.</sup>
</div>

### Internet video with a mirror
[![report](https://img.shields.io/badge/mirrored-link-red)](https://arxiv.org/pdf/2104.00340.pdf)
<div align="center">
    <img src="https://raw.githubusercontent.com/zju3dv/Mirrored-Human/main/doc/assets/smpl-avatar.gif" width="80%">
    <br>
    <sup>This video is from <a href="https://www.youtube.com/watch?v=KOCJJ27hhIE">YouTube</a>.</sup>
</div>

### Multiple Internet videos with a specific action (Coming soon)
[![report](https://img.shields.io/badge/imocap-link-red)](https://arxiv.org/pdf/2008.07931.pdf)
<div align="center">
    <img src="doc/imocap/frame_00036_036.jpg" width="80%">
</div>

### Multiple views of multiple people (Coming soon)
[![report](https://img.shields.io/badge/mvpose-link-red)](https://arxiv.org/pdf/1901.04111.pdf)
### Others
This project is used by many other projects:
- [[CVPR21] Dense Reconstruction and View Synthesis from **Sparse Views**](https://zju3dv.github.io/neuralbody/)
## Other features
- [Camera calibration](./doc/todo.md)
- [Pose guided synchronization](./doc/todo.md)
- [Annotator](./doc/todo.md)
- [Export to multiple data formats (bvh, asf/amc, ...)](./doc/todo.md)
## Updates
- 04/02/2021: We are now rebuilding our project for `v0.2`; please stay tuned. `v0.1` is available at [this link](https://github.com/zju3dv/EasyMocap/releases/tag/v0.1).
## Installation
See [doc/install](./doc/install.md) for more instructions.
## Quick Start
See [doc/quickstart](doc/quickstart.md) for more instructions.
## Not Quick Start
See [doc/notquickstart](doc/notquickstart.md) for more instructions.
## Evaluation
The weight parameters can be set according to your data.
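As a rough illustration of what these weights control, the fitting objective is typically a weighted sum of a data term and regularizers; the sketch below uses placeholder term names, not EasyMocap's actual configuration keys.

```python
# Hypothetical sketch of how per-term weights enter a fitting objective.
# The term names below are placeholders, not EasyMocap's real configuration keys.
weights = {
    'keypoints3d': 1.0,    # data term: distance to the triangulated 3D joints
    'smooth_poses': 10.0,  # temporal smoothness of the pose parameters
    'reg_shapes': 0.1,     # keep the shape parameters close to the mean shape
}

def total_loss(losses):
    """Combine individual loss terms into one weighted objective."""
    return sum(weights[name] * value for name, value in losses.items())
```

In general, heavier smoothness and regularization weights help with noisy detections, while lighter ones preserve fast or subtle motion.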
More quantitative reports will be added in [doc/evaluation.md](doc/evaluation.md).
## Acknowledgements
Here are the great works this project is built upon:
- The SMPL models and layers are from MPII's [SMPL-X](https://github.com/vchoutas/smplx) project.
- Some functions are borrowed from [SPIN](https://github.com/nkolot/SPIN), [VIBE](https://github.com/mkocabas/VIBE), and [SMPLify-X](https://github.com/vchoutas/smplify-x).
- The method for fitting the 3D skeleton and the SMPL model is similar to [TotalCapture](http://www.cs.cmu.edu/~hanbyulj/totalcapture/), but without using point clouds.

We would also like to thank Wenduo Feng, the performer in the sample data.
## Contact
Please open an issue if you have any questions. We appreciate all contributions to improve our project.
## Citation
This project is part of our works [iMocap](https://zju3dv.github.io/iMoCap/), [Mirrored-Human](https://zju3dv.github.io/Mirrored-Human/), and [Neural Body](https://zju3dv.github.io/neuralbody/).

Please consider citing these works if you find this repo useful for your projects.
```bibtex
@inproceedings{dong2020motion,
    title={Motion capture from internet videos},
    author={Dong, Junting and Shuai, Qing and Zhang, Yuanqing and Liu, Xian and Zhou, Xiaowei and Bao, Hujun},
    booktitle={European Conference on Computer Vision},
    pages={210--227},
    year={2020},
    organization={Springer}
}

@inproceedings{peng2021neural,
    title={Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans},
    author={Peng, Sida and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
    booktitle={CVPR},
    year={2021}
}

@inproceedings{fang2021mirrored,
    title={Reconstructing 3D Human Pose by Watching Humans in the Mirror},
    author={Fang, Qi and Shuai, Qing and Dong, Junting and Bao, Hujun and Zhou, Xiaowei},
    booktitle={CVPR},
    year={2021}
}
```
## Reference
```bash
[1] Loper, Matthew, et al. "SMPL: A skinned multi-person linear model." ACM Transactions on Graphics (TOG) 34.6 (2015): 1-16.
[2] Romero, Javier, Dimitrios Tzionas, and Michael J. Black. "Embodied hands: Modeling and capturing hands and bodies together." ACM Transactions on Graphics (TOG) 36.6 (2017): 1-17.
[3] Pavlakos, Georgios, et al. "Expressive body capture: 3D hands, face, and body from a single image." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
Bogo, Federica, et al. "Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image." European Conference on Computer Vision. Springer, Cham, 2016.
[4] Cao, Z., Hidalgo, G., Simon, T., Wei, S.E., Sheikh, Y.: OpenPose: Real-time multi-person 2D pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008 (2018)
```