📝 update Readme
parent b99c42dc9d
commit 319111aed7

Readme.md
@@ -2,7 +2,7 @@
 * @Date: 2021-01-13 20:32:12
 * @Author: Qing Shuai
 * @LastEditors: Qing Shuai
 * @LastEditTime: 2021-06-04 17:12:01
 * @LastEditTime: 2021-06-14 16:41:00
 * @FilePath: /EasyMocapRelease/Readme.md
-->
@@ -21,7 +21,7 @@
[![report](https://img.shields.io/badge/quickstart-green)](./doc/quickstart.md)

This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3] model to capture body+hand+face poses from multiple views.
This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3]/MANO[2] model to capture body+hand+face poses from multiple views.

<div align="center">
<img src="doc/feng/mv1pmf-smplx.gif" width="80%">
@@ -29,6 +29,12 @@ This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3] model to capture
<sup>Videos are from ZJU-MoCap, with 23 calibrated and synchronized cameras.<sup/>
</div>

<div align="center">
<img src="doc/feng/mano.gif" width="80%">
<br>
<sup>Captured with 8 cameras.<sup/>
</div>

### Internet video with a mirror

[![report](https://img.shields.io/badge/CVPR21-mirror-red)](https://arxiv.org/pdf/2104.00340.pdf) [![quickstart](https://img.shields.io/badge/quickstart-green)](https://github.com/zju3dv/Mirrored-Human)
@@ -62,11 +68,15 @@ This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3] model to capture
<sup>Captured with 4 consumer cameras<sup/>
</div>

### Others
### Novel view synthesis from sparse views
[![report](https://img.shields.io/badge/CVPR21-neuralbody-red)](https://arxiv.org/pdf/2012.15838.pdf) [![quickstart](https://img.shields.io/badge/quickstart-green)](https://github.com/zju3dv/neuralbody)

This project is used by many other projects:
<div align="center">
<img src="doc/neuralbody/sida-frame0.jpg" width="80%"><br/>
<img src="doc/neuralbody/sida.gif" width="80%"><br/>
<sup>Captured with 8 consumer cameras<sup/>
</div>

- [[CVPR21] Dense Reconstruction and View Synthesis from **Sparse Views**](https://zju3dv.github.io/neuralbody/)

## Other features
@@ -78,8 +88,9 @@ This project is used by many other projects:

## Updates

- 06/04/2021: The **real-time 3D visualization** part is released!
- 04/12/2021: Mirrored-Human part is released. We also release the calibration tool and the annotator.
- 06/10/2021: The **real-time 3D visualization** part is released!
- 04/11/2021: The calibration tool and the annotator are released.
- 04/11/2021: **Mirrored-Human** part is released.

## Installation
@@ -103,7 +114,7 @@ Here are the great works this project is built upon:
- `easymocap/estimator/YOLOv4`: an object detector[6](Coming soon)
- `easymocap/estimator/HRNet` : a 2D human pose estimator[7](Coming soon)

We also would like to thank Wenduo Feng who is the performer in the sample data.
We also would like to thank Wenduo Feng, Di Huang, Yuji Chen, Hao Xu, Qing Shuai, Qi Fang, Ting Xie, Junting Dong, Sida Peng and Xiaopeng Ji who are the performers in the sample data.

## Contact
BIN doc/feng/mano.gif (new file, binary not shown, 2.3 MiB)
BIN doc/neuralbody/sida-frame0.jpg (new file, binary not shown, 274 KiB)
BIN doc/neuralbody/sida.gif (new file, binary not shown, 1.9 MiB)
doc/quickstart.md

@@ -2,7 +2,7 @@
 * @Date: 2021-04-02 11:53:16
 * @Author: Qing Shuai
 * @LastEditors: Qing Shuai
 * @LastEditTime: 2021-05-27 20:15:52
 * @LastEditTime: 2021-06-14 14:26:19
 * @FilePath: /EasyMocapRelease/doc/quickstart.md
-->
# Quick Start
@@ -13,17 +13,16 @@ We provide an example multiview dataset[[dropbox](https://www.dropbox.com/s/24mb

```bash
data=path/to/data
out=path/to/output
# 0. extract the video to images
python3 scripts/preprocess/extract_video.py ${data} --handface
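# (--handface is assumed here to additionally extract 2D hand and face keypoints,
#  which the SMPL-X and MANO examples below rely on)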
# 2.1 example for SMPL reconstruction
python3 apps/demo/mv1p.py ${data} --out ${out}/smpl --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --vis_smpl
python3 apps/demo/mv1p.py ${data} --out ${data}/output/smpl --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --vis_smpl
# 2.2 example for SMPL-X reconstruction
python3 apps/demo/mv1p.py ${data} --out ${out}/smplx --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body bodyhandface --model smplx --gender male --vis_smpl
python3 apps/demo/mv1p.py ${data} --out ${data}/output/smplx --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body bodyhandface --model smplx --gender male --vis_smpl
# 2.3 example for MANO reconstruction
# MANO model is required for this part
python3 apps/demo/mv1p.py ${data} --out ${out}/manol --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body handl --model manol --gender male --vis_smpl
python3 apps/demo/mv1p.py ${data} --out ${out}/manor --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body handr --model manor --gender male --vis_smpl
python3 apps/demo/mv1p.py ${data} --out ${data}/output/manol --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body handl --model manol --gender male --vis_smpl
python3 apps/demo/mv1p.py ${data} --out ${data}/output/manor --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body handr --model manor --gender male --vis_smpl
```
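The four reconstruction commands above differ only in the `--body`/`--model`/`--gender` flags and the output subdirectory, so they are easy to wrap in a small loop once the new `${data}/output` layout from this commit is adopted. The sketch below is only an illustration assembled from the exact flags shown above; it is not a script shipped with the repository, and the helper function name is made up for the example.

```bash
#!/usr/bin/env bash
# Illustrative wrapper (not part of EasyMocap): run the SMPL, SMPL-X and MANO
# demos from the quickstart in sequence, writing to ${data}/output/<name>.
set -e
data=path/to/data

run_mv1p () {          # usage: run_mv1p <output-subdir> [extra mv1p.py flags...]
    local sub=$1; shift
    python3 apps/demo/mv1p.py "${data}" --out "${data}/output/${sub}" \
        --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --vis_smpl "$@"
}

run_mv1p smpl                                                   # 2.1 SMPL
run_mv1p smplx --body bodyhandface --model smplx --gender male  # 2.2 SMPL-X
run_mv1p manol --body handl --model manol --gender male         # 2.3 MANO (left hand)
run_mv1p manor --body handr --model manor --gender male         # 2.3 MANO (right hand)
```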
# Demo On Your Dataset