diff --git a/Readme.md b/Readme.md
index b5e49ec..97f8bef 100644
--- a/Readme.md
+++ b/Readme.md
@@ -2,7 +2,7 @@
  * @Date: 2021-01-13 20:32:12
  * @Author: Qing Shuai
  * @LastEditors: Qing Shuai
- * @LastEditTime: 2021-07-12 15:27:41
+ * @LastEditTime: 2021-07-21 15:04:22
  * @FilePath: /EasyMocapRelease/Readme.md
 -->
 
@@ -26,6 +26,8 @@ This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3]/MANO[2] model to
 
+
+
 Videos are from ZJU-MoCap, with 23 calibrated and synchronized cameras.
@@ -79,7 +81,17 @@ This is the basic code for fitting SMPL[1]/SMPL+H[2]/SMPL-X[3]/MANO[2] model to
 
 ## ZJU-MoCap
 
-With out proposed method, we release two large dataset of human motion: LightStage and Mirrored-Human. See the [website](https://chingswy.github.io/Dataset-Demo/) for more details.
+With our proposed method, we release two large datasets of human motion: LightStage and Mirrored-Human. See the [website](https://chingswy.github.io/Dataset-Demo/) for more details.
+
+
+
+ LightStage: captured with the LightStage system
+
+
+
+ Mirrored-Human: collected from the Internet +
+
 
 ## Other features
 
@@ -126,12 +138,16 @@ Here are the great works this project is built upon:
 - `easymocap/estimator/YOLOv4`: an object detector[6](Coming soon)
 - `easymocap/estimator/HRNet` : a 2D human pose estimator[7](Coming soon)
 
-We also would like to thank Wenduo Feng, Di Huang, Yuji Chen, Hao Xu, Qing Shuai, Qi Fang, Ting Xie, Junting Dong, Sida Peng and Xiaopeng Ji who are the performers in the sample data.
-
 ## Contact
 
 Please open an issue if you have any questions. We appreciate all contributions to improve our project.
 
+## Contributor
+
+EasyMocap is **authored by** [**Qing Shuai**](https://chingswy.github.io/), [**Qi Fang**](https://raypine.github.io/), [**Junting Dong**](https://jtdong.com/), [**Sida Peng**](https://pengsida.net/), [**Di Huang**](https://www.raaj.tech), [**Hujun Bao**](https://jhugestar.github.io), **and** [**Xiaowei Zhou**](https://xzhou.me/).
+
+We would like to thank Wenduo Feng, Di Huang, Yuji Chen, Hao Xu, Qing Shuai, Qi Fang, Ting Xie, Junting Dong, Sida Peng and Xiaopeng Ji, who are the performers in the sample data. We would also like to thank all the people who have helped EasyMocap [in any way](https://github.com/zju3dv/EasyMocap/graphs/contributors).
+
 ## Citation
 
 This project is a part of our work [iMocap](https://zju3dv.github.io/iMoCap/), [Mirrored-Human](https://zju3dv.github.io/Mirrored-Human/) and [Neural Body](https://zju3dv.github.io/neuralbody/)
diff --git a/doc/assets/ZJU-MoCap-lightstage.jpg b/doc/assets/ZJU-MoCap-lightstage.jpg
new file mode 100644
index 0000000..1eaee60
Binary files /dev/null and b/doc/assets/ZJU-MoCap-lightstage.jpg differ