About
Who and what are behind the dataset?
What
DIP is a deep-learning model that predicts full-body pose in real time from only 6 sparse Inertial Measurement Units (IMUs). DIP is first trained on a large-scale synthetic dataset generated from archival MoCap data, then fine-tuned on DIP-IMU, a dataset of real IMU recordings paired with SMPL pose references that we built ourselves. DIP can be used in various application scenarios, especially AR and VR.
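To illustrate the interface described above, here is a minimal sketch of the input/output shapes involved: each of the 6 IMUs provides an orientation (a flattened 3x3 rotation matrix) and a 3-d acceleration, and the model maps a sequence of such frames to SMPL pose parameters. The linear layer is a stand-in assumption for illustration only; the actual DIP system is a learned recurrent neural network, and all names here are hypothetical.

```python
import numpy as np

# Hypothetical sketch, not the authors' code.
N_IMUS = 6
FEATS_PER_IMU = 9 + 3          # flattened 3x3 rotation matrix + 3-d acceleration
N_SMPL_JOINTS = 24             # SMPL body joints
POSE_DIM = N_SMPL_JOINTS * 3   # axis-angle pose parameters

def predict_pose(imu_frames: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map a sequence of IMU frames (T, 72) to SMPL poses (T, 72).

    A single linear map stands in for the learned model here; DIP itself
    uses a recurrent neural network trained on synthetic and real data.
    """
    assert imu_frames.shape[1] == N_IMUS * FEATS_PER_IMU
    return imu_frames @ weights  # (T, 72) @ (72, 72) -> (T, 72)

rng = np.random.default_rng(0)
frames = rng.standard_normal((100, N_IMUS * FEATS_PER_IMU))  # 100 frames of IMU input
w = rng.standard_normal((N_IMUS * FEATS_PER_IMU, POSE_DIM)) * 0.01
poses = predict_pose(frames, w)
```

This only pins down the data flow (6 IMUs in, per-frame SMPL pose out); the learned model and its training procedure are described in the DIP paper.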
Authors
Yinghao Huang, Max Planck Institute for Intelligent Systems, Tübingen
Manuel Kaufmann, Advanced Interactive Technologies Lab, ETH Zürich
Emre Aksan, Advanced Interactive Technologies Lab, ETH Zürich
Michael J. Black, Max Planck Institute for Intelligent Systems, Tübingen
Otmar Hilliges, Advanced Interactive Technologies Lab, ETH Zürich
Gerard Pons-Moll, Max Planck Institute for Informatics, Saarbrücken