[NeurIPS 24] Estimating Ego-Body Pose from Doubly Sparse Egocentric Video Data

Seunggeun Chi1 Pin-Hao Huang2 Enna Sachdeva2 Hengbo Ma Karthik Ramani1 Kwonjoon Lee2
1Purdue University 2Honda Research Institute

Abstract

We study the problem of estimating the body movements of a camera wearer from egocentric videos. Current methods for ego-body pose estimation rely on temporally dense sensor data, such as IMU measurements from spatially sparse body parts like the head and hands. However, we propose that even temporally sparse observations, such as hand poses captured intermittently from egocentric videos during natural or periodic hand movements, can effectively constrain overall body motion. Naively applying diffusion models to generate full-body pose from head pose and sparse hand pose leads to suboptimal results. To overcome this, we develop a two-stage approach that decomposes the problem into temporal completion and spatial completion. First, our method employs masked autoencoders to impute hand trajectories by leveraging the spatiotemporal correlations between the head pose sequence and intermittent hand poses, providing uncertainty estimates. Subsequently, we employ conditional diffusion models to generate plausible full-body motions based on these temporally dense trajectories of the head and hands, guided by the uncertainty estimates from the imputation. We rigorously validate the effectiveness of our method through comprehensive experiments on various HMD setups with the AMASS and Ego-Exo4D datasets.
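The two-stage decomposition above can be illustrated with a minimal, purely conceptual sketch. All names and models here are illustrative stand-ins, not the authors' implementation: the masked-autoencoder imputation is replaced by linear interpolation with a distance-based uncertainty proxy, and the conditional diffusion model by a placeholder generator that trusts low-uncertainty frames more. The sketch only shows the data flow: dense head pose plus intermittent hand poses go in, imputed hand trajectories with uncertainties come out of stage 1, and stage 2 consumes both to produce a full-body sequence.

```python
import numpy as np

def impute_hand_trajectory(head_pose, hand_obs, hand_mask):
    """Stage 1 stand-in (the paper uses a masked autoencoder).

    head_pose: (T, Dh) temporally dense head trajectory
    hand_obs:  (T, Dj) hand poses, valid only where hand_mask is True
    hand_mask: (T,) bool, True at frames where hands were observed

    Here we linearly interpolate between observed frames and let the
    uncertainty grow with the distance to the nearest observation.
    """
    T, Dj = hand_obs.shape
    t = np.arange(T)
    obs_t = t[hand_mask]
    imputed = np.stack(
        [np.interp(t, obs_t, hand_obs[hand_mask, d]) for d in range(Dj)],
        axis=1,
    )
    # Uncertainty proxy: frames far from any observation are less certain.
    dist = np.min(np.abs(t[:, None] - obs_t[None, :]), axis=1).astype(float)
    uncertainty = dist / max(dist.max(), 1.0)
    return imputed, uncertainty

def generate_full_body(head_pose, hand_traj, uncertainty, rng):
    """Stage 2 stand-in for the conditional diffusion model: generate a
    full-body pose sequence, weighting the hand conditioning signal by
    per-frame confidence (1 - uncertainty)."""
    T = head_pose.shape[0]
    body = rng.standard_normal((T, 22 * 3))  # e.g. 22 body joints x 3D
    conf = (1.0 - uncertainty)[:, None]
    d = hand_traj.shape[1]
    body[:, :d] = conf * hand_traj + (1.0 - conf) * body[:, :d]
    return body

rng = np.random.default_rng(0)
T = 30
head = rng.standard_normal((T, 6))
hands = rng.standard_normal((T, 6))
mask = np.zeros(T, dtype=bool)
mask[[0, 10, 20, 29]] = True  # hands enter the view only intermittently

hand_traj, unc = impute_hand_trajectory(head, hands, mask)
body = generate_full_body(head, hand_traj, unc, rng)
```

The key interface point the sketch captures is that stage 1 returns uncertainty alongside the imputed trajectory, so stage 2 can down-weight conditioning on frames that were filled in far from any actual hand observation.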

BibTeX

@article{chi2024estimating,
  title={Estimating Ego-Body Pose from Doubly Sparse Egocentric Video Data},
  author={Chi, Seunggeun and Huang, Pin-Hao and Sachdeva, Enna and Ma, Hengbo and Ramani, Karthik and Lee, Kwonjoon},
  journal={arXiv preprint arXiv:2411.03561},
  year={2024}
}