MoCap-to-Visual Domain Adaptation for Efficient Human Mesh Estimation from 2D Keypoints

Bedirhan Uguz, Ozhan Suat, Batuhan Karagoz, Emre Akbas
Middle East Technical University
2nd Workshop on Reconstruction of Human-Object Interactions (RHOBIN)
CVPR 2024
An overview of Key2Mesh's inference and training process.

Abstract

This paper presents Key2Mesh, a model that takes a set of 2D human pose keypoints as input and estimates the corresponding body mesh. Since this process does not involve any visual (i.e., RGB image) data, the model can be trained on large-scale motion capture (MoCap) datasets, thereby overcoming the scarcity of image datasets with 3D labels. To apply the model to RGB images, we first run an off-the-shelf 2D pose estimator to obtain the 2D keypoints, and then feed these keypoints to Key2Mesh. To improve the model's performance on RGB images, we apply an adversarial domain adaptation (DA) method to bridge the gap between the MoCap and visual domains. Crucially, our DA method does not require 3D labels for visual data, which enables adaptation to target sets without costly labels. We evaluate Key2Mesh on the task of estimating 3D human meshes from 2D keypoints, in the absence of paired RGB images and mesh labels. Our results on the widely used H3.6M and 3DPW datasets show that Key2Mesh sets a new state of the art, outperforming other models in PA-MPJPE on both datasets, and in MPJPE and PVE on the 3DPW dataset. Thanks to its simple architecture, Key2Mesh runs at least 12x faster than the prior state-of-the-art model, LGD.
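To make the training setup more concrete, the sketch below illustrates one possible way to implement the adversarial domain adaptation step in PyTorch. It is a minimal sketch under assumed interfaces: the two-output `model` (parameters plus an intermediate feature), the `Discriminator` architecture, and the loss weight are illustrative choices, not the exact losses or architecture used in the paper.

```python
# Minimal illustrative sketch (PyTorch) of adversarial MoCap-to-visual domain adaptation.
# Assumptions (not taken from the paper): `model` maps 2D keypoints (B, J, 2) to
# SMPL-style parameters plus an intermediate feature vector; a small MLP
# discriminator tries to tell MoCap features from visual-domain features.
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Binary classifier over features: MoCap domain (1) vs. visual domain (0)."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, feats):
        return self.net(feats)


def adaptation_step(model, disc, opt_model, opt_disc,
                    mocap_kpts, mocap_params, visual_kpts):
    bce = nn.BCEWithLogitsLoss()

    # 1) Supervised loss on MoCap keypoints (3D labels exist only in this domain).
    pred_params, mocap_feats = model(mocap_kpts)
    loss_sup = nn.functional.mse_loss(pred_params, mocap_params)

    # 2) Train the discriminator to separate MoCap features from visual features.
    _, visual_feats = model(visual_kpts)
    d_loss = bce(disc(mocap_feats.detach()), torch.ones(len(mocap_kpts), 1)) + \
             bce(disc(visual_feats.detach()), torch.zeros(len(visual_kpts), 1))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 3) Train the mesh estimator to fool the discriminator on visual-domain
    #    features, so no 3D labels are needed for the visual data.
    adv_loss = bce(disc(visual_feats), torch.ones(len(visual_kpts), 1))
    total = loss_sup + 0.1 * adv_loss  # 0.1 is an arbitrary example weight
    opt_model.zero_grad()
    total.backward()
    opt_model.step()
    return loss_sup.item(), adv_loss.item(), d_loss.item()
```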

Qualitative Results

OpenPose is used to obtain the input keypoints, and the results are shown per frame, without any temporal smoothing.
Qualitative comparison of the pre-trained and domain-adapted models on the H3.6M dataset. The first column shows the input image and keypoint detections. The second and third columns show outputs of the pre-trained model, while the last two columns show results of the domain-adapted model, which outperforms the pre-trained model, especially on complex poses.
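As a rough illustration of how such per-frame results can be produced, the snippet below sketches feeding OpenPose detections to a Key2Mesh-style model one frame at a time, with no temporal smoothing. The JSON layout follows OpenPose's standard per-frame output files; the checkpoint path, the `torch.jit.load` loading step, and the model's input/output conventions are placeholders for illustration, and any keypoint normalization expected by the model is omitted.

```python
# Illustrative per-frame inference sketch; model loading and I/O conventions are hypothetical.
import json
from pathlib import Path

import numpy as np
import torch


def load_openpose_keypoints(json_path):
    """Read one OpenPose per-frame JSON file and return (J, 3) keypoints
    (x, y, confidence) for the first detected person, or None if no detection."""
    with open(json_path) as f:
        data = json.load(f)
    if not data.get("people"):
        return None
    kpts = np.array(data["people"][0]["pose_keypoints_2d"], dtype=np.float32)
    return kpts.reshape(-1, 3)


# Placeholder: a trained, domain-adapted checkpoint that maps 2D keypoints to
# SMPL-style parameters. Replace with the actual model-loading code.
model = torch.jit.load("key2mesh_adapted.pt").eval()

meshes = []
for json_path in sorted(Path("openpose_output/").glob("*_keypoints.json")):
    kpts = load_openpose_keypoints(json_path)
    if kpts is None:
        meshes.append(None)  # no person detected in this frame
        continue
    xy = torch.from_numpy(kpts[:, :2]).unsqueeze(0)  # (1, J, 2); confidences dropped
    with torch.no_grad():
        smpl_params = model(xy)  # independent per-frame estimate, no temporal smoothing
    meshes.append(smpl_params)
```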

Quantitative Results

BibTeX

@InProceedings{Uguz_2024_CVPR,
  author    = {Uguz, Bedirhan and Suat, Ozhan and Karagoz, Batuhan and Akbas, Emre},
  title     = {MoCap-to-Visual Domain Adaptation for Efficient Human Mesh Estimation from 2D Keypoints},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {1622-1632}
}