“PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling” by Li, Zheng, Liu, Zhou and Liu



Session/Category Title:   Motion Recipes and Simulation


Abstract:


    We present PoseVocab, a new pose encoding method for human avatar modeling. Previous methods usually map driving poses directly to dynamic human appearances through a NeRF MLP, yielding blurry avatars. In contrast, PoseVocab constructs pairs of key poses and learnable pose embeddings to encode high-fidelity human appearances under various poses.
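The core idea — pairing key poses with learnable embeddings and encoding a driving pose by interpolating among them — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the vocabulary size, pose representation, distance metric, and inverse-distance weighting below are all assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical sketch of a pose vocabulary: each key pose is paired with a
# learnable embedding; a driving pose is encoded by blending the embeddings
# of its nearest key poses. All names and the weighting scheme are assumed.
rng = np.random.default_rng(0)

NUM_KEY_POSES = 8   # size of the pose vocabulary (assumed)
POSE_DIM = 6        # e.g. flattened joint rotations (assumed)
EMBED_DIM = 16      # dimensionality of each learnable embedding (assumed)

key_poses = rng.normal(size=(NUM_KEY_POSES, POSE_DIM))    # fixed key poses
embeddings = rng.normal(size=(NUM_KEY_POSES, EMBED_DIM))  # learnable params

def encode_pose(driving_pose: np.ndarray, k: int = 2) -> np.ndarray:
    """Interpolate the embeddings of the k nearest key poses."""
    dists = np.linalg.norm(key_poses - driving_pose, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights, normalized to sum to 1.
    w = 1.0 / (dists[nearest] + 1e-8)
    w /= w.sum()
    return (w[:, None] * embeddings[nearest]).sum(axis=0)

pose_code = encode_pose(rng.normal(size=POSE_DIM))
print(pose_code.shape)  # (16,)
```

In a full avatar model, the resulting pose code would condition the appearance network and the embeddings would be optimized jointly with it; here they are just random placeholders.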

