“tARget: Limbs Movement Guidance for Learning Physical Activities with a Video See-Through Head-Mounted Display” by Han, Lin, Hsieh, Hsu and Hung


Entry Number: 26






    In an aging society, people are paying more attention to maintaining good exercise habits. Advances in technology make it possible to learn various kinds of exercise through multimedia, for example by watching instructional videos. However, it is difficult for users to learn accurate movements from such videos because they receive no feedback.

    To explore how augmented reality (AR) and virtual reality (VR) technologies can assist users, much research on movement guidance and evaluation has been proposed and developed. Chua et al. [Chua et al. 2003] built a Tai Chi Chuan training system in a virtual reality environment, placing a virtual coach in front of the user. Integrated with a motion capture system, users could see their own movements in VR and compare them to the coach's. However, such systems only let users see avatars of themselves rather than their own bodies. LightGuide [Sodhi et al. 2012] used a projector hanging from the ceiling to project visual information onto users' hands to guide hand movements; it could only guide users in a fixed space, with their hands kept under the projection zone. OutsideMe [Yan et al. 2015] used a Kinect to capture the skeleton, RGB, and depth images of users, enabling them to see their body movements as external observers through a video see-through head-mounted display (VST-HMD). AR-Arm [Han et al. 2016] displayed semi-transparent virtual arms to indicate the correct movements of Tai Chi Chuan, which users could follow intuitively to achieve accurate arm movement.

    In this paper, we present a full-body movement guidance system for learning physical activities with a VST-HMD. It contains a method for skeleton calibration and two interfaces for movement guidance: Coach-Surrounding Guidance and Ball-Following Guidance. We conducted a user study to evaluate the system on posture and movement learning.
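    As an illustration of how a guidance system of this kind might score posture accuracy, the sketch below compares a user's tracked skeleton against a coach's target pose by averaging per-joint Euclidean distances. The joint names, dictionary layout, and choice of metric are illustrative assumptions for exposition, not the paper's actual method.

```python
import math

# Hypothetical skeleton representation: joint name -> (x, y, z) in meters,
# e.g. as captured by a Kinect-style tracker. Not the paper's data format.
def joint_error(user, coach):
    """Mean Euclidean distance between corresponding joints of two poses."""
    if user.keys() != coach.keys():
        raise ValueError("poses must share the same joint set")
    total = 0.0
    for name, upos in user.items():
        # Distance between the user's joint and the coach's target joint.
        total += math.dist(upos, coach[name])
    return total / len(user)

# Example: one hand is 5 m off target, the elbow matches exactly.
user_pose = {"hand": (0.0, 0.0, 0.0), "elbow": (1.0, 1.0, 1.0)}
coach_pose = {"hand": (3.0, 4.0, 0.0), "elbow": (1.0, 1.0, 1.0)}
print(joint_error(user_pose, coach_pose))  # 2.5
```

    A per-joint breakdown of the same distances could drive the visual feedback (e.g. highlighting the limb that deviates most), which is one plausible use of such a score in a guidance interface.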


    • Philo Tan Chua, Rebecca Crivella, Bo Daly, Ning Hu, Russ Schaaf, David Ventura, Todd Camill, Jessica Hodgins, and Randy Pausch. 2003. Training for physical tasks in virtual environments: Tai Chi. In Proceedings of IEEE Virtual Reality 2003. IEEE, 87–94.
    • Ping-Hsuan Han, Kuan-Wen Chen, Chen-Hsin Hsieh, Yu-Jie Huang, and Yi-Ping Hung. 2016. AR-Arm: Augmented visualization for guiding arm movement in the first-person perspective. In Proceedings of the 7th Augmented Human International Conference 2016. ACM, 31.
    • Chun-Jui Lai, Ping-Hsuan Han, Han-Lei Wang, and Yi-Ping Hung. 2016. Exploring Manipulation Behavior on Video See-Through Head-Mounted Display with View Interpolation. In Asian Conference on Computer Vision Workshops. Springer, 258–270.
    • Rajinder Sodhi, Hrvoje Benko, and Andrew Wilson. 2012. LightGuide: projected visualizations for hand movement guidance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 179–188. 
    • William Steptoe, Simon Julier, and Anthony Steed. 2014. Presence and discernability in conventional and non-photorealistic immersive augmented reality. In Mixed and Augmented Reality (ISMAR), 2014 IEEE International Symposium on. IEEE, 213–218. 
    • Shuo Yan, Gangyi Ding, Zheng Guan, Ningxiao Sun, Hongsong Li, and Longfei Zhang. 2015. OutsideMe: Augmenting Dancer’s External Self-Image by Using A Mixed Reality System. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 965–970. 
    • Zhengyou Zhang. 2000. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 11 (2000), 1330–1334.



    This work was partially supported by the Ministry of Science and Technology of Taiwan under Grants MOST 104-2627-E-002-001 and 106-3114-E-369-001.