“Make-a-Face: A Hands-free, Non-Intrusive Device for Tongue/Mouth/Cheek Input Using EMG” by Nakao, Pai, Isogai, Kimata and Kunze

Entry Number: 24

Title:

    Make-a-Face: A Hands-free, Non-Intrusive Device for Tongue/Mouth/Cheek Input Using EMG

Presenter(s)/Author(s):

    Nakao, Pai, Isogai, Kimata, and Kunze

Abstract:


    Current devices aim to be more hands-free by letting users interact through other forms of input, such as voice, which can be intrusive. We propose Make-a-Face, a wearable device that allows the user to perform tongue, mouth, or cheek gestures via a mask-shaped device that senses muscle movement on the lower half of the face. The significance of this approach is threefold: 1) it allows a less intrusive approach to interaction, 2) we designed both the hardware and software from the ground up to accommodate the sensor electrodes, and 3) we propose several use-case scenarios ranging from smartphone control to interaction with virtual reality (VR) content.
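
    The sketch below is not from the paper itself; it is a minimal Python illustration of the kind of recognition pipeline the abstract implies: window multi-channel surface-EMG from the mask electrodes, extract simple amplitude features, and classify tongue/mouth/cheek gestures. The channel count, window length, gesture labels, and classifier choice are illustrative assumptions, not details reported by the authors.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    FS = 1000        # assumed EMG sampling rate (Hz)
    WINDOW = 200     # samples per analysis window (200 ms at 1 kHz)
    CHANNELS = 4     # hypothetical number of electrodes on the lower-face mask
    GESTURES = ["tongue_left", "tongue_right", "cheek_puff", "mouth_open"]

    def window_features(emg):
        """RMS and waveform length per channel for one (CHANNELS x WINDOW) window."""
        rms = np.sqrt(np.mean(emg ** 2, axis=1))
        waveform_length = np.sum(np.abs(np.diff(emg, axis=1)), axis=1)
        return np.concatenate([rms, waveform_length])

    # Synthetic stand-in for labeled training windows recorded per gesture.
    rng = np.random.default_rng(0)
    X = np.stack([window_features(rng.normal(size=(CHANNELS, WINDOW)) * (1 + k))
                  for k in range(len(GESTURES)) for _ in range(50)])
    y = np.repeat(np.arange(len(GESTURES)), 50)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)

    # At run time: classify each incoming window and map it to a command.
    new_window = rng.normal(size=(CHANNELS, WINDOW)) * 2.0
    label = int(clf.predict(window_features(new_window)[None, :])[0])
    print(GESTURES[label])

    RMS and waveform length are standard lightweight surface-EMG features; any comparable classifier would fit the abstract's description, and the authors' actual pipeline may differ.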
