“MAScreen: Augmenting Speech with Visual Cues of Lip Motions, Facial Expressions, and Text Using a Wearable Display”
Conference:
Experience Type(s):
Title:
- MAScreen: Augmenting Speech with Visual Cues of Lip Motions, Facial Expressions, and Text Using a Wearable Display
Organizer(s)/Presenter(s):
Description:
MAScreen is a wearable LED display in the shape of a mask, capable of sensing lip motion and speech and providing real-time visual feedback on the vocal expression and emotion behind the mask. It can transform vocal data into text, emoji, and other languages.


