“FaceType: Crafting Written Impressions of Spoken Expression” by Maher, Xiang, and Zhi

  • © Kevin Maher, Fan Xiang, and Liang Zhi



Entry Number: 06


    FaceType is an interactive installation that creates an experience of spoken communication through generated text. Inspired by Chinese calligraphy, the project transforms spoken expression into handwriting. FaceType explores which parts of our spoken expression can be evoked in writing, and what the most natural form of interaction between the two might be. The work aims to allow lay audiences to experience emotion, emphasis, and critical information in speech. Audience reflection on patterns in their expression, and on the roles of unconscious and conscious expression, suggests directions for further works.


