“Put-that-there”: Voice and gesture at the graphics interface — by Bolt

  • © Richard A. Bolt







    Recent technological advances in connected-speech recognition and position sensing in space have encouraged the notion that voice and gesture inputs at the graphics interface can converge to provide a concerted, natural user modality. The work described herein involves the user commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference.
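The mechanism the abstract describes — a pronoun such as “that” or “there” bound to whatever the hand is pointing at when the word is spoken — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the nearest-item search, the hit radius, and all coordinate data are assumptions introduced for the example.

```python
import math

def resolve_command(timed_words, items, radius=0.2):
    """Illustrative deictic resolution for a command like "put that there".

    timed_words -- list of (word, (x, y)) pairs: each spoken word paired
                   with the pointing position sampled as it was uttered
    items       -- shapes on the display: {"name": str, "pos": (x, y)}
    Returns (target_item_or_None, destination_or_None).
    """
    target, destination = None, None
    for word, point in timed_words:
        if word == "that":
            # Bind the object pronoun to the nearest displayed shape,
            # provided the pointing position lands close enough to it.
            nearest = min(items, key=lambda it: math.dist(it["pos"], point))
            if math.dist(nearest["pos"], point) <= radius:
                target = nearest
        elif word == "there":
            # Bind the place pronoun to the pointed-at location itself.
            destination = point
    return target, destination

# Hypothetical display state and utterance with simultaneous pointing.
items = [{"name": "circle", "pos": (0.10, 0.10)},
         {"name": "square", "pos": (0.80, 0.80)}]
utterance = [("put", (0.50, 0.50)),
             ("that", (0.12, 0.09)),   # hand over the circle
             ("there", (0.75, 0.40))]  # hand over empty display space
target, destination = resolve_command(utterance, items)
```

The point of the sketch is the division of labor the abstract claims: speech supplies the verb and the pronouns economically, while gesture supplies the precise referents, so neither channel has to carry names or coordinates on its own.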


