Schizophrenia and Narrative in Artificial Agents


  • In recent years, computer graphics has turned to AI techniques in order to simplify the problem of modeling moving objects for rendering. When the minds of graphically represented creatures are modeled, their movements can be directed automatically through AI algorithms and need not be directly controlled by the designer. But what kind of baggage do these AI algorithms bring with them? Here I will argue that predominant AI approaches to modeling agents result in behavior that is fragmented, depersonalized, lifeless, and incomprehensible. Drawing inspiration from narrative psychology and anti-psychiatry, I will argue that agent behavior should be narratively understandable, and I will present an agent architecture that structures behavior to be comprehensible as narrative.

    The approach I take in this essay is a hybrid of critical theory and AI agent technology. It is one example of a critical technical practice: a cultural critique of AI practice instantiated in a technical innovation. In the final section of this paper, I will describe the theoretical and practical foundations of the critical technical practice pursued here, which I term socially situated AI.


  • References
    1. Agre, P. E. (1997). Computation and human experience. Cambridge, UK:
    Cambridge University Press.

    2. Baur, S. (1991). The dinosaur man: Tales of madness and enchantment from the back
    ward. New York: Edward Burlingame Books.

    3. Blumberg, B. (1996). Old tricks, new dogs: Ethology and interactive creatures. PhD
    thesis, MIT Media Lab, Cambridge, MA.

    4. Blumberg, B. & Galyean, T. A. (1995). Multi-level direction of autonomous creatures
    for real-time virtual environments. In Proceedings of SIGGRAPH 95.

    5. Brooks, R. A. (1990). Elephants don't play chess. In Pattie Maes, ed., Designing
    autonomous agents. Cambridge, MA: MIT Press.

    6. Brooks, R. A. (1997). From earwigs to humans. Robotics and Autonomous Systems,
    20, (2-4), 291-304.

    7. Bruner, J. (1986). Actual minds, possible worlds. Cambridge, MA: Harvard University Press.

    8. Bruner, J. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.

    9. Dennett, D. (1987). The intentional stance. Cambridge, MA: MIT Press.

    10. Foner, L. (1993). What's an agent, anyway? Published in a revised version in
    Proceedings of the First International Conference on Autonomous Agents (AA '97).

    11. Goffman, E. (1961). Asylums: Essays on the social situation of mental patients and
    other inmates. Garden City, NY: Anchor Books.

    12. Janet, P. (1889). L'Automatisme psychologique: Essai de psychologie expérimentale sur
    les formes inférieures de l'activité humaine. Paris: Ancienne Librairie Germer
    Baillière et Cie. Ed. Félix Alcan.

    13. Laing, R. D. (1960). The divided self: An existential study in sanity and madness.
    Middlesex, UK: Penguin Books, Ltd.

    14. Laing, R. D. & Esterson, A. (1970). Sanity, madness, and the family. Middlesex, UK:
    Penguin Books, Ltd.

    15. Loyall, A. B. (1997). Believable agents: Building interactive personalities. PhD thesis,
    Carnegie Mellon University, Pittsburgh, CMU-CS-97-123.

    16. Loyall, A. B. & Bates, J. (1991). Hap: A reactive, adaptive architecture for agents.
    Technical Report CMU-CS-91-147, Carnegie Mellon University, Pittsburgh.

    17. Mateas, M. (2000). Expressive AI. SIGGRAPH 2000 Electronic Art and Animation
    Catalog.

    18. Penny, S. (1997). Embodied cultural agents at the intersection of robotics, cognitive
    science, and interactive art. In Kerstin Dautenhahn, ed., Socially intelligent agents:
    Papers from the 1997 fall symposium, 103-105. Menlo Park, CA: AAAI Press.

    19. Perlin, K. & Goldberg, A. (1996). Improv: A system for scripting interactive actors
    in virtual worlds. Computer Graphics, 29(3).

    20. Reilly, W. S. N. (1996). Believable social and emotional agents. PhD thesis, Carnegie
    Mellon University, CMU-CS-96-138.

    21. Reynolds, C. (1999). Steering behaviors for autonomous characters. In 1999 Game
    Developers Conference. San Jose, CA, March 1999.

    22. Robear, Jr., J. W. (1991). Reality check. In John G. H. Oakes, ed., In the realms of the
    unreal: "Insane" writings, 18-19. New York: Four Walls Eight Windows.

    23. Ronell, A. (1989). The telephone book: Technology – schizophrenia – electric speech.
    Lincoln: University of Nebraska Press.

    24. Sack, W. (1999). Stories & social networks. In 1999 AAAI Symposium on Narrative
    Intelligence. Menlo Park, CA: AAAI Press.

    25. Sengers, P. (1998). Anti-boxology: Agent design in cultural context. PhD thesis,
    Carnegie Mellon University Department of Computer Science and Program in
    Literary and Cultural Theory, Pittsburgh, PA.

    26. Sengers, P. (2000). Narrative intelligence. In Kerstin Dautenhahn, ed., Human
    cognition and social agent technology (Advances in Consciousness Research).
    Amsterdam: John Benjamins Publishing Co.

    27. Smithers, T. (1992). Taking eliminative materialism seriously: A methodology for
    autonomous systems research. In Francisco J. Varela and Paul Bourgine, eds.,
    Towards a practice of autonomous systems: Proceedings of the First European
    Conference on Artificial Life, 31-47. Cambridge, MA: MIT Press.

    28. Steels, L. (1994). The artificial life roots of artificial intelligence. Artificial Life,
    1(1-2), 75-110.

    29. Wavish, P. & Graham, M. (1996). A situated action approach to implementing
    characters in computer games. Applied Artificial Intelligence, 10.