Distributed Cognition in Artificial Intelligence
Distributed cognition is the idea that cognition doesn't happen only in the brain; it is extended across space and time. Sometimes referred to as "The Extended Mind" [1], the concept of distributed cognition is used in studies of human collaboration and tool use. A written note, for example, can play the role of short-term memory, but a special kind that can be passed between people as a medium. I first encountered this idea in Don Norman's "The Design of Everyday Things" [2]. It is a very different way of looking at the self: not as an island of mind, but as a participant in a vast network of shared cognitive activity.
Traditionally, cognition has been seen as an individual process that occurs solely in the brain. This view, that the brain is the source of all mental activity, has dominated Western psychology and philosophy for centuries. Modern accounts of cognition, by contrast, treat the body as a key participant in the creation of human mental experience.
The cognitive anthropologist Edwin Hutchins documented human cognitive processes "in the wild" [3], describing how processes like attention, memory, and computation are distributed across people and artifacts. For instance, he showed how the cognitive work of navigating US Navy ships was divided among different crew members and their tools. In another investigation, he showed how airplane pilots share information to coordinate their individual cognitive processes. His work illustrates that cognition is not something that happens only inside our own heads; it occurs across and between people, their tools, and other elements of the environment.
How does this bear on humans working with generative AI models like Stable Diffusion, DALL-E, and GPT-3? If cognition is truly distributed across people and artifacts, then it stands to reason that humans working with generative models are extending their own cognition: they are using each AI system as a tool to help them think. This viewpoint has important implications for the development of AI systems (some of which are described in a beautiful essay called "The Nooscope" [4]). For instance, it implies that AI systems should be designed not merely to be easy to use, but to support the human cognition they extend, because as cognitive tools they are powerful instruments for shaping human experience.
References:
[1] Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
[2] Norman, D. A. (1988). The psychology of everyday things. Basic Books.
[3] Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
[4] Pasquinelli, M., & Joler, V. (2020). The Nooscope manifested: AI as instrument of knowledge extractivism. AI & Society, 1–18.
Thanks for reading!