Peter A. van der Helm




Metaphors of cognition
or
the dynamics versus computation debate



Reality is something we experience subjectively. People may agree that something is an objective reality, but this agreement is based on shared subjective experiences. Like traditional storytelling and religion, scientific research (though more sophisticated) is basically an endeavor to understand or control what many people experience as reality. To this end, we use metaphors, whether or not they are expressed in concrete theories and models. The idea that scientific research is about useful metaphors rather than objective truths may be uncomfortable, but as Socrates (469–399 BC) already noted, reality is in the eye of the beholder.

The currently dominant but often challenged metaphor in cognitive science is the computer metaphor. It is related to the computational theory of mind which, in the tradition of functionalism, promotes the idea that the workings of the mind can be understood in terms of information processing defined as computation, that is, as the conversion of an input by a set of rules into an output. Opponents of this idea usually argue that the brain is a dynamic physical system and that the mind should be described accordingly. Do these different modeling views really exclude each other or are they actually complementary?
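The definition of computation used above — the conversion of an input by a set of rules into an output — can be made concrete with a toy sketch. The rewrite rules below are arbitrary placeholders of my own choosing, not part of any actual cognitive model; the point is only that "rules converting input to output" is all the definition requires.

```python
# Toy illustration (rules are invented placeholders): "computation" as the
# conversion of an input by a fixed set of rules into an output.

def compute(symbols):
    """Apply a fixed set of rewrite rules to an input string of symbols."""
    rules = {"ab": "X", "cc": "Y"}  # hypothetical rewrite rules
    out = symbols
    for pattern, replacement in rules.items():
        out = out.replace(pattern, replacement)
    return out

print(compute("abccab"))  # input "abccab" -> output "XYX"
```

Nothing in this sketch claims that the brain literally applies such rules; as argued below, the rules are a modeling tool, not the thing being modeled.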

First, some dynamicists, and perhaps even some computationalists, may interpret computationalism as assuming that the brain really manipulates discrete symbols, but this interpretation mistakes modeling tools for the things being modeled. The use of symbols is inherent to all formal modeling, including dynamic-systems approaches. The very idea of formalization is that things, at a certain semantic level, are labeled by symbols — not for the sake of it, but to capture potentially relevant relationships between these things. For instance, in physics, formulas like Newton's F=ma are not assumed to be real things in nature but are merely tools to describe allegedly relevant relationships between allegedly relevant things in nature. The idea that the brain really manipulates discrete symbols is, to me, as odd as the idea that nature really applies formulas like Newton's F=ma. That is, in both cases, one merely uses convenient modeling tools to obtain some level of understanding of the things being modeled.

Second, whereas dynamicism focuses on physical change (a "how" question), computationalism focuses on semantic structure (a "what" question). For instance, it is nowadays widely accepted that a percept is a relatively stable cognitive state which arises during a dynamic neural process. Initially, computationalism focused on the informational content of such stable cognitive states, and later, dynamic systems theory focused on the dynamics of the neural transitions from any one state to the next. Of course, insight into both aspects is needed for a complete understanding of perceptual organization, that is, the two approaches are complementary rather than mutually exclusive. In other words, as Neisser (1967) put it: it is not either dynamics or computation, but both. That is, theories about either aspect may contribute equally to a more comprehensive understanding of cognition as a whole, precisely because they focus on different aspects.

In terms of Marr's levels, the "what" question is mostly a computational and partly an algorithmic question, and the "how" question is partly an algorithmic and mostly an implementational question. This also clarifies that connectionist modeling (which starts from ideas at the algorithmic level of description) is, in many respects, in between representational modeling (which starts from ideas at the computational level of description) and dynamic-systems modeling (which starts from ideas at the implementational level of description). The "what-how" distinction echoes the distinction which the early 20th century Gestaltists made between the molar (or behavioral, or cognitive) level and the molecular (or physiological, or neural) level. As Marr noted, answering the "what" and "how" questions may be totally different endeavors, but answers to both questions are needed for a complete understanding.




Related to the foregoing, it seems expedient to make the following distinction between a narrow version of the computer metaphor (as it sometimes is interpreted by opponents) and a broad version (as it usually is interpreted by proponents):
The narrow computer metaphor, on the one hand, follows the tradition of comparing the brain to the most sophisticated machine known at the time. In the past, machines such as the clock and the steam engine had served as models of the brain, and in the 20th century, it was the computer's turn to serve as model. A concrete model within this tradition aims to capture the serial development over time of a system that, as a whole, goes from one state to the next. Such a system may, for instance, be a single neuron, or a group of neurons, or the brain as a whole. Some proponents of dynamic-systems modeling may reject the narrow computer metaphor, but notice that dynamic-systems models actually fit seamlessly into this tradition. After all, they employ differential equations, which describe the strictly serial process by which a system goes from one state to the next.
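The serial, state-to-state character of differential-equation models can be sketched in a few lines. The decay dynamics dx/dt = -kx below is an arbitrary placeholder of mine, not a claim about any neural model; discretizing it with Euler steps simply makes explicit that each state is computed from the previous one.

```python
# Minimal sketch (dynamics chosen only for illustration): a dynamical system
# dx/dt = f(x), discretized with Euler steps so that the strictly serial
# state-to-state process described above is explicit.

def euler_trajectory(x0, f, dt, steps):
    """Return the sequence of states visited by repeated Euler updates."""
    states = [x0]
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)  # the next state depends only on the current state
        states.append(x)
    return states

decay = lambda x: -0.5 * x  # hypothetical vector field: simple decay
traj = euler_trajectory(1.0, decay, dt=0.1, steps=3)
# traj holds the serial chain of states: 1.0, 0.95, 0.9025, ~0.857
```

Whether the stepping is done by an analog physical process or by a digital solver, the model itself describes one state succeeding another — which is exactly the tradition the narrow computer metaphor belongs to.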

The broad computer metaphor, on the other hand, suggests that cognitive processing can be modeled usefully in terms of information close to the everyday meaning of the word; these are also the terms in which computers can be programmed to process things. Hence, in contrast to previous metaphors, the broad computer metaphor does not refer to the hardware principle that the brain is a physical system, but to software principles implemented in the brain to allow for cognition. Such software principles are, in representational models like SIT, modeled by regularity-extracting operations that yield structured representations, and in connectionist models, by activation spreading through a network (see Slimy, Hilly, and Pixy).
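The connectionist software principle mentioned above — activation spreading through a network — can be sketched minimally. The three-unit network and its weights below are invented for illustration; each unit's new activation is a weighted sum of incoming activations passed through a logistic squashing function, which is one standard (but here merely assumed) choice.

```python
# Hedged sketch (network and weights invented for illustration): one
# synchronous step of activation spreading through a tiny network.

import math

weights = {              # weights[j][i]: connection strength from unit i to j
    "b": {"a": 0.8},
    "c": {"a": 0.5, "b": 1.0},
}

def spread(activation):
    """One synchronous update of all units that receive input."""
    new = dict(activation)
    for unit, inputs in weights.items():
        net = sum(w * activation[src] for src, w in inputs.items())
        new[unit] = 1.0 / (1.0 + math.exp(-net))  # logistic squashing
    return new

state = {"a": 1.0, "b": 0.0, "c": 0.0}
state = spread(state)  # activation flows from "a" toward "b" and "c"
```

Iterating `spread` lets activation propagate further through the network; a stable activation pattern would then correspond to the kind of relatively stable cognitive state discussed earlier.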

A connectionist network typically is a distributed representation which, via combinations of connected pieces of information, represents many wholes. This concept stems from graph theory (a subdomain of both mathematics and computer science), and it is powerful in that the metaphor of interacting pieces can be used to efficiently evaluate many wholes (see Smart processing). Notice that SIT's transparallel minimal-coding algorithm PISA also employs distributed representations (see Hyperstrings), in a way that, just as connectionist modeling does, honors dynamic-systems ideas about cognition. Indeed, regarding cognition, distributed representations seem to constitute the proverbial coin, with (a) dynamic-systems models highlighting its neuronal side, (b) representational models highlighting its cognitive side, and (c) connectionist modeling as a tool to implement realistic simulations of ideas within dynamic-systems theory and representational theory.
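The graph-theoretic point above — that interacting pieces can efficiently evaluate many wholes — can be illustrated with a construction of my own (it is not the PISA algorithm itself). In a layered directed graph, every source-to-sink path counts as one candidate "whole"; dynamic programming over the shared edges then finds the best of exponentially many wholes without enumerating them one by one.

```python
# Illustrative sketch (my construction, not PISA): each source-to-sink path
# through a layered DAG is one candidate "whole"; dynamic programming over
# the shared pieces (edges) evaluates all wholes without listing them.

def best_whole(edge_costs):
    """edge_costs[k][i][j]: cost of the edge from node i in layer k to
    node j in layer k+1. Returns the minimal total path cost."""
    best = [0.0] * len(edge_costs[0])  # best cost to reach each start node
    for layer in edge_costs:
        best = [min(best[i] + layer[i][j] for i in range(len(best)))
                for j in range(len(layer[0]))]
    return min(best)

# 3 edge layers of 2x2 edges: 16 distinct paths, evaluated via 12 edges.
costs = [
    [[1, 4], [2, 3]],
    [[2, 1], [5, 2]],
    [[3, 2], [1, 4]],
]
print(best_whole(costs))  # -> 3
```

The number of paths grows exponentially with the number of layers, while the work grows only with the number of edges — a simple instance of how a distributed representation lets many wholes be evaluated through their shared pieces.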

In sum, the dynamics versus computation debate seems moot in that the difference is a matter of complementarity rather than of opposition. That is, representational, connectionist, and dynamic-systems models seem to form a continuum, and insights from all three approaches seem to be needed to obtain a complete understanding of cognition. Notice that this pluralist approach to cognition does not reflect so much a metaphysical (or ontological) reading of pluralism — which assumes that, eventually, a "grand unifying theory" is possible — but rather an explanatory (or epistemological) reading of pluralism — which, more pragmatically, focuses on differences and parallels between ideas at different levels of description to see if and how they might complement each other.

In my research, the above view on these issues is a guiding methodological principle (see also Marr's levels and Research cycles).

For extensive discussions and references on these issues, see Cognitive Processing 2012