Associative Engines: Connectionism, Concepts, and Representational Change

Connectionist approaches, Andy Clark argues, are driving cognitive science toward a radical reconception of its explanatory endeavor. At the heart of this reconception lies a shift toward a new and more deeply developmental vision of the mind - a vision that has important implications for the philosophical and psychological understanding of the nature of concepts, of mental causation, and of representational change.

Combining philosophical argument, empirical results, and interdisciplinary speculations, Clark charts a fundamental shift from a static, inner-code-oriented conception of the subject matter of cognitive science to a more dynamic, developmentally rich, process-oriented view. Clark argues that this shift makes itself felt in two main ways. First, structured representations are seen as the products of temporally extended cognitive activity and not as the representational bedrock (an innate symbol system or language of thought) upon which all learning is based. Second, the relation between thoughts (as described by folk psychology) and inner computational states is loosened as a result of the fragmented and distributed nature of the connectionist representation of concepts.

Other issues Clark raises include the nature of innate knowledge, the conceptual commitments of folk psychology, and the use and abuse of higher-level analyses of connectionist networks.

Andy Clark is Reader in Philosophy of Cognitive Sciences in the School of Cognitive and Computing Sciences at the University of Sussex, England. He is the author of Philosophy, Cognitive Science, and Parallel Distributed Processing.

272 pages, Hardcover

First published January 1, 1993


About the author

Andy Clark

22 books, 178 followers

Ratings & Reviews



Community Reviews

5 stars: 6 (50%)
4 stars: 6 (50%)
3 stars: 0 (0%)
2 stars: 0 (0%)
1 star: 0 (0%)
Displaying 1 of 1 review
Larry
224 reviews, 25 followers
July 26, 2025
Connectionism and TCP?

Superpositionality: the multiple usability of certain symbolic tokens for different representational purposes. Example: the strokes -, /, and \ make up ‘A’ and ‘V’, and perhaps ‘L’ and ‘I’, but not ‘B’ or ‘C’.

Context-sensitivity: connectionist networks typically don’t use context-free symbols, not even as a basis for the structure of complex representations or thoughts. Instead, in a given context, a representation of ‘coffee’ will have certain connections (spillable liquid, hot, keeps you awake) and not others (speeds up the heartbeat, has a brown color, has a certain smell), without any overarching representational core being encoded in (a token for) the concept ‘COFFEE’. Semantic information is thus said to be ‘distributed’, and through the partial overlap of the surface features that the network’s recognitional capacities track (encoded in patterns of hidden-unit activation), it is possible to build an approximation of the concept ‘COFFEE’ of Fodorian C/RTM. Cf. Barsalou 1987.
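This distributed, context-sensitive picture can be put in a toy sketch (my own illustration, not a model from the book; the microfeature names are invented): a ‘concept’ is just a pattern of activation over feature units, and different contexts yield overlapping but distinct patterns, with no single COFFEE token anywhere.

```python
import numpy as np

# Hypothetical microfeature units; the names are illustrative only.
features = ["liquid", "hot", "keeps-awake", "brown", "aromatic", "spillable"]

def represent(active):
    """A 'concept' is just a pattern of activation over feature units."""
    return np.array([1.0 if f in active else 0.0 for f in features])

# 'coffee' in a mug-spilling context vs. a morning-drink context:
coffee_spill  = represent({"liquid", "hot", "spillable"})
coffee_waking = represent({"liquid", "hot", "keeps-awake", "aromatic"})

# There is no standing COFFEE symbol, only overlapping patterns. Their
# overlap (dot product) is what lets the network treat them as alike.
overlap = coffee_spill @ coffee_waking
print(overlap)  # 2.0: the shared 'liquid' and 'hot' features
```

The approximation of a classical concept then falls out of the family of such overlapping patterns, rather than being stored as a separate token.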

Connectionism certainly does model individually (re)usable items of information, but without encoding them in transportable (i.e. context-free) tokens that directly (i.e. without (formal?) modification) make up the parts of complex representations; this latter option is called concatenative encoding. For example, ‘(P&Q)’ is a token in/of the complex formula ‘(P&Q) v (RvS)’. In connectionist models, then, the informational elements are taken up and preserved in the course of building up complex representations, but the preservation follows another course; both concatenative (i.e. token-preserving) encoding, and this connectionist form of encoding through structure-preserving processing, are forms of functional compositionality.

An example of the latter is Smolensky’s (1991) tensor-product encoding. It models knowledge of an ordered string < a,b,c > by breaking it into two kinds of knowledge: knowledge of sequential position (the role) and knowledge of the letter occupying that position (the filler). Each knowledge item gets a vectorial representation, and a form of vectorial multiplication binds fillers to their roles, so that knowledge of a filler can be recovered from the encoding of the whole. Note that, unlike Fregean two-dimensionalism, the role/filler semantics allows you to go from the resultant string back to the computations that factored into its construction. We could say that Smolensky manages to represent the structure without explicitly representing or using any of its parts, i.e. to preserve structure without preserving symbols.
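The binding-and-unbinding idea can be sketched in a few lines (a toy illustration, not Smolensky’s actual networks; with orthonormal role vectors, binding is an outer product and unbinding is matrix multiplication by the role):

```python
import numpy as np

# Orthonormal role vectors for positions 1..3 (the standard basis, for clarity).
roles = np.eye(3)

# Filler vectors for the letters a, b, c (one-hot here; any vectors work,
# but recovery is exact only when the role vectors are orthonormal).
fillers = {"a": np.array([1.0, 0.0, 0.0, 0.0]),
           "b": np.array([0.0, 1.0, 0.0, 0.0]),
           "c": np.array([0.0, 0.0, 1.0, 0.0])}

# Bind each filler to its role with an outer product and superpose the results.
# The string <a, b, c> becomes a single 3x4 matrix; no token for 'a' survives
# as a separable, concatenated part of the encoding.
T = sum(np.outer(roles[i], f) for i, f in enumerate(fillers.values()))

# Unbind: multiplying by a role vector recovers the filler in that position.
second = roles[1] @ T
print(second)  # recovers the vector for 'b'
```

The superposed matrix T preserves the string’s structure (each filler is recoverable via its role) without containing the filler tokens as concatenated parts, which is the sense in which this is functional rather than concatenative compositionality.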
Another example is a network that takes as input English sentences in both the active and the passive form, yields as output their tree structure, and is able to turn one form into the other. This network demonstrates a capacity to operate on structural aspects of the representations it is fed, aspects that are implicit in those representations. But, Clark notes, they are only implicit to us: maybe they are not implicit to the machine! This would eliminate any fundamental difference between concatenative and functional compositionality. Explicitness becomes a function of the ease, or computational cost, with which a piece of information can be recovered by a given processing system.

Securing structure preservation does not yet secure superpositionality (i.e. multiple usability). Clark’s move consists in shifting the subject of superpositionality from the computational content to the computer (i.e. the network), allowing it to self-modularize and then use the modules to multitask. This multitasking is, in turn, constrained (scaffolded) by developmental trajectories so as to avoid, e.g., unlearning. In an Evansian spirit, conceptuality is shifted away from being a characteristic of systematically structured content to being a kind of skill, or a bag of skills.

Some of the stuff about self-teaching nets flew a bit over my head in the final chapters.
