In 1950, Alan Turing, the British mathematician, cryptographer, and computer pioneer, posed a question: now that the conceptual and technical parameters for electronic brains had been established, what kind of intelligence could be built? Should machine intelligence mimic the abstract thinking of a chess player, or should it be more like the developing mind of a child? Should an intelligent agent only think, or should it also learn, feel, and grow?
Affect and Artificial Intelligence is the first in-depth analysis of affect and intersubjectivity in the computational sciences. Elizabeth Wilson makes use of archival and unpublished material from the early years of AI (1945–70) until the present to show that early researchers were more engaged with questions of emotion than many commentators have assumed. She documents how affectivity was managed in the canonical works of Walter Pitts in the 1940s and Turing in the 1950s, in projects from the 1960s that injected artificial agents into psychotherapeutic encounters, in chess-playing machines from the 1940s to the present, and in the Kismet (sociable robotics) project at MIT in the 1990s.
Elizabeth A. Wilson is Professor of Women's, Gender, and Sexuality Studies at Emory University and the author of Psychosomatic: Feminism and the Neurological Body, also published by Duke University Press.
In Affect and Artificial Intelligence, Elizabeth A. Wilson analyzes “the early affective networks within which mid-twentieth-century computational devices were anticipated and built” and argues that emotion lies at the foundation of building smart machines. She also seeks to make computational objects more engaging for humanities scholars by studying the affective components (curiosity, contempt, anger, sadness, etc.) of the work of three brilliant early computer scientists and mathematicians, Alan Turing (1912-1954), Walter Pitts (1912-1969), and Joseph Weizenbaum (1923-2008), in developing their respective machines. Krzywoszynska (2012) succinctly points out that the most interesting argument in the book concerns how AI, as a concept, came to be constructed in an emotionless and impassive way. Indeed, very early on in the book, Wilson juxtaposes two models of thinking about AI by quoting from Alan Turing’s famous “Computing Machinery and Intelligence” paper (1950): the chess-playing model and the child-like model. Through numerous biographical examples, Wilson convincingly argues that most commentaries about AI are dominated by the chess-playing model, which is all about abstraction, number-crunching, and symbol manipulation, and that this dominance results from our reluctance to admit the role of emotion in the life of smart machines. Two problems have been pointed out by Stahnisch (2011) and Krzywoszynska (2012). First, Wilson’s (2010) book suffers most from the lack of a concluding chapter. Second, both reviewers note that the book is lightly referenced, and that its sheer brevity does not match the formidable size of its topic. I agree with these points, but I add three of my own that I think could make the book’s argument stronger.
First, it is a glaring missed opportunity that Wilson did not engage with the literature on affective computing, given that Rosalind Picard had founded the field in the late 1990s (Picard, 2000). This field aims to give a machine the ability to read, track, classify, and even express human emotions. It is perhaps the most direct attempt at endowing a machine with the “ability to feel.” By 2003, the field had achieved an accuracy rate of 81% in recognizing the so-called eight basic emotions postulated by Ekman (1999) (Picard, 2003). In recent years, companies that sell AI products based on this approach have claimed a 95% accuracy rate in recognizing emotion, though this figure is very much up for debate, given the rise of new theories and empirical results in the study of emotion (Heaven, 2020). Based on Wilson’s arguments in the book, I think she would argue that even for affective computing, the assumption that a brute-force calculation approach on top of big data will eventually succeed is too strong. This way of thinking is often called “behaviorism,” “the doctrine that psychology can only, and should only be, the science of behavior, not of minds; that it can only measure and predict relationships between people’s external circumstances (‘stimuli’) and their observed behaviors (‘responses’)” (Deutsch, 2011, pp. 157-158). I believe different theories of emotion, i.e., different modes of explaining how emotions are formed, expressed, and inferred, will have different implications for improving current machine learning approaches to emotion. The current behaviorist approach might have certain successes given the explosion of big data and computational power, yet it is prone to the biases of algorithm designers as well as biases in the data. Two examples come to mind. Several recent studies on the accuracy of reading emotions across different races show that current machine algorithms are more likely to ascribe anger to Black people (McStay, 2018; Rhue, 2019).
And in cases where the initial dataset is dominated by one sex, such as among the police force, using the current AI approach to track the emotions of female police officers might lead to dangerous errors (Purdy, Zealley, & Maseli, 2019). Another issue I wish Wilson had explored is the acculturation of emotion and affect, i.e., how people learn and unlearn their emotional reactions and how that might influence AI developers. Wilson may have successfully conveyed that all of the pioneers of AI thought we needed to understand emotion better in order to build smart AI, or that there are ways in which emotions had leaky effects in their work. Yet the book never brings up the issue of changing emotions. This is an important issue, as our emotional lives are being disrupted drastically by social media’s hyperconnectivity. The emergence of so many subcultures and ideologies consequently gives rise to new modes of emoting. This issue of acculturation (Vuong et al., 2018a; Vuong et al., 2020; Vuong & Napier, 2015) will keep coming up, and the current behaviorist approach to emotions in AI is inadequate both for explaining the ebbs and flows of our emotional lives and for telling us what the appropriate emotions to feel in certain situations are. Finally, it seems that the book’s biographical historical data could benefit from a more systematic approach to organization. The book talks about emotions and affects in general, but it could have categorized emotions better. There are reactive emotions (anger, disgust, joy) and meta-cognitive ones (doubt, curiosity, wonder, awe). The meta-cognitive emotions help us evaluate our normal emotional responses to situations in life. In other words, they help us take a step back and put our emotional lives in a grander perspective. Alternatively, there could be three levels of analysis: emotions and affects, intuition, and reason, in which intuition provides a bridge from emotion to reason.
Our intuition is a set of basic core values, axioms, or rules of thumb that guides our behavior, emotion, and cognition. Our intuition can be updated and changed through learning. Once it has changed, we might have a different set of emotional responses than our original one. For example, once we learn how to fight, a certain way of standing starts to feel unnatural and dangerous. In Wilson’s book, the affective aspects of the three AI pioneers could be analyzed in terms of how they came to have certain intuitions about a machine that can think and feel. This is also an angle where the perspective of the acculturation of emotions can play a role. These are two examples of how biographical data about emotions can enable a more systematic analysis (Vuong et al., 2018b) of the emotional landscapes of the AI pioneers and how those landscapes affected their work. Moreover, an orderly organization of the data could help shape the narrative arcs of Wilson’s book, as well as a general concluding remark. Beyond the book, I think it could even help improve replicability in the humanities (Peels & Bouter, 2018; Vuong, 2020). Despite such shortcomings, I think Wilson’s book is of great interest to readers of the biographical history of computer science and, more importantly, to humanities scholars who would like to explore how emotions influenced the work of the early pioneers among AI theoreticians and engineers.
This is a historical take on some early scientists and theorists of AI and psychology/neurology with respect to Silvan Tomkins' theory of affect. This book does a lot of digging to situate AI in relation to affect theory. For that, this book is incontestably excellent.
My concern is the lack of a clear proposition. It isn't clear what the author thinks will move affect forward relative to machine intelligence. Instead, it seems that the author is doing Tomkins-like analysis on the minds of the scientists themselves rather than on the machines.
The most striking message this text left me with centers around Wilson’s reference to D.W. Winnicott’s phrase “There is no such thing as a baby.” Wilson explains: “A baby does not exist as a self-sufficient creature, it is always part of a system of reciprocation” (loc1402). This reference appears during Wilson’s analysis of the MIT Kismet project and is used to describe how Kismet’s programming only functions when others are around to feed it stimuli, but the sentiment also applies to everything else discussed throughout the book. We all exist within systems of reciprocation, even after we’ve grown up and become somewhat independent human beings. To go on functioning, infants, just like Kismet, must establish circuits and relationships with the world. Neither human nor robot is a closed system. If we can say “There is no such thing as a baby,” it may also make sense to say “There is no such thing as an utterly autonomous robot.” I’m tempted to take this further and wonder whether there is such a thing as autonomy. Is there such a thing as independent thought? Is there such a thing as a self? Wilson herself does not explicitly delve so deep, but she does hint at how complex and layered our humanity is. Juxtaposed with artificial intelligence, our own internal workings start to look similar.
Even more fascinating than the quandary of what makes me human is the question of what makes other people human. In Chapter 1, Wilson references the idea that other human beings’ assumptions about our capacity for thought actually teach us to think, and how to think. Others assign meaning to our actions, and over time we come to mean those things by what we do. This is a kind of introjection--part of the way we seem to create humanity within each other. Is there any way the same thing could happen between humans and artificial intelligences? Could the “extensive emotional entrainment of machine by man and man by machine” Wilson describes in chapters 3 and 4 result in the same phenomenon? Is the Turing test all we will need to know when, or if, it happens?
This framework of open, inseparable systems of reciprocation returns when Wilson describes the different responses surrounding ELIZA and Kenneth Colby’s similar program. ELIZA was adored and viewed as a valuable confidant. Colby’s version was more often frustrating for, and criticized by, its users. Sensing a multitude of untraceable differences between the two programs’ contexts, Wilson recognizes that “the networked, interpersonal, affectively collaborative community into which ELIZA was released was a crucial component of the program’s therapeutic viability” (loc1852). Much earlier in the book we are told that “the mind is not confined to the brain, the skull, or even the boundaries of the biological body: it takes up tools and technologies out in the world in order to expand cognitive space” (loc549). This brings to mind our modern (or not so modern) habits of filling notebooks and ledgers with important but hard-to-remember data, letting our cellphones keep track of everyone’s phone number, and turning to Google for whatever trivia might come up in our daily conversation. Our intelligence is already wrapped up in the capacities of machines and artificial systems. Echoing and expanding Winnicott’s phrase, we might say “There is no such thing as any thing. There are only communities of things.”
Wilson discusses the early creation of Artificial Intelligence (AI), tracking its construction through the influence of Turing, Pitts (and colleagues), two psychotherapeutic AI inventions, and an over-arching idea of affect. The narrative displayed throughout the book is far more compelling than one would expect in a book focused on technology; the employed narrative not only makes the book more readable from a humanities perspective (as Wilson states she sets out to do) but also provides, I would argue, the necessary construction for a book that hopes to discuss affect--which is often disassociated from discussions of technology and science. Wilson's analysis is thoroughly thought-provoking, leaving the reader with various questions and possibilities for further research by those in the field. Her argument allows for a continued discussion of affect in AI and of the contribution of sexuality and gender studies to our future understanding of technology. Nonetheless, I think the book is somewhat diminished by Wilson's decision not to provide a separate conclusion. While her discussion of Pitts reads somewhat like a conclusion, it didn't feel final enough for me as I approached the end of the book.