A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of what travels under that banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world.
Is artificial intelligence going to take over the world? Have big tech scientists created an artificial lifeform that can think on its own? Is it going to put authors, artists, and others out of business? Are we about to enter an age where computers are better than humans at everything?
The answer to these questions, linguist Emily M. Bender and sociologist Alex Hanna make clear, is “no,” “they wish,” “LOL,” and “definitely not.” This kind of thinking is a symptom of a phenomenon known as “AI hype.” That hype twists words and helps the rich get richer by justifying data theft, motivating surveillance capitalism, and devaluing human creativity in order to replace meaningful work with jobs that treat people like machines. In The AI Con, Bender and Hanna offer a sharp, witty, and wide-ranging take-down of AI hype across its many forms.
Bender and Hanna show you how to spot AI hype, how to deconstruct it, and how to expose the power grabs it aims to hide. Armed with these tools, you will be prepared to push back against AI hype at work, as a consumer in the marketplace, as a skeptical newsreader, and as a citizen holding policymakers to account. Together, Bender and Hanna expose AI hype for what it is: a mask for Big Tech’s drive for profit, with little concern for who it affects.
I really wanted to like this book. I think the authors mean well, and they touch on a lot of important issues. I think this book could and should have been an important contribution.
Unfortunately, the authors let their desire to deploy "ridicule as praxis" run wild, to the point that poking fun at things was obviously more important than any kind of consistency. The book constantly contradicts itself. I honest to God could not tell whether the authors ultimately believe that AI will replace jobs, or not. Two consecutive sentences will happily tell the reader that "Goldman Sachs is saying the quiet part aloud here: we found a way to save a boatload of money by replacing you" and that "This promise of automated replacement is not new, but rather a persistent myth". So, which one is it, the quiet part out loud or a persistent myth? It seems to me that it should be one or the other; I find it hard to fathom how it could be both. This kind of writing made me continually feel like the authors are not going to let any kind of consistency get in the way of a good jab. Ultimately, I couldn't even tell if the authors believe that productivity increases can exist in principle, and/or have happened in the past (I suspect that they do not believe that productivity increases are real, but I found it impossible to actually tell from their writing).
I find this particularly depressing because the authors do at times land on real insights. The book is full of little anecdotes and stories that really could shed light on many issues. They include a mostly excellent discussion of the relation between "intelligence" and eugenics (but then they tell us that the eugenicists are in it for the money, and also that actually they are the ones who are spending the money, and also that actually the money is fake; if there was a consistent set of claims, it passed me by and left me utterly confused). They show us that sometimes something that is sold as a productivity increase is really just a hidden labor force in what they call the "Majority World". Excellent, I want to learn more, but the authors won't stop for a discussion of how we can tell genuine productivity gains from what is merely the digital version of offshoring, so we never know what is what. Surely, not all of AI is just this? But we never learn how the specific instances relate to the general theme, and so it can feel like the authors themselves can't quite get out of that hype bubble where all AI is all the same, even though they tell us quite clearly in other places that AI is not at all a unified thing.
The book is also constantly and loudly explaining things that the authors themselves clearly know nearly nothing about. They seem to think that "auto barons" were a thing "at the beginning of the Industrial Revolution" (wrong century?). They tell us that power looms were water frames (two entirely different technologies, for weaving and spinning respectively). They confidently suggest that the destruction of a single $200,000 Jaguar "may have been one of the most expensive acts of rage against the machine in recent memory, possibly since the early days of Luddite frame-breaking" (a rather severe underestimation of the history of sabotage and machine breaking in labor conflicts). The text is so chock-full of such confident falsehoods and misunderstandings that it did at times make me wonder whether a language model might have produced it, and whether this was all an elaborate trolling gotcha.
Then there is the strange advice. You may be tempted to use ChatGPT, but you should not! Why? Well, because what if they raise the price someday, and also you should never have used Google either, because Cory Doctorow said something about enshittification, and also did you know that it once replicated a piece of code for "fast inverse square root" from the Wikipedia entry of that name, comments (which included cursing!) and all. Again, the desire to ridicule is just so obviously overbearing. I don't find this useful or relatable. It certainly doesn't help me think more systematically about AI.
At best, I think, the book offers "ridicule as comfort", if you can feel like you're in on the joke as a reader. As "praxis", I think it is a failure, at least if you were hoping to get a better sense of what AI is, how the hype works, and how it's actually going to impact the world.
In July 2023, Congressman Ro Khanna used a premium-grade ChatGPT subscription, provided by the US House of Representatives, to generate the draft of a bill (H.R. 4793) named the Streamlining Effective Access and Retrieval of Content Help Act. In the "findings" section, the bill explains that "the use of the latest available technology can significantly enhance website search capabilities, enabling faster and more accurate retrieval of information." However, as Emily Bender and Alex Hanna argue in their book, this wishful statement is logically bamboozling: it is false at the present moment and utterly unfalsifiable for the future. Right now, the newest technologies are not faster or more accurate. In fact, ChatGPT and Gemini are a poor replacement for current search engines, frequently misrepresenting websites, merging reliable and unreliable information from different internet sources, and even generating bogus quotations and citations. As a general statement about the future (i.e. that the most recent technology will always result in more reliable information), it is completely unverifiable and presupposes the reader's trust in the predestined direction of digital research. As a rule, though, the latest technology is often the least tested and least trustworthy.
H.R. 4793 perfectly captures the book's critique of our current AI landscape: politicians, venture capitalists, educators, business owners, and the general public have all fallen victim to hype. Here we have a congressman using AI to generate a law whose explicit textual justification is premised on blind faith in the timeless reliability of new technologies. We, the public, have been so inculcated in the creed of progress and innovation that we assume all new technological developments are miraculous wonders that will benefit society. However, inspect the language closely and you will see the false promises and empty rhetoric: of course it is, on some level, true that "the latest available technology can enhance website searches, enabling more accurate searches", but note how amorphous the phrase "latest technology" is, note how weaselly the words "can" and "enabling" are, and note how subjective "more accurate" is. Even AI-generated hype is vague and hedged. Throughout the book, Bender and Hanna draw attention to the many times AI hype has been exuberantly proclaimed and then fallen short of reality. AI is, above all else, a marketing term for numerous different technologies that will have, and have had, deleterious consequences for the labor market, the environment, education, health care, and policing. It improves little in society, but it does enrich its investors.
Overall, I think this is a strong book that delivers a powerfully worded jeremiad against all the false hype about AI. It's not just the investors who over-sell the capabilities of AI and minimize its shortcomings. Even the "doomers" are complicit: by warning about a potential robot apocalypse or technological singularity, these critics of AI also exaggerate and over-dramatize what AI is and what it can do. They drum up interest and inspire awe in a technology, and by using such sermonizing hyperbole, they actually distract from the very real technical failings and biases in AI. Call it automation, call it a "text-extruding machine", call it a "stochastic parrot", and then you have a better insight into the pitfalls of the technology. The problem with large language models is less an issue of robots developing consciousness and manipulating humanity than a more insidious crisis of synthetic text simulating humans and saturating our digital ecosystem with unreliable, dubious, and biased content, making it harder to know what on the web is true and who is real.
Reading this book and the way it describes LLMs as "text-extruding machines", I was reminded of Borges' story "Pierre Menard, Author of the Quixote". Borges imagines a 20th-century French author who sets out to write the words of Don Quixote; what he produces is a verbatim transcript that is, paradoxically, a wholly new text. While the words are identical, it's a different century with a different literary context, and so Menard's version is actually more subtle than Cervantes', more historical and less fantastical, more ironic, even surreal, and intertextually richer (with connections to Paul Valéry and Friedrich Nietzsche). So it is with ChatGPT. A human sentence and an automated sentence are not the same thing. Spoken by a human, with a particular intention, a sentence will have a particular meaning and contextual resonance; automated by a machine applying certain statistical weights and perceptual patterns, the same exact sentence is nothing more than a probabilistic representation of pre-existing texts matching a particular input. There is no meaning. While Bender and Hanna have many policy concerns about AI, I found their book most compelling when it suggested a particular ethic for reading AI, cautioning readers against confusing AI text with meaningful prose: "Mistaking our own ability to make sense of text output by computer for thinking, understanding, or feeling on the part of the computer is dangerous. At the level of an individual interaction, if we don't keep attention on who is doing the meaning making (us, human communicators, only), we risk being misled by system output, trusting unreliable information, and possibly spreading it." AI is dangerous because it exploits, and erodes, our natural empathy and trust in the written word.
This is not a technical book and it doesn't really describe in much detail the under-the-hood mechanics of large language models. It feels in part like a catalogue of all the recent failings of AI and a take-down of many press releases trying to amp up excitement for AI. I wasn't sure if it needed to be a book (and there is something a little silly and antiquated about a book responding to a technology that is changing in real time). But it offers useful ways for thinking and talking critically about different AI technologies.
A book that promises to debunk the hype surrounding Artificial Intelligence, but ends up falling into another extreme: total denial. Despite the importance of an ethical and informed critique of the dynamics of big tech, Bender and Hanna opt for a moralistic denunciation strategy, ignoring contexts of use, technical advances and the complexity of the cultural phenomenon underway. One is left with the feeling of reading a sermon, not an analysis.
The authors severely underestimate the power, influence, and coming pervasiveness of AI. They highly underestimate the use of AI in work and school. They cite a survey in which less than 20 percent of students use AI (!). That should be a red flag about their questionable sources and lack of awareness of the pervasiveness of LLM/AI use in education. They severely underestimate the ability of AI to replace low-end knowledge workers (paralegals, coders, medical technicians). The authors deem all AI-generated art theft, but they fail to realize that all artists steal from other artists. The authors recount the impact of tech and social media on legacy media, a current echo of the Luddite movement. A lot of concern is given to reinforcing bias and the misuse of AI in legal, hiring, and professional areas. There is a spectacular lack of imagination about how the increased power AI gives to already entrenched institutions will be used in new ways that are hard to fathom. The authors also come up short on prescriptions for the harms of AI: just don't use it (seriously?) and make fun of it. The authors are very short-sighted to call AI a hype bubble when it threatens to change everything in the near future. I am stupider for having read this book. Zero stars. (Better reads: Scary Smart by Mo Gawdat, and Empire of AI for a thorough examination of the dangers to, and casualties among, content moderators.)
I have SUCH mixed feelings about this book. I think the authors are a hundred percent right that AI isn't going to take over the world. It's a machine, programmed with (admittedly complicated) code. And it has had negative effects on society. Namely, enabling students to cheat and not learn the valuable critical skills you learn when you write an essay yourself, and destroying the environment. Do you know how much water those data centers use to cool the computers that power AI? It's a lot!
But the authors' point is often undermined by their extreme leftist positions. They refuse to use the term "Global South" and instead use "Majority World," which I hate. Black, when referring to black people, is ALWAYS spelt with a capital B. In fact, they are so wedded to this that if they quote from a source in which black isn't spelled with a capital B, they actually use a sic after it! Obscene!
This is a worthwhile read to learn about AI hype, but you may be left gagging on the extreme leftism.
The AI Con has an agenda that it gives away in the title: AI is a con. It’s a bubble that’s going to burst. And the most interesting thing about it is what the remains will turn out to be.
It’s a very snarkastic book. There were times when I wasn’t sure what the authors were trying to tell me, but I sure as hell knew they felt it was very smart. The interesting part was that they fall into neither the camp of Boosters (AI is going to solve all our problems and create paradise!) nor that of Doomers (AI is going to turn us all into paperclips!) – their starting point is that AI is a con in the first place, and is inevitably going to be exposed for what it is. (I sort of agree with this.)
What made me somewhat sad was the authors’ advice as to how we can achieve this quicker: ridicule the poor effects, amplify the mistakes, and do not use it. Right. The Luddites’ example is repeated throughout the book. We should be AI Luddites. But the Luddites lost.
My ratings: 5* = this book changed my life, 4* = very good, 3* = good, 2* = I should have DNFed, 1* = actively hostile towards the reader
Points out the hype and boosterism rampant around the tech sector generally and AI specifically, but falls short in elucidating any solutions that have a chance of being enacted. The section on the importance of libraries and librarians was appreciated, though.
This book is a takedown of AI hype. Rather than accepting tech industry claims about artificial intelligence, Bender and Hanna expose "AI" as primarily a marketing term masking corporate power grabs, data exploitation, and the devaluation of human work. I have been reading extensively about AI and its current state, attempting to separate rather outlandish claims and concerns from what is really happening, so when I saw this title, I felt it would add to my understanding of the bigger picture.
The authors argue that "AI" is a marketing term being used to rebrand existing machine learning technologies. They debunk common claims about AI taking over the world, creating artificial life, or becoming better than humans at everything. They employ wit and sarcasm, which helps make their points and makes the book appealing even to those without a keen interest in technical topics.
The authors argue that large language models are systems that can generate coherent language but do not (and cannot) understand the meaning behind the language they extrude. Bender and Hanna contend that the AI technologies are primarily intended to help the rich get richer by “justifying data theft, motivating surveillance capitalism, and devaluing human creativity.” They believe it is the latest trend in Big Tech's drive for profit, with little concern for its impact.
The book examines the current state of AI in industries such as healthcare, education, media, and law-enforcement. It provides examples of products already in place that are unreliable, ineffective, and even dangerous. It also looks at the environmental impact of these technologies, which are causing tech companies to miss their carbon reduction goals.
The authors provide mechanisms to resist the imposition of AI and suggest, where possible, that refusal to use it is one of the main ways to push back. The book encourages readers to think critically about the broader social implications. This book is essential for anyone seeking to understand the current AI landscape. I have read plenty of books promoting the benefits of AI. This book provides the other side of the coin.
Probably the book of the moment. Not overly long, and with a positional stance that is well-defended throughout. The chapter division and chapter titling were a bit suboptimal imo, but every domain in which AI has been forced upon me in my experience is in there.
As Weapons of Math Destruction was to the 2010s and basic machine learning, so this book is to the 2020s and "AI".
So: I loved this book. It set my head straight on a lot of things I intuitively knew, as someone with enough expertise to understand how the "AI" sausages are being made, but was struggling to articulate. And hype is strong! We are social apes! Like - yes - natural language processing (NLP) is amazing and gratifying and weird. (Text is such gnarly data!) But NLP has been eerily amazing for a long while now (I have felt the icy showers for at least 10 years! tf-idf?! whaaat!?), and these stochastic parrots are indeed eerily amazing as well. But! Do they deserve all this hype? Where by "hype", the authors specifically mean: enormous venture capital investments, enormous carbon emissions and re-jiggering of our energy infrastructure, and - perhaps worst of all? - figures of authority waving their arms around a vaguely-defined but definitely civilization-altering "Artificial General Intelligence (AGI)" that is always just beyond the horizon?
The authors argue - aggressively, spicily, wonderfully - that NO! This is all bananas! And I am 100% here for it.
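(A side note, since I gushed about tf-idf above: the technique is charmingly simple. Here's a toy Python sketch - mine, not the book's, with a made-up corpus - of the whole idea: a word matters for a document if it's frequent there and rare everywhere else.)

```python
import math
from collections import Counter

# toy corpus: three tiny "documents"
docs = [
    "the cat sat on the mat".split(),
    "the dog ate my homework".split(),
    "the cat ate the fish".split(),
]

def tf_idf(term, doc):
    tf = Counter(doc)[term] / len(doc)      # how frequent in this document?
    df = sum(1 for d in docs if term in d)  # how many documents contain it?
    return tf * math.log(len(docs) / df)    # rare across the corpus => informative

print(tf_idf("cat", docs[0]))  # ~0.068: "cat" is somewhat distinctive here
print(tf_idf("the", docs[0]))  # 0.0: "the" is everywhere, so it carries nothing
```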
First, they give one of the best plain English primers on neural networks, n-grams, and embeddings. It's one chapter, and it's only semi-technical (intended for non-technical audiences), but it covers, imo, the main ideas in a very clear, comprehensive way. So bravi, there! They also offer refreshing clarity on defining "AI" - a term that is, currently, being abused in everyday conversation, but that normally captures distinct fields in machine learning/comp sci: large language models (LLMs), OCR, computer vision, blah blah, I am tired of linking.
Rather than prognosticating about the future (and, indeed, notice how much AI hype is about the very near future... it's just over the horizon, people!), they instead trace the history of AI (leveling some shots at Minsky and Hinton, wowza), the history of Luddites, and the CURRENT practices of how LLMs are trained, how they are used RIGHT NOW, and how they are talked about. There is a lot about labor (outsourced content moderation is horrible indeed; your boss being sold AI to "boost productivity" == aka, layoffs) and training data bias (duh) and basically plugging the holes in our social safety net with word-prediction machines. All of this was stuff I knew, but they structured it in a clear and helpful way.
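(Again, my own throwaway sketch rather than anything from the book, but it makes "word-prediction machine" concrete: a toy bigram model that "autocompletes" by sampling whichever word tended to follow the current one in its training text. An LLM is this idea scaled up to billions of parameters instead of a lookup table - fluent-ish output, no meaning anywhere.)

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

# the entire "model": a table of which word followed which, and how often
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:] + corpus[:1]):  # wrap around: no dead ends
    follows[w1][w2] += 1

def next_word(word):
    # sample the next word in proportion to how often it followed `word`
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()))[0]

words = ["the"]
for _ in range(6):
    words.append(next_word(words[-1]))
print(" ".join(words))  # e.g. "the cat sat on the mat and" -- grammar, not thought
```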
The one thing I did NOT know, but blew my mind, was the theory of mind stuff and linguistics (Emily Bender is a linguistics prof at U of Washington). Basically, language includes a lot of "guessing what the other person is thinking/trying to say". That's why you can't teach your baby Italian via TV (believe me, I've tried). It's the *interaction* that matters. The social learning. Because LLMs are so good at sounding human, our brains naturally start to "fill in the blanks" about what they're "thinking/trying to say". This is also why people DO NOT ascribe cognition to "AI artists" when they look at those (frankly very tacky) DALL-E, Midjourney, genAI art outputs. No one thinks an "AI" was trying to "express its consciousness" - we see it as an obviously computer-generated, automated mish-mash of training inputs. But LANGUAGE. Our ape brains get real weird there. Hence all the flailing around "omg AGIIIII".
Anyway, I loved this so much. Should be required reading for everyone in tech.
Appreciated the broad look at where AI is now from a more skeptical point of view. As we’re inundated with news and applications of AI and how it’s an inevitability that it’ll replace jobs and infiltrate nearly every aspect of our lives, it was nice to hear how the loudest positive and negative forces in the AI debate are likely both wrong. Similar to a “Last Week Tonight With John Oliver” episode, this book was very informative on certain things, possibly purposefully less informative on others, and ultimately had the essence of mocking something (AI hype) for the author’s and audience’s comfort.
Who would have thought that everything about these "big" automation machines/programs is just bullshit, and that it's just to make richer people richer? Nothing surprising. After reading this book it makes me even madder to see people use those automation machines or, to quote the authors, "these piles of racist algebra" for fun. Even worse, using those things as means to do stuff. Like, can't you write your own email like you are paid to do? What are you gonna do with "more time" at work? Work more? Work for less?
Why is everyone and their mother so stupid and misguided in thinking ANYTHING about those automations called "artificial intelligence" is intelligent or even remotely useful? Did we learn nothing?
I mean, probably not. On that note: Fahrenheit 451 didn't age well :D Anyway: I am avoiding anything labeled "AI" like the plague and I will always think people using this earnestly are below me. And I will keep making fun of everyone.
A well needed antidote to AI hype and a disturbing breakdown of the intentions and ethics (or lack thereof) of the tech bros behind it.
Key takeaways:
* LLMs (e.g. ChatGPT) depend on an army of underpaid and exploited workers in African and Asian countries who are subjected to disturbing content and practices (like trying to get the LLM to tell them to kill themselves so they can prevent this happening for users).
* The data powering AI text and image generators was STOLEN from artists, writers, journalists, and normal internet users without their consent or fair compensation, to be MONETISED by the corporations (Meta, OpenAI, Microsoft et al) and then privatised. The entire business plan is based on theft.
* AI is ridiculously damaging to the environment.
* AI is riddled with errors, is often half-baked, and in no way should replace humans, especially in care roles or roles that require nuance (healthcare and law).
* Most of what is marketed as AI is not even AI.
My only criticism of the book is some of the ridiculous "academic views" that get shoehorned in throughout, but I can look past these as the pros outweigh the cons and the majority of the book is valuable.
I spend a lot of time thinking about AI. I have a consulting company that helps businesses adopt AI - to use the tools to automate mundane parts of white-collar work. I believe in the good AI does and can do, but I'm also aware of, and wary of, the bad that it's capable of.
I approached this book with an open mind. I want to be aware of the criticisms and concerns.
The tone of this book was so condescending and patronizing that I couldn’t take it seriously.
I don’t even know if it’s fair for me to say I finished it. I listened to as much as I could to get the gist. Unfortunately can’t recommend.
Between the errors (mainly NOT about AI itself) and the gotchas in many of the 1 & 2 star reviews, I could go anywhere from 2.5-3.75 stars on fractional rating points.
I eventually, for the second time, decided to do a ratingless review of a book. Maybe it will show up more readily than a 3-star rated review.
Yes, it's a screed. But, some degree of screed is needed on this.
At the same time, the authors ultimately come off as jilted lovers of some sort. And, my favorite Belarusian-American technology sociologist, Yevgeny Morozov, has already nailed this. AI is the most over the top techdudebro version of "solutionism."
On the other hand, many of the 1/2 star reviewers come off as "solutionists," especially big capitalist ones, whose ox is being gored. Others are wingnuts of various types. If I were to give stars, I couldn't go below 3 because of them.
With all of that in mind, and because a commenter on two-star reviewer Nelson Zagao’s Substack mentioned "AI Snake Oil" as a better alternative to this, and because it has the same problems with many of its 1/2 star reviewers?
This is a review of this book and both books' low-star reviewers at the same time, but especially this book's.
Let's dig in.
As noted, I can’t rate it below 3 stars, in part because of some of the types of people it triggers in 1- and 2-star reviews.
One Zionist is triggered over two mentions of Israel, even though they are using AI from Microslob and Google in the genocide in Gaza.
At least three “woke White wingnuts” are triggered. One is a Religious Right wingnut, and the other two are haters of unions and workers.
Partially more thoughtful 2-star reviewers are right-neoliberals, at best. (When you 5-star Matt Yglesias AND Ezra Klein, that’s you.)
And some of these general types of reviews show up on related books like “AI Snake Oil.”
Partially more thoughtful to more thoughtful 2-star reviewers claim this book itself is doomerism. I don’t see it that way. In addition, the authors use that term in a more narrow way — and make clear what that way is.
Beyond that, a lot of what they call AI “Hype” is actually hucksterism, pure and simple. (Related? I have long called the owner of Facebook “Hucksterman” even before his own deep dive into AI.) Much of this crap isn’t even AI, in the sense it’s not actually intelligence. And, like peaceful nuclear fusion power, we’ve been hearing that strong AI is just around the corner for 50 or more years. It still is.
The real issue is that none of these people take totally seriously the destructiveness of capitalism. On Nelson Zagao’s Substack, I mentioned, riffing on Schumpeter, the “non-creative destruction” of much of this. (It’s also interesting that a media professor doesn’t really engage with copyright issues, says a newspaper editor.)
That’s where reality vs woke White wingnuts comes in. AI threatens to greatly expand their foothold; AI in the West in general and the US in particular does just that. In other words, it’s an accelerant, or a potential one, of the worst of human behavior.
Back to Zagao. Sometimes, doomerism, in the general sense, not how the authors use it, IS realism. Like climate change now being a climate crisis, even as neoliberals pretend it’s not. Speaking of, there’s AI’s massive energy and water use, and the authors use the phrase “climate crisis.”
As for complaints the book doesn’t distinguish well enough between generative and predictive AI? It may not be perfect, but it clearly talks about both types of AI, even if it’s not saying “HERE’S GENERATIVE AI” (or PREDICTIVE) every time it focuses on just one of the two.
Doomers (outside the real doom of the climate crisis) are presented as a partial flip side of Boosters, in the idea that “oh, AI is so powerful it could overwhelm us, but if we fix this ‘alignment’ issue, AI’s massive potential will take off.”
Zagao isn’t even fully a semi-hater, but for the semi-haters and haters? Most know better.
Per Upton Sinclair: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
As for the errors I mentioned above? They’re there.
The Luddite movement? Both they and some of their critics in reviews get it wrong. Machines in the textile industry were being attacked already in the late 1600s. ’Tis true that cropping machines, looms, and knitting frames, specifically the stocking frame, were the specific targets at the Luddite peak, but there you go. Also, as Wiki notes, “Luddite” is often used more generically today.
Also, the original Luddites weren't blanket haters of the machines.
Writing errors, sadly, with a linguistics professor as one of the two authors, pop up. It’s not “alright,” it’s “all right.” That’s just one of several. It’s the one that most stuck out.
Now, in the more serious errors?
Outside of AI issues, no, this newspaper editor knows Craigslist didn’t destroy newspapers in general or even newspaper classifieds. Media analysis sites agree; Poynter interviewed Newmark recently, in fact. Monster was much bigger on that. Bigger yet was the slothful response of the newspaper industry. The authors appear to know relatively little about the modern history of print media, other than knowing its implosion and the ownership by vulture capitalism firms of many larger chains.
Missing from the discussion of the “alignment” issue? Mark Twain’s knock on the “moral sense,” via Satan III (a la Napoleon III) mocking the boys in “The Mysterious Stranger” for believing the “moral sense” made humans superior to animals. In other words, humans aren’t such hot shit, and even if the AI hype were real, massive human power extension wouldn’t be hot shit either. In other words, their take on the techdudebro worry about "alignment" isn't insightful enough on human nature.
I agree with their take on effective altruism, but they don’t dive deeper into the problems that utilitarianism has, namely that it cannot generate a view from “nowhen.” In other words, utilitarianism in general, let alone so-called "effective altruism," can't really see 6 months into the future, let alone 6 years, and certainly not 6 decades.
Now, back to what I said up top.
REALLY missing is noting how AI is Yevgeny Morozov’s “solutionism” writ large. In fact, he’s nowhere referenced in the book. Cory Doctorow gets one brief mention early, then a bit more in the end.
Climate crisis? Absolutely real, but the authors don't have a full grasp on that, either.
The Paris Accords are purely voluntary, not legally binding. They're Jell-O of voluntarism, made that way by Dear Leader Obama and Comrade Xi Jinping. Thinking they are binding? Might actually exacerbate the climate crisis.
Again, the authors strike me as a pair of jilted lovers, who think they’ve discovered this grand new secret. The neoliberal-like ignorance about Paris was the capper, but not even citing — and perhaps not even knowing — Morozov was the keystone.
I was looking forward to this book. I agree with the premise that there is a lot of hype around a technology that people call AI, misnamed because it really has no intelligence.
However, that is not what this book does. It appears that authors like these put the word 'AI' on the cover and vaguely talk about some of the issues, but they just want to use the book as a soap box to preach a woke sermon against white people (thereby being racist), racism, colonialism, climate change, and other such garbage. It isn't that impressive or even relevant, and what is worse, it is often filled with outright lies to support this irrelevance.
The author should stick to the topic and produce another book for those interested, called "I hate white people, the alt-right boogeyman that eats children, and all my other irrelevant woke ideas."
"Those who resist the imposition of technology are disparaged as technophobes, behind the times, or incompetent, sometimes even 'Luddites.' But in fact, 'Luddites' is exactly the right term, even as those using it as an insult don't realize it. In the tradition of the original Luddites, writers, actors, hotline workers, visual artists, and crowdworkers alike show us that automation is not a suitable replacement for their labor. We don't have to accept a reorganization of the workplace that puts automation at the center, with devalued human workers propping it up." (66)
"People are far from perfect, subject to bias and exhaustion, frustration, and limited hours. However, shunting consequential tasks to black-box machines trained on always-biased historical data is not a viable solution for any kind of just and accountable outcome." (76)
"The law and language have a special relationship: the law happens in language. Beyond that, it happens in language used in a particular way--lawmakers write policies into existence. On one level, the policies exist only as those words, while on another, they can have enormous impact on individuals, communities, and the entire planet through the ways that they shape behavior. And so the words must be chosen with expertise in order to have the intended effect, not only after the policy is established, but also in the long term, when the legal and social context in which they are being interpreted will certainly have changed. The drafting process therefore should be done with care and not farmed off to a system that can swiftly create something that sounds good." (78-9)
"The last thing we need is shiny tech that promised to obviate the need for the hard work of building inclusive scientific communities and putting those perspectives in conversation." (119)
"Fortunately, there are ways to resist. At an individual level, we can overtly value authenticity. Refuse the apparent convenience of chatbot answers and insist on going to original sources for answers to our own queries." (173)
"At no point, however, does calling any of this technology 'AI' help. This term obscures how systems work (and who is using them to do that) while valorizing the position and ideas of power holders. Speaking instead about automation and data collection helps to make clear who us actually being benefited by this technology, and how. If we are to create a future that is populated with technologies we want, we 'can't only critique the world as it is,' as science and technology scholar Ruha Benjamin has written; we also 'have to build the world as it should be to make justice irresistible.' Part of that vision means technology ought to be created with full participation of the people it impacts. Following disability justice advocates, we say 'nothing about us, without us.'"(190)
"We should stop giving AI boosters the benefit of the doubt. They are indexing their fortunes--and mortgaging ours--on a future that doesn't exist and that won't suit us at all." (191)
"We don't have to accept technologies that will do us harm, no matter how well they are tested or honed. Some technologies--like facial or emotional recognition--should be objected to on the grounds of what they are intended to do, and how they dehumanize and rank individuals." (192)
I think this is a great discussion of the current state of big tech and AI. If nothing else, I recommend the first chapter for a great description and discussion of what "AI" actually is, without requiring much tech knowledge.
Important and well researched and probably something everyone should read. That said, I think the audience might be fairly narrow. The writing style is flippant and often ridicules AI and its questionable applications (on purpose; that's one way they suggest we resist the hype), and so I think this will be easily written off by anyone who already considers themselves all in on AI or is pursuing it for their organization (arguably the people who need this perspective the most). People who are closely following AI developments and are already skeptics will likely know a lot of what's discussed here. But anyone who is newish to AI, learning how it works, and even mildly concerned will probably get something out of this book. It's breezy and easy to read too.
The one thing I would have liked this book to offer is more about fighting the big-tech hype in tangible, specific terms for the average person. It had the potential to explain and focus more on the “fight” (in layperson's terms) and less on the background of the various flavors of AI tech. The tone at times was cynical.
I would like to know how the average consumer of this tech says no to their employer; what skills they need to find/research/assess/crap-detect; and what people can do (other than not using AI) to fight against the hype.
I needed more details (beyond just the book's one last chapter). It sent me down a rabbit warren of trying to find groups that are focused on this aspect.
If anything, the market is ripe for a handbook and resources for employees, educators, and anyone whose profession is being affected by AI, so that they can read and follow how to deal with the onslaught of AI (both in process automation and, worst of all, managers and colleagues banging on incessantly about how good it is) into their lives and workplaces.
Simple terms. Simple explanations. Calls to action.
Also references as to where to go and join with others against AI.
The references in this book are insanely in-depth: nearly 100 pages of references and index (which I loved, because they give you a starting point for more research).
I picked up this book having loved Prof Emily Bender's academic papers on AI. While I can appreciate her attempt to engage with the larger public, I must question at what cost? As a famous figure in academia, Prof Bender offers sharp arguments to her colleagues in Computer Science and Linguistics research to scrutinize pseudo-scientific practices in the so-called field of AI and return to more rigorous standards in question formulation, data collection, result evaluation, and ethical considerations. However, as she tries to repackage the same criticism and ideas in this book, it's not clear which audience she's trying to serve. Is it the working class, who are worried about automation threatening to replace their jobs or make labor worse, per the book's argument? I doubt it, because the mocking tone doesn't match the real economic anxiety around AI faced by the working class. Or is the intended audience the managerial class, who must decide whether to buy into AI tools getting pitched to their organizations? I doubt it too, since the book dwells a lot on critique of capitalism. Or is the intended audience the tech industry, so they could divert their effort to building other tools? I don't think so, since the technical content is very thin, and people in tech might be better off just reading her academic papers, which are mostly publicly accessible. So, ironically, Prof Bender's criticism of the amorphous nature of AI marketing can be applied to this book. It tries to make such sweeping arguments against AI without tailoring its content to any particular audience that it might not be immediately useful or convincing to anyone who will be affected by AI.
Anyone doubting that much of what is driving the Artificial Intelligence (AI) boom is self-interested hype need only look at the share prices of listed AI firms. Nvidia alone has a market cap of more than $US4 trillion - equal to all the funds in Australia's retirement savings pool, the fifth biggest in the world.
AI valuations are so outsized partly due to the ever-mounting, and often conflicting, claims being made for it. On the one hand, we are told that this is a technology that will totally transform and enhance our material world and humanity itself - and on the other that it will bring about doomsday and destroy human life on this planet as we lose control of its implications.
Both claims - the boom school and the doom school - are part of the same cycle of hype being pushed by the promoters of AI, according to the authors of this timely new book, The AI Con - How to Fight Big Tech's Hype and Create the Future We Want. The authors are experts on the subject - Dr Emily Bender is a professor of linguistics at the University of Washington, while her co-author, Dr Alex Hanna, is a former research scientist on Google's Ethical AI team. (I saw Bender present recently at the University of Technology Sydney.)
For all the media noise, AI is essentially a marketing term, Bender and Hanna argue. It is deployed when those who build or sell AI programs stand to profit from persuading us that their products can do things that in fact require human judgement and creativity. Of course, they are perfectly within their rights to make such claims. But the rest of us are equally within our rights in insisting on the application of the same degree of scrutiny, disclosure, and accountability our laws demand when contemplating the deployment of any new, potentially society-upending technology.
Yes, AI is profoundly exciting for many investors, because it offers the hope of large-scale automation of processes now done by humans in decision-making, classification, recommendations, translation, text and image generation, and countless other activities.
And it's also true that many people who use something like ChatGPT for the first time can be left overawed. You issue a prompt - for example, 'write me 800 words on the political tensions and policy issues involved in the energy transition' - and, within seconds, out comes a perfectly structured, grammatically correct and (apparently) soundly reasoned 'analysis'. It strikes novices as almost supernatural. Journalists, like me, wonder whether we will ever work as writers again.
Gripped by FOMO ('fear of missing out'), CEOs everywhere are telling their executive teams to find how such technology can be deployed in their own businesses, either as a defensive or a growth strategy. In this rush, normally rigorous processes can be easily cast aside, while internal naysayers and cautioners are dismissed as Luddites or illiterates.
"As investor interest pushes AI hype to new heights, tech boosters have been promoting AI 'solutions' in nearly every domain of human activity," the authors write. From policing to social services, healthcare, education, law, finance, human resources, transportation, energy, politics, journalism, art and entertainment - machine learning is being promoted as the answer to every known human problem. "For AI boosters, the fully automated AI future is always just about to arrive."
But Bender and Hanna are sceptical, and they have spent the past several years looking at why we should exercise extreme caution before embracing this technology wholesale. For one, they write, AI is not a 'thinking' machine at all. It is not sentient. It has no judgement. It has no ethical dimension. It will not create art or solve intractable problems. It has no imagination. It only works with already known, pre-existing and (often stale) information, the sources of which are never specified or tested.
Ultimately, these are large language models, high-powered replication machines, and souped-up autocomplete programs. They extract text from existing sources on the internet and reproduce it (essentially stealing someone else’s work) in a way that looks and sounds intelligent but that lacks any judgement or human dimension. AI is not about turning machines into humans, but recasting humans as machines. It is as if the entire world is turning autistic - Elon's World.
"AI hype reduces the human condition to one of computability, quantificaton and rationality," Bender and Hanna write. "If we accept that, consciousness can be judged by how it manifests in phenomena that are external to the mind."
Worst of all, AI is about serving the powerful. Not only does the AI hype machine feed the profits of companies in the sector and their investors, but it helps others get rich by giving them cover to steal and launder massive amounts of personal data. It also dangles the prospect of enormous profits for those seeking to replace stable, better-paying jobs with ones that are both more precarious and less fulfilling. ("AI is not going to replace your job. But it will make your job a lot shittier.") And, of course, aside from the insatiable monetary demands of Mammon, the AI hype serves a political purpose, allowing ideologue Ayn Rand-loving libertarians to devalue the social contract by selling the fiction that real social services can be replaced by cheap automated systems.
Watching the world's opinion leaders and many decision-makers fall over themselves for the AI hype is indeed depressing, but the authors end on a hopeful note, pointing out that we do have agency and we can push back. That includes increasing information literacy and asking tough questions of the AI promoters ('What is being automated? What goes in, and what comes out? How is the system evaluated? Who benefits? What are the sources of the information? Who checks it?'). It also means using existing regulation to crack down on illegal claims by companies about what the technology can do and enforcing laws protecting workers' rights.
For all the big numbers being casually tossed around by the boosters, the greedy and the stupid, AI cannot be allowed to blind democratic societies to the governance requirements they insist on in other areas of the economy - in terms of accountability and transparency and disclosure and privacy rights, and in terms of the ethical dimension of our decisions. Even in finance, an area AI is often touted as likely to upend, automation cannot ever replace real-time price discovery, or overcome the perennial challenge of bad data in/bad data out.
Finally, the authors conclude, we should never underestimate the power of just saying 'no'. As with all technologies, particularly ones claiming to be able to completely transform our established ways of doing things, we must not surrender healthy scepticism and human judgment. Ultimately, any technology must serve humanity, not the other way around. AI can make a useful office assistant (I used it to make the image above), but a very bad boss.
At a time when it seems half the world is impossibly infatuated with the grandiose claims being made for AI, this book offers a badly needed, feet-on-the-ground perspective from two informed experts who know how the automation sausages are made.
Will Artificial Intelligence (AI) cure cancer, solve climate change, unleash unseen levels of productivity, usher in abundance, and create heaven on earth? Or will it enslave humanity, turn us into batteries, and eventually lead to our extinction? Linguist Emily Bender and sociologist Alex Hanna join forces and answer: neither.
For the authors, both positions (termed “AI Boosters” and “AI Doomers”) are just two sides of the same coin: AI hype. AI hype is based on the idea that “Artificial General Intelligence” (AGI), or “Super-intelligent AI”, or even “Conscious AI” is just around the corner and is going to change everything; for better or worse. But AI hype primarily functions as a marketing/PR trick, and as a distraction from the concrete harms current systems labeled as AI are already causing, while helping tech companies concentrate more wealth and power.
Of course, machine learning, neural networks, deep learning, etc., have proved to be a powerful set of technologies. However, we have no good reason to believe that scaling up existing techniques (larger transformers, more data, more compute) will somehow lead us to “general” or “super” intelligence. Moreover, the authors argue that not only is there no clear or agreed-upon definition of intelligence -- let alone artificial general intelligence -- but that the definitions often invoked have roots in troubling legacies of eugenics and “race science”.
AI hype is built on vibes, loose assumptions, and fantastical sci-fi scenarios. Sure, some people might genuinely believe in the hype. But for tech companies, both Booster and Doomer narratives work as marketing strategies. An ideological veneer. The narrative goes something like: “AGI is inevitable and unimaginably powerful. In the wrong hands, it could destroy humanity -- but in mine, it will usher in a techno-utopia! So fund me!”. Billions of dollars in speculative investment keep pouring in.
Meanwhile, speculative scenarios obscure the fact that today’s “AI” tools are already causing real harm, exacerbating existing issues, or being used as cheap fixes for complex social problems that they couldn't possibly solve.
Bender and Hanna shift the focus from abstract speculation to material, concrete, real-world consequences: for workers, consumers, migrants, citizens, artists, and the environment, across domains from education and healthcare to journalism, insurance, research, and weapons manufacturing.
“AI” models do not emerge in a vacuum. They enter systems that are already unequal and exploitative, and in many cases, they exacerbate existing problems rather than solving them; such as further devaluing labor (e.g., underpaying screenwriters to ‘polish’ AI-generated scripts), further eroding peer review as overworked researchers outsource evaluations to LLMs, and accelerating journalism’s decline through floods of AI-slop articles.
The authors also touch on Cory Doctorow’s concept of “enshittification”: AI systems might be cheap or free now, but once users, companies, and institutions become dependent, tech companies will monetize: raising prices, embedding ads, and selling our data. LLMs could (relatively) seamlessly blend ads into their responses. This could take surveillance capitalism (or techno-feudalism, if you prefer) and behavior modification to new extremes. This is a familiar pattern. Like Uber, Amazon, Netflix, and others before them, AI companies are following a “blitzscaling” model: burn venture capital money, grow fast, dominate the market, and once you’ve locked people in and driven out competitors, degrade service and jack up prices.
To be clear, Bender and Hanna are not anti-AI or anti-technology; as some reviewers have claimed. They advocate for tools “designed with an understanding of both the needs and values of the people using it and of those it might be used on […] not tools that amplify oppression, centralize power, or destroy the environment. [...] we want to see specific tools geared towards specific tasks”.
Still, I do think they could have done more to highlight legitimate and useful applications of “narrow” AI: navigation apps, spam filtering, accessibility tools, crop monitoring, monitoring manufacturing equipment, medical imaging, and many more. Without such balance, some readers may come away with an uncritical anti-AI stance.
Moreover, while the book is wide-ranging, unfortunately, there's not enough room to go deep on any one topic. Thus, the discussion can feel under-developed in places. For example, when the authors address “AI art”, with generative models trained on scraped artwork without consent or compensation, they briefly mention class-action lawsuits and copyright enforcement as possible remedies, but don’t consider how stronger copyright protections may end up helping large rights-holders more than independent artists. But still, I appreciate that the book serves as a decent entry point into several key issues and remains accessible to a general audience.
Another (related) weakness: the book is light on technical details (~4 pages of very high-level coverage) on how deep learning actually works. You don’t need a full dive into linear algebra, calculus, or optimization algorithms -- but I think that some basic technical understanding is important to grasp how current models are trained and evaluated, and to better understand their actual capabilities and limitations.
In the final chapter, the authors propose some practical tools to fight against AI hype and concrete harm: ask the right questions, refuse to use LLMs, ridicule problematic uses of AI, apply (and update) regulations especially around data rights, privacy, and labor. All good suggestions, but I think we’ll need to go further. As long as tech development is driven by speculative investment, we’ll keep moving from one hype cycle to the next. Until we socially own and democratically control how technology is developed and deployed, it won’t truly serve the vast majority of us.
The book is not without its flaws, but if you are interested in a critical social analysis of AI and AI hype, this is worth a read.
The subtitle of this book should have been “The Woke Left’s Manifesto on AI Inequality”.
Although there were some points made here that are totally valid, the foundational worldview of the authors (anti-white, anti-male, anti-capitalist, anti-American, and anti-human) skews their understanding of the world and results in a lot of their analysis of the problems, as well as the solutions of AI, being out of alignment with actual reality.
This one definitely resides very near the bottom of my AI-books list.
Favorite Quote: “So-called AI’s dirty little open secret is, none of these tools would work if it weren’t for a massive, underpaid workforce in the majority world (that is, outside the US and Western Europe, in places like Kenya, Venezuela and India).”
This book focuses on the Hype of what is called AI, with a dual emphasis on definitions and tactics used to sell it. It also provides good anecdotes and referenced studies. Around that is a counter-hype of ridicule and some hyperbole, which detracts from the authors' message.
Definition is the key lever here, and that's not just pedantic. This starts with the definition of hype, which very much describes current AI. Fear of missing out (FOMO) and the near ubiquity of AI tools harkens back to other hype cycles. Maybe this time is different?
Folks describing current AI are just as buffaloed as Weizenbaum's secretary was by ELIZA some 60 years ago. There is no "intelligence" behind the tool, and it seems unlikely there ever will be. For that definition, the section goes back to definitions of intelligence (and related eugenics) and ends up near the famed Turing test. The failure of an observer to tell the difference between man and machine does not prove intelligence.
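For anyone who hasn't seen how little machinery it took to buffalo people: the heart of an ELIZA-style program fits in a dozen lines. This is my own toy Python sketch, not Weizenbaum's actual rule set (his also reflected pronouns, turning "my" into "your", and so on), but the trick is the same: keyword matching plus canned templates.

```python
import re

# keyword pattern -> canned reply template; the whole "therapist" is this table
RULES = [
    (r"\bi am (.*)", "How long have you been {0}?"),
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bbecause\b", "Is that the real reason?"),
]

def eliza(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(eliza("I am unhappy at work"))  # -> How long have you been unhappy at work?
print(eliza("The weather is nice"))   # -> Please go on.
```

No model of the world, no memory, no understanding - and yet people in 1966 poured their hearts out to it.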
Examining the customers and targets of AI also shows a lot of cracks. As an example, the folks who don't spend money on social services also fail when they try to bolt AI on. While there is potential for aspects of AI tools in some places, those places are the ones that are already spending money. Handing a hammer to a monkey doesn't make that animal a carpenter. This is true across multiple professions and is covered in multiple chapters, with many examples.
The book also has some words on the cost of AI, but probably not enough. Maybe calculating the cost is not the best way to shut down the hype, but I think it's a very important consideration. I did appreciate the sections speculating that learning art and writing is best done by actually doing it, not just generating a prompt to do it.
As I mentioned above, the authors focused on ridiculing the hype, which may be a good way to shut it down. Not sure, and this counter-hype didn't work for me. The more effective thing right now seems to be companies actually measuring their costs versus gains and finding AI lacking, again.
AI, as we hear about it and use it today, is more hype than a real threat to humanity. That’s the main idea the authors—a linguist (Emily) and a sociologist (Alex)—try to explore in this book.
One of the facts big tech companies don’t talk about openly is how energy- and labor-intensive these large language models really are. That’s also why many of these companies fall short of meeting their own emission reduction targets. In one of the examples shared in the book, the carbon footprint of training a particular large language model, including the manufacturing of equipment, was estimated at 50 tons. That’s roughly the equivalent of a dozen flights between New York and Sydney! It explains why some big names in the game have already admitted that they can't reach their climate targets because of heavy investment in AI.
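(A rough sanity check on that comparison, using my own ballpark of about 2 t CO2 per passenger for a one-way New York-Sydney leg, i.e. about 4 t per round trip:

$$12 \times 4\,\mathrm{t\,CO_2} = 48\,\mathrm{t} \approx 50\,\mathrm{t},$$

so the dozen-flights equivalence works out if each "flight" is read as one passenger's round trip.)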
The book doesn’t argue that AI is useless or inherently dangerous. Instead, it highlights that while AI can be a powerful tool, it’s important to see through the hype. The authors offer questions and strategies to help readers critically assess what’s being sold to them. For example: Are these systems really being described as human? How are they being evaluated? What are their labor and data practices? And what are the ethics behind the way these models are trained? It also reminds us, as vulnerable consumers of these products, that we can always say no. Additionally, it compares the inconsiderate use of AI with fast fashion!
The authors believe that, like many overhyped technologies, the AI bubble will eventually burst. The problem is that those pouring money into it don’t seem to be thinking about the people who’ll be affected when it does.