AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

Confused about AI and worried about what it means for your future and the future of the world? You're not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works, why it often doesn't, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don't work, and probably never will.
While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it's being built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice. The book explains the crucial differences between types of AI, why organizations are falling for AI snake oil, why AI can't fix social media, why AI isn't an existential risk, and why we should be far more worried about what people will do with AI than about anything AI will do on its own. The book also warns of the dangers of a world where AI continues to be controlled by largely unaccountable big tech companies.
By revealing AI's limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and home.

360 pages, Hardcover

First published September 24, 2024

685 people are currently reading
6,139 people want to read

About the author

Arvind Narayanan

4 books · 41 followers
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He was named to TIME's inaugural list of the 100 most influential people in AI.

Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was also among the first to show how machine learning reflects cultural stereotypes.

He was awarded the Privacy Enhancing Technology Award for showing how publicly available social media and web information can be cross-referenced to re-identify customers whose data had been "anonymized" by companies.

Narayanan prototyped and developed the Do Not Track HTTP header field.

He is a co-author of the book AI Snake Oil and of a newsletter of the same name, which is read by 50,000 researchers, policymakers, journalists, and AI enthusiasts.

Ratings & Reviews

Community Reviews

5 stars: 398 (26%)
4 stars: 675 (44%)
3 stars: 367 (24%)
2 stars: 48 (3%)
1 star: 19 (1%)
Jason Furman
1,379 reviews · 1,543 followers
November 2, 2024
Some of AI Snake Oil is very good, including its skepticism about AI hype, an excellent chapter on the limits of AI doomerism, and a focus on how AI is used by humans rather than its autonomous capabilities. But much of the book—including its ultimate recommendations—is deeply misguided, reflecting a misunderstanding of capitalism, a mix of concerns not really AI-related, a one-sided review of the evidence, and a failure to compare AI to the alternatives—namely flawed humans and non-AI technologies. They are also more skeptical about progress in AI than I would be, though I don’t have strong convictions about who is right on this.

The book opens well by pointing out that AI is an overly broad term which confuses debates about it, analogizing it to a world where we only used the word “vehicles” and some people arguing for their efficiency meant bicycles while their debate opponents were focused on SUVs.

They distinguish between predictive AI, generative AI, and social media content moderation AI (in a chapter that feels out of place). They argue that much predictive AI is based on unreproducible papers with several errors, including testing on training data ("leakage"), and that these systems lack structural models, so they break down when behavior changes. Moreover, companies deploy and sell systems that are often untested and sometimes aren't even AI (occasionally with humans behind them) as part of widespread AI hype.
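(To make the "leakage" point concrete: the sketch below is a minimal illustration, not from the book or the papers it critiques, assuming Python with scikit-learn and a synthetic dataset. A model scored on its own training data looks nearly perfect, while the same model scored on held-out data reveals how weak the underlying signal really is.)

    # Minimal sketch of "leakage": evaluating on training data inflates accuracy.
    # Dataset and model choices here are illustrative assumptions, not from the book.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # A noisy synthetic prediction task: only a weak true signal exists.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=2,
                               flip_y=0.3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Leaky evaluation: scoring on the same data the model was trained on.
    print("accuracy on training data:", accuracy_score(y_train, model.predict(X_train)))  # close to 1.0
    # Honest evaluation: scoring on data the model never saw.
    print("accuracy on held-out data:", accuracy_score(y_test, model.predict(X_test)))    # much lower

The same inflation shows up more subtly when preprocessing or feature selection is done on the full dataset before it is split.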

I found this mostly compelling but disagreed in places. They criticize flawed machine bail decisions without engaging with literature showing how it can improve or comparing to how terrible human judges are with their limited time and information. They discuss an AI hiring system that can be gamed based on attire or interview language—again something humans do too, probably worse. They’re overly fatalistic about prediction: while perfect prediction is impossible, we can do better than coin tosses and provide uncertainty estimates for users to weigh errors.

They’re more positive about generative AI except regarding what they view as large-scale intellectual property theft. While I haven’t settled my views here, I’ve long thought IP protections are overbroad and hinder innovation—my instincts lean that way on generative AI too, though I’m uncertain. People get enormous growing benefits from generative AI; if stricter IP protections just shifted rents that might be acceptable, but radically reducing innovation would be problematic.

Their chapter on AI existential threat is masterful and should be widely read. They effectively critique doomer arguments: they expect only incremental progress toward AGI, note that AI risks can be fought with better AI making unilateral disarmament counterproductive, argue “alignment” is premature given unknown future technologies, contend that paper clip-maximizing AI couldn’t exist without human-like understanding, and emphasize focusing on human misuse through measures like restricting bioweapon ingredients.

Their deeper flaws emerge from skepticism of capitalism that leads to indefensible positions. They criticize OpenAI’s Kenyan data annotators earning $1.46-3.74 hourly while engineers make nearly million-dollar salaries at an $80 billion company. This is pure demagoguery—the relevant comparison is to these workers’ alternatives, not to AI engineers. Even their criticism that “data annotation firms recruit prisoners, refugees, and people in collapsing economies” could be read positively: AI creating employment for the least employable is potentially beneficial.

The social media chapter focuses on Facebook’s Type I and Type II content moderation errors, but as they acknowledge, this mostly reflects human judgment rather than AI. They offer no real alternative to this complex task, noting Facebook couldn’t afford to handle 83 Ethiopian languages and moderate rare but crucial events. They praise Mastodon, which is far less usable than X and, by their admission, may not be scalable.

More broadly, they seem nostalgic for public provision, nonprofits, and smaller companies. They note “The early internet was funded by public funds and DARPA... before 1990s privatization,” overlooking that pre-privatization internet was barely accessible and limited in utility. Similarly, criticizing large AI companies ignores that they’ve produced the major breakthroughs.

They argue AI progress will be slow because profit-focused companies won’t invest in understanding how AI works. While true for some firms, well-funded companies with long-term horizons are likely to invest in understanding if it creates competitive advantage.

Some recommendations are sensible—like improving research reproducibility and enforcing deception laws. Others seem tangential, like supporting randomized college admissions above certain thresholds—an AI-irrelevant proposal they wouldn’t extend to bail decisions. Ultimately, what people attempt with AI, especially predictive AI, is challenging—but the alternatives are often worse.

[DISCLOSURE - I asked Claude "Can you do a very, very light edit of this" and posted that edit. I write these reviews very quickly, originally just did them for myself, and often have typos. Hopefully this eliminated the typos and improved the language a little--but it also may have introduced some changes I didn't love because I didn't check Claude's edit carefully. My hope is that the improvements outweigh the worsenings--but even better would have been if I had spent more time to take advantage of, but not fully follow, the AI edits.]
Sebastian Gebski
1,188 reviews · 1,340 followers
Read
January 18, 2025
No star rating, as I've realized I'm not the target audience for this book (so it could have been misleading for someone who is).

Word of caution: this is NOT a book for a technical audience. This is not a book for someone who already has some engineering understanding of ML, recommendations, statistical prediction, etc. This is not a book for folks who already understand the basics of Gen AI: transformers, attention, token generation. This is a book for laymen (there's nothing wrong with being a layman) who need a quick upskilling & are interested in the topic mentioned in the subtitle.

What did I wish for (& didn't get here)? I'd really appreciate some deep dive into the capabilities of modern models - e.g.:
- reasoning
- non-textual knowledge
- multi-sensory knowledge
- brain throughput vs actual "AI" throughput
- glass ceiling in Gen AI
- why AGI can't be achieved with the architectures we have
- what did we learn (if anything) about consciousness - thanks to Gen AI

The most interesting consideration I've found here was about randomness ("luck") and why one can't rely solely on statistics (I don't just mean incorrect interpretation of statistics).

In the end, I can't recommend this one. But I can imagine that someone who's not into software/data engineering may have really enjoyed it.
Beauregard Bottomley
1,201 reviews · 817 followers
January 10, 2025
Sturgeon's law says that 90% of everything is crap, and it's clear from this book that 90% of generative AI is crap, and that 99.9% of predictive AI is worse than crap.

Harari's "Nexus" is the perfect illustration of a crap book on AI, while this book falls into the not-crap camp.

AI can be useful. Humans don't need AI to do bad things, but it helps hide their complicity behind clever programming.



Vinayak Hegde
707 reviews · 93 followers
December 25, 2024
The book AI Snake Oil offers a comprehensive and critical analysis of the current AI landscape. Its strongest feature is the numerous compelling examples debunking exaggerated claims made by various AI products. The authors meticulously dissect these assertions, exposing fallacies and providing a more grounded perspective. They then delve into the root causes of misinformation and hype, highlighting how different types of AI are often conflated into a monolithic entity.

The authors categorize AI into three primary types: Prediction AI, which uses datasets to make forecasts or predictions; Generative AI, which is trained on data like text, images, or video and can produce new outputs such as text or images based on prompts; and Content Moderation AI, designed to monitor and manage online speech to prevent harmful content. This structured framework helps demystify AI’s capabilities and limitations.

The book’s analysis is both nuanced and contextual. Through numerous examples, the authors demonstrate how many AI models struggle with accuracy, especially when faced with evolving or culturally specific data. They highlight issues like the reproducibility crisis in AI research, where lack of access to code or datasets undermines the credibility of results. Another critical concern is data pollution, where training data unintentionally contaminates inference processes, further casting doubt on AI's credibility and reliability. These issues often render AI less a science and more akin to alchemy.

The authors also explore AI’s societal impact, from displacing workers to increasing productivity among existing employees, which can lead to reduced hiring. Certain tasks lend themselves more readily to automation, altering job roles and reducing worker bargaining power. This shift often benefits employers, reinforcing systemic inequalities.

The book concludes with potential remedies, advocating for thoughtful regulation, industry guidelines, and the introduction of randomness to disrupt feedback loops in processes like hiring and college admissions. However, the authors caution against pitfalls like regulatory capture—where regulators prioritize industry interests over public good—or regulations that favor established players, stifling competition and innovation.

Overall, AI Snake Oil provides a much-needed reality check, countering AI hype with informed, expert insights. It’s an accessible read for anyone seeking clarity on the promises and pitfalls of artificial intelligence.
Jean
190 reviews
August 28, 2024
Fantastic book! EVERYONE should read it. Clearly and thoroughly sorts out the reality from the hype, explaining why we are where we are (some extremely problematic uses already exist, hence: "snake oil") and what the future may hold (no, giant, sentient robots aren't taking over). Excellent insights and discussions of the different forms of AI (predictive, generative, content moderation), the problems and promise of each, and how we might steer in the right direction.

Read this book if you're curious about AI, afraid of AI, have to make decisions about implementing AI, have kids, use social media, make policy, vote, wonder about AI in your work, are a journalist, are interested in tech, or just enjoy high-quality expository writing. Then sign up for the authors' newsletter.

I read an advance copy and reviewed it here: https://www.practicalecommerce.com/ai...
Ali
1,778 reviews · 150 followers
April 17, 2025
This is a mostly practical approach to writing about the current state of AI, with a focus on generative and predictive technologies. While pitched at those who are AI averse, the book's authors are self-described enthusiasts about the possibilities of regulated generative AI, while roundly condemning all forms of predictive AI and noting the many abuses that can come from gen AI in the wrong hands. The book is particularly useful for its clear account of current fields of application (e.g. facial recognition, content moderation, sentencing or child removal predictions, creative endeavours such as image generation, coding, chatbots), and a handy chart with their assessments of both the accuracy and the harm potential of current fields of application. The authors have deep knowledge of these applications, and that shows in the detail and analysis of them.
In many cases, such as predictive uses and content moderation, they rank the tech as both inaccurate and harmful. In others, like facial recognition, it is harmful despite being relatively accurate. They are kinder towards the creative and assistive uses of generative AI - I did think they skimmed over the issues around creator rights and how the technology has been trained on the work of creatives with no recompense to date. But I also feel that their acknowledgement that Gen AI can be used by a person to speed up certain kinds of work is undeniable. This leads to a strong set of recommendations for how to regulate, shape and control AI. They chuck some interesting extras into this, including an argument for randomised ballots in university and other highly competitive selection processes, as a way to stop an inevitable arms race in which the wealthy and privileged put more and more resources into securing their spots at the top of the tree. These are sensible, but of course rely on having a government which is interested in protecting just and happiness-supporting outcomes - or a population which can call a government to account for such things.
Anthony Moreau
34 reviews
November 27, 2024
A timely and necessary book.

Should be mandatory reading for anyone in a position of power in the private or public sector: it's clear we shouldn't allow any entity to put "AI" systems into decision-making positions without a clear understanding of how (and if!) they work.

Overall a great discussion of the distinct systems labelled as AI and a great antidote to the hype cycle we have been in for the last two years. Healthily skeptical, but with a touch of optimism as well.

The final two chapters alone were worth the price of entry!
Gorab
831 reviews · 145 followers
April 18, 2025
3.75

Highlights: Title content with case studies. Eliminating bias.

Why was it picked?
Reco by @Manish over a casual AI-related conversation. Thanks bhai and do keep the recos coming.

What's it about?
A beginner-friendly reality check on AI, which is a big umbrella term. No prior technical knowledge needed for this book.
Types of AI versus automation. Where it excels and falters. Augmentation vs replacement. Why there can't be a yardstick to ascertain the authoritativeness and authenticity of predictive AI with repeated scientific tests. Skepticism about its accuracy, ethics, regulation, etc.

What I loved:
Thought process on debunking the myths.
Intro and overall structure.
Case studies, esp where AI has failed.

What I didn't like:
Much of the content on Generative AI and its risks was pretty common, with no takeaways. But it was needed here for completeness.

Overall:
Quintessential read in current times, when almost everything is "AI powered". Equips you to think about where to look under the hood of such claims and what to look for.
Ensures that you don't buy AI Snake Oil (the scam, not the book!)
Subashini
Author · 6 books · 174 followers
January 23, 2025
I appreciated the insight into differentiating predictive vs generative "AI" but in general I found that this book suffers from the kind of breathless undergrad essay voice where they have many things to tell you but not enough time to do so, or something, so it feels very surface-level in parts. And overall I was a teeny bit troubled by how sanguine they were about the potentials of generative AI without having anything to say about the environmental costs of the tech. I know I read this book when I was tired and drowsy at night but I don't quite think I missed out on chunks of this book because I was in a stupor, either, so! A pretty hefty and telling omission. A little bit too "breathless about tech, and yeah capitalism is bad but we can do better!!!" for me. Maybe the book I want is a materialist narrative about "AI", and machine learning tech in general, and this book is not it.
Timothy Grubbs
1,264 reviews · 6 followers
May 27, 2025
The perils and potential of AI…and words of caution due to the numerous potential for fraud…

AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor is a decent look at the modern use of AI and its history…while also calling out the abusers and shysters who only want to use AI to profit at the expense of others…

This book covers a wide range of AI issues covering generative AI (which “creates” material) and predictive AI (which can’t actually predict the future).

Both forms of AI have different concerns for the present day, though some of these overlap.

The writers don't limit themselves to recent years; they also dive into early research dating back to the mid-20th century…as a natural outgrowth of early computer technology.

Throughout the chapters (broken up by subject), the writers also freely use examples of AI in pop culture, or compare certain fictional ideas to what people claim AI is capable of. It helps the reader understand how something that is AI might pop up in the real world even if our brains do not necessarily think of it as AI (and there are even comments about what "counts" as AI).

Naturally much coverage is given to con artists and other “entrepreneurs” that market AI to solve all of society’s ills…often knowing full well they are full of crap…

Worth trying out if you have an interest in potential abuse of AI as well as how it’s been mishandled in the past…
Anika (Encyclopedia BritAnika)
1,429 reviews · 21 followers
February 26, 2025
I know I hate AI. I know it's bad and it drains the planet's resources and it is also racist a lot of the time. But I didn't know all the whys. This book does a fantastic job of explaining different types of AI, what they can and can't do, and what people pretend they can do in order to grift others. Predictive AI can't predict anything, and is harming people because insurance companies are using it anyway. AI is targeting people of color for negative outcomes. AI IS VERY BAD. But not all AI. There's some we already use that is helpful, but the new stuff is bad news. And I'm really glad I listened to this to help me understand better. Very easy to understand for a non-science/tech person like myself. Strong recommend.
Trina
1,263 reviews · 3 followers
September 10, 2024
Having read several books on AI in the last few months, this wasn't groundbreaking for me. I do think some of their approaches were different (certainly less doom and gloom than some) and I thought the final part where they imagined the world in two different ways depending on how we deal with AI was interesting. I am still waiting for a real explanation of why LLMs were able to train on data that can now be used in perpetuity without compensation to the original creators.
Donald Schopflocher
1,427 reviews · 33 followers
August 9, 2025
This book is less about what AI is, and more about how AI has been and should be evaluated. At present, ‘hype’ about AI, most often exaggerated or false, is rewarded, while serious analysis of its true capabilities is not. This is a very important message, and virtually the entire work is oriented towards delivering it.

The authors distinguish between predictive ai, generative ai, and content moderation ai. The ‘Predictive ai’ umbrella the authors use is very broad and I suspect they would think it covers almost every computer system that outputs a prediction. At the same time, an enormous amount of work has been done by statisticians and psychologists (especially those involved with testing) about prediction and about how to establish the validity of predictions. This is not well known by AI researchers, AI marketers, or indeed the authors themselves. It is therefore no surprise that AI hype, bordering on fraud, is used to sell predictive systems. While the authors do not discuss the specifics of how to evaluate predictive ai, as I wish they had, they do call for more support of scientific research to evaluate AI systems, and especially claims about their capabilities.

Long ago I was instrumental in creating a predictive system that is still in operation. Because I was a scientist knowledgeable about Test Theory, the system was periodically evaluated and re-evaluated against real world data, and continues to work quite well. It was based on statistical equations but since it was computerized, I would not be surprised if someone took this or a similar such system and claimed, inaccurately, that it was ai.

The authors note the intrinsic difficulties in performing content moderation with AI, noting especially that what should and should not be allowed on social media networks, for example, is dependent in the final instance on policies made and modified by human beings.

They are much more optimistic about generative ai, while not embracing it outright, and they present valuable strategies to employ when using generative ai, like chatGPT, to test and verify its accuracy.

Along the way, the authors discuss the dangers posed by ai, such as the Terminator scenario where ai tries to eliminate humankind, and whether or how to prevent it. They also discuss limits that current ai has that make these scenarios unlikely. One of these is that researchers do not understand in detail how ai works.

I also had a minor role in this line of research when, at my suggestion, a research group tried to understand how a simple neural network was actually solving a task by examining the ‘mechanisms under the hood’. We knew at the time that this would not be a popular line of research, but the authors’ call for renewed efforts to understand the precise ‘how’ of ai mechanisms was welcome.
Jens Hieber
516 reviews · 8 followers
July 13, 2025
This was pretty much what I was expecting; not anti-AI but aware of the issues. True to the subtitle, the authors' goal is to talk about the limitations of AI, which requires them to know and present an accurate understanding of what the various forms of AI are, how they work, and what they're capable of. I found this informative, though I expect those more familiar with the field than I am would find it rudimentary. This functions as a good primer for how to spot AI hype, the issues with AI (and how in many cases it's not AI that's the issue; rather, it's amplifying an already existing issue in society), and some suggestions for AI research (which seems very flawed as it is) in particular.

I appreciated the lack of panic the authors brought, along with their expertise, clear writing, and copious examples (some of which I'd come across before). I felt that their suggestions in the final chapter were a bit vague and one of the visions they presented at the end felt a bit utopic.
Ari Damoulakis
401 reviews · 22 followers
October 6, 2024
I am really not exaggerating: for me this is a very important book, and I hope all of you, my GR good friends, will read it so you will know to be careful, because many dangerous things could be done by humans who are not careful with AI.
Listen, I love AI.
As a totally blind person it has already done many amazing things for me and wonderful changes in my life, but even I definitely also know that it has problems when, for example, it tells me an object is something which it isn’t.
I will rely on it even more once I achieve my plan to buy Envision Smart Glasses soon, which I am so super excited about.
But this book will also show you the terrible consequences AI could have for many humans, especially if other people use it wrongly, or maybe even deliberately skew models to take advantage of or defraud other people, or if AI is unintentionally misused because biases are accidentally built in or by mistake many factors aren’t taken into account.
Or if humans start relying on flawed AI and do not apply their own judgments to many situations.
And as for predictive AI? Well, AI makes mistakes now. We as humans are sometimes irrational, and AI could create wrong futures even if it could predict, haha.
Better that we humans live our own lives, and let us hope we just don't become cogs in decisions made by large companies who have too much faith in the future their AI might try to predict is best for us.
And yes, I am still mad at facebook’s AI for refusing to let me comment on my friends’ posts with what we all know are stereotypical South African geographic jokes. You know, you can’t even use ‘I’ll kill you,’ in a sentence on comments to friends you’ve had over 20 years without the AI refusing to post it because it thinks I am issuing death threats or hate speech.
Becca
79 reviews
September 5, 2024
This was a great read! Helped expand some of my ideas and understanding of AI, as well as temper some of the things I hear floating around.
Michael
357 reviews · 11 followers
November 17, 2024
Sadly this book was really bad. It's just lots of classic "here's how tech is bad". It's somewhat well summed up in the Ted Chiang quote towards the end that all fears about AI are actually fears about capitalism. These are not interesting or insightful. A book written in 2024 that uses studies from GPT-2 and doesn't discuss hyperscaling or agents or Llama just isn't contributing anything new to the conversation. There's perhaps some utility in separating out generative and predictive AI (and for some reason adding social media moderation as a third thing?) but it totally fails to explain the parts that make this new era new. It's not just ML 2.0. It's that we have fundamentally new capabilities because of scaling and advances in the algorithms. It's that we have better chips. It's that we can eliminate many coordination and interoperability challenges by having computers interact more like humans. It's that we can unlock the potential of smart but untrained people because most innovation is just pattern matching applied in new ways.

Sad because I was hoping for much more.
Yaaresse
2,151 reviews · 16 followers
February 13, 2025
12-2024 Need to park this one for a bit. Library wants it back and there is a waiting list. Pretty interesting so far, though.

Update: 2-2025
What is AI, how does the hype compare to the reality (so far), and what could possibly go wrong when we all go hog-wild over something that most of us really don't understand enough to know what it's doing? The authors attempt to cover the history (older than you think) of AI, how it is being used (and misused), and the strengths and weaknesses inherent in it. For the most part, they do a good job. They are decidedly not optimistic. After all, whenever we have a chance to use something for good or weaponize it in ways big and small, we tend to pick the latter.

Even with the bias (to which they admit), it's a good read for anyone either using or trying to avoid AI. Also a good read if you ever need to apply for a job, need a loan, need medical care, or are just trying to figure out why Netflix keeps suggesting movies you have zero interest in watching.
K.
4,610 reviews · 1,144 followers
April 20, 2025
Content warnings: child abuse, racism, mentions of sexual assault, mentions of eating disorders

A truly fascinating (and horrifying) insight into AI and what it can and can't do.

A quote that's stuck with me: "Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it's always convincing, so it's hard to tell the difference."
Rachel Pollock
Author · 11 books · 79 followers
April 14, 2025
I’m doing a deep dive on reading books about AI, and this one is fantastic for explaining the different kinds of technology referred to as AI, how it works and was created, what it can do and what it can’t. Highly recommend.
Panz
176 reviews
May 15, 2025
Took my time in reading through this one. This book is well researched, given that this field is rapidly evolving. The points feel a little repetitive, though that serves to drive home the authors' positions on different categories of AI. AI optimists will argue that the AI we see and use today is the worst it will ever be, but that is beside the point. The authors are highlighting systemic and psychological flaws in our relationship with this particular form of technology, which are orthogonal to the development of said technology. This book is a must-read for the layman who wants a critical analysis beyond the media hype cycle.
Mizuki
3,330 reviews · 1,379 followers
Read
August 23, 2025
I only read the Introduction and a few chapters and then called it quits. I am thankful that the authors have clarified some basic points about AI and debunked a few common AI myths and misunderstandings. However, the more I read, the less readable the book looked, so I had to quit.

This time it's not the book, it's me.
Raghu
443 reviews · 76 followers
March 16, 2025
Artificial Intelligence (AI for short) has been the buzzword of the past decade. However, the many technologies and products falsely labeled as AI often puzzle us about what AI actually is. Some philosophers, including Nick Bostrom, believe humanity's last invention will be machine intelligence; machines will then surpass humans' capacity for invention. The authors of this book demystify AI by giving us some criteria for classifying a technology as genuine AI or as just 'AI hype' or 'AI snake oil'. This book does not sing tributes to AI, but talks about its limitations, misleading claims about its capabilities, and the harms it can create.

How do we recognize a technology as AI? First, we ask if the task requires creative effort or training for a human to perform it. Second, was the system's behavior specified in code by the developer, or did it emerge, say, by learning from examples or searching through a database? For example, machine learning is AI because it learns from examples. Third, does the system decide more or less autonomously and possess some flexibility and adaptability to the environment? If yes, we might consider the system AI. Autonomous driving is a good example of AI. This book aims to separate genuine AI from AI snake oil. Predictive AI, generative AI, and content moderation AI are the three main types of AI that this book examines. It also analyzes whether advanced AI is an existential threat to humanity.

We start with predictive AI. It is in use in myriad ways in our daily lives. Today, hospital check-in algorithms predict whether patients require overnight stays. When you apply for child welfare, an algorithm could determine if your application is valid or fraudulent. When you apply for a job, a recruiter could use an algorithm to decide whether to consider your application or discard it. In the prison system, it could decide if an inmate should get parole. We use such predictive AI models for decision-making based on predictions about the future. The fundamental problem with predictive AI is the assumption that people with similar characteristics will behave similarly in the future. After analyzing many cases from hospitals, credit rating programs, etc., the authors say predictive AI falls far short of the claims made by its developers. They state that one model's predictions led to the unnecessary jailing of thousands of defendants for months before trial, with no evidence of guilt.

There are reasons predictive AI goes wrong. When deploying predictive AI, we must ask who it was tested on. Claims about its performance lack sufficient evidence if it's built for one population but used on another. Still, we see the widespread use of predictive AI. The authors say it is because decision-makers are humans - who dread randomness like everyone else. This means they deny the future's unpredictability. They refuse to acknowledge their inability to surpass a random process. Yet accepting a lack of control over loan defaults or employee selection is necessary. Higher-quality data might address certain predictability limitations; others are fundamental. Predicting success for cultural products such as books, movies, and music will probably remain difficult. In predicting individuals' life outcomes, there could be some improvements but not drastic changes. Unfortunately, this has not stopped companies from selling AI for making consequential decisions about people by predicting their future. The authors conclude it is important to resist the AI snake oil that's already in wide use today rather than hope that predictive AI technology will get better.
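(A minimal illustrative sketch, not from the book, of why "who was it tested on" matters, assuming synthetic data and scikit-learn: a model fit on one population can lose much of its accuracy when applied to a population whose characteristics differ.)

    # Minimal sketch of population shift; the data-generating choices are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_population(n, shift):
        # Two features; the outcome's relationship to them drifts with the population shift.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 2.0 * shift
        y = (logits + rng.normal(scale=1.0, size=n) > 0).astype(int)
        return X, y

    X_train, y_train = make_population(5000, shift=0.0)    # population the model was built on
    X_same, y_same = make_population(5000, shift=0.0)      # fresh sample from that same population
    X_other, y_other = make_population(5000, shift=1.5)    # different population it gets deployed on

    model = LogisticRegression().fit(X_train, y_train)
    # Accuracy holds up on the population the model was built for...
    print("same population:     ", accuracy_score(y_same, model.predict(X_same)))
    # ...and degrades on the population it is actually used on.
    print("different population:", accuracy_score(y_other, model.predict(X_other)))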

Next in line is Generative AI. It would surprise readers to learn that generative AI has a history going back eighty years. In simple terms, it is AI technology that can generate text, images, or other media. Hence, it also reflects the biases and stereotypes captured in its training data. It is important to note that the same learning algorithm is behind chatbots, text-to-image generators, and thousands of other AI apps that perform varied functions. What differs is the training data and the architecture (the broad pattern of connections between the simulated neurons). Image classification and face recognition are big successes of generative AI. They are similar to each other, and also dangerous because of their success in surveillance and advertising. Companies use AI to recognize the characteristics of people looking at billboards, such as their age and gender, and change the advertised product to tailor it to their demographic. One may think this is logical, but they do it without our knowledge, and we don't know what information about us is being stored.

So, how do we ordinary folk approach generative AI? The authors believe most knowledge workers can benefit from it. Chatbots "understand" in the sense that they build internal representations of the world through their training process. However, we must recognize the fact that just about anything online today may have been AI-generated. This means the content on the internet is becoming less trustworthy. This has been called the "liar's dividend". "Seeing is believing" is no longer true. This new reality requires chatbot adaptation. The authors consider the labor exploitation at the heart of current generative AI's development and use to be its most significant problem. They say it is up to individuals to avoid using it for this reason.

Next is content moderation. It works by human moderators passing judgment on millions of pieces of content. Automating this work appears possible by training a model on decision patterns. But it is not so easy. Context often determines whether content is objectionable. The inability to discern that context remains a major limitation of AI, and it is also behind AI's inability to understand jokes. The authors give examples of how horribly it can go wrong. In February 2021, the parents of a young boy noticed his swollen private parts. They took pictures to send to the doctor. The dad in question had an Android phone with photos backed up to Google Cloud. Google's AI mistook the images for child sexual abuse. Google shut down his account and referred him to the police. The police investigated and cleared him, but Google refused to reinstate his account. I had a similar experience with Amazon. The AI deleted all my book reviews, falsely accusing me of violating their policies. Reinstatement proved impossible. I lost all my Amazon work. Yet another bizarre example was Facebook removing the photo of the 'Napalm girl' for child nudity. It is one of the world's most iconic photographs, significant for depicting the horrors of the Vietnam War and for changing public opinion about the war.

These mistakes happen because AI interprets text, speech, or images too literally. It cannot take context into account and hence misinterprets things in amusing and horrifying ways. The authors claim content moderation will remain challenging, regardless of AI usage. Regardless of the technology behind misinformation detection, there is a risk of overreaching, resulting in a system that reinforces the political or scientific establishment while suppressing dissent. However, the authors believe one model worth looking at is Reddit. Instead of one giant user network, Reddit divides discussions into topic-focused "subreddits". The critical difference is that moderation is done by volunteer members of those subreddits rather than by company employees or contractors. In conclusion, the authors declare, "Portraying content moderation AI as a solution to social media's moral and political dilemmas is just snake oil. It is just a cost-saving measure for companies."

Many in the AI community think that AGI (Artificial General Intelligence) or rogue AI is an imminent existential threat requiring dramatic global action. The authors say this rests on a tower of fallacies. They concede AGI is a long-term prospect, but argue that society already has the tools to address its risks. The real problem is the immediate harms of AI snake oil. Given the circumstances, why do serious researchers believe this? It could be because of biases. Selection bias draws people to AI research who want to build an all-powerful technology that could alter human history. Cognitive bias draws people in because believing AGI is imminent and powerful adds an aura of grandeur to one's work. But cognitive biases can also prevent us from recognizing the limits of our own knowledge. We are often overconfident in our understanding of how things work. We feel we understand complex phenomena in more detail than we do.

However, AGI is difficult to reach. Even in self-driving cars, edge cases have proven rather difficult. Compared to self-driving, AGI has far more unknowns and edge cases. That means extrapolating based on the rate of improvement in capabilities is likely to be overoptimistic.

The solution proposed by the authors is straightforward yet challenging. Preventing harmful actors from accessing AI is futile. “Aligning” AI so that it refuses to help harmful actors will not work. Instead, we need to defend against specific threats. In cybersecurity, we protect the software that defends critical systems. To defend against AI-generated disinformation and deep fakes, we need to strengthen the institutions of democracy. Fortifying democracy presents a monumental, ongoing challenge. Thus, the urge to restrict AI is understandable. However, such thinking distracts us from genuine problems.

The end of the book points to the sources of AI snake oil: companies that want to sell predictive AI, researchers who want to publish flashy results, and journalists and public figures who make sensationalist claims to grab attention. The authors suggest tech regulation but accept that regulation has limits. They recall how tech companies are capturing the regulatory framework the same way tobacco companies did in the 1950s.

What about AI and job loss? History shows it is rare for a job category to be replaced altogether by technology. Perhaps the most common type of impact from automation is a change in job duties. Of the hundreds of occupations listed in the 1950 U.S. census, only one disappeared because of automation: elevator operator. In other cases, a technology becomes obsolete, removing the need for job categories related to it, such as telegraph operator. Automation often decreases the number of people working in a job or sector without eliminating it, as we see with farming. AI has had this impact on copywriters and translators. Automation elsewhere has decreased prices, boosting demand. This happened with the introduction of ATMs into banks. The machines reduced the cost of running banks, which led to an increase in the number of bank branches, and therefore bank tellers. We call this the automation paradox. The book closes with the science fiction author Ted Chiang's words: "Fears about technology are fears about capitalism."
Yen-Yi Juo
62 reviews · 1 follower
June 13, 2025
A very informative book. The authors contextualize the rise of AI within its own history, so that readers can understand the current state of this technology. Highly recommended for those who are skeptical of the AI boom and want information from the academic world.
Darcey
84 reviews · 25 followers
December 30, 2024
Overall, a disappointingly shallow treatment. It covered much the same ground as other AI books I've read, repeating many of the same examples that I've seen elsewhere, but covering them in a more cursory and less detailed way. Overall, I didn't feel that this book introduced anything conceptually new, or changed the way I thought about these issues.

The book starts by pointing out that treating "AI" as a single technology, and asking whether AI is good or bad, is as silly as asking whether "vehicles" as a technology are good or bad; it's too broad of a category for us to be able to comment usefully. Instead, the book encourages breaking "AI" down into different types. It doesn't give a taxonomy, but instead focuses on three particular classes of AI: predictive AI, generative AI (into which it has lumped some discussion of perceptual / classification AI), and content moderation AI.

The book describes predictive AI as the class of AI used to make predictions about individual people's futures -- whether a criminal will reoffend, whether a job candidate will perform well in a job, whether a particular patient will require additional hospitalization, and so on. It argues that this type of AI is "snake oil" -- the technology is overhyped but not actually effective, and relying on it will cause errors and moral harms. The companies developing these technologies have very little oversight and can make whatever claims they like about the tech's effectiveness, and even though the technologies are supposed to be used with human supervision, people tend to simply defer to the algorithm. Furthermore, the authors argue that, even as AI improves, these kinds of technologies can never be truly effective; people's life outcomes can't actually be predicted because they are too dependent on random events, and also the amount of data collection needed to make anything like an accurate prediction would be an invasion of privacy. (I mostly agree with them about predictive AI.)

For generative AI, they argue that there is an opposite sort of danger: the technology is dangerous because it works too well, not because it fails to work. While optimistic about the positive uses for generative AI, the authors talk about some of the harms caused by the technology (deepfakes, lack of credit or monetary compensation to the people who made the examples in the training data, etc.).

The content moderation section was the only one which felt new to me; I haven't seen this use case for AI discussed in much detail before. The chapter described the complicated world of content moderation, and how AI tools interact with other pieces of the system (human content moderators, regulations around copyright and child pornography, pressures from various countries, etc.). The authors argue that there is a place for AI within content moderation, but that it will never be able to completely replace human judgment and deliberation.

(For a book ostensibly about snake oil, it only really declared one of these three technologies to be outright snake oil.)

All of these chapters were pretty quick, and didn't really go into much depth. The book was also very light on technical detail, in a way that I think is likely to confuse non-experts (or at least cause them to nod their heads and say "these guys seem to know what they're talking about" without actually understanding the details). The predictive AI chapters give only the barest description of what a predictive AI model is ("a set of numbers that mathematically specify how the system should behave"), but then go on to repeatedly refer to training data and learning statistical patterns without ever really explaining how any of it works. (The generative AI chapter explains more about how the technology works, but I think still leaves it mysterious, mainly talking about neural networks as big linked-up systems of artificial neurons.)

Overall, everything in the book was just too quick and high-level to really make a compelling argument. The two chapters that should have interested me most were the ones on why AI is not an existential risk, and why AI hype persists. But in practice, the existential risk chapter was very perfunctory. First, it didn't make a fully general case for why AI is not an x-risk; rather, it just argued against a few of the main arguments from the AI safety community. And while these arguments contained the seeds of actually good points, they didn't present the x-risk arguments or their counterarguments in enough detail to really evaluate them. Their hype chapter pointed out several causes for AI hype, but never asked or answered what I felt was the most fundamental question there, which is, why is there so much hype around AI as compared to other scientific fields or technologies? Without comparing to other fields, it's hard to evaluate whether the causes they name (journalists basically reprinting companies' press releases as their news, the replication crisis, and various other things) are the real causes.

It's also worth noting an important aspect of their framing -- throughout the book they speak of individual AI technologies as tools, something that you can pull out of your toolbox, use, and then put back away; they very rarely speak of AI as agents, and in their chapter about why hype persists, they list this agent-oriented language as one of the causes of hype. (Personally, I think that, as AI becomes more autonomous, it's more and more reasonable to think about and describe it as an agent, but that the tool framing also provides useful perspective.)

(A lot of people seem to be commenting on the book's overall recommendation for what to do about AI, which is more and better regulation, but I have relatively little to say there; I don't know much about regulation and don't have strong opinions.)

There were certainly things I liked about the book. I appreciated that it was not uniformly negative towards AI (the authors are actually very optimistic about certain AI use cases). I also appreciated that it was not politically partisan, and was never snide or snarky. I was also glad that it focused on a wide variety of current harms from AI, and not just social justice issues, since I've found those to be overemphasized in these discussions (they are very real and important but so are the other harms discussed in this book). And I was glad that the book did not take an overly simplistic view on how generative AI worked (such as "just repeating the training data"), or try to claim that current/future AI was incapable of doing real reasoning.

There's nothing really wrong with the book; I just mainly felt that it didn't add anything especially new to the discussion.
Jeannine
154 reviews
May 6, 2025
"I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two." -Ted Chiang
Julian Dunn
363 reviews · 20 followers
August 20, 2025
A book with the title AI Snake Oil has some pretty big shoes to fill, owing largely to the use of the phrase "snake oil", which implies that most or all of what gets marketed as AI has absolutely no value. And there is definitely a lot of hype and false/exaggerated claims being thrown around in 2025, with almost no checks on veracity -- not from the media, largely, and certainly not from regulators (the US generally is in a free-for-all, unregulated environment now with the re-election of Trump). The two authors are academics and at times sound like they are writing a textbook (and they do suggest this book can be used as such in academic settings, so that's not entirely off-base), which means they make a few odd choices about pacing and the taxonomy of the world of AI. Specifically, they create three categories of AI: predictive AI (machine learning / regression models), generative AI (language models), and social media content moderation, and devote a chapter to each of these areas, trying to debunk much of the hype and unsupported claims in each area. I agree that the first two are distinct categories, but to me, social media content moderation is just a use case for one or both of the foregoing categories' approaches, so I don't know why it gets a special call-out at the same level of taxonomy as the others.

The authors' point of view is that predictive use cases are far more dangerous today because black-box models that are neither regulated nor peer-reviewed are often employed in decision systems that can have truly life-changing effects on people's lives. This lack of transparency has allowed for some of the most egregious outcomes that have been covered elsewhere, such as Epic Systems' sepsis prediction tool (performs only slightly better than chance), ShotSpotter (largely ineffective), HireVue (also largely ineffective), and predicting the chance of civil war (completely ineffective). It's clear that Narayanan and Kapoor's work has been focused in this area for a long time, because they make a compelling case that when lives/futures are literally at stake, corporations who employ this kind of technology have a responsibility to take much more care in ensuring these algorithms actually work and aren't impacted by biases such as confirmation bias, teaching to the test (not separating the training data set from the evaluation data set), etc. Predictive systems are mathematically the easiest to understand and could show promise in many use cases as long as biases are accounted for, rigor is applied to the development of these systems, and their forecasts are evaluated by an outside party before products are brought to market and claims are made. (Again, good luck pulling this off in a deregulatory environment, but that's a matter of will, not skill.)

The book gets a little weaker once the authors start addressing generative AI. Because I suspect it's not their area of expertise, even they get hooked a little by the snake oil. In a few places, they uncritically parrot some of the promises of the language model boosters (e.g. "Developers can prevent inappropriate outputs by ensuring that when training chatbots, the bots are given examples of the kinds of things they are and aren't allowed to say" -- which is categorically not true; they can't "prevent" that). Although they do recognize that LMs are essentially fancy synthetic artifact creation machines, Narayanan and Kapoor deliberately and explicitly ignore the side-effects upon human creativity that the explosion in their use might create. These authors are ultimately computer scientists, and one of the areas of trouble that tech folks get into when evaluating whether technology is good or bad is when they focus only on the innovation itself and not its peripheral effects. They fall into this trap, seeing "philosophizing" about LMs as out of scope, whereas I think if we are to have a robust debate about the future of these tools, those considerations must be included. However, they do a nice job of demystifying "deep neural nets" and "deep learning" for the average layperson, showing that these techniques are not magic, despite all of the anthropomorphism in the use of language like "learning" that might suggest LMs are somehow sentient beings.

Turning to social media content moderation: this section of the book is kind of meh. The authors make a good point that trying to apply computer algorithms to unbounded domains, particularly when inputs arise from social systems (surprise, humans are unpredictable!), is a fool's errand, because whether something is "good" or "bad" depends so much on an extremely broad context, one that is changing all of the time. Even humans themselves would have trouble keeping up, particularly if those humans aren't familiar at all with the context (which is why Facebook, without a local presence in Burma, allowed the massacre of Rohingya Muslims to be carried out). While this is a valuable topic to discuss, I think that if they were to bring it up, they really should have more strongly opened the can of worms about whether the incentives of the social media products as currently constructed are the right ones to reduce toxic content. Because I guarantee that platforms whose revenue streams depend on eyeballs and clicks will naturally prioritize the delivery of outrage-inducing, divisive content, and we should definitely be focusing on that as the root of the problem rather than debating whether or not "moderation" works in the broad. (It doesn't.)

Overall, the book is not horrible; it's less of a screed than some of the other books on AI that I've read, but even the "choice of two futures" presented in the conclusion didn't sit quite right with me. Neither of the futures they presented was compelling (one was obviously more nauseating than the other), but the authors didn't really go far enough to illustrate a world where we don't simply have to accept the existence of such products just because big tech companies are creating them. Two futures, "absolutely no guardrails" versus "a few minimal guardrails", is not really a choice -- just shades of the same choice. Again, I think this is a function of the authors being computer scientists with minimal background in philosophy and the humanities, so they start from a position of optimism about technology (in a microcosm) rather than skepticism about its system-level effects.
Susan
211 reviews
November 5, 2024
A comprehensive deep dive into the failings and inadequacies of AI (predictive, generative, and content moderation AI), to warn readers of “AI snake oils”, i.e. AI that does not and cannot work as advertised. I loved the discussion on AGI, and how general intelligence requires a certain degree of common sense, judgement and social intelligence. Overall worth a read, although the book felt fairly long.
Wouter
226 reviews
October 2, 2024
Critical and positive book about AI. It explores three forms of AI: GenAI, predictive AI, and content moderation AI.

There weren't many eye-openers or any profound insights. It solidly describes how we got here and the potential and deceit of AI.

It was ironic that the book ended with two scenario predictions whilst being very critical of predictive AI.