
Rationality: From AI to Zombies #3

The Machine in the Ghost

Book III within "Rationality: From AI to Zombies"

263 pages, Unknown Binding

First published January 1, 2015

6 people are currently reading
86 people want to read

About the author

Eliezer Yudkowsky

47 books, 1,857 followers
Eliezer Yudkowsky is a founding researcher of the field of AI alignment and played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People In AI, was one of the twelve public figures featured in The New York Times's "Who's Who Behind the Dawn of the Modern Artificial Intelligence Movement," and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, The Washington Post, and many other venues.


Community Reviews

5 stars: 26 (45%)
4 stars: 22 (38%)
3 stars: 9 (15%)
2 stars: 0 (0%)
1 star: 0 (0%)
Displaying 1 - 9 of 9 reviews
J_BlueFlower
780 reviews, 8 followers
November 18, 2023
Want to know about this book? Try reading Chapter 131, “An Alien God”. It is one of my overall favourite chapters/essays. I read it twice.

Online here:
https://www.lesswrong.com/posts/pLRog...


The Machine in the Ghost is Book III within "Rationality: From AI to Zombies".

These books are gold. There are so many golden moments of “Oh, yes, I see more clearly now”. A cliché, but true in this case.

Some of my favourites:

The preface by Robin Hanson:
“You are never entitled to your opinion. … You are entitled to your desires, and sometimes to your choices. You might own a choice, and if you can choose your preferences, you may have the right to do so. But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie.”

Chapter 131, “An Alien God”. It is both funny and exactly right, and has a twist into pub culture I did not see coming.


Chapter 167, “Taboo Your Words”

“When you find yourself in philosophical difficulties, the first line of defense is not to define your problematic terms, but to see whether you can think without using those terms at all. Or any of their short synonyms. And be careful not to let yourself invent a new word to use instead. Describe outward observables and interior mechanisms; don’t use a single handle, whatever that handle may be.”

Interlude: An Intuitive Explanation of Bayes’s Theorem

“Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism—this is the old philosophy that the Bayesian revolution is currently dethroning.”

Definitely an “oh, yes!” moment for me.
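
For a taste of what that interlude covers, here is a minimal sketch of a single Bayesian update in Python. The numbers are illustrative (a test with an 80% true-positive rate and a 9.6% false-positive rate for a condition with 1% prevalence), not necessarily the essay's own:

# P(H | E) = P(E | H) * P(H) / P(E), with P(E) from the law of total probability
prior = 0.01               # P(H): 1% of people have the condition
p_pos_given_h = 0.80       # P(E | H): the test catches 80% of true cases
p_pos_given_not_h = 0.096  # P(E | ~H): 9.6% false-positive rate

p_pos = p_pos_given_h * prior + p_pos_given_not_h * (1 - prior)  # P(E)
posterior = p_pos_given_h * prior / p_pos

print(f"P(condition | positive test) = {posterior:.1%}")  # ~7.8%

The counterintuitive part is exactly this: even a positive result from a decent test leaves the posterior below 10%, because the prior is so low.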
James
109 reviews
September 2, 2023
This book was a bundle of two big ideas I'm very interested in - evolution and language. I had independently reasoned to most of the same conclusions as EY, like hierarchical selection, clusters in configuration space, etc. It was good to hear those ideas come from another source. The alarming thing about this book (probably the reason I gave it 5 stars instead of 4) was meta-level: I had reasoned to most, but not all, of these conclusions. My thoughts on the matter didn't seem to have crystallized quite enough (prior to reading this) to make those same logical steps. Maybe I should start a blog like OC, even if I just keep it unpublished on my computer. Writing this stuff down would probably help crystallize it.

Notes:
• The reason a watch implies a watchmaker but life doesn’t imply God is that we don't have a plausible explanation for how a watch could get there by itself. An icicle might be a highly ordered structure, but nobody offers it as proof of intelligent design because we understand how it could self-assemble. Prior to Darwin, when we didn't know how life could self-assemble, it might as well have been a watch, and it was very reasonable to assume intelligent design.
• Evolution may be too slow for cultural evolution to be significant. For example, the generations-to-fixation formula is roughly 2 ln(N)/s. How many generations of life have there been? How many generations of human culture? (See the sketch after these notes.)
○ N = population size, s = fractional fitness advantage
○ A cursory google says there have been about 3,500 human generations since the Cognitive Revolution (70,000 years, 20 years per generation for most of that time). This means that, depending on how we model the number of distinct "cultures", there actually may have been time for effects <1% to reach fixation.
○ Memetic selection is very different, though, because of the high rate of horizontal transfer
• EY lists 7 necessary conditions for strong, noticeable evolution: I've rephrased them as
○ Replication
○ Multiple alleles (but mutation rate not too high)
○ Useful alleles (subtle additional criterion: which alleles are useful remains constant)
○ Many generations
• Hierarchical selection! Segregation distorters, cancer, perfect DNA-copying, oh my!
• EY's reply to the idea of rapidly increasing cost of improvement: evolution exerts roughly constant optimization pressure over time, and the resulting fitness gains haven't been dramatically slowing down
○ Is this true? Is fitness a measurable cardinal quantity? How to account for noise?
• "defining words any way you want" may be theoretically ok but is realistically dangerous, because the brain categorizes things and assigns names and connotations without all the careful rational deliberation that would be needed to keep labels entirely arbitrary and content-free
• Words point to clusters in a "configuration space" representing all possible things as vectors. EY's notion of "choosing words wisely" is picking words that match natural clusters
○ Interesting that I've been using thoughts along these same lines for a while now, alarming that I haven't reasoned to all the implications EY has
○ Possible answer to that question about ways to "carve the world" - there are other ways, but they fail Occam's Razor (min message length), since they generally take a ton of info to define and don't give much predictive power in return.
• Arguing over how something should be categorized based on its properties misses the whole point - the point of the category is to provide a useful tool for answering some other question. Wondering whether rationalism is a cult isn't a debate over where it falls relative to the category boundary in configuration space, it's a proxy for asking if rationalism is a safe and productive way to go about viewing the world. Confusion of means and ends.
• EY thinks we tend towards debating categories like this because having those intermediate "category" nodes in our brains is more efficient, but leaves the "feeling" of some questions left hanging. One brain could just have a neuron for every possible testable outcome and connections between all neurons, but goddamn. Way cheaper to just have neurons pointing to clusters, linked to the traits that form that cluster. But then we end up representing cluster membership, which isn't a real thing, but feels like a real thing.
• The illusion of inference: pretending to deduce facts about an item based on class membership, but that class membership is because of the facts about that item, and so the intermediate class-membership-words really only act to hide the tautology behind a layer of retrieving definitions
○ Logic doesn't give new information: it reveals information you already have by peeling back the layers of inference and making it explicit
• Can intensional definitions be meaningful even without extensional anchors? If I have a graph with no symmetries, I could scramble up all the labels and you'd still be able to map the new labels back to the old ones. Does language have this property?
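
A quick sketch of the back-of-envelope fixation arithmetic referenced in the notes above, assuming the 2 ln(N)/s approximation for time to fixation; the 1,000-"cultures" population size is purely illustrative, not a figure from the book or the review:

import math

def generations_to_fixation(n, s):
    """Approximate generations for an allele with fractional advantage s
    to reach fixation in a population of size n: 2 * ln(n) / s."""
    return 2 * math.log(n) / s

# Biological benchmark: a 3% advantage in a population of 100,000
print(round(generations_to_fixation(100_000, 0.03)))  # ~768 generations

# Cultural analogue: with ~3,500 human generations since the Cognitive
# Revolution and (illustratively) ~1,000 competing "cultures", the smallest
# advantage that could still have reached fixation is:
s_min = 2 * math.log(1_000) / 3_500
print(f"{s_min:.2%}")  # ~0.39%, consistent with the note that effects <1% could fix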
B
68 reviews, 1 follower
February 25, 2022
The first part was about evolution. The point of talking about evolution was to show it as an excellent practice of rationality - the concepts involved, the misunderstandings, and most importantly the ways evolution succeeded as a theory. It was meant as an example topic to show the applications of rationality, so I recommend anyone who wants to get through quickly to skip it, but anyone well invested in rationality should read it. It was great, interesting, and eye-opening for application. The second part is something I didn't read much of, so I can't judge that part. The third part and the interlude were gold. Part 3 was a human's guide to words, and it was excellently written, with a fantastic concluding article called '37 Ways Words Can Be Wrong' which wrapped up the whole chapter well. Very well. The interlude was An Intuitive Explanation of Bayes's Theorem, and it was amazingly explained. It may not seem important yet, but it unlocks the heart of rationality in terms of its essential study of probability and belief updating. Any layperson would come out of reading it understanding the basics of Bayesian inference.

Amazing book overall.
I do plan to read Part 2 thoroughly and update this review after that.
Gianluca
127 reviews
April 24, 2020
This has probably been my favourite book in the Rationality: From AI to Zombies series so far.

As always, Yudkowsky exposes nuances of human thinking that others completely overlook. This book focusses on evolutionary theory, values and anthropomorphism, and how our language can so often lead our thinking astray.

I'd really recommend section L—The Simple Math of Evolution—to anyone with a penchant for evolutionary biology or psychology. It's also a great way to level up your thinking in general (if you're already familiar with the basics of cognitive biases). All the essays from this section are available online.

Sections:
L. The Simple Math of Evolution
M. Fragile Purposes
N. A Human’s Guide to Words
Hilm
78 reviews, 21 followers
April 5, 2023
I seem to have highlighted less from this book than from the earlier two, but the last essay, An Intuitive Explanation of Bayes's Theorem, points out how Popper and Bayes are connected:

Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism—this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules.
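
A minimal sketch of that "special case" claim (my own illustration, not from the book): a theory that assigned the observed evidence near-zero probability gets its posterior crushed, which is falsification, while a successful prediction only raises the posterior as far as the alternatives' failure to predict allows.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes's theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Falsification: the theory said E was (almost) impossible, but E happened.
print(posterior(0.5, p_e_given_h=0.001, p_e_given_not_h=0.5))  # ~0.002, near-definite refutation

# Confirmation: the theory predicted E and E happened. The posterior rises,
# but never to certainty; it depends on how poorly the alternatives predicted E.
print(posterior(0.5, p_e_given_h=0.99, p_e_given_not_h=0.5))   # ~0.66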
Niklas
43 reviews
March 29, 2021
This book (the whole series, really) offers a great meta-approach to thinking more clearly, logically and just less wrong.

The first two books focused on typical fallacies of the mind, and how to avoid them.
This third book in turn dives deeply into some more abstract topics and tackles Evolution, Optimization, Cognitive Concepts and Language.

I liked this book the most so far.
