Thoughts On Whether A.I. Will Kill Us All

Recently I read a 1,600-page book, “Rationality” by Eliezer Yudkowsky.

That’s a lot of pages.
You wouldn’t be impressed if I read 160 ten-page books. If I get through one whopper, though,
that’s worth mentioning.


I usually dislike non-fiction, because it feels like cheating. I go to a lot of trouble
to craft rich, internally logical dynamic systems of interacting people and parts, and some
bozo comes along and just writes down what’s true. I feel like anyone can do that. But this
non-fiction book was great, because it changed my most fundamental belief. (Previously, I thought
the scientific method of investigation was the best way to figure out what’s true. Now I
realize Bayesian inference is better.)
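
If “Bayesian inference” sounds like jargon: the idea is that you start with a prior probability and update it in proportion to how strongly the evidence favors one hypothesis over another. Here’s a minimal sketch in Python, with made-up numbers for a medical test; this is my own illustration, not anything from the book.

    # Bayes' rule: P(hypothesis | evidence) =
    #   P(evidence | hypothesis) * P(hypothesis) / P(evidence)
    def posterior(prior, true_positive_rate, false_positive_rate):
        # Total probability of seeing a positive test at all.
        p_positive = true_positive_rate * prior + false_positive_rate * (1 - prior)
        return true_positive_rate * prior / p_positive

    # A disease with a 1% base rate, and a test that catches 90% of real cases
    # but also falsely flags 5% of healthy people.
    print(posterior(prior=0.01, true_positive_rate=0.90, false_positive_rate=0.05))
    # ~0.15: a positive result makes the disease about 15 times more likely, not certain.

The counterintuitive part is how far that lands from the naive “the test is 90% accurate, so I’m probably sick.”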


So that’s not bad. If I’m writing a book, any kind of book, and someone reads it and changes
their most fundamental belief, I’m calling that job well done. I’m happy if my book changes
anyone’s opinion about anything. I just want to have made a difference.


“Rationality” covers a lot of topics, including A.I. Previously, I thought A.I. might be
just around the corner, because Google has gotten really good at recognizing pictures of
cats. But this book disabused me of the notion that we might be able to push a whole
lot of computers into a room and wait for self-awareness to pop out. Instead, it seems
like we have to build a super-intelligent A.I. the same way we do everything else, i.e.
one painstakingly difficult piece at a time.


Which is good, because I’m pretty sure that A.I. will kill us all. There’s a big
debate on the subject, of course, but I hadn’t realized before how much it resembles
climate change. By which I mean, in both cases, there’s a potential global
catastrophe that we know how to avoid, but the solution requires powerful people and
companies to act against their own short-term interests.


This hasn’t worked out so well with climate change. All we’ve managed to do so far is make
climate change such a big issue that it’s now in the short-term interest of more of those people
and companies to look like they give a crap. I feel like once we get to
the point where they have to choose between a financial windfall and risking
a runaway super-intelligent A.I., we’re in trouble.


I just listened to a great interview by Ezra Klein with Ted Chiang, who is a brilliant
author you should read, called
“Why Sci-Fi Legend Ted Chiang Fears Capitalism, Not A.I.” Ted has a more optimistic view than mine, but I think the premise is exactly
right. The danger isn’t that we can’t stop a super-intelligent A.I.; it’s that
we’ll choose not to.

Published on April 07, 2021 19:12
Comments

message 1: by Jeremy

Jeremy Bursey I still find that reading ten 160-page books takes a lot of time, energy, and work, so...

But, yes, a 1,600-page book is more impressive. I gave myself a headache bingeing on Harry Potter books leading up to The Half-Blood Prince the summer it was released. Turned out, getting sunshine, taking long walks, and resting my eyes were also good things. (This was before the Kindle and smartphones, of course.)

Speaking of smartphones and A.I., I still think the concept of A.I. will only ever be as powerful as whatever the human mind can train it to be. Is it quantum theory that discusses the possibility of self-awareness? Maybe if computers can learn how to invent concepts, they can surpass the human brain. But as long as they work as calculators and binary lever-pullers, I don't think they'll ever surpass human ingenuity (not on purpose, anyway).

Now, regarding speed, yes, computers will always out-think humans. As long as the program permits it, they have almost immediate access to the answers we've already fed it, like the smartest kid taking a standardized test. Even if A.I. becomes like Einstein and figures out the Theory of Relativity (a possibility!), it'll only be because it's discovered a pattern and calculated a consistent result. And that'll be permissible because someone gave it access to the results we've already found through exploration. Without the relevant information or algorithms, the computer will be no better than a paperweight.

I'm not a science guy. I prefer thrillers and coming-of-age stories. But I also dabble in making computer games, so I know a little about programming logic (a little). And what I've learned about modern programming compared to yesteryear's programming is that today's programming is based on functions that synthesize yesteryear's languages (all based on ones and zeroes). So, the language today is designed for efficiency and easier comprehension while building on the concepts engineers have already defined. I think the programmatic abilities of today's computers have been around since the beginning, but the language to draw out those abilities hadn't yet been defined, nor had the machinery that can process those high-level computations been designed. Now that we've designed computers for more power and function, they can do more with those ones and zeroes. But they still operate in just ones and zeroes.

So, I think this is one of those cases where the function of our present-day computers would inform the function of our future computers, as yesterday's computers informed the functions of today's computers. Just as roads in the twenty-first century serve the same function as those in the fifteenth century (with the added components of power and water lines, and hopefully, maybe, computerized lanes and traveler instructions--they've already built a couple miles of this in Scandinavia somewhere, if I recall), I think the computers of tomorrow will function as the computers of today (and yesterday): a tool for rapid calculations, reconciling data, and presenting countless options and branching results on the fly. This means they'll likely discover patterns and formulas (and even theories) much faster than the human mind can, and they might even be programmed to recognize a formula that doesn't exist in any other database (part of the comparative logic that comes native to computers). But it's unlikely they'll know what to do with that discovery. I think it'll be up to humans to decide how to synthesize, rationalize, strategize, and monopolize the uncovered information. The computer, meanwhile, will go back to playing Minesweeper.

I could be wrong, but I don't think computers will ever be intentionally creative, which is ultimately what will prevent them from taking over the human race. Even if they glitch (bug) and press the wrong button (feature?), we can figure out how to fix it. Unless the computer is programmed (human interaction) to self-diagnose and fix itself, it'll simply remain buggy (a feature for any software company that likes to put out new updates for a fee), and ultimately fallible as a source for human domination.

That is, unless some malicious programmer uses those ones and zeroes to train the computer to make strictly positive or negative assumptions about human beings. If there's a danger in A.I. becoming too powerful, it'll be because Dr. Evil used his one meeeelion dollars to build a giant lever that flips according to whatever moral ratio he decides the computer should judge.

But it still won't be the computer's "desire." It'll just be its PASS or FAIL program.

So, the day A.I. becomes smarter than a human being is the day it's no longer A.I. or a computer or a robot. That's the day it becomes Iron Man. In my opinion.

But we'll see how it goes.

Fun post! And, yes, nonfiction is definitely cheating.


message 2: by Manny

Manny Jeremy, you are way optimistic. Check out Bostrom's Superintelligence for a sober look at what we're dealing with here. And then consider that it was written before AlphaZero was released.

