Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World

From a leading computer scientist, a unifying theory that will revolutionize our understanding of how life evolves and learns.

How does life prosper in a complex and erratic world? While we know that nature follows patterns—such as the law of gravity—our everyday lives are beyond what known science can predict. We nevertheless muddle through even in the absence of theories of how to act. But how do we do it?

In Probably Approximately Correct, computer scientist Leslie Valiant presents a masterful synthesis of learning and evolution to show how both individually and collectively we not only survive, but prosper in a world as complex as our own. The key is “probably approximately correct” algorithms, a concept Valiant developed to explain how effective behavior can be learned. The model shows that pragmatically coping with a problem can provide a satisfactory solution in the absence of any theory of the problem. After all, finding a mate does not require a theory of mating. Valiant’s theory reveals the shared computational nature of evolution and learning, and sheds light on perennial questions such as nature versus nurture and the limits of artificial intelligence.

Offering a powerful and elegant model that encompasses life’s complexity, Probably Approximately Correct has profound implications for how we think about behavior, cognition, biological evolution, and the possibilities and limits of human and machine intelligence.

208 pages, Hardcover

First published June 4, 2013

130 people are currently reading
1410 people want to read

About the author

Leslie Valiant

8 books · 14 followers
Leslie Valiant FRS is a British computer scientist and computational theorist. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University, and was educated at King's College, Cambridge; Imperial College London; and the University of Warwick, where he received a PhD in computer science in 1974.

Valiant is world-renowned for his work in theoretical computer science. Among his many contributions to complexity theory, he introduced the notion of #P-completeness to explain why enumeration and reliability problems are intractable. He also introduced the "probably approximately correct" (PAC) model of machine learning, which has helped the field of computational learning theory grow, and the concept of holographic algorithms. His earlier work in automata theory includes an algorithm for context-free parsing, which is (as of 2010) still the asymptotically fastest known. He also works in computational neuroscience, focusing on understanding memory and learning.

Valiant's 2013 book is Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World (Basic Books, ISBN 9780465032716). In it he argues, among other things, that evolutionary biology does not explain the rate at which evolution occurs, writing, for example, "The evidence for Darwin's general schema for evolution being essentially correct is convincing to the great majority of biologists. This author has been to enough natural history museums to be convinced himself. All this, however, does not mean the current theory of evolution is adequately explanatory. At present the theory of evolution can offer no account of the rate at which evolution progresses to develop complex mechanisms or to maintain them in changing environments."

Ratings & Reviews


Community Reviews

5 stars: 97 (23%)
4 stars: 148 (35%)
3 stars: 121 (28%)
2 stars: 47 (11%)
1 star: 7 (1%)

728 reviews · 310 followers
December 18, 2014
This book introduces a very interesting idea – what the author calls ecorithms. An algorithm is a step-by-step instruction set to achieve the exact desired result in a controlled environment. An ecorithm, by contrast, is run in an environment unknown to the designer and it can interact with the environment and learn from it. Valiant postulates that a lot of natural phenomena, such as evolution and human cognition and behavior, are based on ecorithms.
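
A minimal sketch of that contrast in code (my illustration, with hypothetical observe/act/feedback interfaces; not anything from the book):

```python
# Sketch of an ecorithm-style loop: the update rule is fixed in advance,
# but the behavior that emerges depends on feedback from an environment
# the designer never fully specified.
def run_ecorithm(observe, act, feedback, update, hypothesis, steps=1000):
    for _ in range(steps):
        x = observe()                       # stimulus from the unknown world
        response = act(hypothesis, x)       # behave according to current state
        signal = feedback(x, response)      # the environment reacts
        hypothesis = update(hypothesis, x, signal)  # learn from the reaction
    return hypothesis
```

An ordinary algorithm is the degenerate case in which `update` does nothing and the behavior is fixed by the designer in advance.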

The book didn't really deliver for me. It's a bit heavy on the computer science and not very clear on how ecorithms are actually used in nature.
Suhrob
493 reviews · 60 followers
April 27, 2014
I was surprised to see a non-technical book on such an arcane and technical subject (though one with rich implications in many areas).

The book gives a decent low-tech introduction to PAC learning, but if I have to make one complaint, it is that Leslie Valiant is not really an engaging writer (or not a writer interested in being engaging): his examples and approach are extremely dry (you know, drawing balls from urns, etc.)

He even manages to introduce the perceptron in the most boring manner. I'd say it is a science writer's dream - "look, a simple algo inspired by the neurons in your very brain, how exciting!" etc. - but no, he gives a vague, top-level, unexciting description and moves on to linearly separable problems... oh well, a chance missed :) It is not that he refuses to talk down to the reader (the book is really low on technicality); it is just that he is not that keen on talking to you all that much. Who knows how this book deal came to be...
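
For what it's worth, the algorithm itself fits in a dozen lines. A minimal sketch (my own, assuming NumPy arrays and labels in {-1, +1}; on linearly separable data the loop provably terminates):

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Learn a separating hyperplane; X is (n, d), y holds labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified point
                w += yi * xi                   # nudge the boundary toward it
                b += yi
                mistakes += 1
        if mistakes == 0:  # every point classified correctly: converged
            break
    return w, b
```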

In any case I give 4 stars because the topic is so interesting and fertile. More technical readers might be better served by a shorter review paper though.
Behzad
79 reviews · 12 followers
December 26, 2014
There is no greater satisfaction for me than reading a book that so nicely bridges a cornerstone of computer science, computational complexity, with learning systems, human cognition, and evolution. It puts forward ideas that are indeed fascinating and enlightening, relieving us of the great burden of being lost in a world with so many mysteries. I hope most of these ideas ultimately present themselves as a mode of life in which one is more informed and humble but at the same time focused and eager to move forward.
339 reviews
January 13, 2017
Fascinating concept and several interesting parts, but a lot in the weeds.

My favorite quote comes early on:

"Much of everyday human decision making appears to ... be based on a competent ability to predict from past observation without any good articulation of how the prediction is made or any claim of fundamental understanding of the phenomenon in question. The predictions need not be perfect or the best possible. They need merely to be useful enough." (p 8)
Michiel
383 reviews · 90 followers
January 14, 2023
Some interesting ideas, but superficial and vague. The author proposes that the probably approximately correct framework from learning theory might be the key to biological learning, with Darwinian evolution as a special case.
Ben
60 reviews · 8 followers
Read
July 29, 2025
Mostly dry academic writing, but with important implications for AI.

Could also be useful if you're a comp sci nerd in need of systematic evidence the human race isn't completely stupid. He makes a fairly rigorous mathematical argument about how evolution and human intuition, despite their limitations, achieve so many right answers using not logic but (essentially) a large number of guesses.
Zhaodan Kong
38 reviews · 8 followers
January 18, 2014
Disclaimer: I finished the book over a period of a few months, and my memory may not be perfectly accurate on such an occasion. So please check other people's comments for more serious reviews.

I would say the key to this book is "ecorithm", a term the author coins for the algorithms that animals and humans may use to adapt to the environments they reside in. The adaptation can happen on a larger time scale (evolution) or a smaller time scale (learning). Thus, from such a perspective, evolution and learning (nature and nurture) can essentially be seen as a continuum. The underlying framework the author advocates is Probably Approximately Correct, or PAC for short. For anyone with some knowledge of machine learning, PAC shouldn't be an unfamiliar concept. Actually, there is a tendency in human history to try to use the most current technology to explain intelligence: such a theory used to be the automaton, the switchboard, the controller, the computer, and so on. And now it is PAC's turn. Even though we may later find that such an explanation is inaccurate or even completely wrong, I still find the explanations provided by the author extremely compelling, and a keystone toward further developments. (This is how science develops!)

I would say the author gives the appropriate amount of technical detail. Actually, I even hoped the author would provide more details about some of the "juicy parts", such as robust logic. A person with little knowledge of complexity and machine learning may find some passages (for instance, the second paragraph of page 162) a little hard to understand, though ignoring them will not affect the understanding of the high-level rationale.

My only complaint about the book is that the connection between Chapter 6 and Chapter 7, or, to be more specific, the connection between reasoning and learning, is kind of sketchy. The author keeps mentioning "a principled way", but it is quite hard, at least for me, to grasp the exact mechanism of this principled way, which I believe is the missing link between work on AI and machine learning. It would be perfect if the author had elaborated on this point.

Finally, I completely agree with the author's arguments regarding the current state of evolution. I also believe that the formal framework he elaborates in this book, if it gains steam, can greatly further our understanding of evolution. But there is a somewhat more elusive question looming on the horizon: how to use PAC to explain not only quantitative changes but also qualitative ones. It may be able to explain the evolutionary progress from monkeys to humans. But what about the progress from single cells to multicellular organisms?
Cloudbuster
301 reviews · 17 followers
July 26, 2015
Computer Science is no more about computers than astronomy is about telescopes.

In common parlance, informatics is seen merely as the technology that lets us write documents, prepare presentations, retouch photos, and send them around the world in real time. In reality, informatics is a science (not by chance, in English it is called computer science), and over the last 50 years it has probably been the most prolific of the sciences and the one that has contributed the most to the development of human knowledge. The tutelary deities of this new science are without doubt Alan Turing and John von Neumann, but in recent years a handful of brilliant, if not genius, minds have taken up their baton and helped build a rigorous mathematical model of computation that has proven useful to all the other fields of human knowledge, from physics to biology, from economics to the cognitive sciences.
Leslie Valiant belongs by full right to this small circle of great minds of computer science. A brilliant and visionary man, whose research opened new fields such as machine learning, one of the fundamental areas of artificial intelligence, which deals with building systems and algorithms that use observations as data for the synthesis of new knowledge.
In this small but very ambitious book, Valiant describes his vision of reality and proposes a mathematical theory, sufficiently simple and elegant, that can explain all the essential mechanisms governing the behavior of living beings: adaptation, evolution, learning, reasoning, knowledge. His model is based on the idea of ecorithms, algorithms that are "probably approximately correct" and that learn from interaction with their environment. Valiant analyzes the power and limits of ecorithms and shows how they apply to human knowledge, evolutionary biology, and artificial intelligence.
The book is not an easy read, but it is absolutely illuminating and a "must" for anyone interested in understanding how our minds and the world around us work.
Mike
251 reviews · 7 followers
Read
October 9, 2015
Can't give a star rating because I was in so far over my head. Will put a few definitions down in case I come across them again in reading about machine learning or something related to Alan Turing.

Ecorithm - algorithm that takes information from its environment so as to perform better in that environment. Algos for machine learning, evolution, and learning for the purpose of reasoning are all examples.

Theoryless - denotes decisions for which there is not a good explanatory and predictive theory, such as a scientific theory.

Probably Approximately Correct learning is learning from examples, "where the number of computational steps is polynomially bounded and errors are polynomially controlled". This PAC thesis is the mathematical definition of learning, or generalizing, so that new information can be categorized with a small error rate.
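
Stated in the standard textbook notation (my gloss, not a quote from the book): a learner PAC-learns a target f over distribution D when, for any accuracy \epsilon and confidence \delta, it outputs in polynomial time a hypothesis h satisfying

```latex
\Pr\big[\operatorname{err}_D(h) \le \epsilon\big] \ge 1 - \delta,
\qquad \text{where } \operatorname{err}_D(h) = \Pr_{x \sim D}\big[h(x) \ne f(x)\big],
```

using a number of examples and computational steps polynomial in 1/\epsilon and 1/\delta.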

Interesting comment regarding artificial intelligence. We need not fear intelligent robots - first, because there is no reason to make them exactly like humans. Second, they will not fear being switched off, unless we provide them with the same heritage of extreme survival training that our own ancestors had been subject to on Earth.

"I believe the attempt to make a thinking machine will help us greatly in finding out how we think ourselves." Alan Turing

"When seeking to understand the fundamental character of life, learning algorithms are a good place to start."

Max Shen
27 reviews · 11 followers
November 15, 2017
A challenging read that above all stays faithful to the discipline and integrity of academia, at the sacrifice of wider accessibility. Nevertheless, the ideas are truly thought-provoking; the perspectives and paradigm of thinking are quite novel and enlightening; and, due to Valiant's ever-present rigor, meaningful and concrete.

If you are the type to appreciate an understated yet subtly powerful and rigorously built idea over exaggerated could-be's and fanciful speculations dressed up in scientific wording, you are the perfect audience for this book.

To me, this book proves Valiant as a leading thinker in our day and age.
David Wiley
21 reviews · 118 followers
December 30, 2014
If you're interested in how people learn, you will definitely enjoy this book. It presents an interesting view on learning and how it emerges from interactions with the environment. There's a lot in this book to appreciate in terms of developing a better understanding of learning. I found myself agreeing frequently - but not always - with the author.
Alexander Leo Swenson
58 reviews · 2 followers
July 13, 2017
Wonderfully dry prose that sandwiches some mind-blowing ideas for the uninitiated. Anyone who shares Chomsky's crabbiness about the rise of probabilistic models should read this as a détente for theoryful/theoryless science and our impending theoryful/theoryless world.
Russell
38 reviews · 6 followers
January 12, 2014
A decent introduction to PAC learning. Light on technical details and the less sciency chapters near the end aren't that compelling.
Arvind
88 reviews · 1 follower
January 3, 2021
Incredibly fascinating to a beginner to evolvable and learnable complexity classes. I found it a nice bridge to the author's challenging technical work.
Shubhendu Trivedi
17 reviews · 17 followers
January 17, 2015
-- "Biological evolution is a form of Computational Learning" Popular Science version --

The punchline of this book is perhaps: "Changing or increasing functionality of circuits in biological evolution is a form of computational learning." Although it also speaks of topics other than evolution, the underlying framework is the Probably Approximately Correct model from the theory of machine learning, from which the book gets its name.

I had first heard of this explicit connection between machine learning and evolution in 2010 and have been quite fascinated by it since. It must be noted, however, that similar ideas have appeared in the past, though usually in the form of metaphor. It is another matter that this metaphor has generally been avoided for various reasons in scientific discourse. When I first heard about the idea it immediately made sense, and like all great ideas, it looks completely obvious in hindsight. Ergo, I was quite excited to see this book and preordered it months in advance.

This book is basically a popular science version and expansion of the ideas on the matter that appeared in a J. ACM article titled "Evolvability" in 2007. If you are looking for a technical coverage, look elsewhere. In fact, a monograph on the recently developing COLT-based approaches to evolution does not exist yet (this gap needs to be filled!). I had been thinking that this book might be an informal plug for this gap, and hence I have to say I was rather disappointed that it wasn't. Don't get me wrong, the book is not bad, in fact pretty good. Just that somebody with enough background in learning might not be able to get enough out of it. It might however be a good starting book for those interested in these ideas but without much background in either subject.

---
Before attempting to sketch a skiagram of the main content of the book: one of the main subthemes of the book, constantly emphasized, is to look at computer science as a kind of enabling tool to study natural science. This is oft ignored, perhaps because CS curricula are rarely designed with any natural science component in them, and hence there is no impetus for aspiring computer scientists to view the natural sciences through the computational lens. On the other hand, the relation of computer science with mathematics has become quite well established. As a technology the impact of computer science has been tremendous. All this is quite remarkable given the fact that just about a century ago the notion of a computation was not even well defined. Unrelated to the book: more recently people have started taking the idea of digital physics (i.e. physics from a solely computable/digital perspective) seriously. But in the other natural sciences its usage is still woeful. Valiant draws upon the example of Alan Turing as a natural scientist, and not just a computer scientist, to make this point. Turing was deeply interested in natural phenomena (intelligence, limits of mechanical calculation, pattern formation, etc.) and used tools from computer science to study them, a fact that is evident from his publications. That Turing was trying to understand natural phenomena was remarked upon in his obituary by Max Newman, who summarized the body of his work as an inquiry into "the extent and the limitations of mechanistic explanations of nature".
--
The book begins with a delightful and quite famous quote by John von Neumann (through this paragraph I also discovered the context of the quote). This paragraph also summarizes the motivation for the book very well:

"In 1947 John von Neumann, the famously gifted mathematician, was keynote speaker at the first annual meeting of the Association for Computng Machinery. In his address he said that future computers would get along with just a dozen instruction types, a number known to be adequate for expressing all of mathematics. He went on to say that one need not be surprised at this small number, since 1000 words were known to be adequate for most situations in real life, and mathematics was only a small part of life, and a very simple part at that. The audience responded with hilarity. This provoked von Neumann to respond: "If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is"

Though counterintuitive, von Neumann's quip contains an obvious truth. Einstein's theory of general relativity is simple in the sense that one can write the essential content on one line as a single equation. Understanding its meaning, derivation, and consequences requires more extensive study and effort. However, this formal simplicity is striking and powerful. The power comes from the implied generality, that knowledge of one equation alone will allow one to make accurate predictions about a host of situations not even connected when the equation was first written down.

Most aspects of life are not so simple. If you want to succeed in a job interview, or in making an investment, or in choosing a life partner, you can be quite sure that there is no equation that will guarantee you success. In these endeavors it will not be possible to limit the pieces of knowledge that might be relevant to any one definable source. And even if you had all the relevant knowledge, there may be no surefire way of combining it to yield the best decision.

This book is predicated on taking this distinction seriously [...]"

In a way, the aspects of life mentioned above appear theoryless, in the sense that there seems to be no mathematical or scientific theory for them like relativity: something quantitative, definitive, and short. Note that these are generally not "theoryless" in the sense that no theory exists at all, since obviously people can do all the tasks mentioned in a complex world quite effectively. A specific example is how organisms adapt and species evolve without having a theory of the environment as such. How such coping mechanisms come into being in the first place is the main question asked in the book.

Let's stick to the specific example of biological evolution. Clearly, it is one of the central ideas in biology and perhaps one of the most important theories in science, one that changed the way we look at the world. But in spite of its importance, Valiant claims (correctly, in my opinion) that evolution is not understood well in a quantitative sense. Evidence that convinces us of its correctness is of the following sort: biologists usually show a sequence of evolving objects, stages where the one coming later is more complex than the previous one. Since this is studied mostly via the fossil record, there is always a lookout for missing links between successive stages. Darwin had remarked that numerous successive paths are necessary - that is, if it was not possible to trace a path from an earlier form to a more complicated form, then it was hard to explain how it came about. But again, as Valiant stresses, this is not really an explanation of evolution. It is more of an "existence proof" and not a quantitative explanation. That is, even if there is evidence for the existence of a path, one can't really say that a certain path is likely to be followed just because it exists.

Related to this, one of the first questions that one might ask, and indeed was asked by Darwin himself: why has evolution taken as much time as it has? How much time would have sufficed for all the complexity that we see around us to evolve? This question in fact bothered Darwin a lot in his day. In On the Origin of Species he originally gave an estimate of the Earth's age of at least 300 million years, implying indirectly that there was enough time. This estimate was immediately thrashed by Lord Kelvin, one of the pre-eminent British physicists of the time, who estimated the age of the Earth to be only about 24 million years. This caused Darwin to withdraw the estimate from his book. However, the new estimate greatly worried Darwin, as he thought 24 million years just wasn't enough time. To motivate the same theme, Valiant writes the following:

"[...] William Paley, in a highly influential book, Natural Theology (1802) , argued that life, as complex as it is, could not have come into being without the help of a designer. Numerous lines of evidence have become available in the two centuries since, through genetics and the fossil record, that persuade professional biologists that existing life forms on Earth are indeed related and have indeed evolved. This evidence contradicts Paley's conclusion, but it does not directly address his argument. A convincing direct counterargument to Paley's would need a specific evolution mechanism to be demonstrated capable of giving rise to the quantity and quality of the complexity now found in biology, within the time and resources believed to have been available. [...]"

A specific, albeit more limited, version of this question might be: consider the human genome, which has about 3 billion base pairs. Now, if evolution is a search problem, as it naturally appears to be, then why did the evolution of this genome not take exponential time? If it had taken exponential time, then clearly such evolution could not have happened on any time scale since the origin of the Earth. Thus, a more pointed question to ask would be: what circuits could evolve in sub-exponential time (and, on a related note, what circuits are evolvable only in exponential time)? Given the context, the idea of thinking about this in terms of circuits might seem a little foreign. But on some more thought it is quite clear that the idea of a circuit is natural when it comes to modeling such systems (at least in principle). For example, one might think of the genome as a circuit, just as one might think of the complex protein interaction networks and networks of neurons in the brain as circuits that update themselves in response to the environment.

The last line is essentially the key idea of adaptation: entities interact with the environment and update themselves (hopefully to cope better) in response. But the catch is that the world/environment is far too complex for simple entities (relatively speaking), with limited computational power, to have a theory of. Hence, somehow the entity will have to cope without really "understanding" the environment (it can only be partially modeled) and improve its functionality. The key thing to pay attention to here is the interaction, the interface between the limited computational entity in question and the environment. The central idea in Valiant's thesis is to think of and model this interaction as a form of computational learning. The entity will absorb information about the world and "learn" so that in the future it "can do better". A lot of biology can be thought of as the above: complex behaviours in environments. By complex behaviours we mean that in different circumstances, we (or alternately our limited computational entity) might behave completely differently; complicated in the sense that there can't possibly be a lookup table for modeling such complex interactions. Such interaction-responses can be thought of as complicated functions, e.g. how a deer reacts could be seen as a function of some sensory inputs. Or for another example: protein circuits. The human body has about 25,000 proteins. How much of a certain protein is expressed is a function of the quantities of other proteins in the body. This function is quite complicated, certainly not something like a simple linear regression. Thus there are two main questions to be asked. One, how did these circuits come into being without there being a designer to actually design them? Two, how do such circuits maintain themselves? That is, each "node" in a protein circuit is a function, and as circumstances change it might be best to change the way they work. How could such a thing have evolved?

Given the above, one might ask another question: at what rate can functionality increase or adapt under the Darwinian process? Valiant (like many others, such as Chaitin) comes to the conclusion that Darwinian evolution is a very elegant computational process. And since it is so, with changes in the environment there has to be a quantitative way of saying what rate of change can be kept up with and what environments are unevolvable for the entity. It is not hard to see that this is essentially a question in computer science, and no other discipline has the tools to deal with it.

In so far as (biological) interaction-responses might be thought of as complicated functions, and the limited computational entity that is trying to cope has to do better in the future, this is just machine learning! This idea, that changing or increasing the functionality of circuits in biological evolution is a form of computational learning, is perhaps very obvious in hindsight. Changing functionality is done in machine learning in the following sense: we want to acquire complicated functions without explicitly programming for them, from examples and labels (or "correct" answers). This looks exactly at the question of how complex mechanisms can evolve without someone designing them (consider a simple perceptron learning algorithm as a toy illustration). In short: we generate a hypothesis, and if it doesn't match our expectations (in performance), then we update the hypothesis by a computational procedure. Just based on a single example one might be able to change the hypothesis. One could draw an analogy to evolution where "examples" could be experiences, and the genome is the circuit that is modified over a period of time. But note that this is not how it looks in evolution, because the above (especially the perceptron example) sounds more Lamarckian. What the Darwinian process says is that we don't change the genome directly based on experiences. What instead happens is that we make a lot of copies of the genome, which are then tested in the environment, with the better ones having higher fitness. Supervised machine learning as drawn above is very Lamarckian and not exactly Darwinian.

Thus, there is something unsatisfying in the analogy to supervised learning: it has a clear notion of a target. Then one might ask, what is the target of evolution? Evolution is thought of as an undirected process, without a goal. This is true in a very important sense; in another sense, however, it is incorrect. Evolution does not have a goal in the sense that it wants to evolve humans or elephants. But it certainly does have a target. This target is "good behaviour" (where behaviour is used very loosely) that would aid the survival of the entity in an environment. Thus, this target is the "ideal" function (which might be quite complicated) that tells us how to behave in what situation. This is already incorporated in the study of evolution by notions such as fitness, which encode that various functions are more or less beneficial. Thus evolution can be framed as a kind of machine learning problem based on the notion of Darwinian feedback. That is, make many copies of the "current genome" and let the one with good performance in the real world win. More specifically, this is a limited kind of PAC learning. If you call your current hypothesis your genome, then your genome does not depend on your experiences. Variants of this genome are generated by a polynomial-time randomized Turing machine. To illuminate the difference with supervised learning, we come back to a point made earlier, that PAC learning is essentially Lamarckian: we have a hidden function that we want to learn, and we have examples and labels corresponding to this hidden function, which could be considered "queries" to it. We then process these example/label pairs and learn a "good estimate" of this hidden function in polynomial time. It is slightly different in evolvability. Again, we have a hidden "ideal" function f(x). The examples are genomes. However, how we can find out more about the target is very limited, since one can empirically observe only the aggregate goodness of a genome (a performance function). The task then is to mutate the genome so that its functionality improves. So the main difference from the usual supervised learning is that one can query the hidden function only in a very limited way: we can't act on a single example and have to take aggregate statistics into account.
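
A toy sketch of that Darwinian feedback loop (my illustration; `mutate` and `performance` are hypothetical stand-ins, not the notation of the "Evolvability" paper):

```python
def evolve(genome, mutate, performance, generations=1000, variants=10):
    """Blind variation plus selection on aggregate fitness."""
    for _ in range(generations):
        # Variants are generated without inspecting any individual example...
        candidates = [genome] + [mutate(genome) for _ in range(variants)]
        # ...and selection sees only an aggregate performance score,
        # never a per-example correction as in supervised learning.
        genome = max(candidates, key=performance)
    return genome
```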

Then one might ask what one can prove using this model. Valiant demonstrates some examples: parity functions are not evolvable over the uniform distribution, while monotone disjunctions are. These functions are of course very non-biological, but they do illustrate the main idea: that the ideal function together with the real-world distribution imposes a fitness landscape, and that in some settings we can show that some functions can evolve and some cannot. This in turn illustrates that evolution is a computational process independent of substrate.

In the same sense as above, it can be shown that Boolean conjunctions are not evolvable over all distributions. There has also been similar work on real-valued functions of late, which is not reported in detail in the book. Another recent line of work that is only mentioned in passing towards the end is the study of multidimensional spaces of actions, dealing with sets of functions (not just one function) that might be evolvable together. This is an interesting line of work, as it is pretty clear that biological evolution deals with the evolution of sets of functions together, not functions in isolation.

---
To summarize: I think the book is pretty decent, although it will disappoint you if you go in looking for a more technical treatment, especially if you come from a learning background. Like I mentioned at the start of this (long) review, this book is clearly aimed at the non-expert. But this might be disappointing to those who bought the book because this recent area of work, studying evolution through the lens of computational learning, is very exciting and intellectually stimulating. I have also noticed a lot of criticism of this idea by some biologists. Some of the criticism might be valid, but quite a bit of it is phrased as "Valiant needs to understand biology better". I think criticisms of this nature are more due to the fact that (most, not all) biologists do not have the computational and algorithmic lens to look at certain problems. To me the core ideas presented in the book are not only very interesting but also the sort that will become more widely adopted with the passage of time. I have had some arguments about the main ideas in the book over the past couple of years with some biologist friends who take the usage of "learning" to mean that evolution is a directed process. It would have been great if the book had spent more time on this particular aspect. Also, the book explicitly states that it is about quantitative aspects of evolution and has nothing to do with speciation, population dynamics, and other rich areas of study. It ONLY deals with what Valiant says is the essence of evolution, which is an elegant computational process. However, I have already seen some criticism of the book on this premise, and in this sense I believe such criticisms are unfair.

As far as I am concerned, as an admirer of Prof. Leslie Valiant's range and quality of contributions, I would have preferred if the book went into more depth. Just to have a semi-formal monograph on the study of evolution using the tools of PAC Learning right from the person who initiated this area of study. However this is just a selfish consideration.
Romann Weber
85 reviews · 19 followers
June 22, 2024
The title of Probably Approximately Correct may sound almost like a joke, like an effort to doubly hedge one's bets. But PAC (its much easier-to-say initialism) is a serious concept, and it is a powerful framework within the theory of learning. Intuitively, the PAC formulation provides guarantees that your learning algorithm will have a very high probability of getting you arbitrarily close to your target with a limited amount of training data and in a limited amount of time. The theory of PAC learnability is one of the cornerstone results of machine learning and artificial intelligence, and author Valiant is widely regarded as a hero within the field. Imagine, then, my disappointment as a machine learning and A.I. researcher when this book simply didn't click for me.
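
To make "a limited amount of training data" concrete: for a finite hypothesis class H and a learner that returns a hypothesis consistent with the sample, the standard textbook bound (my addition, not a formula from this book) is

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

examples, after which the returned hypothesis has true error at most \epsilon with probability at least 1 - \delta.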

Indeed, it is hard not to regard my low rating for this book as a personal failing. Was I not up to the task of reading and understanding it? Or did the author write a book that (ironically enough) missed its target by a wide margin? Maybe I'll hedge my own bets and say that it was a bit of both.

The issue with Probably Approximately Correct as a book is that it is written for an audience that does not seem to exist. It is presumably written for the educated lay reader, and as a result it avoids equations in most places. But when mathematical concepts are described without mathematics, the burden is on the author to convey highly technical ideas in prose without sacrificing the clarity and precision that only mathematics can deliver. Valiant puts forth a (ahem) valiant effort but in my opinion does not succeed in the crucial parts of the book that describe his main ideas. The writing is uneven and hard to parse in a number of spots. Terminology is not always clearly defined (although to his credit, Valiant does supply a glossary at the end of the book), and a flood of technical material is introduced in many places without sufficient motivation. With so much lost in translation, I formed the impression that an explicitly technical treatment of the topic would have been easier to understand.

Here is where I will own up to my own failings as a reader: I was often quite distracted while reading this book. Paragraphs would go by, and I would come to realize that I had been thinking about something else while reading them. I also decided early on that this was not a book that I wanted to spend a lot of time with, so I plowed through it more than studied it. Had I slowed down and perhaps even reread certain sections, maybe I'd have gotten more from it. But as someone already familiar with the PAC concept, I figured I could treat this book as a quick read. That didn't work, and the book felt twice as long as its page count would suggest.

I was less familiar with the author's attempt to apply PAC to evolutionary theory, which is an interesting idea. I share the author's concerns about legitimate theoretical weaknesses in natural selection as an explanatory mechanism for evolution: The historical timescales and population sizes seem insufficient to quantitatively account for the resulting range of variation in evolved creatures. Despite my interest going into this section of the book, the exposition kind of stopped me in my tracks, and I never succeeded in fully grasping how Valiant thinks PAC applies on anything other than an epigenetic level. That I am reluctant to simply reread that part to see if I missed something speaks to my overall level of engagement with the text.
Richard
Author · 4 books · 13 followers
December 31, 2017
An argument that constraints on algorithms are critical in understanding evolution and learning.

The book takes us from a discussion of evolution's lack of detail as an algorithm, to discussions of computability ("his [Turing's] importance demands comparison with that of Isaac Newton", p. 28), polynomial time, P ≠ NP, and the balance between algorithm power and what can be computed (or evolved) in practice and in principle.

A useful introduction to the importance of constraints on algorithms, as provided by the probably approximately correct framework. There's an interesting balance noted in the book: too expressive an algorithm would not have enough time to produce us; too simple an algorithm might not be able to produce us.

The last few chapters of the book contain a grab-bag of big-ticket items (consciousness, human cognition arising from a simple model) which didn't do it for me. These topics are too large to be dismissed so quickly.
Dhanya Jothimani
331 reviews · 36 followers
March 2, 2025
The book starts with an interesting concept called ecorithms - the algorithms that animals (and humans) use to adapt to the environment. Then he uses the concepts of evolution and learning to explain ecorithms in humans vs. machines. From these he builds up 'Probably Approximately Correct' (PAC), which serves as one of the foundations of machine learning. Even if someone hasn't heard of PAC yet, it loosely relates to model selection (in simple terms), though PAC is more than model selection. He also introduces the concept of boosting (combining many weak learners to achieve better results, sketched below) as well as the perceptron.
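
A compact sketch of the boosting idea (my own illustration using decision stumps and labels in {-1, +1}; not code from the book):

```python
import numpy as np

def stump_predict(stump, X):
    """Predict labels in {-1, +1} from a one-feature threshold rule."""
    feature, threshold, sign = stump
    return np.where(X[:, feature] <= threshold, sign, -sign)

def best_stump(X, y, w):
    """Exhaustive search for the stump with the lowest weighted error."""
    best, best_err = None, np.inf
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature]):
            for sign in (-1, 1):
                pred = np.where(X[:, feature] <= threshold, sign, -sign)
                err = w[pred != y].sum()
                if err < best_err:
                    best, best_err = (feature, threshold, sign), err
    return best, best_err

def adaboost(X, y, rounds=20):
    """Combine weak stumps into a weighted-majority strong learner."""
    w = np.full(len(y), 1.0 / len(y))           # uniform example weights
    ensemble = []
    for _ in range(rounds):
        stump, err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # this stump's vote
        w *= np.exp(-alpha * y * stump_predict(stump, X))  # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all stumps."""
    return np.sign(sum(a * stump_predict(s, X) for a, s in ensemble))
```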

As much as I like technical books/writing, it was 2 stars at times and 5 stars at times - some of the examples were a good starting point for understanding the concepts, but some of the places where he talked about "reasoning" were a huge miss for me.

(my 400th rating on Goodreads :) )
1 review
August 9, 2023
I stopped 1/3 of the way through; he went super heavy on math terminology without adequate explanation of the language. He also used weird sentence structure, with long or meandering sentences, so it was tough to follow even the basic sentences. I had to re-read even the non-technical sentences multiple times in some areas to figure out what could have been stated in a sentence half as long.

I'm not an idiot; I'm a physician who has read tens of thousands of pages of science and medical texts, but this was so hard to engage with that I eventually gave up. This would be an easy read for people with Master's or PhD degrees in certain branches of mathematics, but not for the regular 140 IQ crowd.
Jina
243 reviews · 1 follower
July 10, 2017
I found this piece very intriguing. My favourite chapter had to be the one on trying to quantify human behaviour, particularly the theoretical "mind's eye" that allows humans to form an opinion. The "mind's eye" acts as a filter between the observed world and what we commit to memory. Going into this book I had basically no previous knowledge of the development of artificial intelligence. Movies that depict robots mimicking humans perfectly seem even more in the realm of "fiction" to me now that I have a better understanding of how that would have to be achieved and the unlikelihood of it happening.
Jiliac
234 reviews · 8 followers
March 3, 2019
This is an incredible book full of insights. It explains the basis of the theory that earned the author the Turing Award. It presents a more rigorous definition of how algorithms should learn (which he calls ecorithms).

Were it only for me, I would give it five stars, because it gave me a lot of food for thought. But considering the score as a general recommendation for others, I think the book has two major drawbacks:
- The writing is very dry. Although it's a short book, it takes a lot of focus to read.
- It's all very general with very limited application description.
Aadesh
185 reviews · 2 followers
October 8, 2020
Enjoyed the concepts of ecorithms and theorylessness. I really enjoyed the early chapters where the author talks about different complexity classes and Turing machines and introduces PAC (Probably Approximately Correct) learning and its two principles. The way the induction principle works for PAC is elegant. I found the book verbose in the latter sections. The first few chapters are more informative and interesting than the standard books on the theory of computation.
Alessandro Piovaccari
133 reviews · 7 followers
June 24, 2020
A great treatise on learning

This book clearly explains the distinction between theoryful and theoryless problems, and how the latter can be tackled through learning, a novel kind of computation. Most fascinating is the analysis that the theory of evolution is plausible only if it is inherently founded on the concept of learning itself.
10 reviews
May 5, 2020
In general, I missed the Bayesian view from the theory. In other words, the uncertainty underlying any type of knowledge. I didn't even finish the book as I was disappointed.
Mary Keehan
107 reviews · 2 followers
November 22, 2020
The author presents very interesting theories but I found many of the chapters to be extremely dry. A mathematics enthusiast would likely get more enjoyment out of this one.
Cedric
27 reviews
March 6, 2025
3.5/5 - Great overview of learning algorithms in general
Bill Pritchard
146 reviews
February 17, 2017
The score I gave to "Probably Approximately Correct" is more a reflection of my lack of knowledge than of the qualities of the book. There are times when you may be urged to read a book and find that the material is way "above your paygrade". Leslie Valiant is a professor of Computer Science and Applied Mathematics at Harvard. He is a Nevanlinna Prize winner from the International Mathematical Union. He is obviously extremely qualified to speak of the probably approximately correct algorithms that he developed to explain how effective behavior can be learned by a computer. The math in some chapters is necessary but well beyond my aged eyes... my fault - not the author's. At times the prose sings - and I found myself pulled deeper into the work - but then the math would be rejoined, and I drifted. If you are in the computer science area, I think this work is a highly suggested read. But for the rest of us... be prepared to be lost - at least some of the time.
41 reviews · 1 follower
September 15, 2020
A popular science book by Leslie Valiant, who developed the Probably Approximately Correct (PAC) learning framework, which was all the rage before neural networks took over machine learning, trading away theoretical guarantees for high empirical performance. This book can be useful as a heady layperson's introduction to theoretical computer science through the development of the PAC framework. The author also speculates that the theory of evolution can/should be made into more of a theory by giving it some more predictive power. I had not thought Darwin's theory of evolution to be lacking until now, so it is refreshing to contemplate cranking up its predictive power and what it would take to do so.

I found the discussion of the two assumptions underlying scientific induction to be interesting: the Invariance Assumption (the context in which a generalization is to be applied cannot be fundamentally different from that in which it was made) and the Learnable Regularity Assumption (a little more subtle, concerning feasibility).

If you are a computer scientist or are already familiar with PAC learning, much of this book is redundant. I also found the author to repeat himself in later chapters. Still, it's worth giving a read.
