
The Beginning of Infinity by David Deutsch

Deutsch eats rationalism

The philosopher Dan Dennett describes evolution, his pick for the best idea that anyone's had, as a universal acid: A liquid so corrosive it eats through any substance that attempts to contain it.

The ideas in The Beginning of Infinity (BoI), the 2011 book by the brilliant and eccentric[15] Oxford physicist David Deutsch, form a universal acid: They can dissolve your entire worldview. Especially if your worldview happens to contain Bayesianism, forecasting, evolutionary psychology, behavioral genetics, cognitive biases, concerns for animal welfare, and an unease about the existential risks posed by the development of artificial intelligence. Did I miss anything?

Deutsch is a physicist by trade. He discovered the theory of quantum computing in the 1980s, for which he was made a Fellow of the Royal Society. Remarkably, this achievement isn't the most interesting thing about him. Or, at least, it isn't the reason why he inspires so many. Deutsch's achievements in physics invariably end up as footnotes or preambles in interviews to get to his other ideas. Namely, those about knowledge.

Reading BoI can feel like you're in the presence of an alien mind[16]. It completely avoids relying on conventional wisdom, both scientific and philosophical. In fact, it starts by redefining knowledge: How it’s created, its significance in the universe, and how it enables the unbounded potential of human progress. The later chapters are a tilt-a-whirl journey covering the impacts of knowledge on an eclectic range of topics from voting systems to flowers.

The book is filled with bold ideas, having one of the highest insight-per-page ratios I’ve encountered. It constantly makes provocative assertions: meta-claims that permeate all fields. And, as someone whose worldview very much does contain the rationalist canon, I’m left in a mess, with many problems to be solved.

Problems are soluble

Fortunately, according to Deutsch, problems are soluble.

An initial problem in discussing Deutsch is the large inferential distance from his ideas to most people's background knowledge. This can be easy to miss as Deutsch uses simple prosaic words to describe deep ideas. This makes the book accessible, but it risks the reader gliding past the significance of some of the claims.

Take this sentence:

“All evils are caused by insufficient knowledge.”

It’s a grand claim. It may even strike you as absurd: What do you mean all evil? Are you saying Hitler, Stalin, and Mao were caused by insufficient knowledge? What about poverty and violence? Or young children dying from cancer? Or a con artist seducing an innocent older widowed woman to steal her entire retirement savings?

Would avoiding WW2 have been as simple as having Führer-life-improving technologies such as perfectly crunchy vegetables, retractable dog leashes, and artificial light to make the evening sun shine extra brightly on the day that young Adolf was leaving Art school auditions?

Perhaps.

But the examples surely can’t all be the result of a lack of knowledge. I suppose the widow could've avoided her fate if she were less naive. But what about evil people? Isn't the very problem with the evil genius trope the fact that he has too much knowledge? Imagine if ISIS acquired the knowledge to create nuclear weapons.

Or take a seemingly unconnected sentence:

“Problems are soluble.”

A simple claim on the face of it. But the implications of it being true are momentous: All problems. No matter how hard. How interconnected. How ambiguous. How evil. They're soluble, in principle. The final caveat is important: There is no guarantee that we will solve our problems but it is possible that we can. And, according to Deutsch, it all depends on creating requisite knowledge.

So knowledge will solve all our problems and cure evil? Well, not necessarily. Ok, then: problems may be solved if we know how. That seems like a tautology: If we know how to solve the problem, we know how to solve the problem. Sure. The key point from Deutsch though concerns barriers. Specifically, the lack of them: Deutsch provocatively claims that there are no barriers to knowledge.

He makes a caveat around the laws of physics. He doesn’t claim that we can go faster than the speed of light. But any physical transformation “that is not forbidden by laws of nature is achievable, given the right knowledge”. Deutsch’s “momentous dichotomy”.

Spaceship earth

To appreciate the implications of the claim that problems are soluble, Deutsch provides a thought experiment where we travel far into space to a bleak nowhere where there is nothing but hydrogen atoms. Given the right knowledge, we could make it home. Forget Mars: In the future we can terraform seemingly empty space. We can develop nanomachines to make bigger machines that can transform hydrogen into more traditional resources. Quite literally, the universe is the limit.

This thought experiment shows that it’s a parochial misconception to think earth is uniquely well suited for our survival. Deutsch calls this misconception Spaceship Earth. He points out that Oxfordshire, where he lives, wouldn’t be well suited for survival if it weren’t for knowledge. Without clothes, shelter, trade, and medicine, he wouldn’t survive long. But, due to knowledge, Oxfordshire is more suitable than the Great Rift Valley, the likely place where we evolved.

Reliance on knowledge rather than the environment captures the human condition: We, unlike other animals, can subdue our environment. As Jacob Bronowski said: “[man] is not a figure in the landscape – he is a shaper of the landscape.” Deserts can be irrigated. Jungles air-conditioned. Oceans crossed.

Deutsch likes Rick and Morty for portraying this theme. He doesn’t claim it’s the funniest shit of all time, but Pickle Rick epitomizes the human condition: “With only his mind, and minimal access to the world, he subdues it.”

This is why Deutsch calls himself an optimist. It’s not his cheery demeanor, although he has that too. And it’s not a prediction that the best outcome is always most likely. Problems are always around the corner. Deutsch’s optimism is that problems can be solved.

Not only can all problems be solved, but all people can solve problems. People have universality. As Deutsch says: “there can be only one type of person: universal explainers”. Universal explainers can create explanatory knowledge.

This falls out of the Church-Turing thesis, or rather Deutsch’s physical strengthening of it: Anything that can be computed at all can be computed by a Turing machine, and everything that a physical object can do can be emulated by a computer program. The only limits are memory and processing speed (which can always be added on). Reality can be computed and therefore explained.
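To make the universality claim concrete, here is a minimal sketch (my own illustration, not something from the book): a few lines of Python suffice to emulate any Turing machine you care to write down as a transition table, with memory and running time the only limits.

```python
# A tiny Turing machine emulator: one small program can run *any* machine you
# specify as a transition table, limited only by memory and time. The example
# machine below (a unary adder) is my own illustrative choice, not from the book.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules: {(state, symbol): (new_state, new_symbol, move)} with move in {-1, +1}."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Unary addition: turn the '+' into a '1', then erase one trailing '1'.
ADD_RULES = {
    ("start", "1"): ("start", "1", +1),   # scan right over the first number
    ("start", "+"): ("scan",  "1", +1),   # replace '+' with '1'
    ("scan",  "1"): ("scan",  "1", +1),   # keep scanning right
    ("scan",  "_"): ("erase", "_", -1),   # step back from the blank after the last '1'
    ("erase", "1"): ("halt",  "_", -1),   # erase one '1' to correct the count
}

print(run_turing_machine(ADD_RULES, "11+111"))  # prints 11111, i.e. 2 + 3 = 5 in unary
```

Swap in a different rule table and the same emulator runs a different machine; that interchangeability is the sense in which computation is universal.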

Barriers to understanding

Even if you’re on board with space faring civilizations creating nanotechnology to transmute hydrogen into hyenas, you may object that there is a different type of barrier. Namely, in our intuitive understanding. This is a common view among scientists: some concepts may be out of reach of our intelligence, like the theory of relativity is for chimpanzees. We evolved to avoid lions on the savannah. Not to understand the very small (quantum physics confuses us) or the very large (we underestimate the size of evolutionary or cosmological history).

Quantum physics and the age of the universe are of course things we know. But that doesn’t seem to be what’s in contention. It’s whether it feels intuitive to us.

Deutsch points out that this is not the standard of having knowledge. If so, most of our knowledge would fail. Spacetime doesn't feel curved to us. The earth doesn't feel round. But we know they are. Further, our intuitions mislead us, as anyone who has seen a magician can attest. And as Wittgenstein said in response to the claim that it looks like the sun orbits the earth: "What would it have looked like if it had looked as if the earth turned on its axis?"

But what about the claims that some things are inherently unknowable? Such as Colin McGinn’s awkwardly named mysterianism around the intractability of the hard problem of consciousness.

Deutsch argues that invoking unknowable knowledge is equivalent to appealing to the supernatural. Both are non-explanations. Such knowledge would resemble Zeus as they would both control us but remain unknowable.

Not only that, the argument that some things are unknowable proves too much. Phenomena outside our sphere of understanding will surely affect things inside it. If the theory of relativity were inherently unknowable, that would impact our understanding of how planets orbit and apples fall. If any knowledge is outside a barrier, then all of it is.

The cosmic significance of knowledge (and people)

If knowledge can solve all problems, it must be quite important. For Deutsch, knowledge, as a building block of the universe, is up there with the laws of physics. In fact, knowledge is arguably the most important thing in the universe. This in turn elevates the importance of humans: as we are the ones who create knowledge.

Another piece of conventional wisdom among scientists is that humans are insignificant in the cosmic scheme of things. The universe is massive and old and wasn’t designed with us in mind. A young Alvy Singer doesn’t see the point in doing his homework when he learns that the universe is expanding[17]. A much older Stephen Hawking calls us “insignificant chemical scum”.

As Deutsch says: “Feeling insignificant because the universe is large has exactly the same logic as feeling inadequate for not being a cow.” Our significance is not due to our size or bovinity, but to our ability to create knowledge. The deserts we can irrigate are not limited to earth but lie in distant galaxies.

Understanding the world depends on understanding what knowledge exists. Earth may be the one planet where incoming asteroids get deflected rather than attracted. An accurate model of the solar system depends on what knowledge is present on earth.

So what is knowledge?

We tend to already have a sense of what knowledge is: It’s the understanding of a subject. To know something is not to simply parrot a password: To know something requires understanding how it's related to other ideas.

Wittgenstein has a lovely quote in Philosophical Investigations:

“A wheel that can be turned, though nothing else moves with it, is not a part of the mechanism”

Having small unconnected pieces of information doesn’t help. Knowledge should be integrated. In which circumstances does this new idea apply? What existing ideas does it challenge? What ideas does it support?

Pierre Bayard goes as far as saying that a relational understanding is often more important than the content of a book itself. (Making it not completely nonsensical for people to have favorite books that they haven’t actually read.)

The educational psychologist Benjamin Bloom offers a useful classification of knowledge types: factual, conceptual, and procedural. Respectively: knowing the details and terminology; knowing the relationships between concepts and theories; and knowing how to do things.

The common sense view of knowledge mostly covers what Deutsch means by the term. Deutsch puts emphasis on explanation: We explain things we see in terms of things we don't see. The tilt of the earth and its orbit around the sun explains the seasons. This is a good explanation as it’s hard to vary, where all the details play a functional role.

Deutsch also offers a tentative formal definition of knowledge: information that causes transformations. We explain phenomena in terms of more general laws, which transcend the original applications they were derived from. Knowing the theories behind the trajectory of a thrown ball allows us to build satellites.

Unjustified Knowledge: Guesses and Criticism

“For even if by chance he were to utter the final truth, he would himself not know it: For all is but a woven web of guesses” - Xenophanes

Deutsch's idea of knowledge differs from what you'll be taught in Philosophy 101: Namely, knowledge as Justified True Belief: If we believe a claim, it happens to be true, and we have justifiable reasons for coming to our belief, then we have knowledge of that claim.

(Then Edmund Gettier proposed a bunch of counterexamples showing that even these conditions weren't enough. Like believing there’s a chicken on the roof because you saw a white plastic bag flapping in the wind. But the plastic bag’s flapping was awfully similar to the flapping of a chicken and any reasonable person with adequate chicken-recognizing credentials would have made the same mistake. But, unbeknownst to you, there actually was a chicken on the roof the entire time. It was just hiding behind the chimney.)

Gettier problems are clever (or silly) on their own terms, but the criticism they bring is that JTB isn't sufficient for knowledge. Under Deutsch's conception, these conditions aren't necessary. Quite the contrary: knowledge isn't justified, knowledge isn't true (in the sense of being free from error), and it's not even a belief! Three for three.

Reminiscent of Deutsch's response when being interviewed by the podcaster Sam Harris: "You've made three different arguments. All of which are wrong."

(That's another thing you'll notice about Deutsch: His confidence to criticize ideas that are well-subscribed to but mistaken. Entire fields even. One chapter laments how physics went awry in the early 20th century. Another on philosophy. It's the sort of untroubled confidence that perhaps only an acclaimed quantum physicist gets away with[18].)

Most importantly for Deutsch, knowledge isn’t justified. This is for quite a simple reason: Nothing can be ultimately justified. It's turtles all the way down. Any attempt to provide a justified foundation for knowledge results in an infinite regress. You are always left with the question: What justifies the justification?

According to Deutsch, we need to change the question. Instead, we should ask: How can this idea solve my problem[19]? Or: How can we find a better theory?

(Deutsch does this “let’s-just-change-the-question” move a few times. I'm not entirely sure it's legitimate. You’re still left with this nagging voice inside you asking: “How do we really know?” You don’t. So stop asking.)

Deutsch argues that ultimate justification is a chimera. And seeking it “may well have wasted more philosophers’ time and effort than any other idea.” Knowledge, according to Deutsch, is literally guesswork. Einstein’s theory of relativity was a bold guess. As was Darwin’s theory of evolution. We make guesses then submit them to criticism. This criticism can take the form of arguments or empirical tests.

The corollary is that we can never be sure that we have arrived at the truth. Deutsch is a proponent of fallibilism: the claim that all people are subject to error. Although objective truth exists, we can never be certain we have found it[20].

Deutsch’s fallibilism permeates his morality. He is non-authoritarian as it is irrational to discount other people's theories and criticisms. This is because (1) knowledge grows by correcting errors, and (2) there’s no basis to privilege his ideas over anyone else's.

(His anti-authoritarianism gets fairly serious fairly quickly. He thinks democratic voting systems should enable the peaceful removal of leaders. He doesn't teach at his university as it implies an authoritarian relationship between teacher and student. He thinks most parenting norms, where the parent assumes power over the child, are immoral. He even refuses to give advice.)

And Deutsch doesn't believe in belief either[21].

Footnotes to Popper

The book is largely a repackaging of the views of Karl Popper. In fact, Deutsch's worldview is Popperian. Although there are fingerprints from other thinkers sprinkled throughout, Deutsch happily describes his work as footnotes to Popper[22].

But you aren't required to have read Popper to start BoI. In fact, Deutsch is the best place to learn Popperian philosophy. He's a better writer and has improved on it in places by emphasizing explanations and taking Popper’s views to their natural extensions. If nothing else, BoI is a clear introduction to Popper's ideas.

We’ve all heard of Popper. We know that theories can’t be verified. A single black swan found in Australia reminds us of this.

However, the cultural osmosis of Popper is a little misleading. He’s generally thought of as a falsificationist: This translates to, at worst, the mistaken view that he thought only falsifiable statements are meaningful. And at best, that the most interesting thing about him is his demarcation of science (what is falsifiable) from non-science.

But far from being the most interesting, demarcation was a relatively small part of Popper's philosophy. It’s a useful way to classify knowledge, taxonomically. But the core ideas in Popper, which everything else is downstream from, are fallibilism and knowledge creation via guesses and criticism. Which of course form the basis of BoI’s claims of the cosmic significance of humans; the parochialism of earth’s suitability for survival; the immorality of authority; and the unbounded potential of progress.

And it’s of course the basis of the many contradictions that a young rationalist finds themselves in.

Contradictions with rationalism

“Do I contradict myself? Very well then, I contradict myself; I am large, I contain multitudes.” - Walt Whitman

The obvious problem with universal acid is it corrodes your other frameworks and leaves you in a mess. Especially as a young rationalist. Robin Hanson argues that rational people shouldn't knowingly disagree: At least one of you must be wrong. Honest truth-seeking partners should identify areas to persuade or be persuaded.

It's fair to assume that this should hold internally for your own views. Inconsistent ideas should be ironed out. Fortunately, reconciling inconsistent ideas is simply another problem to be solved. Just like unifying quantum mechanics with general relativity. In the meantime, perhaps paradoxically, they continue to inform our understanding of the universe.

The rest of the essay will evaluate where Deutsch might disagree with rationalists. It doesn’t resolve all the inconsistencies but hopes to at least point out some fruitful areas where more work is needed.

If I were to have a go at listing some of the main facets of rationalism, my nominees would be: Bayesianism, forecasting, evolutionary psychology, behavioral genetics, cognitive biases, AI risk, and game theory[23]. With the exception of game theory, the Deutschian worldview pierces through them all.

Different rats

Deutsch's (and Popper's) philosophy is called Critical Rationalism. Confusingly, this is unrelated to LessWrong-style Rationalism - the idea that rationality can be analyzed and improved via statistics, game theory, and the identification of biases.

Even more confusingly, the rationalism label (in the LW sense) is misleading for those familiar with the history of Western philosophy. LW-style Rationalism arguably follows the tradition of empiricism, especially in its focus on Bayesianism, which follows the tradition of induction. But empiricism gets contrasted with rationalism! Historic rationalism, that is. Think Locke and Hume (empiricism) emphasizing sense experience as the source of our knowledge vs Descartes and Spinoza (rationalism) emphasizing reason and reflection.

I won’t even start with post-rationalism. You can wait a few months for someone on twitter to ask what it is.

So rationalism (LW-styled) is kind of empiricism... which is opposed to (historic) rationalism... and post-rationalism kind of comes from rationalism (LW-styled) but maybe it just shares similar audiences… and Critical Rationalism is the philosophy developed by Karl Popper which states that we are fallible and that knowledge is created by making guesses and criticizing those guesses… (but many think it’s simply falsification)… and it seems to have some small inconsistencies with rationalism (LW-styled) like some of the terms having different meanings and totally dissolving its worldview.

Before getting to specific topics, one difference is that Deutsch seems more like a hedgehog while rationalists tend to be foxes. Deutsch’s big idea is, of course, his view of knowledge, from which everything else stems.

Hedgehog is a bit of a pejorative term among rationalists. Foxes make better forecasts. Hedgehogs try to twist everything to fit into their worldview. Maybe Deutsch is guilty of this. On the other hand, if you have sound arguments after reasoning from first principles, isn’t it a good thing for everything to be consistent? Computation is universal. Knowledge is conjectural. Humans are fallible.

A semi-related point is that the community around Deutsch’s ideas comes across as cultish. I don’t know the forces behind this. And I’d direct the accusation more at his fans than at Deutsch himself. Ironically, some fans seem to treat Deutsch as infallible. A milder accusation is that the ratio of how much the community knows about its own thinkers (Deutsch and Popper) relative to other thinkers seems higher than in other communities.

Evolutionary psychology & behavioral genetics

We can look at evolutionary psychology and behavioral genetics together. Behavioral genetics looks at differences between individuals’ outcomes, whereas evolutionary psychology focuses on what is shared across humans. But both fields attempt to attribute much human behavior to an inborn human nature.

To be clear, evolutionary psychology was a view-quake for me. Specifically, the ideas in Robert Trivers’ series of papers from the 70s (popularized by Dawkins in The Selfish Gene and a flood of popular books in the 90s). They provided a scientific basis for understanding the main types of human relationships: between the sexes; between parents, children, and siblings; between friends; and with yourself. Trivers’ ideas, like evolutionary theory itself, fall into the category of ideas that are simple to understand once stated, but weren’t discovered for centuries.

Then there’s behavioral genetics. Robert Plomin, a doyen in the field, provides an accessible overview of the literature in Blueprint (2018). The main findings, via twin and adoption studies, are: all behavioral characteristics are heritable; parenting doesn’t matter for long-term outcomes; and environmental effects are idiosyncratic.

But the problem with explaining human behavior in terms of genes, as a Deutschian, is it quickly runs counter to human universality. There are no barriers to knowledge creation, so genetic influence cannot be immutable[24].

Many denials of human nature are due to commitments to ideology or political correctness. This is the topic of Steven Pinker’s The Blank Slate (2002). If people differ, in part, due to their genes, then hopes of an egalitarian society are weakened. The most taboo being any discussion about group differences.

Deutsch, however, does not seem motivated by political correctness. Elsewhere in BoI, he’s willing to defend politically sensitive views such as open societies in the West being better than closed societies or primitive societies[25].

Deutsch doesn’t totally reject genetics either. As we’ve covered, knowledge can be instantiated in genes. We are not blank slates. It actually neatly solves the problem of where human knowledge originally comes from. New ideas build on earlier ideas. Which were built on even earlier ideas. Going back to inborn ideas. His main criticism is that these influences are not immutable.

To be fair to evolutionary psychologists and behavioral geneticists, many don’t claim immutability. As Robert Plomin puts it: “genetics explain what is, not what can be.”

So where’s the disagreement? I think it lies in (1) to what extent current outcomes are due to genetic influence, and (2) how easy this influence is to overcome.

Deutsch seems to think that attributing current human behavior to genes is wrongheaded. That it’s easy to override our inborn instincts and perhaps even to overwrite them given the right knowledge. In fact, any genetic component is itself dependent on culture. He doesn’t explicitly say this in BoI, but he hints at it in interviews and it’s a corollary of his views on knowledge[26]: There are no barriers.

It’s hard to square this with the behavioral genetic literature where identical twins turn out so similarly (even when they’re reared apart) and adopted children turn out so differently to their adoptive parents. The most parsimonious explanation seems to be genetic influence on personality and IQ[27]. If not immutable, it seems to be very sticky. But any stickiness must not violate human universality. An important problem to be solved indeed.

Bayesianism

Bayes pops up a fair bit in the rationalist canon.

 

If Deutsch’s worldview conflicts with it, we want to understand why. Don’t worry. It’s not like Deutsch attempts to disprove Bayes rule. That would be a bigger catastrophe than the most pessimistic AI foom scenario.

There’s no issue with Bayes in a statistical setting: When we have a dataset and make conditional statements around it, Bayes is fine. Even better: it can point out unintuitive results so we can diagnose breast cancer more accurately than every doctor in the world.
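To make the breast cancer example concrete, here is a minimal sketch of the standard calculation. The prevalence, sensitivity, and false-positive rate are my own illustrative assumptions, chosen only because they land near the roughly 8% figure that comes up again below.

```python
# Bayes' rule applied to the classic breast cancer screening example.
# The prevalence, sensitivity, and false-positive rate below are illustrative
# assumptions, not figures from the book.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# ~1% prevalence, 80% sensitivity, 9.6% false-positive rate
print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078: a positive test implies only ~8% chance
```

The unintuitive part, and the reason the example keeps getting cited, is that the answer is closer to 8% than to the much higher figure intuition suggests.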

The main gripe is when Bayes is used as an epistemology. There seem to be three main issues for Deutsch: (1) the source of our theories, (2) the irrelevance and arbitrariness of priors, and (3) the invalidity of evidence increasing the likelihood of theories.

Firstly, Bayes alone cannot create new knowledge. Where in the rationalist worldview do new ideas come from? Bayesianism seems to be only ever evaluating beliefs in ideas that we get from elsewhere. If they simply come from data, then this is induction.

(Deutsch spends a lot of time in BoI criticizing induction. He points out that it often refers to two things: (1) knowledge being sourced from data/experience, and (2) further evidence supporting a theory by increasing its likelihood. Both of which, he argues, are false.)

Secondly, Deutsch emphasizes explanations: Priors and updates are irrelevant if we lack explanations. They’re also often arbitrary. When they’re not (like if they’re based on historical rates), they, again, start to sound like induction. The future does not necessarily resemble the past. As Deutsch points out, we know the sun will rise, not because of repeated previous instances, but because we have a good explanation.

(Priors also lead to an infinite regress. How confident are you in your 90% prior?)

Finally, confirming evidence doesn’t increase the likelihood of a theory being true, as shown by examples of black swans in the antipodes and the anthropomorphic chicken that gets eaten for Christmas dinner[28]. Newton’s gravitational theory had as much confirming evidence as anything. It got displaced too.

Is it not simply induction to base our priors on the likelihood of war between North and South Korea on historic rates? And what are the correct rates to use as our outside view anyway? Wars in Korea? Wars between countries where at least one is developed? Or those with nuclear weapons?

And what about when there is no dataset? What would it look like if we were assigning values while traversing the hornet’s nest of the minimum wage literature?

We may have a strong prior that minimum wage hikes increase unemployment. Due to our understanding of supply and demand. We assign our confidence level at 90%.

Then we read a bunch of empirical papers that show small or no impacts. Surprising. But like good Bayesians we lower our priors. Say 40%.

But then we read some papers that do show an impact. 65%.

And then we read Caplan who argues that we should discount the papers that showed small impacts (which we are 83% sure is not for ideological reasons). Back up to 90%.
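For concreteness, here is what that bookkeeping looks like if we actually write it down in odds form, with each batch of reading multiplying our odds by a likelihood ratio. The ratios below are reverse-engineered to reproduce the jumps above, which is rather the point: nothing pins them down.

```python
# Sequential Bayesian updating in odds form. The likelihood ratios are made up
# (reverse-engineered to hit the 90% -> 40% -> 65% -> 90% sequence in the text),
# which illustrates how arbitrary the exercise can be without explanations.

def update(prob, likelihood_ratio):
    """Multiply the prior odds by a likelihood ratio; return the posterior probability."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

p = 0.90                # prior: supply and demand says wage hikes cost jobs
p = update(p, 0.074)    # papers finding little or no impact    -> ~0.40
p = update(p, 2.79)     # papers that do find an impact         -> ~0.65
p = update(p, 4.85)     # Caplan's critique of the null results -> ~0.90
print(round(p, 2))      # back where we started
```

The arithmetic is trivial; deciding what each study is actually evidence of is where all the difficulty lives.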

The main point I’m trying to demonstrate is that these values are arbitrary and they’re not describing reality. They are metaphors for subjective belief. Bayesians may say: “Yes! They are estimates in our credence of a theory. We are well aware that the map is not the territory.” But the growth of knowledge, for Deutsch, “does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations.”

In truth, I don’t exactly know how this cashes out in every specific instance. When is Bayes’ theorem a valid use of statistics and when is it an invalid use of epistemology for Deutsch? The Bayesian patient at the doctor’s office has a (rational?) belief that she has an 8% chance of breast cancer.

I’m not exactly sure how Deutsch would respond. Maybe it’d be rational to make bets on that information but we shouldn’t call it knowledge. I don’t know if Deutsch is a betting man. But whether we know a woman actually has breast cancer depends on the explanation. This knowledge is fallible, but, for Deutsch, there is no concept of numerical uncertainty.

Forecasting

Deutsch makes strong claims against predictions in BoI:

“The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created.”

He also claims that predictions without good explanations amount to “prophecy”.

(Some of his concerns around predictions seem to resemble Taleb’s around the misuse of probability. Namely, it’s a fallacy to assume gamelike probabilities in the real world (e.g. 47.4% chance of hitting red on roulette). The real world has unknown unknowns. However, Deutsch won’t yell at you.)

We don’t know tomorrow’s scientific discoveries. If we did, they would be today’s. Malthus’s math wasn’t wrong. His assumptions about the future were: He didn’t predict Norman Borlaug.

If early 20th-century proto-rationalists were sitting around betting on world affairs, and Borlaug strolled in, a stem of wheat protruding from the side of his mouth, and pushed the chips off the table, they might look up, exasperated. Deutsch, quietly sitting in the corner, would calmly explain to them: “prophecy”.

If your model of the 20th century didn’t include the discovery of nuclear fission, your predictions around war, energy, and the environment would suffer. But how could it have been included? Nuclear fission hadn’t been discovered.

But Deutsch makes predictions in BoI, for example, on aging[29]:

Illness and old age are going to be cured soon — certainly within the next few lifetimes — and technology will also be able to prevent deaths through homicide or accidents by creating backups of the states of brains, which could be uploaded into new, blank brains in identical bodies if a person should die.

Not against predictions so much after all. “Within the next few lifetimes” may be non-vague enough that even Tetlock would be happy with it.

And in Deutsch’s recent discussion with Robin Hanson around the validity of forecasting stock prices among other things, Deutsch not only didn’t object but described it as “morally required”.

It’s hard to square these with his strong claims in BoI.

His main criticism is that probabilistic predictions are dangerous when the thing being predicted depends on the future growth of knowledge. Appealing to a historic base rate, in and of itself, is illegitimate. We need an explanation for why the historic base rate applies, despite any unpredicted future discoveries.

However, Deutsch seems to remind us of this point most loudly when we’ve made a pessimistic prediction.

I have a feeling he’d wince at Toby Ord’s 20% estimate of existential catastrophe. Or if someone invoked the average age of previous civilizations (which, depending on how you count, are 400 years and 300 since the Roman Empire). It’s illegitimate to extrapolate to our civilization as we have an explanation of how it differs. Namely, the way we create knowledge via good explanations and institutions that enable error correction. Poverty, war, climate change, nuclear destruction, and asteroid impacts scenarios are all problems that can be solved.

And even if we didn’t have an explicit theory for how our civilization is different, it would still be illegitimate to extrapolate. Any estimates of future survival depend on future knowledge that can’t be predicted today. A doomsdayer would need an explanation for why the world will collapse despite potential future discoveries.

AI Risk

Deutsch isn’t concerned about paper clips.

Not because he thinks AGI is impossible. To the contrary. It follows from the universality of computation: Anything that a physical object can do can be emulated by a program.

However, he doesn’t think we will break through with current approaches. Rather, we need a breakthrough in philosophy: “a theory that explains how brains create explanations”.

This is not to say that we can’t break through soon: All it’d take is a bold new idea. And the arguments around preventing AI existential risk don’t depend on a singularity happening in the next ten years. Regardless of when it happens, it is a vital problem to be solved.

Deutsch also thinks it would be immoral to constrain an AGI. Just as slavery is immoral. For Deutsch, an AGI is a person. In fact, that’s his definition of what a person is: a creator of new explanatory knowledge. An AGI would be qualitatively different to all existing AIs. Therefore, we won’t make the breakthrough simply from more brute force.

Deutsch isn’t concerned because he thinks the alignment problem is misconceived. Namely, an AGI would be a person. So there are no barriers for it to create new knowledge. An AGI wouldn’t be constrained to solely seek a single goal the same way humans aren’t immutably constrained by their genes.

In fact, there’s an irony where our attempts to constrain the AGI may cause the very risk we are trying to prevent. Those enslaved tend to revolt. Quite right too, according to Deutsch.

Another reason he’s not concerned is that moral progress tends to come along with technological progress. This is not because the is-ought distinction isn’t real. But because progress in both depends on the same thing: namely the ability to create new knowledge via bold guesses and an open environment to give and receive criticism. It’s not a coincidence that our society is more moral than it was 500 years ago as well as being more technologically advanced.

I’m unsure if any of this is strawmanning those in the AI-is-the-biggest-existential-risk-ever camp. And it did feel like BoI, which is now over ten years old, tended to underrate the advances that AI development would make over the last decade. Although the reverse has been true too: Turing expected that by the year 2000, we’d describe machines as “thinking”. Arthur C. Clarke set his dystopia in 2001.

Animal welfare

The way we treat animals, especially on factory farms, gets suggested as a contender for the things we will look back on and lament as immoral. This concern is supported by Peter Singer’s argument that suffering is the appropriate criterion for assigning moral weight and that animals suffer as well as humans.

However, pain ≠ suffering. In BoI, Deutsch points out that scientific studies claiming to measure animal suffering, such as that of stags being hunted, invariably measure things like nerve endings, blood levels, and pain receptors, which are then assumed to come with the associated qualia (suffering) that they purport to measure.

We can suffer without experiencing physical pain. And we can be in pain without suffering. In fact, people choose to endure pain when running or lifting weights and even grow to enjoy it. Arnie described it as heaven (in so many words). And BDSM can be enjoyable too, I heard from a friend.

As with most other things, it depends on knowledge. Whether our pain causes us to suffer depends on our explanation for the pain. However, animals do not create explanatory knowledge.

The psychologist Paul Bloom describes how our theories about an experience affect the pleasure we get from it. Imagine someone informing you partway through a Sunday roast (ignore the fact that we are eating animals for the minute) that the food you are eating is the family dog. The same physical sensations would suddenly become disgusting. Incidentally, a Muslim guy once described to me his attitude towards eating pork as being like how we in the West would react to eating dog meat[30].

However, I’m not sure if this obviates Singer’s argument for animal welfare. Singer may have been too quick to claim animals self-evidently suffer, but the new criterion may just be if an animal can create universal explanatory knowledge.

We walk into Singer’s other argument: that we can’t cleanly demarcate humans from other animals based on a criterion (in Deutsch’s case, the ability to create explanatory knowledge) without either (1) excluding some humans (such as infants or the severely cognitively impaired) or (2) including some animals (such as primates).

Deutsch points to infants as knowledge creators because they learn language, but we might need to bite the bullet with the severely cognitively impaired or risk being “speciesist”.

Deutsch’s main point is that we do not currently know if animals suffer:

“In reality, science has, and will have, no access to this issue until explanatory knowledge about qualia has been discovered.”

Perhaps it pays to be prudent in the meantime?

Problems to be solved

I have tentative conjectures to reconcile some of these problems (and in some cases, it’s hard to pinpoint where the substantive disagreement even is). However, I’m concerned that I already have enough Deutschians on one side yelling at me for misrepresenting how knowledge works and Rationalists on the other side lamenting at yet another person strawmanning AI risk and bastardizing Bayesianism and genetics.

(It would be great if a representative of rationalism like Yudkowsky or Galef discussed these differences with Deutsch to better steelman rationalism.)

However, I am confident that BoI is both totally inspiring and at odds with many ideas in rationalism. All because of Deutsch’s understanding of knowledge: Genetic predispositions can be easily overcome; breakthroughs in AGI won’t happen until we understand creativity; predictions without explanations are prophecies; and knowledge doesn’t become more likely with more evidence.

Fortunately, if there really are no barriers to understanding the cosmos, we can approach these problems with excitement and wonder. After all, that is what the human condition is all about.

Appendix:

High-level tentative guesses of the differences between most people, rationalists, and Deutsch.

Genetic influence

Most people: Mostly blank slates, but some things are genetically determined, like sexuality. Parenting heavily impacts the outcomes of children.

Rationalists: The behavioral genetics literature shows that outcomes are influenced by genes and random environmental effects. Much behavior is explained by our evolved predispositions.

Deutsch: Genetic influence is not immutable, due to human universality. It will be easy to override and overwrite inborn predispositions with the right knowledge. Invoking genes to describe current human behavior is a bad explanation.

Bayesianism

Most people: Can ignore base rates and estimate the likelihood of breast cancer about as well as a doctor.

Rationalists: Beliefs start as priors (based on relevant base rates). Update these via positive or negative evidence. Act and bet based on these beliefs.

Deutsch: Knowledge does not come from data; it is theory-laden. We make guesses and submit them to criticism to correct errors. Confirming evidence does not increase the likelihood of a theory. Knowledge is not based on belief.

Forecasting

Most people: How did Trump win when there was a 90% chance of him losing?

Rationalists: Make predictions that are precise and measurable. Calibrate these predictions. Betting markets improve future predictions.

Deutsch: The future of civilization is unknowable. Predictions that depend on future knowledge are prophecies. Blindly invoking historic occurrences is a bad explanation.

Animal Welfare

Most people: Chick-fil-A is a gift from heaven. Killing whales and mistreating dogs is evil.

Rationalists: Animal suffering is a serious problem. It’s speciesist to privilege humans over animals. Eating meat while “offsetting” with donations is defensible.

Deutsch: Current science doesn't demonstrate that animals suffer. Pain ≠ suffering. Suffering likely depends on the ability to create explanatory knowledge, which animals lack.

AI Risk

Most people: Terminator scenarios are science fiction, right? And why would they want to harm us anyway?

Rationalists: AGI is plausibly the biggest x-risk, along with nuclear destruction. The alignment problem is key; we need to obviate the risk rather than react to it. The timeline doesn’t ultimately matter for the importance of preventing the risk, but plausibly AGI will be developed in 10-20 years.

Deutsch: The current approach won’t achieve an AGI breakthrough. AGIs are people; it’s immoral to try to constrain them. A true AGI won’t be constrained to a single goal because it will be a universal explainer. Once AGI is developed, it will be trivial to improve our hardware to catch up; software is universal.