
Anthropic Bias by Nick Bostrom

Recently, I saw a meme that included the new-to-me phrase “anthropic reasoning.” A reasonable person would have kept scrolling. Instead, I read Anthropic Bias by Nick Bostrom. For posterity, the meme that nerd-sniped me was:

Panel 1: Alice from Alice in Wonderland looking into a rabbit hole saying “Ranking charities empirically by impact? How curious…”

Panel 2: Alice falling into the rabbit hole, surrounded by the following phrases: worldview diversification; insect sentience; anthropic reasoning; cluelessness; AI timelines; Great Filter; acausal; S-Risk; replaceability; patient vs urgent longtermism.

The fact that this is the meme that made me read Anthropic Bias is ironic. Someone should update the meme so panel 1 says “A meme mentioning ‘anthropic reasoning’? How curious…” and panel 2 contains the text of this review.

(In addition to the book Anthropic Bias, I also relied on blog posts by Katja Grace at Meteuphoric and Joe Carlsmith at Hands and Cities to understand anthropic reasoning. Anything that sounds smart in the book review below comes from one of these three people, but all mistakes are solely mine.)

Fine Tuned

If our universe had been very slightly different, it would not be able to support life. Bostrom gives the example of the early expansion speed of the universe right after the Big Bang. If the speed had been faster, the universe would have expanded too fast for dense galaxies to form. If the speed had been slower, the universe would have collapsed again. Luckily, the actual early expansion speed was neither too fast nor too slow, and so galaxies were able to form, which eventually led to life on Earth. Many other parameters show a similar property, permitting life only because they are neither too big nor too small: the ratio of the electron mass to the proton mass, the magnitudes of force strengths, the smoothness of the early universe, the neutron-proton mass difference, perhaps even the metric signature of space-time.

The fact that the universe seems to be fine tuned cries out for explanation. There are a few candidate explanations.

One, maybe the universe isn’t fine tuned at all? Maybe the theories that tell us that the universe has many parameters that could have taken on different values are wrong. Maybe the correct theory of everything has no free parameters, and physicists just haven’t found it yet.

Two, maybe the universe was fine tuned on purpose to give rise to life? Depending on your theological inclinations, the designer who fine tuned the universe could either be God or a programmer running a simulation.

Three, anthropic principles. A bunch of thinkers came up with their own anthropic principles, ranging from tautological to terrible, but all of them try to account for the fact that we exist when explaining why the universe is fine tuned. The tautological end of the spectrum simply says “Any intelligent living beings … can find themselves only where intelligent life is possible.” The terrible end of the spectrum hypothesizes “Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out.”

Bostrom’s contribution to anthropic reasoning narrows in on observer selection effects. As observers, we are not omniscient. We are only able to observe select universes. Intelligent life can only observe universes capable of supporting intelligent life. We should account for this bias when drawing conclusions.

An analogy is useful here. Consider the following famous (but maybe apocryphal?) stock picking scam. A scammer sends 100 people an email predicting Tesla stock’s future. Half the emails say it will go up, the other half say it will go down. The stock goes up. The next week, the scammer emails the 50 people who received the correct prediction another prediction. Again, half the emails say the stock will go up, the other half say it will go down. The stock goes down. 25 people have now received correct predictions two weeks in a row. The scammer does this a few more times, until he is left with a handful of people who received correct predictions several weeks in a row. He then approaches these people, points to his past emails as evidence that he is a stock picking genius, and gets them to trust him with their money.

Bob, one of the targets in this scam, would be incorrect to think “this stock picker is a genius.” Bob should account for observer selection effects. Bob observed a set of emails that had correct predictions, but Bob should think about the fact that he was not able to observe all emails the scammer sent. The scammer looks like a genius based on only the emails that Bob received, but looks less like a genius when Bob realizes that the scammer could have sent many more incorrect emails to others.
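If you want to see just how cheap this trick is, here's a toy simulation of the scammer's shrinking pool (a sketch; the weekly halving comes straight from the setup above):

```python
# A toy version of the scam: half of each week's emails are right no matter
# what the stock does, so the pool of impressed recipients halves every week.
recipients = 100
week = 0
while recipients > 1:
    week += 1
    recipients //= 2
    print(f"Week {week}: {recipients} people have seen only correct predictions")
```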

Similarly, when reasoning about our fine tuned universe, we should not simply think “we are lucky our universe is fine tuned.” We should account for observer selection effects. We are not able to observe all possible universes, only this one. Bostrom goes on to describe frameworks to calculate probabilities that take into account such observer selection effects.

Dark Rooms

Ordinary probability theory, the kind taught to schoolchildren, asks unthreatening questions like “if you pick a marble out of this urn with 1 red and 9 blue marbles, how likely are you to pick the red marble?”

Anthropic bias theory, not constrained by school boards’ opinions, asks questions that sound like they could be from a future Saw franchise, one where Jigsaw becomes obsessed with obscure philosophical questions. “Imagine you wake up with no memory in a dark room, and a voice announces over a loudspeaker that you are part of an experiment where God flipped a coin and brought 10 people into existence if it came up heads, one of whom has a red jacket–” AAAAAAH!

Look, I want to acknowledge that the upcoming thought experiments are weird. If it helps, in my head, when you wink into existence in this hypothetical dark room, you are not-at-all bothered by that fact. Instead, you calmly calculate the probabilities for which way the coin landed and then exit the room to live a fulfilling life.

I.

That said, imagine you wake up with no memory in a dark room, and a voice announces over a loudspeaker that you are part of an experiment where God flipped a coin after promising to create one human in this room if it came up heads, and to leave the room empty if it came up tails. How did the coin land?

Heads = 1 human; Tails = nothing.

This is the simplest experiment I can think of that illustrates anthropic bias. The coin had a 50-50 chance of landing heads or tails. However, the fact that you are awake in the dark room to contemplate God’s experiment is evidence that you should take into account. You only exist in a world where the coin landed heads. Therefore, the coin landed heads.

II.

You again wake up, except this time God’s experiment is slightly different. God planned to create one human in this room regardless of whether the coin came up heads or tails.

Heads = 1 human; Tails = 1 human.

This time, the fact that you exist provides no evidence for which way the coin landed. You would exist no matter how the coin landed. Therefore, your probabilities for the coin landing are unchanged from what they would be otherwise: 50-50.

* * *

So far, so good. In simple thought experiments like these, it is easy to reason about probabilities. However, things get trickier when different numbers of people are created when the coin lands heads versus tails. Bostrom discusses two theories, SIA and SSA, to account for anthropic bias in these situations. An example will illustrate both of these.

III.

You again wake up. This time, God planned to create one human in one room if heads and two humans in two rooms if tails. Which way did the coin land?

Heads = 1 human; Tails = 2 humans.

The first theory, which I find easier to understand, is the self-indication assumption (SIA). Under SIA, you are more likely to exist in worlds with more people like you. How I think of this is: imagine you are playing a game of cosmic musical chairs. “You” (some metaphysical, not-yet-existing version of you) have one chair available in the heads world and two chairs in the tails world. You are more likely to find a chair to occupy in the tails world.

Therefore, under SIA, you should assume there is a ⅓ chance the coin came up heads and a ⅔ chance the coin came up tails.

The second theory, and Bostrom's preferred one, is the self-sampling assumption (SSA). Under SSA, you are more likely to exist in worlds with a greater proportion of people like you. How I think of this is: imagine there are two urns: urn A and urn B. Urn A is filled with only red marbles; urn B has both red and blue marbles in equal numbers. God is planning to pull out marbles from an urn, and create you (again, some metaphysical, not-yet-existing version of you) or someone like you every time he pulls out a red marble. You have a greater chance of existing if God pulls from urn A than from urn B.

In our coin-toss situation, the proportion of people like you in heads-world is 1, and in tails-world is also 1. In both worlds, you could be any of the resulting humans. Therefore, under SSA, each world is equally likely, and the chance that the coin came up heads or tails is ½.
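For the calculation-inclined, here's how I'd put numbers on both theories for this experiment (a Python sketch; the p_heads helper and its framing are mine, not Bostrom's notation):

```python
from fractions import Fraction

def p_heads(prior_heads, weight_heads, weight_tails):
    """Bayes rule: reweight each world by how strongly it predicts your existence."""
    h = prior_heads * weight_heads
    t = (1 - prior_heads) * weight_tails
    return h / (h + t)

prior = Fraction(1, 2)  # a fair coin

# SIA weights each world by the NUMBER of observers like you: 1 vs 2.
print(p_heads(prior, 1, 2))  # 1/3

# SSA weights each world by the PROPORTION of observers like you: 1/1 vs 2/2.
print(p_heads(prior, 1, 1))  # 1/2
```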

IV.

You wake up, this time in a well-lit room. Again, God planned to create one human in one room if heads and two humans in two rooms if tails. However, this time, the people are distinguishable from one another. Specifically, on heads, God planned to create one person in a red jacket. On tails, God planned to create one person in a red jacket and one in a black jacket. You notice you're wearing a red jacket. Which way did the coin land?

Heads = 1 human in red jacket; Tails = 1 human in red jacket, 1 human in black jacket.

Under SIA, you should think about how many people like you (that is, people in a red jacket) exist in each world. In heads-world, there is one person like you, and in tails-world, there is also one person like you. Both worlds are equally likely, so the chance that the coin came up heads or tails is ½.

Under SSA, you should think about what proportion of people in each world are like you. In heads-world, 100% of people are like you, and in tails-world, 50% of people are like you. So heads is twice as likely as tails. The chance that the coin came up heads is ⅔ and the chance the coin came up tails is ⅓.
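Same machinery as before, just with jacket-counting for the weights (again my own sketch, not Bostrom's notation):

```python
from fractions import Fraction

prior = Fraction(1, 2)

# SIA: weight each world by the NUMBER of red-jacket observers (1 vs 1).
sia = prior * 1 / (prior * 1 + (1 - prior) * 1)

# SSA: weight each world by the PROPORTION of red-jacket observers (1/1 vs 1/2).
ssa = prior * 1 / (prior * 1 + (1 - prior) * Fraction(1, 2))

print(sia, ssa)  # 1/2 2/3
```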

* * *

Bostrom abdicates the duty to come up with a cute/cringy mnemonic device to distinguish SIA from SSA, which I would have thought would be the best part of coming up with similarly-named theories. I’ll fill in for him with my best attempt.

SIA sounds like the singer Sia, who according to her songs seems to enjoy partying pretty indiscriminately. The more people you get dancing in a room, the more likely it is that one of them will be Sia, swinging from chandeliers, enjoying cheap thrills, shining bright like a diamond, etc. etc. Similarly, under SIA, the more people like you a world has, the more likely one of them is to be you.

SSA shares initials with the Social Security Administration, the US government agency tasked with distributing retirement benefits. The Social Security Administration cares a lot about which reference class you belong to. Are you 67 years old? Did you work for ten years? Did you pay taxes? Similarly, under SSA, you need to care about your reference class. You should decide what your reference class is, and then calculate what proportion of that reference class is like you.

Both SIA and SSA have flaws. SIA becomes way too confident that there are LOTS of people in the universe, while SSA gives wildly different probabilities when different reference classes are chosen. The next two examples will illustrate their respective flaws.

V.

Bostrom’s knockdown thought experiment to discredit SIA is called The Presumptuous Philosopher. In the year 2100, physicists have narrowed down all possible theories about the universe to two candidates. Under the Small Theory, the universe has a trillion trillion beings. Under the Big Theory, the universe has a trillion trillion trillion beings. Based on experimental evidence, the physicists think both theories are equally likely to be true. A presumptuous philosopher walks into their office and says, “You all can stop working now; I’ve solved it. Under SIA, the Big Theory is a trillion times more likely to be true.” After all, the Big Theory has a trillion times more beings, so the philosopher and the physicists are way more likely to exist under the Big Theory.

The presumptuous philosopher’s math checks out under SIA, but his conclusion still sounds nonsensical. There’s no way he should be this confident that the Big Theory is true! In contrast, under SSA, both the Small Theory and the Big Theory are equally likely.
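To see exactly how presumptuous he's being, here's the arithmetic (a sketch using the round numbers from the thought experiment):

```python
from fractions import Fraction

small = 10**24          # a trillion trillion beings
big = 10**36            # a trillion trillion trillion beings
prior = Fraction(1, 2)  # the physicists' experimental evidence says 50-50

# SIA weights each theory by how many beings it contains.
p_big = prior * big / (prior * big + prior * small)
print(float(1 - p_big))  # ~1e-12: barely any credence left for the Small Theory
```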

VI.

Bostrom prefers SSA, but acknowledges that selecting the reference class is a subjective endeavor. A common criticism of SSA is that this process is too subjective. It allows people to select whatever reference class will give the best answer. Bostrom says that people should select a non-arbitrary reference class, but that doesn't seem like a strong enough criterion to prevent reference class abuse. I trust Bostrom to come up with good reference classes, but others could use reference classes for more nefarious purposes.

Consider our coin toss experiment again. You wake up. God planned to create one human in one room if heads. However, if tails, God planned to create a whole menagerie: one human, one chimpanzee, one dolphin, and one newborn infant. Which way did the coin land?

Heads = 1 human; Tails = 1 human and a zoo.

Under SSA, in heads-world, 100% of the observers are like you. Tails-world is more tricky. Bostrom tells us to pick a non-arbitrary class of observers, but who counts as an observer? A dolphin is fairly intelligent, so maybe it should be included? A chimpanzee is not as intelligent, but is genetically pretty close to a human, so maybe it should be included? A newborn infant is human, but needs to grow up more before it can observe anything, so maybe it should be excluded? God is technically an observer who exists in this world; should God be included in the reference class? Depending on your answers, the proportion of observers who are like you in the tails-world can range from 20% to 100%. This, in turn, changes how confident you should be that the coin came up heads.
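Here's how much the answer swings depending on who gets let into the reference class (a toy sweep with my own simplifications, flagged in the comments):

```python
from fractions import Fraction

prior = Fraction(1, 2)

# SSA under increasingly generous reference classes. Simplification: I hold
# the heads-world proportion at 1 and ignore whether God counts there too.
reference_classes = [("just the human", 1), ("+ newborn", 2),
                     ("+ chimpanzee", 3), ("+ dolphin", 4), ("+ God", 5)]
for label, n in reference_classes:
    tails_proportion = Fraction(1, n)  # you are 1 of n observers in tails-world
    p_heads = prior / (prior + (1 - prior) * tails_proportion)
    print(f"{label:>15}: P(heads) = {p_heads}")  # climbs from 1/2 to 5/6
```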

(Technically, Bostrom prefers a variation of SSA called Strong Self-Sampling Assumption (SSSA). In this variation, your reference class is made up of “observer-moments,” which are observers at different points in time. SSSA avoids some paradoxes that SSA creates, but still has the problem that defining the right reference class is tricky.)

Doom Soon?

I. 

Anthropic bias can help us reason about a fine tuned universe. As I mentioned above, a fine tuned universe cries out for explanation. These explanations fall into two broad hypotheses.

Hypothesis one: This universe is one of many. There are many universes, all with their free parameters set to different values. In 49% of them, the early expansion speed of the universe was too fast to permit life; in another 49% of them, the early expansion speed of the universe was too slow to permit life. In a mere 2% of them, the early expansion speed of the universe was just right, Goldilocks-style. (All numbers are made up.) Same for every other free parameter.

This is easier to see with an analogy. Say a friend comes up to you and says, “I bought a Magic card pack today, and guess what? I found the Black Lotus inside it!” The Black Lotus is a super-duper rare card. Is it more likely that this is the only card pack your friend has bought in his life, or that he has bought more than one card pack? The latter is obviously more likely. Your friend would have to be absurdly lucky to find a rare card in the first pack he ever bought.

Similarly, we would have to be absurdly lucky to find ourselves in a rare universe capable of supporting life if there was only ever one universe in existence.

Hypothesis two: There is only one universe. Maybe our theory of physics is wrong and there are no free parameters, so no other universes could possibly have existed. Or, maybe God created this universe to be hospitable to life.

Anthropic bias doesn’t require you to believe in the multiverse theory over the single-verse theory or vice versa. Rather, you should set your priors to predict you live in one or the other, and then use anthropic bias theory to update and adjust your prediction. If you like Doctor Strange, you might set your prior to 99% multiverse, whereas if you like certain flavors of string theory that have no free parameters, you might set your prior to 99% single-verse. These priors might seem wooly, unfalsifiable, and not based on hard data, but we’re just getting started and it gets much worse!

Then, you need to decide whether you’re going to use SIA or SSA to account for anthropic bias. Bostrom prefers SSA, but they both have their problems. (There are other theories too, but Bostrom doesn’t discuss them so I don’t know them.)

Under SSA, you should update towards believing in a hypothesis where a greater proportion of people are like you. However, in this situation, we don’t know what proportion of all intelligent life is “like us” in the multiverse vs the single-verse. If we assume that the proportions are identical in both cases, then we should use SIA as the tiebreaker. However, if the proportions are different, then we should select the hypothesis where more of the intelligent beings are like us.

Under SIA, you should update towards the hypothesis that results in more people like you. A multiverse is almost guaranteed to have more people like you than a single-verse, so SIA dictates you should revise upwards your credence in the multiverse.

All of this seems very wooly. I feel weird making up numbers for (i) how much alien life a multiverse would hold, (ii) how similar this alien life is to me, and (iii) how many people “like me” exist in the universe. I trust smart philosophers to come up with sensible numbers here, but these theories give me way too much rope to hang myself with.

II.

In addition to fine tuning, the other big problem anthropic bias can help us reason about is the doomsday argument.

The doomsday argument goes like this: let’s say each human gets assigned a serial number corresponding to their birth rank. Adam was #1, Eve was #2, and so on. I am roughly #60 billion. Assuming that my birth rank is random, it is 95% likely that I am within the last 95% of humans to ever be born. In that case, the first 5% of humans numbered no more than roughly 60 billion. And if 5% of humans is at most 60 billion, then 100% of humans is at most 1.2 trillion. So, we can conclude that it is 95% likely that no more than 1.2 trillion humans will ever live.

This is a really scary conclusion for a very simple argument!
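The arithmetic, spelled out (a sketch; 60 billion is the same rough figure as above):

```python
birth_rank = 60e9    # roughly my birth rank
confidence = 0.95    # "I'm within the last 95% of humans to ever be born"

# If my rank is past the first 5%, the first 5% holds at most 60 billion
# people, so the total is at most 60 billion / 5%.
max_total_humans = birth_rank / (1 - confidence)
print(f"{max_total_humans:.2e}")  # 1.20e+12, i.e. 1.2 trillion humans
```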

The argument seems so simple that a lot of people assume it is somehow wrong or incomplete. Bostrom devotes a whole chapter to explaining why this argument deserves to be taken seriously. You might come armed with objections like “Isn’t a sample size of one too small?” or “Couldn’t a Cro-Magnon man have used the Doomsday argument?” or “But we know so much more about ourselves than our birth ranks!” Bostrom addresses all of these and more. (In short, respectively: a small sample is sufficient to prove/disprove certain hypotheses to a high degree of certainty; the doomsday argument’s conclusion is only 95% likely, and the Cro-Magnon man would have been in one of the 5% of cases where the conclusion is incorrect; the extra information we know about ourselves doesn’t change the information the doomsday argument takes into account.)

I think an analogy helps here. Consider the German tank problem. During World War II, each German tank had a serial number. Allied forces took note of the serial numbers of tanks they destroyed or captured. Statisticians were able to use these serial numbers to calculate how many tanks the Germans were manufacturing. Impressively, the statisticians came closer to the correct number of tanks than spies who were gathering intelligence about tank manufacturing. To get some intuition here, imagine that the Allied forces found tanks numbered 9, 61, 185, 186, and 231. Guessing that the Germans manufactured 250 tanks will bring you closer to the correct answer than guessing 1,500.
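For the curious, the classic frequentist estimate from those serial numbers looks like this (the formula is the textbook one for this problem, not something from Bostrom's book):

```python
# Take the largest serial number seen and scale it up by the average
# gap between observed serials: N ~ m * (1 + 1/k) - 1.
serials = [9, 61, 185, 186, 231]
m, k = max(serials), len(serials)
estimate = m * (1 + 1 / k) - 1
print(estimate)  # 276.2 tanks, much closer to 250 than to 1,500
```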

This is similar to the doomsday argument, but with tanks instead of people. In the doomsday argument, instead of finding tanks with serial numbers 9, 61, 185, 186, and 231, you find yourself with birth rank #60 billion. Given this, if you want to estimate the total number of humans that will live, you’ll be closer to the correct number if you guess 1 trillion than 1,000 trillion.

Fortunately, not all is lost. Anthropic reasoning can help disarm the doomsday argument.

First, consider SIA, which offers the more elegant solution. Under SIA, you are more likely to exist in worlds with more people like you. Putting numbers to it, you are more likely to exist in worlds with 1.2 trillion possible humans than 10 possible humans. However, you are also more likely to exist in worlds with 12 trillion possible humans than 1.2 trillion possible humans. And you are also more likely to exist in worlds with 120 trillion humans than 12 trillion humans. The bigger the number, the more SIA likes your chances. SIA completely counteracts the doomsday argument, and I think this is the biggest advantage of using SIA over SSA.
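Here's a toy version of that cancellation, with two made-up world sizes (the point is just that SIA's factor of N and the doomsday argument's factor of 1/N kill each other):

```python
from fractions import Fraction

my_rank = 60 * 10**9  # both candidate worlds below are big enough to contain me

# Two made-up candidate totals, "doom sooner" vs "doom later", equal priors.
worlds = {120 * 10**9: Fraction(1, 2), 1_200 * 10**9: Fraction(1, 2)}

posteriors = {}
for total_humans, prior in worlds.items():
    assert my_rank <= total_humans
    sia_boost = total_humans                     # SIA: N times likelier to contain you
    rank_likelihood = Fraction(1, total_humans)  # doomsday: a random rank is 1 in N
    posteriors[total_humans] = prior * sia_boost * rank_likelihood

normalizer = sum(posteriors.values())
for total_humans, p in posteriors.items():
    print(total_humans, p / normalizer)  # 1/2 each: the doomsday update cancels out
```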

However, SSA does offer its own counterargument. Under SSA, you are more likely to exist in worlds with a greater proportion of people like you. The doomsday argument assumes the reference class is “humans” and then attempts to calculate the total number of humans. However, that is not the only possible reference class, and Bostrom argues that it is a poorly-chosen, “not scientifically rigorous” reference class.

For example, we could assume the reference class is “all intelligent beings.” Doing the doomsday calculations again, let’s assume AD-01 of planet Zalrog was #1, EV-01 of planet Zalrog was #2, and so on. I am roughly #600 gazillion. Assuming that my birth rank is random, it is 95% likely that I am within the last 95% of intelligent beings to ever be born, and so it is 95% likely that no more than 12 thousand gazillion intelligent beings will ever live.

You could also come up with more exotic reference classes. For example, you could assume that future humans will be part-cyborg, and so different from us that they are not part of our reference class at all. Maybe there will truly be only 1.2 trillion humans as the doomsday argument predicts, followed by a thousand gazillion cyborgs.

Changing the reference classes like this seems very arbitrary, but this is Bostrom’s best argument against doomsday. In light of this, I can see why people prefer SIA. SIA has its own problems, but at least its problems are simple. SIA just wants there to be a lot of people in the world. SSA wants you to dive into Ship of Theseus-style paradoxes about how many parts a human must replace with machines before he belongs to a different reference class.

III.

I read somewhere, I can’t remember where, about a family with a recipe for grass soup. They had this recipe so that, if civilization ever collapsed and they were reduced to eating grass, they could at least make it taste good. The family did not prepare for civilization collapse in any other way. They just had this recipe for grass soup.

My opinions on anthropic bias are similar to my opinions on grass soup. It is better to have a recipe for grass soup than to not have a recipe for grass soup, but ideally I’d want a lot more than just that in case of civilization collapse. Similarly, it is better to know about SIA and SSA than to not know about them, but ideally I’d want to have a lot more data before I start forming confident beliefs about whether we are in a multiverse and how many humans will ever live.

This is not a universal rule. I trust some other people to use anthropic reasoning well. In the right hands, used by people with good judgment who have reasonable priors, anthropic bias could lead them closer to the truth. It’s just that… I’m not in that reference class (bah dum tsh).