
Rationality: What It Is, Why It Seems Scarce, Why It Matters by Steven Pinker

Steven Pinker's Rationality Versus Western Philosophy

Rationality is Pinker's third book that touches on politics, which makes him a rare breed: a politically moderate political writer.  I know he's a moderate, because everybody hates him[zz3].  But he's also a professor at Harvard, so Penguin is legally obligated to publish anything he writes anyway.  He has a political bias.  But I'm not gonna talk about that.  Half of you would think he was biased the other way and quit reading in disgust, and it's irrelevant anyway, except to show that even people writing books about how to avoid bias are biased.

About 80% of Rationality consists of examples and psychological experiments to explain common errors, biases, fallacies, and coordination problems; or introductions to probability, Bayesian reasoning, game theory, and correlations.  You could have found these in Nisbett, Kahneman, Tversky, Thaler, Tetlock, and Hanson, or on LessWrong.  But now you don't have to; it's here in one book, in a simple conversational style, with plenty of friendly, minimalist diagrams for the math-perplexed[zz8].  I'm not going to enumerate them, because there's not much point in enumerating biases without explaining them, and to explain them would be to rewrite Rationality.

Pinker brackets the logic-chopping with ecological rationality, saying people aren't as stupid as these failings suggest.  Replication experiments show that people perform better at many of these tasks if the logic problem is restated in a form in which it might appear in everyday life.

Whether training can reduce bias is controversial.  Pinker says it can (chapter 10), but punts the argument to others in footnote 84, which says simply: "Bond 2009; Hoffrage, Lindsey, et al. 2000; Lilienfeld, Ammirati, & Landfield 2009; Mellers, Ungar, et al. 2014; Morewedge, Yoon, et al. 2015; Willingham 2007."

Pinker brings in important points not often found in the rationality community: the correct Bayesian reasoning of San hunter-gatherers (chapter 1); communal outrage, the ontology of classical versus real-world categories, and neural-network pattern association (chapter 3); mythological "belief", and the human default to essentialist metaphysics (chapter 10); and the factuality of human progress (a brief recap of Enlightenment Now! in chapter 11; also in chapter 2).  This is another 10% of the book.

Doesn't sound very political, does it?  But it is.  The picture above is set within a framing story of American politics, whose spectre haunts every chapter.  But the picture is loose in that frame.  Biases, fallacies, and game theory can't explain why people believe in Q-Anon or don't believe in sex, refuse to take a Covid-19 vaccine, think global warming is an existential risk yet fight against nuclear power, or don't trust the media or the universities; or why the gun shops near my town now have almost nothing but AR-15s.  Every American agrees that half of America has gone crazy; they just disagree about which half. [zz1]  The message they'll hear in Rationality is, "America could be one big happy family, if only the <insert your political opponents here> could learn to think straight."  Which is the kind of thinking that got us into this mess.

Rationality is Not the Solution

I'm not saying that reason couldn't cure our politics.  But it's like saying we could solve the housing crisis in Mariupol by building more houses, without noticing the Russian army is still lobbing artillery shells.  Rationality indexes 186 references to biases and fallacies, yet none of these appear: Dionysius, Diogenes, Tertullian, pseudo-Dionysius, Rousseau, Goethe, Schopenhauer, Kierkegaard, Romanticism, William Blake, Pope Pius IX, Stéphane Mallarmé, Nietzsche, Wilhelm Dilthey, Henri Bergson, Alfred Jarry, Guillaume Apollinaire, Henri Rousseau[zz6], Husserl, phenomenology, Pope Pius X, Lebensphilosophie, continental philosophy, Gertrude Stein, dada, Heidegger, Hitler, Adorno, Horkheimer, existentialism, Jackson Pollock, Samuel Beckett, Derrida, Kuhn, Feyerabend.[zzbutler]

The problem isn't people trying to reason but falling prey to biases.  The problem is that reason has been under constant attack since the Axial Age.  Worse, most of our greatest intellectuals have been leaders in the attack[zzwest].  Anyone fighting unreason must first notice that it isn't a grass-roots movement; the worst of it comes from the top down.  Think Catholicism, communism, and post-modernism.   Think the École normale supérieure in Paris and the Harvard Puritans.  Nazism eventually had considerable grass-roots support, but even it was based on pre-existing systems, including the philosophy of Friedrich Nietzsche, the aesthetics of Wagner (Waite 1977 p. 67; Ross 2020), and the tactics of the Marxists (Hitler 1926, throughout).  German intellectuals had been planting its seeds for centuries.[zznaziintel]

The kind of irrationality that leads people to buy too much insurance, I can live with.  It's the kind that sends jackboots stomping across Europe I have a problem with.  Pinker doesn't recognize that that kind of irrationality isn't caused by bad reasoning; it's caused by bad systems of reasoning.  They don't start with somebody forgetting to account for regression to the mean, but with myths and rationalizations made by very smart people.  Any fool can say something dumb, but it takes a genius to say things this stupid with a straight face:

Most of these people were philosophers.  Rationality changed my mind in a way that dozens of other books tried to and failed:  It convinced me that philosophy isn't completely useless, by showing how far astray even someone as smart as Steven Pinker could go by dismissing it.

(In defense of Dr. Pinker, it's unreasonable to expect him to have done otherwise.  It isn't that philosophy can get us out of this mess; it's that philosophy got us into this mess.  Most philosophy is useful only in the way that Mein Kampf is: to help us figure out how large groups of people can go so horribly wrong, and notice if people are doing the same thing again.)

Obviously, Pinker couldn't deal with all these people in one book.  But he should try to figure out what they have in common, and why some consciously rebel against reason.  He's aware of them; he just doesn't take them seriously, because in his world, they're obviously wrong.  (See "Locker Room Talk".)  He dismisses all arguments against reason in chapter 2, "Reasons for Reason", arguing that one can't give reasons to reject reason, because one must use reason to do so.

I've said this myself, but it's wrong.  It's correct that,  to prove something logically, you must presume as a hypothesis that logic works.  But in a proof by contradiction, you disprove and discard your hypothesis.  Given a set of axioms which contains just one inconsistency, you can prove any arbitrary proposition.[zz12]  If a logic can derive a contradiction for any set of axioms, the logic itself is inconsistent and unusable[zzrelevance].  So a logic can prove itself useless.  That's what the champions of unreason believe they've done with any mathematics applied to the real world, or with logic that operates over any statements expressed in human language.  They're all wrong, either in their conclusions, or the implications they draw from them.  What's interesting is why they're wrong.  It's always the same reason.
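
For readers who haven't seen it, here is the textbook derivation of that claim – the "principle of explosion"; this is standard logic, not something from Rationality.  Given a contradiction P and not-P, and any arbitrary proposition Q:

  1. P (one half of the contradiction)
  2. not-P (the other half)
  3. P or Q (from 1, by disjunction introduction: you may "or" anything onto a statement already proven)
  4. Q (from 3 and 2, by disjunctive syllogism: P is ruled out, so Q must be the true disjunct)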

More disconcertingly, some of them may be right on a deeper level.  Pinker gets near this in chapter 2 when he writes of situations when irrationality can be advantageous.  I'll get back to this at the end of this essay.

Argument by Redefinition

"The question is," said Alice, "whether you can make words mean so many different things." – Lewis Carroll, Alice in Wonderland

On Oct. 21, 2018, the New York Times published two-and-a-half sentences from a memo allegedly circulated in the Department of Health and Human Services:


The department argued in its memo that key government agencies needed to adopt an explicit and uniform definition of gender as determined “on a biological basis that is clear, grounded in science, objective and administrable.” The agency’s proposed definition would define sex as either male or female, unchangeable, and determined by the genitals that a person is born with, according to a draft reviewed by The Times. Any dispute about one’s sex would have to be clarified using genetic testing.

The section quoted from the memo used the word "sex"; the Times said the memo was talking about "gender".  The debate about the memo was over its applicability to the Civil Rights Act of 1964 and Title IX of the Education Amendments Act of 1972, which both use only the word "sex".

What followed was a bizarre non-argument between media outlets, gender activists, religious groups, and scientific organizations, with the first two saying the memo was about "gender", and the last two saying it was about "sex".  The real issue was whether the protected status given to sex should be extended to gender identity.  But in 50 news articles I read about it at the time[zzgender], not one word was uttered about that.  The battle was fought solely by spewing irrelevant words whose only purpose was to subconsciously instill a preference for the word "gender" or "sex".

It reminded me of the 3-century-long war between Arian and Nicene Christians (500-800 CE), which destroyed the Western Roman Empire and cast western Europe into its Dark Age.  Here’s a map of Europe in 500 CE[zzmap], with diagonal white stripes across the Nicene territories and horizontal black stripes across the Arian territories:

And here’s a map with diagonal white stripes across the kingdoms that survived into the 9th century, and horizontal black stripes across the kingdoms that were conquered, enslaved, dispersed, or exterminated:

The wars were allegedly over a distinction between the metaphysical Greek words homoiousios and homoousios, in an age when only a handful of scholars knew Greek.  Not one person who died in those wars could have explained what they were officially fighting about.

Fighting over what words mean is common in politics and philosophy.

Usually, when someone wants to say something that won't go over very well, like "freedom is bad" or "racism is good", they redefine a word so as to express their conclusion in the traditionally-accepted language.  As (Popper 1945/1966) pointed out, falling for this is stupid.  If you believe that vegemite is delicious, and then somebody explains to you that what you've been eating is not vegemite, but apple butter, and that vegemite is actually fermented yeast, you should re-evaluate your belief that vegemite is delicious.  We should take at least as much care with our beliefs about things like "freedom" and "racism".

This process of argument by redefinition has been going on for so long that the languages of the two cultures, the humanities and the sciences, have diverged.  This linguistic divide cuts across languages.  "Civilization", in German, English, and French, is good to empiricists and economists, but bad to continental philosophers (Botz-Bornstein 2012[zzbb]).  And like Republicans and Democrats, the two cultures have evolved, perhaps more by random symmetry-breaking than by logical consistency, to hold opposite beliefs, even while continuing to proclaim their commitment to the very same value-expressing sentences.

Pinker is trying to argue across this gap without knowing it.  To see the confusion that results, let's look at some dictionaries:

The Scientist's Dictionary*

Belief: A proposition which so far seems, in my estimation, to give mostly correct results to those who use it

Justified:

  1. A property that a hypothesis about the real (natural) world has, if a random sample of experimental data has validated it in a statistical significance test with a p-value < 0.05
  2. A property that an expectation has if it reduces the entropy of subsequent real (natural)-world experimental tests
  3. Based on observation of the world or inference from some other true belief (Pinker 2021, chapter 2, "Reasons for Reason")

Real:

  1. Something detectable by sensory observations, not restricting "sensory" to the human senses, but to anything that can be mediated, by other real things, into a signal expressible to the human senses.
  2. A reliable statistical correlation in such observations

Realism: Paying attention to all the dirty details of the real (natural) world (Pinker 2021, chapter 10, "Two Kinds of Belief")

Science: What you get when you abandon the childish demand for logical certainty, and trust your fallible observations more than your rational conclusions

True: See Justified.

The Humanities Dictionary**

Belief:

  1. A proposition which I claim is eternally, universally, and necessarily true (classic Idealism)
  2. A myth that binds my society together
  3. A leap of faith (Kierkegaard)
  4. A value I choose for no other reason than to make a choice (Nietzsche, Sartre)

Justified:

  1. Proven formally using logic
  2. Demanded by the present circumstances

Real: Existing beyond time and space, changeless and eternal, like the integers.

Realism: Ignoring all the dirty details of the natural world, and thinking only about the Real world, which is abstract and transcendental (spiritual).

Science:

  1. The faith in rational proof, combined with a set of unquestionable axioms, to arrive at absolutely certain conclusions.
  2. A body of knowledge: the science of war, the science of medicine.[zzvico]  Distinguished from "art" in that "art" refers to technique, craft, or engineering, whereas "science" refers to deductive knowledge.

True:

  1. Of an abstract fact about the Real (transcendental, spiritual) world, the property of being necessarily true in every context, in every place, and at all times
  2. Conformant with personal lived experience

[* Not a real dictionary.]

[** A Real(TM) dictionary, which exists outside of space and time.  Earthly instantiations of it are subject to change.]

So a scientist and a philosopher can both agree that knowledge is justified true belief (Pinker 2021, chapter 2, "Reasons for Reason"); but they have different definitions of "justified", "true", and "belief".

Mimics

Words have also been confused by attempts to smuggle philosophy into religion (Christianity[zzchrist]; the Universalist Church), science into religion (Auguste Comte's Religion of Humanity), politics into religion (the French Revolution's Cult of the Supreme Being), religion into philosophy (e.g., Hegel), religion into science (Naturphilosophie; vitalism), or politics into science (Marxism; Critical Theory).[zzreligion]  Many belief systems fly under false flags.  As in the animal world, weak ones are more likely to survive when they mimic something else.

Nominalism prevents incommensurability

Thomas Kuhn's infamous Structure of Scientific Revolutions (1962) used a lemma saying that adherents of competing theories ("paradigms") are unable to understand each other, because they use the same words to mean different things.  Kuhn called that incommensurability.  Scientists seldom take it seriously, because science is never like that.  The greatest paradigm difference ever among empirical scientists was between the particle and wave theories of optics, whose ontologies didn't share a single term.  But neither they, nor the proponents of geocentrism and heliocentrism, had difficulty understanding each other.

This is because scientists, as empiricists, believe in nominalism (which also doesn't appear in Pinker's book). Nominalism is a theory of how words mean.  It has a long history, yet most philosophers today–even philosophers of science–don't understand it[zzfails].  The best explanation I've heard is in (Popper 1945/1966)[zznom].  Rather than get into the weeds about universals and particulars, I'll summarize nominalism as developing into, or being superseded by, Einstein's operationalization.

Say you want to test whether artificial food coloring causes hyperactivity in kids.  So you're going to give a bunch of kids cookies with or without food coloring, and see how hyperactive they get.  To compare results with other scientists, you both must measure hyperactivity the same way.  So you make a list of behaviors to count, like: Getting out of a chair.  Having both feet off the ground simultaneously.  Running.[zzindex]

Now instead of talking about hyperactivity, you talk about the "global hyperactivity aggregate" (Stevenson 2010 p. 1110) or the "hyperactivity and impulsivity score" (APA 2013 p. 60), which can be measured numerically.  It isn't a "real thing in the world", but a shorthand for the processes someone must use to make their measurements comparable to yours.  You hope it's a good proxy for whatever it is people mean to talk about when they talk about hyperactivity.
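
In code, the operationalized definition is just a reproducible recipe.  Here's a toy sketch; the behaviors, weights, and counts are invented for illustration, not taken from Stevenson or the APA:

    # A made-up operationalization of "hyperactivity": count pre-registered
    # behaviors during an observation period and combine them into one number
    # that another lab can compute the same way.
    BEHAVIORS = ("left chair", "both feet off ground", "ran across room")
    WEIGHTS = (1.0, 1.0, 2.0)   # arbitrary illustrative weights

    def hyperactivity_score(counts):
        """The 'aggregate': a weighted sum of behavior counts."""
        return sum(w * counts[b] for w, b in zip(WEIGHTS, BEHAVIORS))

    kid_with_coloring = {"left chair": 7, "both feet off ground": 4, "ran across room": 3}
    kid_without_coloring = {"left chair": 3, "both feet off ground": 1, "ran across room": 1}

    print(hyperactivity_score(kid_with_coloring))     # 17.0
    print(hyperactivity_score(kid_without_coloring))  # 6.0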

19th-century physicists united the observations which produced Dalton's chemical law of multiple proportions (the proportions of elements in compounds differ by ratios of small whole numbers), the periodic table of the elements, and the kinetic theory of gases (that they consist of tiny indestructible particles bouncing around), to define a concept.  They gave it the name "atom", drawn from ancient Greek theories which also posited that matter is composed of tiny indestructible particles.  Some of these physicists thought atoms were real things; others thought they were just theoretical abstractions.  They argued about this sometimes, but they were still able to understand each others' experiments.

But in the humanities, incommensurability is common.  The Stoics and the Christians couldn't understand the Skeptics (Johnson 2008) or the Epicureans (Hale 2006); Marxists can't understand economists; and seemingly no one in the humanities has ever understood Einstein[zz13].  This is partly because they're not nominalists.  But I think there's more to it than that.

Enlightenment and Pseudo-Enlightenment

Learning from slander

Let's look at cases where people with certain views consistently misunderstand people with certain other views.  Skepticism began in ancient Greece.  Many Greek and Roman philosophers dismissed skeptics for saying that "it is impossible to know anything" (e.g., Numenius (2nd century CE), Cicero (106-43 BCE) (Johnson 2008)).

Johnson (2008 p. 4-5) shows that Arcesilaus (315-240 BCE), the skeptic whom Numenius and Cicero were writing about, was arguing against the Stoic theory of knowledge using Stoic terminology.  The Stoics claimed that "knowledge is a firmly grasped presentation that cannot be shaken or reversed by argument" (Cicero, Academica, cited in Johnson 2008 p. 4).  Stoic epistemology said that knowledge comes through the senses, but that some "graspable" sense perceptions are entirely reliable and can be the basis of certain knowledge.  When Arcesilaus said nothing can be known, he meant "known" in the Stoic sense of being 100% certain.

Plato defined the philosophical terms for certain and uncertain knowledge in Republic, which are rendered respectively as "knowledge" and "opinion" in English.  Johnson (2008 p. 5) summarizes another argument against Stoic epistemology, by the skeptic Sextus Empiricus (late 2nd century CE) in Against the Mathematicians:

  1. Everything is ungraspable because there is no criterion of knowledge [no way of knowing which sense impressions are complete and infallible]
  2. If the wise man assents to anything, he will [also] assent to the ungraspable
  3. Assent [the expression of approval] to the ungraspable is [mere] opinion
  4. The wise man does not opine, so the wise man will not assent to anything
  5. Refusal to give [verbalize] assent is the suspension of judgment
  6. Therefore, the wise man suspends judgment

It doesn't say the wise man has no opinions.  It says he doesn't verbalize them.  This is still extreme, but at least it lets Diogenes find his way back to his barrel at night instead of convincing himself that he doesn't know where it is.  Besides, we know that Sextus Empiricus believed in learning and in practical knowledge, because he was of the empirical school of medicine (hence the moniker "Empiricus"), which taught learning from experience.  He even verbalized his opinions:  Sextus argued against the rational school of medicine, which followed the dictums of ancient authorities and the rational 4-humours theory (Handelsman 1938 p. 330).

Classical thinkers misunderstood the skeptics because they were uninterested in the middle ground between complete uncertainty and absolute certainty.  "Mere opinion" might be good enough for craftsmen and slaves, but not for philosophers.  Up to the time of Hume, philosophers never extended the word "knowledge" to include uncertain or impermanent facts.  They couldn't represent the partial set membership, probabilistic set membership, or family resemblances Pinker wrote about in chapter 3.

Now consider the traditional slander of utilitarians as being "hedonistic" and "selfish".  Without going into quotations and citations, it has three main causes:

If a utilitarian objects that her utility function values the happiness of other people, a curmudgeonly theologian can only interpret that as meaning that she's restraining her own selfish wants with the help of religious belief.  He equates "desire" with eros (lust), and you can't lust for good things to happen to other people.  So his case frame for "desire" won't let him store the sentence "I desire your happiness" in memory.

Cognition as compression

Brains compress sensory information by learning to categorize stimuli (Wolff 1999, Maguire 2016).  The categories learned aren't usually genetically hard-wired; they depend on the environment.  In learning them, brains must do what all lossy compression algorithms do: throw away the least-important information.

One general-purpose lossy compression algorithm is Principal Component Analysis (PCA).  This is a dimension-reduction algorithm, meaning it takes in points in a high-dimensional space and converts them into points in a lower-dimensional space.  Suppose you work at a ski resort, and you need to remember what kind of ski people want to rent.  You have kids' skis and adults' skis.  You only know 2 things about each ski: its length and width.

Now suppose your brain uses PCA to compress that information about the skis.  It figures out that skis that are narrow (kids' skis) are also short, while skis that are wide (adults' skis) are also long.  So it adds up the length and the width of each ski, and stores just that 1 number (call it a "size"), instead of remembering 2 numbers.  That's dimensionality reduction.

Then you go to another ski resort, which doesn't have kids' skis, but has adult downhill and cross-country skis.  The cross-country skis are long and narrow.  In fact, they're just long enough that, when your brain adds up the (normalized) width and length using its old PCA training to get a "size", the downhill skis are the same "size" as the cross-country skis.

But (to stretch the analogy, because we're interested in abstract thoughts, which you can't just "look at") all this is subconscious, because your conscious mind is way too tiny and dumb to do PCA.  Your brain just tells your conscious mind the "size".  So you can't even tell that there are 2 different kinds of skis at this other place.  They now all look the same size to you.  What (Kuhn 1962) calls the "paradigm" of a theory is, in this model, the dimensions that PCA constructs at a particular ski resort.
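
Here is a minimal numerical sketch of that story (my numbers, contrived so the example comes out as described; the PCA is the ordinary eigenvector-of-the-covariance computation):

    import numpy as np

    # First resort.  Columns: length (cm), width (mm).
    first_resort = np.array([
        [100, 60], [110, 62], [120, 65],    # kids' skis: short and narrow
        [170, 95], [180, 98], [190, 100],   # adults' skis: long and wide
    ], dtype=float)

    mean, std = first_resort.mean(axis=0), first_resort.std(axis=0)

    # PCA: keep only the direction of greatest variance.  Length and width rise
    # together here, so that direction is roughly "length + width" -- one "size".
    cov = np.cov(((first_resort - mean) / std).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]
    pc1 = pc1 if pc1.sum() > 0 else -pc1        # fix the arbitrary sign

    def size(skis):
        """The single number the compressed 'brain' remembers about each ski."""
        return ((skis - mean) / std) @ pc1

    print(size(first_resort).round(2))   # kids cluster low, adults cluster high

    # Second resort: a downhill ski (long, wide) and a cross-country ski (longer,
    # narrower), chosen so the discarded width information makes them match.
    second_resort = np.array([[180, 98],     # downhill
                              [216, 80]])    # cross-country
    print(size(second_resort).round(2))      # both ~1.4: they now "look the same"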

I think something like this is what's going on in genuine incommensurability.[zzmemory]

Enlightenment is reversible; pseudo-enlightenment is one-way

Recently a friend wrote to me about the transitions she's seen, in acquaintances and in herself, from mistake theory to conflict theory.  She didn't know of a single case of anyone making the opposite transition, from conflict theory to mistake theory.

I think the one-way aspect of this transition is a more-general phenomenon.  When people speak of being enlightened by Zen[zzen], est, McLuhan, LSD, or Foucault, they often describe it as suddenly coming to understand many disparate things.  They may at the same time describe it as initiation into a mystery.

Compare them to clear instances of true enlightenment: the moment when one first grasps infinitesimal calculus or relativity theory.  Neither of these enlightenments suddenly makes all sorts of things clear.  Their further implications unroll over the course of a lifetime.  They don't answer old questions; they raise new ones, and give you tools to work them out for yourself.  They never create mystery; they dispel it.  They make the world more complex, not simpler.

I think (a little arrogantly) that the first kind of "enlightenments" are pseudo-enlightenments like conflict theory, which aren't ways to see more, but ways to see less.  They train your brain to throw certain features away before they reach your consciousness, or to ignore certain possibilities, making the world look simpler, but also making things that don't fit the new model incomprehensible and mysterious.  In the "fullest" such enlightenments, people buy so completely into the new over-simplification that they see no mystery, because they no longer notice what they've simplified away.

Take the Buddhist doctrine of emptiness.[zzempty]  I interpret it as involving the recognition, also found in Hinduism, Heraclitus (c. 535-475 BCE), some Borges stories[zzborges], and post-modernism, that concepts, things, and people aren't as clearly-delineated or as constant as words imply they are, so that (to take a key example) the concept "I" is slippery and problematic.  There is no "I", if by that one means the constant "I" of logocentric rationalism.  But to go to the opposite extreme and say "there is no 'I' ", unbracketed by reference to a rationalist vocabulary, is just to substitute one radical over-simplification for another.

Take the Marxist-in-the-street habit of speaking without logical quantifiers, saying "X are Y" ("capitalists are parasites", "property is theft", "the government is corrupt").  This automatically generates motte-and-bailey arguments, in which the speaker draws radical conclusions using the reading "all X are Y", claims he meant "some X are Y" if challenged, then immediately uses "all X are Y" again in his reasoning.  They can interpret quantifiers if the sentence is written down on paper, but can't remember them for more than a few seconds.[zzmotte]

Hatred of GMOs, nuclear power, and Jews share a common over-simplification: the concept of purity.  Purity was what the ancient world had instead of germ theory, and sometimes it worked.  It held that "one rotten apple spoils the barrel" applied not just to fungus and disease, but to morality, race, virginity, and community.  Nuclear power plants aren't seen as a calculable risk, but as an aberration, an affront to nature.  Don't reason about how many Jews or nuclear power plants you should have; get rid of them all!

The only pseudo-enlightenments which survive and propagate are those which produce a closed, self-consistent system of thought.  They delegitimate all of the sources of information which could be used to detect their vacuousness.  Religious people can use the "god of the gaps" to provide an explanation for literally anything they don't understand, while criticizing science as inadequate for admitting that it has no explanation, as the vitalist biologist Driesch did (Lillie 1914 p. 842).  Critical race theory prohibits inquiries into the causes or the existence of racial outcome disparity using its own god of the gaps, "structural racism".  Marxism dismisses competing ideas as "ideology" and "bourgeois philosophy".

Pseudo-enlightenment always locks one into a smaller, simpler, self-consistent view of the world.  That's why it's usually irreversible.  The box is constructed so that the tools you need to escape the box are outside the box.

Worldview Differences

Linearity and non-linearity

In chapter 3, under "Logical Computation versus", Pinker makes a small mistake whose significance is not immediately obvious.  He wrote,

After hundreds of thousands of training examples, the connection weights settle into the best values, and the networks can get pretty good at classifying things.

But that’s true only when the input features indicate the output categories in a linear, more-is-better, add-’em-up way. It works for categories where the whole is the (weighted) sum of its parts, but it fails when a category is defined by tradeoffs, sweet spots, winning combinations, poison pills, deal-killers, perfect storms, or too much of a good thing. Even the simple logical connector xor (exclusive or), “x or y but not both,” is beyond the powers of a two-layer neural network, because x-ness has to boost the output, and y-ness has to boost the output, but in combination they have to squelch it. …

The problem may be tamed by inserting a “hidden” layer of neurons between the input and the output, as shown on the next page. This changes the network from a stimulus-response creature to one with internal representations—concepts, if you will.

This passage shows that Pinker is talking only about perceptrons with linear transfer functions.  The transfer function is the function each neuron computes – the output it produces in response to an input vector.  Given the input <x1, x2, … xn>, a linear transfer function can only output a signal of the form c1x1 + c2x2 + … + cnxn.  If you stack another layer after it, each node in that layer outputs a linear combination of those signals, which… is again of the form c1x1 + c2x2 + … + cnxn.  So a 3-layer linear perceptron still can't compute any non-linear function.  Nor can an N-layer linear perceptron, for any N.

To learn the XOR function, the network does (IIRC) need at least 3 layers.  But also, it must use a non-linear transfer function like the sigmoid function.  This is terribly important; or rather, non-linearity is terribly important.
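
Here's a small sketch of both points (mine, not Pinker's): stacked linear layers collapse into a single linear map, while one non-linear hidden layer handles XOR.  A hard threshold stands in for the sigmoid:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

    # 1. Stacking purely linear layers buys nothing: the two weight matrices
    #    compose into one, so an N-layer linear perceptron is still linear.
    W1 = np.random.randn(3, 2)               # hidden-layer weights (3 units)
    W2 = np.random.randn(1, 3)               # output-layer weights
    assert np.allclose(X @ W1.T @ W2.T, X @ (W2 @ W1).T)   # identical maps

    # 2. With a non-linear transfer function in the hidden layer, XOR is easy.
    def step(z):
        return (z > 0).astype(float)         # any non-linearity would do

    hidden = step(X @ np.array([[1.0, 1.0],
                                [1.0, 1.0]]) + np.array([-0.5, -1.5]))
    # hidden unit 1 fires on OR(x1, x2); hidden unit 2 fires on AND(x1, x2)
    output = step(hidden @ np.array([1.0, -1.0]) - 0.5)    # OR but not AND
    print(output)    # [0. 1. 1. 0.] -- XOR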

"Common sense" assumes linearity.  People with no mathematical training are generally incapable of reasoning consciously in terms of non-linear functions, as Pinker demonstrates with the "patch of weed" problem in "Three Simple Math Problems" and diminishing marginal utility in "How Useful is Utility?"  Yet they may use non-linear functions unconsciously, as he shows for exponential and hyperbolic discounting in "Conflict among Time Frames", and our "squirrelly sense of gains and losses" in "Violating the Axioms".  The non-linear normal distribution, which he shows in "Signals and Noise", is also important in everyday life, but foreign to people with no mathematical skills.

Aristotle's classification of political systems in the Politics doesn't make distinctions based on mechanisms like voting, laws, or checks and balances; it classifies them according to what kind of people rule.  Aristotle takes the properties of a polity to be the sum of the properties of its rulers.  Political mechanisms involve non-linear interactions, like the step function between passing and not passing a bill in Congress, or the three-body interactions of the three branches of American government.

I believe that unreason lies less in biases and fallacies than in a lack of certain mental abilities, including the ability to reason using non-linear functions.  I consider these abilities to be part of a person's metaphysics.  Maybe that's not kosher.  The normal distribution isn't spiritual, but neither is it physical.  It's like what Plato's Forms were supposed to be – a universal, timeless, perfect abstraction, which things on earth crudely approximate.  But the material world doesn't emanate from it; it emanates from the coalescing of uncountable individual material events.  Metaphysical concepts like "soul" and "causality" are the building blocks one can use to build pictures and theories of reality, and the normal distribution operates very much like those – with the difference of being well-defined and understandable.

Order versus chaos

One of the first concepts humanity invented was that of order versus chaos.  It features in an astonishing number of creation myths, in which the universe begins in chaos, and some god divides the light from the dark, the earth from the waters, or otherwise brings order to the world.

The ancients saw their only political choice as being between order and chaos.  They believed, for the most part, that societies were either governed from the top-down, or chaotic.  This may be why Aristotle, in his Politics, said that democracies collapsed into tyrannies, even though his own city, Athens, had a history of tyrannies leading to democracy, and no history AFAIK of democracy ever collapsing into tyranny (only of Athenian democracy twice being overthrown by conspiring aristocrats).  Democracy was very hard to reconcile with the order-good / chaos-bad dichotomy.

But in the 1990s, we replaced Order / Chaos with a tripartite system: Randomness / Chaos / Order.  Chaos, in the mathematical sense, is a narrow zone in between the consuming fire of randomness and the icy stasis of order.  Chaos is where life exists.  All self-organizing systems, including life, brains, and economies, have chaos at their computational hearts. (Langton et al. 1992; Kauffman 1993; Bak 1996)

The ancients, in striving to avoid chaos at all costs, fought against every one of the changes that led to the Enlightenment.  Their intellectual children are still fighting it now.  The Enlightenment could be described as learning how to profit from chaos.  It couldn't take place until after the necessary infrastructure was in place: roads, private property, money, literacy, mail, uniform weights & measures, legal codes, tort law, banks, corporations, insurance, and especially freedom of speech.  These greased the wheels of intellectual and material commerce, so that people and organizations could adapt quickly enough to chaotic change, and to each other, to enable an analog of the mechanisms by which physical systems self-organize to the edge of chaos.

Old versus new

Scientists look to the most-recent work in a field as the best.  They don't study people, but ideas, so they seldom bother studying the writings of a field's founders, which quickly become outdated.

In classics and the humanities, many people are inclined to look to the oldest as the best.  Philosophers are likely to dismiss the observation that science is not about obtaining absolute certainty by pointing to Newton, or Leibniz, or even Aristotle, transitional figures who did some things that could be called "science", and much that could only be called "metaphysics" or "religion".

Science believes in evolution, a process by which complexity develops over time.  In philosophy, evolution is the domain of irrationalists like Nietzsche.  Classically, and in mythology before the classical period, the assumption was always that everything ran downhill, from creation to the present day, as in the ages of man of Hesiod and Ovid.

This is also a result of the linearity assumption.  The development in time of a dynamic system can be described by some function of many variables, giving an energy-minimization surface.  If the function is linear, that surface can only be a hyperplane, and a ball dropped on that surface can only roll downhill forever.  Put differently, evolution requires non-equilibrium dynamics, a situation where the dissipation of energy (the ball rolling downhill) is tapped and used to push other things uphill.  This requires non-linear interactions.  A dynamic system that's described by a linear equation has nowhere to go but down, like everything in the neo-Platonist or Christian worldview runs downhill from God in the Great Chain of Being, like Tolkien's world slowly degenerates through the ages from noble to base.[zzsimplicity]

Originality and creativity

Plato's theory of Forms says that there is no such thing as an original concept or an original invention, because all concepts and all objects have an eternal Form, which exists outside of time.  Plato's Meno says there's no such thing as an original idea.  And, sure, Plato is basically the founder of Western Civilization, but I'd just taken it for granted that people had figured out pretty quickly that they could invent things.

So I was stunned when I read in Baudelaire (1859 part 4) an explication of the idea that "nature is only a dictionary."  That meant that "copying from nature" doesn't mean copying scenes from nature, but treating your knowledge of the world as a dictionary, giving many different isolated concepts which must be combined in a new way to say anything meaningful.  "Copying from nature" in the sense that Plato meant it would be like copying out the dictionary.  Baudelaire was saying that writers could write about things they had never seen nor heard of.

I wasn't stunned because I thought this was an original idea; I was stunned because I thought it was so obvious that it couldn't even be said to be an idea.  But the odd thing was, I’d just read 2,000 years' worth of earlier essays on art and literature, and none of it included that idea.  In fact, Western academia took this idea seriously only twice in the history of the world that I know of: in the Hellenistic era (323-31 BCE, the part of antiquity that classics scholars ignore); and again between about 1820 and 1900.[zzbeethoven]

I used Google n-grams to look at the usage over time of the word “creative” from 1600-1900.

[Figure: Google n-gram frequency of "creative", 1600-1900]

Looking at all usages from 1600-1647, and the first 10 from 1650-1700, I found not a single one claiming humans could be creative.

The word "innovation" showed up in English in the 1540s, which did mean to propose a novel idea.  But "innovation" wasn't creation; rather, it was error, and sinful presumption.  Theologians reserved the privilege of creation to God.  St. Bonaventure (1260/1610 p. 46) used "innovate" to describe the invention of new sexual sins.  St. Augustine (Eco 1959/1986 p. 108), St. Aquinas (1270-1273), G.K. Chesterton (1931/2012 p. 82), and C.S. Lewis (1939/1967 chapter 1 p. 6-7) all insisted that human creativity was impossible.[zzcreate]   I failed to find a single text before Shakespeare in which human creativity and innovation were admired rather than condemned as impious or disloyal.  There was even an argument in the Irish Parliament when Henry Grattan rebuked a representative who slandered his bill as being an "innovation", claiming that his bill was merely restoring things to their former state:

It is an abuse of terms to call improvement innovation.  Salutary alteratives which amend the debilitated constitution are justly termed restoratives.  (Bingley 1799, “Extract from the debates of the Irish House of Commons, Thursday, Feb.  14, 1788, On Tythes,” p.  43, recounting the words of Henry Grattan)

This is why there was no distinction between artists and craftsmen in the middle ages (Huizinga p. 246-7, Eco 1959/1986 p. 93).  Neither were thought to be original.

Creativity was rediscovered in the early 19th century, by chemists discovering that elements could be combined to form substances which had properties not present in the original elements (Macvicar 1833, "On the forms and structure of the molecules of bodies", p. 123).  But it soon came under heavy attack, first from modern artists, then from post-modernist philosophers (Barthes 1969, 1971; Fish 1980).  Even Ezra Pound didn't support creativity; when he said "Make it New!", he meant "Re-make it anew!"  All he and the writers he approved of ever did was re-work, or allude to, old myths and poems.

Dichotomies

Order versus chaos is a classical dichotomy, which assumes that the proposition "X is orderly" has a Boolean truth-value for any X.  The belief that our only choices were order or chaos kept the world under the thumb of tyrants for millennia.  Other dichotomies include upper class / lower class, and true / false.  Pseudo-enlightenments often introduce simplifying dichotomies (pure / impure, orthodox / heretic, saved / unsaved, enlightened / unenlightened).

I often use the theoretical distinction rational / empirical as if it were a dichotomy, to find correlated beliefs and to compress information about particular individuals.  But it's based on family resemblances, not necessary and sufficient conditions.  Actual people or schools of thought are usually a mixture of the two, as well as of other epistemologies.  Even Plato found some uses for empirical knowledge (see, e.g., Popper 1945/1966 chapter 5 part 8).  "Rational / empirical" turns out to be the same as the distinction made in (Sorokin 1937-41/1957) between "ideational" and "sensate", where he extends those categories to their correlates in art, literature, law, society, and personality (and identifies a few other epistemologies as well).[zzsorokin]  He uses "idealistic" to refer to a mixture of the two.  I reconstructed these categories myself, unknowingly and unintentionally, not to classify systems of thought, but to help me remember what cultures made what kinds of art, and who said what.   I'll enumerate more distinctions later, but have space only to mention those that apply to philosophy and science.

Spirit versus mechanism

This is a real dichotomy, with no middle ground.  The ancients, not knowing how astronomically big and more-than-glacially slow humans are on an absolute scale, and not having much familiarity with machines more complex than a cart or a catapult[zzmachina], had great difficulty imagining themselves as mechanisms.  The mystery of consciousness is still as difficult today to reconcile with mechanism as it was then.  They posited that living things (in Aristotle's case, all things) moved, when they moved, under the direction of an immaterial soul.

But this stopgap for ignorance could be used to explain anything.  Once life was explained in terms of essence, everything else could be explained in the same way: civilizations, races, art.  Belief in spirit prevents people from thinking of mechanisms, and seems everywhere to lead to a rabid hatred of machines when they become common.  The belief that mechanisms are incapable of behaving intelligently, or at least adaptively, is another crippling metaphysical belief.

Single versus multiple causation

Pinker got this one!  Chapter 9, "What Is Causation?":

We can make sense of these paradoxes of causation only by forgetting the billiard balls and recognizing that no event has a single cause. Events are embedded in a network of causes that trigger, enable, inhibit, prevent, and supercharge one another in linked and branching pathways.

Static versus dynamic

Plato's metaphysics posits that all Real things are static, unchanging.  Hence metaphysicians construct ontologies for nouns but not for verbs.  For them, processes literally don't exist.  Aristotle's physics treats objects as existing in some state, and undergoing "motion" when acted on (the Greek word should be translated as "change", as it includes changes like being painted red).  The action is assumed to happen instantaneously, as do any transitions from one state to another.  Aristotle knew that moving objects slow down gradually, but had no math to deal with continuous changes over time, nor with infinitesimal intervals.  You can see a present-day example of this inability to conceptualize gradual changes in the Hegelian and Marxist doctrine of "the transition from quantity to quality" (Carneiro 2000), which states that gradual changes can somehow cause an instantaneous transition in kind when they pass a threshold – a confused attempt to fit fast, dramatic changes, such as the freezing of water, to Aristotelian physics.

Because rational change is in theory instantaneous, there is no need to consider cases of a motion having more than one cause; time is so subdivisible that it just isn't possible for three things to collide all at the same time.  This conception of the world, and the things in it, as existing in states, is thus one origin of the "common sense" belief that every event has a single cause.  Inability to think about gradual, non-linear change over time also inhibits the development of the concept of mechanism, as mechanisms – at least ones that stay in one place – always have dynamic non-linear motion.[zzdynamic]

Foundationalism

Foundationalism is the rationalist epistemological assumption that reason must begin with an arbitrary foundation of unquestionable assumptions, as geometry begins from a foundation of unprovable axioms.  Unless you reject foundationalism, you can't accuse medieval scholastics, fundamentalist Christians, Marxists, or Nazis of being irrational, nor even of being wrong.  Each of them derives its beliefs from a foundation of unquestionable assumptions, drawn from holy scriptures and/or deified leaders.  Rationalists believe there is no possible way to prove that anyone's foundational assumptions are right or wrong.  This is the most-widely-recognized critical failing of rationalism, and what prompted Kierkegaard and the Existentialists into their respective leaps of faith.

Foundationalism is false.  It results from the linearity assumption and the limitation of number to ratios.  I'll explain how later.

In the discussion about "What the Tortoise Said to Achilles" in chapter 2, Pinker at first seems to be recognizing that rationalism is foundationalist.  But no; he uses that discussion only to support an argument for accepting rationality in general.  The problem of foundations already presumes rationalism; it asks how to choose the values you want to rationally maximize.  My reading of Pinker's discussion of the Golden Rule in chapter 2, Morality, is that, like Sam Harris, he thinks we can deduce the proper values from our sensations of pleasure and pain (in a very general sense), plus game theory.

I admit that this gets us a lot further than religious fundamentalists think it does, but I'm not convinced that it's sufficient to resolve the most-dangerous disputes.  When we try to deduce our values from game theory, our results depend on our payoff matrix, which depends on… our values.  There is a game-theoretical argument for settling things by debate instead of by fighting, but not only does it not apply if you're the stronger party by far in a two-sided, zero-sum game; it also doesn't apply if you're a medieval noble who delights in war, or an ancient Greek who believes military prowess is the greatest virtue.

Just as importantly for the narrative I'm constructing, this is cheating.  If you base your values on your sensual preferences – a preference for health, love, pleasure, and friendship – you're no longer being strictly rational.  I would do the same thing; I don't think you should be strictly rational.  But I think the bigger problem is a long philosophical war over epistemology, and winning it requires knowing what you're fighting over; and one of the main issues is the problem of foundational values.  To derive more of the humanist values he holds, Pinker uses the lemma that we can derive a sufficient starting set of values from sense perception.  But that lemma is what is being fought over.

Art

Art?

Yes, art.  Art is deeply, perhaps regrettably, intertwined with philosophy, and hence with reason.

Plato developed the mimetic theory of art, which says that art can never be creative or original, but can only imitate life.  Mimetic theory is implicit in "representative art"; it's only re-presenting what has been presented before.  This is why modern artists said, and still say, that representational art is not Art at all, as it can have nothing new to say.

This theory didn't seem as ridiculous in Plato's time as it does today.  Art, at least plastic art (like painting and sculpture), was either formal art – decorative repetition of patterns – or it did just imitate.  Every piece of pre-Hellenistic Greek pottery I've ever seen had either a geometric pattern, or a re-presentation of a familiar scene, a historic person, or a familiar story.  It wasn't until after Plato's death that Hellenistic artists began sculpting more imaginative scenes, like the death of a barbarian warrior, or a lion pouncing on a horse.

Here's Norman Rockwell's refutation of Plato: Girl at the Mirror, 1954

We see a young girl looking at her face in a large mirror, while a magazine on her lap shows the actress and beauty queen Jane Russell.  On the floor beside the mirror lies an old doll, carelessly discarded, as if the girl is moving past childish things.  On her face, in the mirror, Rockwell captures a critical moment in her development as a person, one that comes to every plain girl once in her life: the realization that she will never be beautiful.

It isn't mimesis, a representation of life, because Rockwell never saw that scene in real life.  He saw a young model who had no idea what Rockwell wanted, posing for him in front of a mirror, not even thinking about the magazine (Denny 2013).  The scene was Rockwell's imaginative way to make even men or beautiful women empathize with plain girls.

Neither is the meaning of this painting just a linear sum of the meanings of the things in the painting.  It's a structure, a mechanism of interlocking parts.  Move the mirror behind the girl, or have her face away from it; then it would be just a collection of objects.  And it wouldn't be art.

Neither does it fit into the orderly / random or static / dynamic dichotomies.  There is order in the relation between the girl and the mirror; her things are heaped carelessly on the floor.  It's chaotic, because it's alive, and because it depicts a moment of change.  It isn't static, because it depicts the event of thinking; but it's hardly dynamic.

But if someone hasn't got the concepts of mechanism, nor of chaos as the productive region between fire and ice, nor of originality, they're not going to be able to conceive of a painting like this.  Let's look at two types of paintings such people could (and did) conceive of.

Piet Mondrian 1920. Composition No.II

Here we see

Jackson Pollock 1948. Number 5.

Here we see:

Now I'll finally propose my answer to the question, "Why are different paradigms in philosophy and the humanities often incommensurable, while paradigms in science are not?"

Rationalism is the Problem

Pinker is butting into a 2500-year-old conversation which he isn't especially interested in.  If he'd paid attention in philosophy classes, he would at least be able to distinguish between nominalism and realism; empiricism, rationalism, phenomenology, and faith; and utilitarianism, pragmatism[zz2], deontology, and virtue ethics.  He doesn't confuse these things in his own arguments in Rationality as often as he did in Enlightenment Now!  But he's a science reporter in a war zone, who not only doesn't know where the battle lines are, what armies are fighting, or what their uniforms look like; he doesn't even know there's a war.

Nonetheless, Pinker comes close to the awful truth about irrationality many times, such as when noting:

Remember when I wrote about how scientists and philosophers agree on certain sentences, while using some of the words in them very, very differently?  Well, one of those words is "rational".

Nearly all scientists think that "rational" is a synonym for "reasonable".  So do nearly all philosophers.  But "rational" means something entirely different from what scientists think it means.

Note I didn't say "rational" means something different to philosophers.  "Rational" is their word.  They were using it, and its Latin and Greek equivalents, for 2,000 years before empirical scientists arrived on the scene.  You don't get to change it.  If you use a wrong definition of it ignorantly, as Pinker does, you are aiding the forces of unreason.  They're better at dictionary judo than you are.  Like Marx, they'll point to your claim that science is rational as proof that they, too, are scientists.  Then when their untrammelled rationalism inevitably goes horribly wrong, they'll blame it on science, like Lyotard blamed the Holocaust on liberal humanism.  If you fail to draw the distinction between rationalism and empiricism, your opponents will make it, and always against you.

The distinction is difficult to maintain, because rationalism includes lots of things that empiricists need, like math, logic, and deductive reasoning.  The "ism" is rational thought, plus a set of metaphysical beliefs which accumulated around it to support the belief that nothing but pure rational thought, untainted by the material world, is valuable.

Rationalism is an epistemology which radically simplifies reality in order to make it amenable to logical deduction.  It takes geometry as its inspiration and model (e.g., Kepler described in Koyré 1973 p. 139 as cited by Sihvola 2000)[zzgeom], presuming that we can logically deduce absolutely certain, timeless, immutable truths about the world.  It uses the classical conception of categories defined by a list of necessary and sufficient conditions, and assumes the resulting category boundaries are clear and infallible.  It distrusts the senses, and does not allow talk of probability, or family resemblances.  It's uncomfortable with non-linear continuous functions[zzzeno].  For thousands of years it opposed the use of zero and of the real numbers, but loosened up a bit after Newton (although the Jesuits, Hobbes, Hegel, and Marx, smelling the epistemological heresy, all devoted considerable effort to refuting or at least reformulating Newton's calculus).

To use the terms we went through so tediously above:  Rationalism is any pseudo-Enlightenment which radically over-simplifies the world to make it amenable to dialectic.  The most-common over-simplifications include: assuming that words unambiguously denote clearly-defined categories; assuming linearity; assuming the order / randomness dichotomy and prioritizing order; prioritizing the old over the new; taking a static view of words, meanings, and mental states; using a static ontology which does not allow for new creations; believing in a world based on spirit rather than mechanism, in which events are transitions between states and every event has a single cause; and reasoning using a deductive logic based on geometry, which requires a foundation of axioms which must be taken on faith (religion), on someone's command (Hegel), or at random (dada, Existentialism).

Scientific Reasoning

The opposite of rationalism is empiricism, which is the epistemology of modern science.  It believes that all knowledge arrives either through the senses, or using genetically-programmed information which evolved as a result of ancestral interactions with the material world.[zzblankslate]  It defers translating data into language as long as possible, to avoid simplifications.  It defines words operationally, prioritizes the new over the old, uses statistically-defined categories, believes in a world based on mechanism, sees events as dynamic and as having multiple causes, reasons using inductive logic and probability, and forms opinions using optimization theory or energy minimization techniques, which don't require a special choice of initial beliefs.

Going back to the cases in "Learning from slander", we see that in both cases, a person using a rationalist concept (certain knowledge, senses as sinister) was unable to grasp its empiricist analogue (uncertain knowledge, senses as helpful).

I think that's why different theories in the humanities are sometimes incommensurable, while different scientific theories aren't.  The empirical sciences must have a model of reality that's general enough to accommodate any aspect of the material world that can make a difference in their experiments.  The humanities all have more rationalist over-simplifications, but they don't all have the same over-simplifications.  They can't correctly recall the thoughts of someone from a sufficiently-different school, or of a scientist.  Since they don't use operationalized definitions, they can't follow someone else's words back to the material world and figure out what they mean.

Scientific logic

Scientists do use logic.  But it's different from rationalist logic in crucial ways.

The scientist develops theories by alternating between inductive logic, deductive logic, and experimentation.

Francis Bacon (1620) and the Royal Society of London (Sprat 1667) explicitly condemned rationalism as useless.[zzbacon]  People at the time understood that this was the true scientific revolution: the abandonment of the quest for absolute certainty, which had led them to spend the past 40 years murdering each other on battlefields all across Europe, in the 30 Years' War (1618-1648) and the Interregnum (1649-1660), over religious arguments few people understood.

But they over-reacted in condemning deductive logic entirely.  This led to an over-reaction to the over-reaction, in which people such as Ernest Nagel (1961, cited in Stark p. 367) and Karl Popper (1994 chapter 4 section 14) argued that theory development uses only deductive logic, and that what makes science different from philosophy is that it alternates between empirical experiment and deductive theory.

As it happens, I'm one of the few scientists who have tried to reason – well, to get a computer to reason – using nothing but deductive logic.  I tried to design symbolic knowledge representations and inference methods that would let me translate an English sentence into logic, then make logical deductions whose results could be converted back into English.  I can summarize 12 years of work by saying, It's extremely labor-intensive and error-prone, involves too many trade-offs to design a general-purpose representation suitable for everything (see also Guha 1992), suffers from combinatorial explosion, and never works well at anything but chess.

I don't use much deductive logic to develop a theory or an algorithm.  I do use a lot of math.  But you have to be careful translating real-world observations into math, and then translating your deductive mathematical conclusions back to the real world.  Empirical scientists have spent the past century developing a workable interface between formal mathematics and the real world, which they call "statistics".  It solves many of the trickiest problems in epistemology, at the expense of putting all sorts of provisos on the knowledge that comes back out of the math.

Think about experiments you've done, or developing theories or computer code.  You don't write out the code logic on paper, debug it in your head until you think it's right, then type it all in and have it work the first time.[zzonce]  You develop it gradually, deducing effects of changes, making inductive guesses, doing some math, testing little pieces of code, trying to run the entire program, failing, and repeating all of the above.  That's how the scientific thought process works, even the development of theories, which after all are a lot like computer programs.  And neither are infallible.

The scientist's logical symbols represent statistical, not classical, categories.

Pinker addresses this in chapter 2, "Classical versus Family Resemblance Categories", and throughout chapter 7 on "Hits and False Alarms", but he never connects the two to make the point that scientists define categories statistically.  Chapter 7 presents the problem of deciding which of two distributions a sample comes from as if it had nothing to do with categorization – the categories in his example already exist, and the decision that a defendant is innocent or guilty can be said to be right or wrong.  But an empirical epistemology says that categories are built this way, from sense data.

Rationalism is logocentric; it attributes certain magical powers to words.  For one, it assumes words divide the world up neatly into categories that are metaphysically "real".  This leads rationalists to argue about what words "really" mean, as described above under "Incommensurability".  Empiricists instead use nominalist definitions.  One way of saying that is that they define a word by building a decision surface in a high-dimensional feature space.  Every event or object whose features are measured is defined as belonging to the category if the point those features specify in that space falls inside the decision surface, and as not belonging if the point falls outside.
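As a concrete illustration of that kind of definition, here's a minimal sketch (entirely my own, with made-up weights): a linear decision surface in a two-dimensional feature space, with category membership decided by which side of the surface a point falls on.

    # A nominalist category as a decision surface: w . x + b = 0 in a 2-D
    # feature space.  Points on one side are "in" the category, points on
    # the other side are "out".  The weights are invented for illustration.
    def in_category(x, w=(1.0, -0.5), b=-2.0):
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        return score > 0.0

    print(in_category((4.0, 1.0)))   # 4.0 - 0.5 - 2.0 = 1.5 > 0, so True
    print(in_category((1.0, 3.0)))   # 1.0 - 1.5 - 2.0 = -2.5 < 0, so False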

This figure from chapter 7 shows the simplest possible decision surface: a point in a one-dimensional feature space (a point on a line), dividing that space into 2 categories:

When you construct categories statistically, however, you don't call anything a "miss" or a "false alarm".  That dividing line defines the categories, even though you know that your measurements have noise, that you're counting some things as A which you'd rather call B if you knew more about them, and that the decision surface is your own construction, not God's.

The purpose of words is to convey information, so the right way to construct categories is to choose decision surfaces in a way that makes their word definitions convey the most information in the fewest words, given future observations drawn from the same probability distributions you used to choose the decision surface.  That's all that's needed for words to convey meaning.  In this case, the decision surface that would make your two new words convey the most information would be the x-coordinate of the point where the two bell curves cross.  (This is also the point that would give the fewest mistakes, if we had pre-existing categories and could talk about mistakes.)
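Here's a quick sketch of that calculation, with invented means and standard deviations; scanning a grid for the point where the two densities are equal recovers the threshold described above.

    # Find the decision point where two Gaussian "bell curves" cross.
    # For equal priors this is also the threshold that will mislabel the
    # fewest future observations.  The parameters are made up.
    import math

    def gaussian_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    mu_a, sigma_a = 0.0, 1.0   # category "A"
    mu_b, sigma_b = 3.0, 1.0   # category "B"

    crossing = min((x / 1000.0 for x in range(-2000, 5000)),
                   key=lambda x: abs(gaussian_pdf(x, mu_a, sigma_a) -
                                     gaussian_pdf(x, mu_b, sigma_b)))
    print(crossing)   # 1.5 -- midway between the means when the variances are equal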

This refutes Buddhism, which is based on the claim that the choice of a decision surface between "X" and "not X" is arbitrary.  That's true, if the distributions of the feature vectors of your observations are flat.  If instead of two bell curves, you had just two overlapping intervals, and observations were equally likely to fall on every point on the line, a decision point anywhere within that overlap would be just as good as any other point within that overlap, so it would be arbitrary.  But the non-uniformity of the probability distributions means there is a unique, optimal decision surface for distinguishing the categories, even though we know it will still make "mistakes".[zzclt]

Symbolic artificial intelligence (AI) uses classical categories, while connectionist AI uses statistically-defined categories.  The competition between them paralleled that between rationalist and empiricist epistemologies.  But since computer scientists tested their theories on data, the failings of symbolic representations were clear after just 30 years; and now, another 30 years later, connectionist representations have already solved the big problems of epistemology.  Studying the contrast between the two is an easier way to understand the difference between rationalism and empiricism, than is studying rationalism and empiricism.

The scientist does not have absolute certainty in her conclusions.

If I were writing this for people in the humanities, then here I'd give a long list of quotes to prove that scientists really don't claim absolute certainty, because people in the humanities believe that scientists do.  Since I'm writing this for Astral Codex Ten, I'm assuming most of you have the opposite illusion:  You don't believe that most people in the humanities believe absolute certainty is attainable.

Most rationalist philosophers really do claim dialectic is infallible (and the rest of the humanities generally follow along).  Recall that Plato, arguing against trusting the senses, said (IIRC) that they can give illusions when one is looking down at a stick poking out of the water, or when one sees a mirage in the desert.  That's, like, one observation out of every 100,000.  The implication is that abstract reasoning is more reliable than that.  And Plato didn't live anywhere near a desert, and hated the waterfront.

Recall Descartes' Cogito ergo sum.  Its whole point was to be absolutely certain.  A lot of philosophers still seem to think Descartes wasn't just being childish.

Recall Hume's arguments about causality and whether one can know the sun is going to rise tomorrow.  He wasn't saying it's a coin-flip whether the sun will rise tomorrow; he was saying you can't be absolutely, 100.000000% certain, using rational argument, that it will.  Most philosophers still think he was wrong.  None of them have asked why it matters.

Recall Kant's Critique of Pure Reason (1781), which philosophers also still think was a big deal.  His conclusion wasn't that rationality is fallible.  It was that rationality, though infallible, can never perceive the Ding an sich (I just read that as "Platonic Form") of things in reality.  Kant said that your sense perception organizes the world, so you never sense the world unmediated.

For an example of what this might mean, imagine that your brain got its wires crossed before you were born, so that you see everything that's "actually" on your left as being on your right, and vice-versa.  How could you detect this?

You couldn't.  Even if there are laws of physics that break left-right symmetry, and you test them yourself, when you observe the results, your brain flips them so that they match what you call "left" and "right".  When someone says "Raise your left hand", you'll raise the hand you call "left", even if, in your mind, you have the phenomenological experience that other people do when raising their right hand.

Rationalist philosophers found any such irrelevant grain of uncertainty shocking.

Nor is absolute certainty dead today.  CS Lewis[zzlewis1] and, IIRC, Ayn Rand believed in it.  The post-modernist Paul de Man and the semiotician Jonathan Culler, in their analyses of Proust, both said an absolute certainty in the commonality between two things being compared was necessary for metaphor to successfully capture essences (Culler p. 215-220).[zzculler] The pragmatist John Dewey gave a whole lecture series against the quest for certainty in 1929, collected as a book (Dewey 1929).  Its first two chapters are a good introduction to the problem of certainty in philosophy.

In contemporary continental philosophy, keep on the lookout for the word "contingent".  It literally means "historically contingent", the opposite of "necessary".  Evolution is contingent; it wouldn't produce the same species if you started over.  But continental rationalists today use it as a synonym for "arbitrary" (assuming all possible outcomes were equally likely), and "arbitrary" as a synonym for "meaningless" or "social construct".  The "nausea" that caused Sartre to invent existentialism came from feeling he was "superfluous" and that life was "absurd" because he was "contingent" and not logically "necessary" (Sartre 1938 / 2013[zzsartre]).  After Foucault, "social construct" was more-often taken as a synonym for "oppressive" (assuming all social institutions are oppressive).  For instance:  Driving on the right side of the road is contingent, therefore arbitrary, therefore a social construct, therefore oppressive.

Okay, they don't actually apply that argument to driving on the right side of the road.  But they should, to be consistent.  (Chandler 1994-2022, retrieved 2005, glossary, "Relativism, epistemological") attributes the evils of colonialism to it being "historically contingent".  To continental philosophers, anything society does that isn't logically and universally necessary should be abolished.

Absolute certainty is why rationalism is so very, very terrible.  True rationalists, like Marx or the medieval scholastics, uncorrected by real-world observations, eventually deduce insane conclusions.  If their followers then try to force them on the world, the execution of their program usually involves executions.  If the theory doesn't fit the real world, the real world must be bent to fit the theory, and everyone who won't go along must be silenced or eliminated.

The scientist does not need to assume anything.

When Christians say that science is just another faith, they always mean that science, like Christianity, must rest on some foundation of unquestionable beliefs.  Many scientists will even agree with them.  But it doesn't.  A large number of questionable beliefs will do, if most of them are correct, and if you don't need absolute certainty.  As long as these beliefs are correlated, meaning not independent, you can use various mathematical techniques to find the assignment of probabilities to the entire set of beliefs which is most-probable.  The simplest is to give each belief an initial, non-committal probability a little greater than 0.5, then repeatedly sweep through all of the beliefs, one at a time, updating each one's probability of being true while pretending that all of the other current guesses are correct.  This usually works pretty well.
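Here's a toy sketch of that updating scheme.  It's my own illustration, with invented couplings and priors, and it resembles the mean-field updates used in machine learning; it isn't anything Pinker (or anyone else) prescribes.

    # Three beliefs that partially support each other.  Each sweep re-estimates
    # one belief's probability while pretending the current estimates of the
    # others are correct.  All numbers are invented.
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    couplings = [[0.0, 1.2, 0.8],    # couplings[i][j]: how strongly belief j supports belief i
                 [1.2, 0.0, 0.5],
                 [0.8, 0.5, 0.0]]
    priors = [1.0, 0.2, -0.3]        # log-odds from direct observation of each belief
    p = [0.6, 0.6, 0.6]              # non-committal starting guesses

    for _ in range(50):              # repeated sweeps over all beliefs
        for i in range(len(p)):
            evidence = priors[i] + sum(couplings[i][j] * (2 * p[j] - 1)
                                       for j in range(len(p)) if j != i)
            p[i] = sigmoid(evidence)

    print([round(x, 2) for x in p])  # the beliefs settle into a mutually consistent assignment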

You might argue that to do this, you need to already know the conditional probabilities between beliefs.  Actually, you don't.  If you can make lots of observations of the things you have beliefs about, you can guess those conditional probabilities too.  In fact, using a technique called Gibbs sampling, you can guess only the conditional probabilities, and use a similar updating process to iteratively use each observation to update your guess at one conditional probability, producing the marginal (unconditional) probabilities as you go.  (I hope I've got that right.  Anyway, it works.)
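And here's about the smallest Gibbs-sampling sketch I can manage: two binary beliefs, with conditional probabilities I invented (chosen to be consistent with a joint distribution), from which the marginal probabilities fall out as sampling frequencies.

    # Gibbs sampling: resample each belief from its probability conditional on
    # the current value of the other, and tally how often each comes up true.
    # The conditionals below come from a joint with P(A)=0.5 and P(B)=0.6,
    # so the tallies should land near those marginals.
    import random

    P_A_given_B = {True: 2.0 / 3.0, False: 0.25}   # P(A=true | B)
    P_B_given_A = {True: 0.8,       False: 0.4}    # P(B=true | A)

    random.seed(0)
    a, b = True, True
    count_a = count_b = 0
    n = 100_000
    for _ in range(n):
        a = random.random() < P_A_given_B[b]   # resample A given the current B
        b = random.random() < P_B_given_A[a]   # resample B given the new A
        count_a += a
        count_b += b

    print(count_a / n, count_b / n)   # roughly 0.5 and 0.6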

Another way to think about this is in terms of function optimization.  Function optimization works something like this:  Instead of trying to directly deduce the best answer to a question, you phrase the question in a quantitative way, constructing a function that assigns a fitness to any point in some N-dimensional space.  That fitness function then defines a surface in (N+1)-space, and finding a good answer to your problem can be done by starting anywhere on that surface and climbing upwards until reaching a nice mountaintop.
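A minimal hill-climbing sketch, with an invented fitness function, to make that picture concrete:

    # Start at a random point and keep any small random step that increases
    # fitness.  The fitness surface is made up, with a single peak at (2, -1).
    import random

    def fitness(x, y):
        return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

    random.seed(1)
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)   # arbitrary starting point
    for _ in range(20000):
        nx, ny = x + random.uniform(-0.1, 0.1), y + random.uniform(-0.1, 0.1)
        if fitness(nx, ny) > fitness(x, y):    # keep any change that climbs
            x, y = nx, ny

    print(round(x, 2), round(y, 2))   # ends up near the peak at (2, -1)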

A rationalist would say that can't possibly work – your conclusions, they say, always depend on your starting assumptions.  That's because rationalists never imagine arguments, or anything else, as undergoing continuous motion.  They think in terms of logic, which proceeds in large, discrete steps.  All the non-foundational methods I know, by contrast, take a random starting point and incrementally improve it.  But just thinking about this requires familiarity with real numbers, continuous motion, and non-linear functions.  People who aren't comfortable with those things are likely to believe in foundationalism, and likely not to think in terms of incremental change, but of jumping immediately to the final solution.  So foundationalism and radical politics are highly correlated.

The place for rationality

I'm trying to reserve the word "rational", at least within this text, for types of reasoning that a rationalist might use, in order to keep the crucial rationalism / empiricism distinction clear.  This is important to get straight, but difficult, both because "rational" is overloaded, making too many distinctions; and because we need to make other distinctions we have no words for.

One problem is mathematics.  Mathematics is a formal system which has the absolute certainty rationalism loves.  Rational math uses only integers and ratios, while empirical math has extensions rationalists dislike, to interface with the material world, like real numbers, calculus, error theory, and statistics.[zzanalysis]  Scientists use statistics and error theory to prevent the absolute certainty of mathematical proofs from seeping out into certainty in the real world.  But the way that works is so complicated that most scientists don't understand it; they just click the "Regression" button in SPSS (and then, as often as not, misinterpret the results rationally).  They still talk about "falsifiability" without realizing that their statistical t-tests assume falsifiability is theoretically impossible.
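To make those provisos concrete, here is a minimal sketch, assuming the scipy library is available; the two samples are invented numbers.

    # A t-test never returns "true" or "false".  It returns a statistic and a
    # p-value, hedged by assumptions about the data (independent samples,
    # roughly normal, comparable variances).  Evidence, never certainty.
    from scipy import stats

    control   = [5.1, 4.8, 5.0, 5.3, 4.9, 5.2]
    treatment = [5.6, 5.4, 5.9, 5.5, 5.7, 5.8]

    result = stats.ttest_ind(treatment, control)
    print(result.statistic, result.pvalue)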

Algebra is technically rational (it can be done with integers), yet difficult to work with without thinking of functions as continuous; so it blurs the line.  Discrete math is quite useful, and 100% rational.

So applying the rational / empirical distinction to mathematics is impractical.  I'll use the term "empirical math" or "empirical rationality" for the middle ground which has the absolute certainty of rationalism, but the real-world applicability of empiricism.  The purpose of this distinction isn't to criticize one form of math; all are valid.  The purpose is to make people aware of the metaphysical differences between empiricism and rationalism, and to clarify that "rationality", defined as the kind of thought rationalists believe is valid, is useful if it's handled carefully.

The other problem is that deductive argument is inherently rational.  Measurements and statistics are great, but cumbersome.  Symbolic logical deduction lets you cover a lot of ground quickly, at the cost of greater error in conclusions and in meanings.

So rationalist methods of thought are important for empirical science, but must be confined to the particular places where simplification and abstractions have the most value, and where we have methodologies such as operationalization that ensure our terms are meaningful and internally consistent.  These include

Pure Kantian rationalism emphatically does not belong in the processes of

Using the word "rationality" to mean reason in general legitimizes the absolute certainty which is the cause of our current political woes.  To stem the tide of irrationality and of rationality run amok, scientists must dispel the caricatures rationalists draw of the empiricist alternative to rationalism, and must make it clear that science uses empirical, not rational, epistemology.

Rationalism and Irrationalism

Irrationalism is the Bastard Offspring of Rationalism

Rationalism is better than nothing.  But pretty soon, people like Buddha, Ibn Taymiyyah (Taymiyya 1309), Nietzsche, Borges, and Derrida start noticing the internal contradictions and ridiculous conclusions.  The one reason they all turn against reason is that they don't know there's any alternative to rationalism.  They never studied the Hellenistic era, or science beyond Newton, or any practical real-world domain in which they might learn how to learn from their mistakes.  Nearly all irrationalists lived in or studied the classical or medieval ages.  If you read Derrida carefully, you'll see he assumes Aristotelian metaphysics in everything he does.  After he proves they don't work, he's got nowhere to turn, and throws his hands up in the air and says everything is a big game.

This is the flip side of the demand for certainty–when it proves unreachable, rationalists despair of ever knowing anything, and turn to irrationalism.  Which is just rationalism, turned on its head.

Mondrian, above, is a rationalist modernist.  Pollock is an irrationalist post-modernist.  Those are the only options rationalists have.

That is, other than "Why not both?"  In continental philosophy, "rational" versus "irrational" is a false dichotomy.  As I mentioned under "Rationality is Not the Solution", Derrida and others believed their irrationalism was rational.  Much ink has been spilled over whether existentialism is rational, irrational (Barrett 1958), phenomenological, or nihilist.  It's all four, simultaneously.  Sartre's anxiety was caused by his rational conclusion that life is not logically necessary; he explored this using Heidegger's phenomenology and his own fictionalized subjectivity[zzsartre]; and decided this intolerable superfluity could be overcome only by choosing some irrational Kierkegaardian "leap of faith"; leaving him with a nihilist philosophy which said the only thing that made an action "moral" was that it was your own free choice[zzbn].  Nazism can also be regarded as nihilistic, phenomenological, irrational, and rational, all at once.[zznn]

Saussurian structuralists make a really big deal out of the fact that the word "dog" is not itself a dog, and that the letters "d-o-g" are not logically determined by the nature of dogs – as if anybody but Plato and religious conservatives who believe their language was given to them by God didn't already know that.[zzchandler]  This makes no sense until you realize that they think that because words have no necessary (God-given) connection to their referents, this means that, without God, language can't refer to anything real (Baudrillard p. 1560)[zzbaudrillard].  "There is nothing outside of the text" – (Derrida 1967/2010 p. 1692).  Pinker could have done more to fight irrationalism by sharing some of the developments in linguistics and psychology over the past 100 years, like those mentioned in (Medin & Heit 1999), which continental philosophers remain willfully oblivious to.

The giant green Pac-Man in the Graph of Philosophy

In 2012, Simon Raper (his real name) used dbpedia, a database of relational information extracted from Wikipedia, to make a graph showing every philosopher on Wikipedia and the "influenced / influenced by" links between them.[zzraper]  The size of each philosopher's circle indicates the number of links to or from that philosopher.  The color is determined by some unspecified clustering algorithm.  Each link represents an influence.
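For the curious, here is a rough sketch of how one could rebuild such a graph today.  The libraries (SPARQLWrapper, networkx) and the DBpedia endpoint are real, but the class and property names are my guesses at the current DBpedia vocabulary, not necessarily what Raper used.

    # Pull "influenced by" links between philosophers from DBpedia and build a
    # directed graph; node degree plays the role of circle size in Raper's figure.
    from SPARQLWrapper import SPARQLWrapper, JSON
    import networkx as nx

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?a ?b WHERE {
            ?a a dbo:Philosopher .
            ?b a dbo:Philosopher .
            ?a dbo:influencedBy ?b .
        } LIMIT 10000
    """)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]

    G = nx.DiGraph()
    for row in rows:
        G.add_edge(row["b"]["value"], row["a"]["value"])   # influence flows from b to a

    print(sorted(G.nodes, key=G.degree, reverse=True)[:10])   # most-connected philosophers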

This graph (link) has a lot of problems.  It doesn't distinguish between "influenced" and "influenced by", or between "followed" and "reacted against".  It doesn't count indirect influence, so for instance Thales, the "first philosopher", is tiny.  It's almost entirely Western.  It's wrong in many details.[zzwrong]


But the big-picture view is basically correct.  The green part of the graph, with rationalists on the left and mostly irrationalists on the right, is a giant fish or Pac-Man about to gobble up the small violet pizza-slice of sane people.

If we plot the course of philosophy through this graph over time, the picture is even worse.  Western philosophy begins at the top of the upper-left third that's colored cyan, with Zeno, Pythagoras, Thales, Parmenides, Democritus, and the other pre-Socratics.  Then it moves south to Plato and Aristotle, detours west to spend a thousand years among the scholastics, then travels back east to Hobbes, Leibniz, and Descartes.  Then it splits in two, one branch going southwest into the British Enlightenment and toward empirical science, and another going southeast, passing through Spinoza and Rousseau on its way to Kant, the mild-mannered and impartial spider at the center of this web.

After that, the empiricist school produces some analytic philosophers, then slowly sputters out as students favorably inclined towards empiricism begin going into science rather than philosophy.  Meanwhile, on the continent, we go from Kant to Hegel, who spins off an enormous cancerous green growth of irrationalism – the result of rationalism shattering on the revelations of empirical science.  Almost everything important in philosophy after 1930 is up in that great green glob of madness.

Why?  What happened in the 1930s?

I'll tell you what happened…

The Nazis Won the War

Recently I read an obscure, poorly-written, and insightful book called Hitler: Philosopher King (Morris 2017).  Its thesis is that the Nazis were able to destroy German culture in such a short time because they were the first post-modernists.  They used deconstructionist techniques, romanticism, and mysticism of a Hegelian flavor to attack existing moral norms.  The Nazis popularized a dramatization of post-modern thought, which had been in the air since before WW1 (see e.g., Tristan Tzara's 1918 Dada manifesto), before French philosophers were able to put such a slippery thing down on paper.

Then I read a book about the Vienna Circle (Sigmund 2017).  In the 1920s and early 1930s, continental European philosophy was divided into two opposing camps, providing two competing epistemologies.  One was the metaphysicians, descended mostly from Hegel and all from Plato, consisting of Aristotelians, phenomenologists, Marxists, Freudians, and fanatical Catholics.  The other was what we might call the positivists, inspired by physics and scientific methodology.

The Nazis destroyed the non-Hegelians, including the Ernst Mach Society and the Vienna Circle, both of which the Nazis officially disbanded.  Their members either fled Germany and later France, or they were dismissed, persecuted, exiled, or killed.  Meanwhile, the Nazis elevated Hegelian philosophers, especially phenomenologists like Heidegger, to the highest posts in Germany and France, just as Bismarck and Friedrich Wilhelm III had before, and for the same reason – to combat liberal humanism.[zzfw4]

What we call "continental philosophy" today is that philosophy which the Nazis chose to force upon continental Europe.  They favored the Hegelians because they were anti-reason, anti-science, anti-individualist, and anti-Enlightenment.  They favored phenomenology because of its moral and scientific relativism, which gave them a philosophical justification for treating feelings and personal anecdotes as "inner truths" more-valid than scientific studies, and for dismissing German morality and civil institutions as socially constructed.  (Berger & Luckmann 1966) introduced the term "social constructivism" later; but Luckmann was a student of Alfred Schütz, an Austrian phenomenologist who introduced social constructivism to sociology in (Schütz 1932).  The Nazis also embraced Nietzsche's dim view of "bourgeois" civilization and will-to-power moral relativism; and the collectivism of ancient Sparta, Plato, and Hegel.  These are the defining characteristics of continental philosophy today.

After the war, the more-obviously Holocaust-inducing superficial elements like hatred of Jewishness and social Darwinism were quickly cast off, but the metaphysics – the worldview – remained.  The Nazis were gone; but the philosophy professors they'd put in place, and those who'd fled, were both left where they were.  The next generation of philosophy professors in France and Germany was drawn from students who had studied under the Nazis.  Herbert Marcuse's thesis advisor was actual-Nazi Heidegger, who was also Sartre's primary influence.  Michel Foucault and Roland Barthes studied in Vichy France under professors who had sworn an oath of loyalty to the Nazis.  Paul de Man was a Nazi collaborator.

It took decades for the Nazi metaphysics of the French and German academies to percolate through the European arts and the 1960s radical movements, into American universities and mass media.  But it has arrived, and now most American intellectuals from the "best" universities are unknowingly walking around with heads full of Nazi metaphysics, seeing the world through Nazi eyes.

Today's Social Justice movement, and the anti-racist movement in particular, use a lot of Nazi

The Social Justice movement (SJM) inherited all of these from continental philosophy, and chose them for the same reasons the Nazis did:  because they're tailor-made to help destroy an existing culture in order to replace it with something completely different.  But they were also tailor-made to replace reason with irrationality.  No amount of philosophizing based on these ideas can create a movement that allows reason to thrive.  The existentialists tried, but the best Sartre could achieve was a gloomy and illogical nihilism, one which justifies any action one chooses solely because one chooses it, in which the only value is the freedom to choose between meaningless alternatives (e.g., Flynn 2006 p. 45-48)[zzbn].  No wonder it produced such gloomy and nihilistic plays (Sartre, Beckett).

The SJM inherited other key ideas from Marxism, like the concern for oppressed classes, the belief in determinist historicist progression towards a final state of perfection, and the use of rational argument when it serves their purposes.  Metaphysically, they're the dialectical re-synthesis of left-Hegelianism (Marxism) and right-Hegelianism (Nazism).  Which is exactly what Hegel would have predicted would result from the opposition of Nazism and Marxism.

Hidden Reasons to Reject Reason

Near the beginning of this article, I said that the irrationalists may be right on a deeper level.  The reason they usually give to reject reason is that rationalism doesn't work–that it's provably inconsistent or indeterminate.  This is the only reason I can imagine that could be logically valid, given that you're reasoning logically.  As it happens, it's a bad reason; rationalism doesn't work, but there is an alternative that's recently been demonstrated to work, in the accomplishments of machine-learning applications which had to solve the problems of epistemology in order to give you your Google image search results or drive your car: empiricism.  None of the arguments I've read against rationalism have any traction when directed against empiricism (but showing that would take another long post).

But there are some not-obviously-wrong reasons to reject reason in general, rational or empirical.  You may have to read between the lines to detect them; people pretending to reason logically might not come out and say any of them openly.  They are:

Death

Death has always spread religions that promise a way out of it.  I didn't understand how powerful a motive this could be until my father died.  There was a snap frost the day we buried him, and I kept wishing we'd dressed him warmer.  It felt barbaric to leave him behind in the cemetery, alone in that cold, dark little box.  I knew he was dead, but I didn't believe it.  For him not to be, would be like waking up and finding Mt. Everest missing.  It seemed so obviously impossible, for a while it made Christianity seem possible.

For many people, it isn't the seeming impossibility of death, but just their not wanting to experience it personally, that motivates them to get religion.

Reason might make you unhappy

Once I had a Mormon co-worker.  I couldn't understand how someone so smart could believe in something that seemed, to me, transparently false.  But I gradually noticed that he was happier than me.  He had certainty, self-assurance, and a tremendously supportive Mormon community which smoothed his road through life significantly.  Was this stupid?

Rationalists may win, but empirical scientists don't.  That's because science, and a devotion to reason, was never meant to win.  It's a mental illness, a perverse devotion to truth at all costs.  Unlike philosophy and religion.

Science seeks truth; philosophy and religion are hedonistic

Back when I was Christian, I read books which made rational arguments for Christianity.  But when the book's author described how he had become a Christian, it was usually the result of a purely emotional or irrational experience (Augustine 397, C.S. Lewis[zzlewis2], Guillen 2021).  And when I talked to friends and family about why they believed, most eventually said that Christianity just "worked better", or made them feel better.  Christianity is hedonistic.

The great figures of philosophy have seldom claimed to be seeking truth.  Thales and Anaximander, maybe.  But most of the ancient Greeks and Romans were unanimous on this point:  They were seeking the good life.  Socrates, Plato, Aristotle, the Epicureans, and the Stoics were emphatic on this point.  Buddha thought life was painful, and wanted relief from suffering and angst.  Kierkegaard and Sartre disdained seeking happiness; but both gave up on seeking truth or true morals, and settled for numbing the pain of disappointment with a narcotic "leap of faith" (Kierkegaard) / "free choice" (Sartre).  Philosophy is hedonistic.

Reason may lead us all to kill each other

It's usually religious conservatives like Dennis Prager who fall back on this in the end, the kind who believe humans are intrinsically evil and selfish.  If you talk with them long enough, they'll talk about the "necessity" of either religion or Hobbesian authoritarianism, to keep people from abusing and killing each other.  This is the view Glaucon expresses in Plato's Republic with the myth of the Ring of Gyges, arguing against Plato's claim (Republic; Apology) that unrighteousness can only do harm to the unrighteous person.  I haven't got citations, because people usually admit this only in personal conversations.[zzpenn]

I don't find "we all secretly want to kill each other" a plausible hypothesis.  But history shows that rationalism does lead us to kill each other.  So it's no surprise that it's rationalist religious conservatives who make this argument.  Rational Marxists could be holding this view secretly.  If you're a cynical rationalist who believes humans need an authority to keep them from killing each other, a false Marxism might seem like an improvement over a false Christianity.  Buddhists believe that all life is suffering, and that the best thing you can possibly do for another person is to guide them to Nirvana (which is what we who don't believe in reincarnation call "death").  The only logical excuse for them not to run around freeing as many people as they can from life is that they don't think that kind of death sticks.

Rationalist ideologies give people crazy motives for killing.  But self-interest can give people rational motives for killing, or at least exploiting.  I've been talking until now about rationalism and empiricism as epistemologies, but "rational self-interest" isn't an epistemology.  You can be an empiricist, and still be modelled well as being rationally self-interested.  Evolution, human history, and free-market economics all show us that selfishness and aggression can be successful strategies.  And right reason seems to require freedom of thought, which seems to go hand-in-hand with freedom of trade.  Is the payoff from freedom of thought and action worth the cost of free-market competition?  I think it is, but many people don't; and sometimes this isn't because they reason differently from me, but because they don't have the compulsions and inclinations that make freedom and truth so important to me.[zzgoetz]

Reason may prevent us from killing each other

This is the one I don't like to think about.  Many religious conservatives have argued that Darwin's theory of evolution led directly to eugenics and social Darwinism.  Well… yes.  (Wiker 2008 p. 85-97) has some quotes[zzdarwin] if you weren't convinced of that already.

I'm not really worried about reason favoring eugenics.  "Eugenics is bad" is one of those sentences you're supposed to assent to, regardless of what it means.  Here the word hasn't been redefined, but recontextualized:  "The Nazis favored eugenics".  The argument is:

  1. The Nazis used eugenics.
  2. The Nazis were bad.
  3. Therefore, eugenics is bad.

Even a rationalist should notice the flaw in this argument.  Let me recontextualize it:  One part of the Social Justice movement is against "ableism", the idea that not having a handicap is better than having one.  The most-debated disability is deafness.  Blastocyst screening can tell which blastocysts (grown from the mother's eggs, fertilized in vitro with the father's sperm) have the genes for deafness.  Some deaf parents choose to implant the embryo that will not be deaf.  Some choose the one that will.  If you think that people should be allowed to use embryo screening to ensure their child won't be deaf, but not to ensure that it will be, you're in favor of eugenics.

I am worried about social Darwinism.  The theory of evolution says that species evolve when the pressure that selection exerts to increase the fitness of a species outpaces the pressure random mutation exerts to decrease its fitness.  You can't believe in evolution without believing that our genome is subject to constant degeneration by mutation.  You therefore must believe that, for the human species to survive, we must have either continued selective pressure, or continual corrective germline genetic engineering.

Nietzsche and probably some Nazis understood the basic problem, but didn't have the option of genetic engineering on the table.  So they developed a philosophy to solve that problem by increasing selective pressure.  The choice of the Jews as a target wasn't accidental; the Jews have, in fact, done a disproportionate amount to create the comfortable, selection-deprived civilization that Nietzsche and the Nazis saw as leading to species death.  So why do we call them "evil" instead of "good"?

Part of the answer is the cruel and stupid way they went about it.  But if they had painlessly sterilized the "feeble-minded" and people crippled by genetic diseases, supposing they were able to identify them correctly, instead of killing people, it would still seem icky.

We could attribute the ickiness to the time-discounting Pinker explains in "Conflicts among Time Frames".  If we did no time-discounting, and cared just as much about future generations as about the present one, we'd tolerate a lot more present-day suffering or violation of rights for the sake of future generations.  That's what rationalists do.  Instead of trusting their time-discounting instincts, they try to reason it out consciously, and rationalize that the perfect ends justify any means.  Catholics were willing to commit or cooperate with something approaching genocide in southern France, Bohemia, southern India, North, Central, and South America, the Congo, Yugoslavia, and Rwanda, to save the souls of future generations.  Marxists in Russia and China were willing to live through hell so that future generations might live in socialist utopia.  Nazis were willing to do what they did.

But if someone doesn't know why time-discounting makes sense, and thinks genetic engineering is immoral, they might rationally conclude that the needs of the future outweigh the needs of the present, and that the only way to protect future generations from ending up like the passengers in Disney's WALL-E is to invent a philosophy to increase selective pressure by destroying sympathy for others.  If Pinker is right in saying that reason leads to sympathy (chapter 2, "Morality"), then that plan requires destroying reason.  At least, that's always worked in the past.

I do think time-discounting in this matter makes sense at present, but our public discourse doesn't allow us to present that argument to the people who might need it.  You're not supposed to say, "Well, even supposing the Nazis were right about the dangers of genetic degeneration…"  The problem is that what the Nazis did was so revolting that no one ever felt the need to make a logical argument against Nazism.  It's been rejected, but never debunked.  We should be able to show that fascism is a bad idea even if a person identifies more as a member of a race than as an individual.  But we never try to do that, so the individuals who are out there, secretly brooding over perceived wrongs to their race, find no counter-arguments that make sense to them.

Worse, I can easily imagine a new, non-racist Nazism, which again sought continual strife and violence for the sake of improving the breed or shaking off bourgeois complacency.  If it's even-handed, unbiased violence, or based on attacking people based on their class or ideology instead of on their race, philosophy today has more arguments in favor of it than against it.

Fight the Systems

When fighting infectious bacteria or cancer, you need to deliver your drugs systemically, to clean up infectious cells that are floating about; but you also need to attack the large growths, where infectious cells cluster together behind a protective barrier of biofilms or efflux transporters.  It's great to make people aware of biases and fallacies, but we must also expose the big systems and institutions of rationalized irrationality, whose members are too ensconced in their enclaves to be budged by occasional gusts of rationality.

American politics

The problem in America today isn't that individuals believe crazy things, but that philosophies teach crazy things.  The two warring camps are often called Democrat and Republican, left and right, or liberal and conservative.  Only the first of these dichotomies is clear or accurate, and all are misleading.  It's far more informative to say the divide is between rural and cosmopolitan culture.  The most intransigent subset of each are, respectively, Christians and Hegelians.  Both are rationalist ideologies, so both are supremely self-confident.

I don't see much hope for reconciling the two.  Even splitting the US into a red-state nation and a blue-state nation wouldn't do it; you'd have blue cities in red states, and red towns in blue states.  I'm for bringing back the city-state and letting both sides rule themselves, but I don't have a lot of hope for that either.

Rural culture

It's hard to over-emphasize the hegemony of Christianity in rural culture.  I say "Christianity", but I mean Christian culture, which is only partly based on the Bible.  Some of the social issues Christians worry about are emphasized as problems in the New Testament (sex, poverty, riches), but most aren't (abortion, polygamy, slavery, addiction, homelessness, protecting children, and "family values").  Most of what they do is good, but it's easy for them to be panicked by anything that pushes several of their buttons at the same time, like protecting homeless children from being enslaved into sex trafficking by addictive drugs.

Contrary to popular urban opinion, evangelical or activist Christianity doesn't associate with the American right, which I take to mean racist nationalism and opposition to democracy.  You won't find books in a Christian bookstore calling for a return to monarchy, racial purity, or even tighter borders.  The main areas I'm aware of in which Christians oppose urban culture due to their faith are

That's a short list, but not trivial.  Polygamy, prostitution, homosexuality, pornography, abortion, and the teaching of evolution would all be banned widely if it were up to rural Christians (and the first 2 already are).

Fighting Christianity intellectually is very difficult.  You can write books debunking the debunkers of evolution, but they won't read them.  About the only subject they might read about that could disrupt their certainty is early Church history.  The highly contingent path that led from the Cross to the First Council of Constantinople in 381 CE, at which the doctrine of the Trinity was finally ratified, would shock nearly all Christians.  I hope.[zzornot]

I think the main reason for the dominance of Christianity in the country is that there isn't much else to do out here.  The main social institutions are churches, bars, and the VFW.  Maybe instead of trying to throttle rural economies with taxes and over-protective legislation in the hopes that the Christians will die out, urbanites should figure out how to help rural areas make more money instead of less, so they can afford to go to something besides church barbecues sometime.

Rural culture has its own, separate gripes with urban culture.  These include:

These issues are more cultural and pragmatic than ideological.  I don't think this is the place to explain them.  But the next time you find yourself imitating a southern accent to imply that someone is stupid, stop and think.  The next time you talk about "diversity" or "multiculturalism", remember that rural American culture is one of those cultures you say you want to be tolerant of.  Cosmopolitan Western culture is closer to rural American culture than it is to just about any culture in the world.  Culture isn't just music and food; it's also about beliefs and values.  So how are you going to get along with Syrian Muslims if you can't get along with people from Iowa?

There are two areas I need to address.  One is racism.  As Pinker documents in chapter 15 of Enlightenment Now!, racism, sexism, and homophobia in America have decreased dramatically over the past 50 years.  Racism is a political bogeyman which gets a lot of traction in urban environments, but it's one of the key factors driving a wedge between urban and rural culture.  Rural whites are sick of being singled out as racist, especially as racism is more common in big progressive cities, and blacks are more likely to commit racial hate crimes than whites are[zzracism].  But rural America is helpless to fight back.  They have one news network, a few newspapers, and no representation in the major movie studios or in elite universities.

This helpless feeling rural communities have, of being under attack by far greater powers and given no voice in public discourse, is the cause of rural extremism, and the source of the "Trump miracle".  Trump is an asshole.  Most rural people recognize that.  But he's a winner.  Trump being elected was to rural people what Obama being elected was to black people, even though there's nothing rural about Trump.  He's outside of the big hegemonic system bent on crushing them, and that's all that matters to them.  They'd vote for anybody who can defeat the left-leaning party that hates them so.

The other is one Pinker touches on in chapter 10, under "Reaffirming Rationality": trust in academia and the press.  The main reason rural people don't want to take covid vaccines, and don't trust arguments about the economic benefits of immigrants, is that not only don't they trust academia and the press; they've become convinced that the main goal of academia and the media at present is to destroy their culture and their way of life.  And they're right.  So rural people respond to statements by the press the way Russians respond to articles in Pravda: There are always rumors; but once you hear it on CNN, you can be sure it's a lie.  (Or, rather, you can be sure it's not the whole truth.  The mainstream media don't lie; they just omit inconvenient truths.)

This is the fault of academia and the media.[zzmedia]  Don't look to rural people to fix this; fix academia and the media.  Give all the masses a voice, not just the ones living in big cities.

Cosmopolitan culture

Western Cosmopolitan culture still resembles the medieval nobility:  a network of people from many countries who identified with each other as a class more than they identified with the people and countries they ruled over.  Most people who consider themselves part of cosmopolitan culture are under the delusion that it's "their" culture.  Not so.  Unlike rural culture, cosmopolitan culture belongs to those at the top.[zzbuffalo]

Dethrone the Ivy League

The University of Paris, Oxford, Cambridge, Harvard, Yale, Princeton, and, much later, the University of Chicago, were all founded as seminaries, to indoctrinate the future intellectual leaders of society into conservative theologies.  They've only grown more powerful since, especially in the second half of the 20th century.  Since Carter, every US President except Reagan and Biden, and every Supreme Court justice but two, attended an Ivy League school.  A large fraction of congresspeople attended Harvard, Yale, or Stanford.  Most new billionaires also attended either an Ivy, or a tech Ivy-equivalent such as Stanford or MIT, because big venture capitalist firms and big financial firms are also now run by graduates of those schools, and prefer to invest in others of their kind.  The situation in Europe is different in one big way:  at most elite universities there, anyone who can get in has their college paid for.

Most cosmopolitans think it's great that elite universities are now the gatekeepers to power.  It used to be a central tenet of cosmopolitan culture to believe in the superiority of people educated at elite universities.  Now, that's degenerated into a belief in the superiority of people chosen to attend elite universities – dropping out of Harvard is better than graduating from Duke.  (Perhaps it takes only a short time of rubbing shoulders for the superiority to rub off.)

Rather than get into the question of who can and can't, or could and couldn't, attend elite universities, let me just point out that this is the most intellectually inbred bunch in the nation.  If you value diversity, representation of all sexes and ethnicities isn't enough.  There should be at least one person in the room who didn't go to Harvard or Yale.  It's asking for trouble for a nation founded on democratic principles to choose all its leaders from institutions founded on, and still devoted to, elitist principles.

More to our present purposes, these few institutions are the vectors through which continental philosophy is being spread across the US.  They are the founts of rationalist and irrationalist ideology.  Reducing irrationalism requires reducing the power of the elite universities.

Donations and Foundations

But, you might protest, cosmopolitan culture isn't all run by rich people.  They have grass-roots movements, too!

But do they?  Every "grass-roots" cosmopolitan politician or movement I know is sucking at the teats of some foundation funded by rich people, like the Ford, Rockefeller, Pew, Tides, William & Flora Hewlett, Kellogg, MacArthur, Carnegie, Heinz, Packard, and Doris Duke Foundations; or some charitable institution like The Center for Popular Democracy, the Indivisible Project, Run for Something, EMILY’s List, QQQ, and the Equal Justice Initiative.

The Giving USA Foundation estimated that Americans gave $449.64 billion to charities in 2019.  That's more than $1,300 per American, and well over $3,000 per household.  You can therefore be sure that most of this money comes from rich people.
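A back-of-envelope check, where the population and household counts are my own inputs, not the article's:

    # Rough per-capita arithmetic for 2019 giving, assuming about 328 million
    # Americans and about 128 million households that year.
    total_giving = 449.64e9
    print(round(total_giving / 328e6))   # about 1,371 dollars per American
    print(round(total_giving / 128e6))   # about 3,513 dollars per household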

Many of these foundations and institutions have embraced radical leftist/cosmopolitan causes.  QQQ  Most people who consider themselves cosmopolitans are more to the center than the professional activists who control these foundations and institutions.  What cosmopolitans take to be their own culture is being invisibly foisted upon them by armies of professional activists, hired with tens of billions of dollars donated by the fabulously wealthy.

Beware of other gods

If you focus entirely on fighting Christians, the Hegelians will eat your organization from the inside, as they did the New Atheism movement around 2012 (Scott Alexander 2019).

Beyond politics: Opposing irrationalism generally

Be prepared to be a cultural emissary

The disproofs of rationalist epistemology have finally been found, but are still unknown to rationalists.  There's an urgent need to summarize and systematize the epistemology of today's science, using insights from statistics, cognitive science, evolutionary theory, complexity theory, and machine learning; and to communicate it to the humanities.

The first step is to object every time they say science claims absolute certainty, or seeks only universal truths without regard to human biology or psychology.  Scientists are the people who don't claim absolute certainty.

The second step, I'm afraid, is to learn some continental philosophy.  Things have gone as far as they have because there are so few people who understand both science and continental philosophy well enough to recognize philosophical error, and to explain it in a language a philosopher respects and understands.  If you can't sling at least a little jargon, you won't be taken seriously.  You can get most major radical or rationalist texts as audio now, many of them free from LibriVox.

Read Mein Kampf

That includes Hitler.  With so much Nazi philosophy floating around, unrefuted, it's inevitable that some new form of Nazism will rise again.  But it won't be clad in swastikas and jackboots.  Learn to recognize its internal logic, not words or flags.

Dethrone physics

Scientists would see the contrast between rationalism and empiricism more clearly if they stopped using physics (the "queen of the sciences") as their model of science.  Physics was the first empirical science developed because it's the simplest science.  It doesn't suffer from the complexities that make rationalism so problematic.  That's why scientists who talk about finding universal laws are usually physicists, astronomers, or chemists.

(Mathematics isn't an empirical science at all.  That's why we don't call mathematicians "scientists".  It's a tragedy of history that it was taken as a model for science.  It's a purely formal system, like (or a superset of) rationalist logic.  That's why the last bastion of Platonism in the sciences is in mathematics.)

Fight rationalist cultural imperialism

We must oppose the hegemony of rationalism within the cultural sphere, as in contemporary art and literature, and in the demonization of utilitarianism.  We should not allow any kind of art to be completely dominated by one narrow school of thought, as the plastic arts are today.  The very notion that there is only one proper way of making, displaying, and interacting with Art is moronic.  This is another idiocy of rationalist metaphysics – the belief that, because we have the word "art", it must have a True Meaning somewhere in outer space, and all we need to do to make great Art is to figure out what that one true meaning is.  Poppycock.  There is no way of defining art; the idea smacks of self-contradiction.  We can hope at best to find some family resemblances.  Literature is anything you do with text that someone else finds interesting, and if that means calling a crossword puzzle literature, well, we already call Finnegans Wake literature, which is just about the same thing.  We must cease our endless wars over whose way of making and interpreting art or literature is best.  The only good theory of literature is Kipling's:  "There are nine and sixty ways of constructing tribal lays, And every single one of them is right!"

Address the unspoken reasons people may have for rejecting reason

People falling into irrationalism for one of the reasons I listed under "Hidden Reasons to Reject Reason" might never verbalize these reasons, or even be conscious of them.  We can fight subconscious motives only pre-emptively, by telling people things that make those motives less compelling.

Bridge the Gap

I'm not saying that every empiricist is obligated to join a political movement.  I certainly don't want to.  But scientists can no longer afford to hide from the world in their labs or at their computers, paying no attention to anything outside their specialties.

At the very end of Rationality, Pinker wrote, "My greatest surprise in making sense of moral progress is how many times in history the first domino was a reasoned argument."  Then he gave examples: Castellio writing against religious intolerance; Erasmus writing that war is, on the whole, bad; Beccaria, Voltaire, and Montesquieu arguing that torture isn't always necessary when punishing crime; Bentham arguing against cruelty to animals and the criminalization of homosexuality; Bodin, Locke, and Montesquieu writing against slavery; Astell and Wollstonecraft writing against the oppression of women.

Ideas have consequences.  It usually takes decades for anything written by an academic to percolate through the university, into the arts, and out to the public, but it eventually does.  To paraphrase John Maynard Keynes[zzthrall], everyone now lives in the thrall of some dead philosopher's artists.

Some time around 500 BC, Pythagoras made a startling discovery, seemingly millennia ahead of its time:  that the particular sets of notes which sound beautiful when played together could be explained mathematically.  What might have been the first Greek scientific discovery accomplished something that has never been accomplished as well since: the reduction of aesthetics and values to mathematics.
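Restated in modern terms (the restatement is mine, not the article's): the consonant intervals correspond to simple whole-number ratios, which Pythagoras found as ratios of string lengths and which we now usually state as frequency ratios.

    # Consonant intervals as simple whole-number ratios.  The 440 Hz reference
    # pitch is a modern convention, used here only for illustration.
    base = 440.0   # A above middle C
    for name, (p, q) in [("unison", (1, 1)), ("perfect fourth", (4, 3)),
                         ("perfect fifth", (3, 2)), ("octave", (2, 1))]:
        print(f"{name:>15}: ratio {p}:{q} -> {base * p / q:.1f} Hz")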

If Pythagoras had interpreted this phenomenon as a sign that math was a gateway through which earthly experimentation could explain even what we took to be spirit, how different history might have been.  But instead, he took it to mean just the opposite: that math was a gateway to the spiritual realm, through which we could explain earthly phenomena.  Philosophy has been going in the wrong direction ever since, but not until the 18th century did it exhaustively explore all the dead-ends of this turn in the maze.  Instead of tracing its way back to the beginning, it now throws up its arms and declares there is nothing outside the maze, which it condemns us all to run forever.

We live in a society so used to experts and specialists that we expect that every subject has its practitioners, and they should do their job and you should do yours, everyone "minding his own business" as Plato recommended.  But that system has broken down.  Western philosophy is broken, and philosophers can't fix it.  They don't have the tools, the experience, or the methodology.  They don't know what they don't know.  They need help from science, but the paradigm gap is too wide for Pinker to shout across.  Philosophy now must either be saved, or swept aside.

If it is to be saved, someone has to go down into that gap between it and science, and find a way across.  Someone with a degree from one of those places they respect, like Harvard or the U. of Chicago.  But not bearing just a bag of tips and techniques.  They need to know the anatomy and physiology of continental philosophy well enough that together, working with someone on the other side, they can diagnose not just the symptoms, but the underlying causes of its sickness.


Footnotes

[0] There is no reference to this footnote.  Are you planning to read all of them?  That's not necessary.

[zz3] Much like our host, Scott Alexander.

[zz8] Sadly, I found no math-phobic reviewers who wrote "Pinker's diagrams clarified set theory to me!"  They were as frightened by diagrams as by equations.  Yet these same people think nothing of spending a year struggling through Heidegger, trying to understand a meaning that isn't there.

It isn't any easier to understand Hegel than to understand calculus or relativity theory.  I would say it's much harder.  It's only easier to convince yourself that you understand Hegel.  The more ambiguous, indeterminate, or vacuous a book is, the easier it is to persuade yourself that you understand it, and the prouder you feel when you do.  But someone who merely persuades himself that he understands calculus hits reality hard when he runs into the first question at the end of the chapter, like "Explain geometrically why the definite integral of y = x from a to b is b²/2 - a²/2."
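(For what it's worth, here's the geometric answer I assume that question is after – my gloss, not a quote from any textbook.  The region under y = x is a triangle, so the integral is just a difference of two triangle areas:)

```latex
% The area under y = x from 0 to b is a right triangle with both legs of length b,
% so its area is b^2/2; subtracting the smaller triangle out to a gives
\int_a^b x \, dx \;=\; \frac{1}{2}\,b \cdot b \;-\; \frac{1}{2}\,a \cdot a
              \;=\; \frac{b^2}{2} - \frac{a^2}{2}.
```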

(If someone really can't understand calculus, that's no shame.  But that puts them in the bottom 83%, smarts-wise[zz9], so they should seek some line of work other than philosophy.)

[zz9] About 21% of American high school students take calculus (Bressoud 2016 footnote 1, adjusted for 12% growth in AP exams).  22% of American high school graduates go to a 2-year college and 44% go to a 4-year college.  700,000 of 20 million college students take calculus each year.  700K × (2 years × (22%/66%) + 4 years × (44%/66%)) = 2.3 million, or 11.5%, of the 20 million students enrolled in college at any one time take calculus at some point.  25% of students taking calculus in college have never taken it before (I think the author misinterpreted the cited source, but it's the only data I have).

So 21% of high school students take calculus, and (1/4 × 11.5% × 66% = 1.9%) of high-school graduates went to college and there took calculus for the first time.  That implies that 23% of high school graduates took calculus (and also that over 90% of people who took calculus took it in high school).  The Mathematical Association of America (MAA) reported that 25% of college Calculus 1 students failed the course (Ayerdi 2017, abstract, probably referring to Bressoud et al. 2015)[zz10].  That gives us a lower bound of 0.17 for the fraction of the population that is able to pass calculus, assuming that people who didn't go to college aren't more likely to be able to pass calculus, and that failing calculus is no more common in high school than in college.
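If you want to check my arithmetic, here it is as a few lines of Python; every number is one already cited above, nothing new:

```python
# Back-of-envelope check of the figures cited in this footnote.
hs_calc          = 0.21         # fraction of high-school students taking calculus
calc_per_year    = 700_000      # college students taking calculus each year
college_students = 20_000_000   # college students enrolled at any one time
to_2yr, to_4yr   = 0.22, 0.44   # fractions of HS grads entering 2-/4-year colleges

avg_years  = 2 * (to_2yr / 0.66) + 4 * (to_4yr / 0.66)  # average years enrolled, ~3.33
calc_ever  = calc_per_year * avg_years                  # ~2.33 million
share      = calc_ever / college_students               # ~11.5% of college students
first_time = 0.25 * share * 0.66                        # ~1.9% of HS grads: first calculus in college
took_calc  = hs_calc + first_time                       # ~23% of HS grads took calculus somewhere
can_pass   = took_calc * 0.75                           # ~0.17 lower bound, using the 25% failure rate
print(f"{calc_ever:,.0f} {share:.3f} {first_time:.3f} {took_calc:.2f} {can_pass:.2f}")
```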

[zz10] Also, holy shit, that kid's honors thesis has been downloaded 5,911 times.

[zz1] The most worrying thing isn't how far apart the parties are – Democrats and Republicans are much closer in their views today than most opposing political parties around the world have been throughout history.  The worrying thing is the abandonment of the principles of rational debate, and of the conception of democracy as being based not on winning, but on compromising.  It doesn't matter that the parties aren't very far apart, or that they aren't moving apart very fast.  What matters in the long run is that there's no hope in sight of either party ever halting the drift apart.

[zz6] I feel a little guilty including Henri Rousseau in this list.  He may have been guilty of much, but he was the most innocent of men–the Forrest Gump of art.  He was unaware that his reception was the final straw that broke the French avant-garde's mind.

It's a strange story:  Rousseau was initially just a very bad artist with the unshakeable conviction that he was a very great artist.  He painted in an accidentally iconic and cubist style because he never could master shadows, faces, perspective, or proportion.  When his paintings were first shown publicly, at the Salon des Champs-Elysées in 1885, they were slashed and thrown in the trash (Shattuck 1968 p. 49).  The avant-garde found his naive trust and patient self-assuredness so hilarious that around 1894 they turned to a new game: baiting him with false praise and phony awards (Uhde 1911/2005 p. 54-55).

But new people joined the avant-garde who didn't realize the praise was ironic, and Rousseau won over many of his original mockers with his friendly and gentle nature.  And, too, Rousseau began sometimes to achieve striking effects which he never could have produced in the Academy style he strove in vain to imitate.  After Picasso threw his (in)famous 1908 party to recognize Rousseau as a great artist, some of the attendees wrote accounts describing it as a lampoon, while others described it as sincere (Shattuck p. 66).  What I think happened in the end was the culmination of the spirit of the fin de siècle: people no longer knew whether they were praising or ridiculing Rousseau, and, as both praise and scorn were subsumed in the same gay and careless spirit, ceased caring.  That was the moment the avant-garde learned to embrace contradictory beliefs with joy rather than guilt.

[zzbutler] To Pinker's credit, Zeno and Wagner each appear once.  Freud, Foucault, and Judith Butler each appear once, but all in a single sentence whose purpose is to explain the argument from authority.

[zzwest] I'm speaking only of Western intellectuals and Western philosophy in this essay, and mostly skipping the time between Plato and the scholastics.  The world is too big a chunk for me to chew.  More importantly, the general patterns and oppositions I'm writing about don't apply as neatly outside of the Western world, nor between Plato and the scholastics.

[zznaziintel] Nazism was not a movement of the masses.  Nazism was a movement of the German intellectuals.

Yes, the masses cheered for Hitler.  Yes, some (though not most) voted for Hitler.  But the masses didn't love Hitler and his ideas because they appealed to the baser instincts of humanity, unpurified by higher education.  The masses loved Hitler and his ideas because the German intellectuals and cultural elite had been programming them, through religion, art, poetry, music, history, and philosophy, with Nazi ideals for over a hundred years.  Nietzsche was just one among them.

Nazism did not begin with Hitler.  The Nazi party did, but every one of its ideas was already planted firmly in German culture long before.

On July 8, 1915, Germany's intellectual community wrote the "Petition of the Intellectuals", demanding that Germany keep the conquered parts of France, that it also take Belgium and a large part of Eastern Europe, and that its enemies be required to pay high reparations "without mercy".  1,347 intellectuals signed the petition, including 352 university professors, 252 writers, and 158 clergy and teachers.  A more moderate petition circulated by Hans Delbrück gathered 141 signatures. (Waite 1977 p. 291)

In May 1933, in Heidegger's inaugural address as Rector of the University of Freiburg – a position he took over from his former teacher, Edmund Husserl, who had been removed from office by the Nazis as a non-Aryan and whom Heidegger did nothing to defend – Heidegger spoke of "the inexorability of that spiritual mandate which forces the destiny of the German Volk into the stamp of history," and said that "The Führer himself, and he alone, is Germany's reality and law today and in the future." (Waite 1977 p. 325)

[zznazi2] (Wagner 1871) was difficult to find, doesn't seem to exist in English, is written in an exceptionally ambiguous gothic script, and Google Translate can't translate it.  So I'll give my own translation here.  It says: "Nur aber, wann der Dämon, der jene Rasenden im Wahnsinne des Parteikampfes um sich erhält, kein Wo und Wann zu seiner Bergung unter uns mehr aufzufinden vermag, wird es auch -- keinen Juden mehr geben."  Probably: "Only when the demon who keeps those madmen raving around him in the madness of party struggle can no longer find any where or when among us in which to take refuge, will there also be -- no more Jews."  (I'm not fluent enough in German to feel confident in how I'm grouping the nested hierarchy of relative clauses.)

[zz16] (Barthes 1969), "The Death of the Author", is an instructive example of irrationality, less in what it says than in its reception.  There are 3 interpretations of it:

But I'm being slightly dishonest when I say the second interpretation is false.  It does prove that originality is possible, by the standards of proof of today's literary theory community.  Those standards are based on notions of "the discourse" promulgated by Foucault and Derrida, which say, in different ways, that an idea counts as being proven true when enough people repeat it.

[zz17] "An original author can say nothing false" is a consequence of "Words can't communicate meaning."  An original author can say nothing false because an original author begins a new discourse, and as there is nothing outside the discourse, the world of the discourse consists only of what the original author said.

[zzblacks] I don't like to capitalize "black" or "white", because that grants those words the status of a nation, culture, or a geographically-defined group.  That's why colonialists often capitalized "white".  Check Google n-grams: writers chose the capitalized "White race" over "white race" (case-sensitive search) five times as frequently in 1848 as in 2019.

[zzmenthol] Biden's 1986 law to protect blacks from themselves by imposing much harsher sentences on crack cocaine than on powder (Viebeck 2019) was sadly not appreciated at the time as a step for racial justice.  A further idiocy: The menthol ban of 2021 was based on the allegation that blacks, and especially black children, are unfairly impacted by smoking; but the American Lung Association's statistics at the time said that blacks and whites smoke at the same rate, and white children are twice as likely to smoke as black children (ALA, Adult Smoking Rates among Racial and Ethnic Populations).

[zzbiden] The link leads to Biden's Executive Order 13985, which, among other things, revoked Trump's Executive Order 13950, "Combating Race and Sex Stereotyping".  You need to click through to Executive Order 13950 to see what Biden revoked.

[zz12] That's how the computer language Prolog ("Programming in Logic") works.  You ask a Prolog system a question phrased as the logical proposition Q.  Glossing over some details, Prolog then temporarily adds not(Q) to its database and crunches out the logical consequences, until either it finds some proposition P for which it can prove both P and not(P) (and your original proposition is proven true, because it's impossible for it to be false), or there are no more deductions left to make (and it's proved nothing, but says false anyway; this is called "negation as failure", and is reasonable if you know everything there is to be known).

But this means you can prove anything if your database contains P and not(P) to begin with.  Take any proposition Z.  Prolog can add not(Z) to its database, ignore it, grab P and not(P) instead, and has then formally proven that it's impossible for not(Z) to be true given your initial assumptions (which include P and not(P)), because those assumptions are impossible to satisfy.
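Since not everyone has a Prolog interpreter handy, here is a deliberately tiny Python toy – not real Prolog, with no rules and no inference beyond spotting a direct contradiction among ground facts – that shows the three behaviors described above: proof by refutation, negation as failure, and the explosion you get from an inconsistent database:

```python
# Toy illustration only: a "database" is just a set of ground propositions,
# and "deduction" is just checking whether something contradicts something else.

def neg(p):
    """Return the negation of a proposition string."""
    return p[4:-1] if p.startswith("not(") else f"not({p})"

def provable_by_refutation(query, database):
    """Temporarily assume not(query); report True if the working set then
    contains some proposition together with its negation."""
    working = set(database) | {neg(query)}
    return any(neg(p) in working for p in working)

db = ["mortal(socrates)"]
print(provable_by_refutation("mortal(socrates)", db))  # True: not(Q) contradicts the database
print(provable_by_refutation("immortal(zeus)", db))    # False: nothing contradicts not(Q),
                                                       # so it fails -- negation as failure

# An inconsistent database "proves" anything at all:
bad_db = ["P", "not(P)"]
print(provable_by_refutation("Z", bad_db))             # True, for any Z whatsoever
```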

[zzrelevance] Except for relevance logics, a kind of paraconsistent logic designed to accommodate contradictory beliefs without letting them prove everything.

[zzgender] Including nytimes.com, washingtonpost.com, nbcnews.com, advocate.com, sciencealert.com, thehill.com, christianexaminer.com, American Sociological Association, Saint Pius X Catholic Church, Quartz, catholicnewsagency.com, and The National Queer Organisation Of Iceland, Intersex Iceland, Trans Iceland, and Reykjavík Pride.

[zzmap] Which I drew over a map from TimeMaps.

[zzfaith] Paul's inversion of the meaning of "faith" was probably accidental.  Hebrews 11:1, translated in the KJV as "Now faith is the substance of things hoped for, the evidence of things not seen", is more-literally translated as "Faith is the hypostasis [the Platonic Form] of our hope for the future, which cannot be disproved by mere visible evidence".  People at the time would understand this was invoking Platonist doctrine, implying that Paul's hopes were based on abstract dialectical reasoning, which (Platonists believe) is infallible, as opposed to physical evidence, which is not only unreliable but speaks only of the degraded temporal world.  So Paul was actually using "faith" to mean the same thing, de re; but because he believed that only abstract reasoning could be certain proof, he stated his definition of "faith" in a way which now reads as "belief without evidence".

[zzbb] (Botz-Bornstein 2012) doesn't directly say this; it focuses more on the fact that "culture" and "civilization" are more likely to be seen as opposing concepts in continental Europe, and as synonymous in English.  This makes the distinction appear not to cut across English as cleanly as it does French and German.  But if you read carefully, you'll see that the people who use the terms in different ways in English are on different sides of this larger linguistic split.  See esp. p. 14: "Retrospectively, the American contrast between culture and civilization, as it had been constructed by Beard, might look like a debate opposing the culture of Old Europe to the civilization of the New World. In the late 1920s and 1930s, many traditional humanists would see American civilization, with its mass-culture, hedonism, and technology, as the collapse of “Western” traditional values."

[zzvico] Giambattista Vico used a bizarre combination of definitions (1) and (2) in "The New Science" (3rd ed., 1744):

We should begin our study of gentile learning by scientifically ascertaining this important starting-point where and when that learning had its first beginnings in the world and by adducing human reasons thereby in support of Christian faith, which takes its start from the fact that the first people of the world were the Hebrews, whose prince was Adam, created by the true God at the time of the creation of the world. It follows that the first science to be learned should be mythology or the interpretation of fables; for, as we shall see, all the histories of the gentiles have their beginnings in fables, which were the first histories of the gentile nations. By such a method the beginnings of the sciences as well as of the nations are to be discovered, for they sprang from the nations and from no other source.

[zzchrist] This merits a really big footnote, but I doubt I'll have time to write it before the deadline.  Suffice it for now to say that Jesus seems to be based on myths about Socrates and Plato (son of a carpenter, born of a virgin, descended from God, killed by a mob who furiously rejected his teachings, went to death voluntarily for our good), and that Orthodox (including Catholic) theology blatantly rejects Biblical claims in favor of doctrines pulled directly from Platonic texts, such as the properties of God, meditation on the properties of God in order to achieve henosis, asceticism, authoritarianism and hierarchy, the "fall" of man, the process of "conversion", and the immortality of the soul rather than of the body.

[zzreligion] Mimicry became common in the 19th century, when Christianity became intellectually untenable, and a horde of philosophers were determined either to pass Christianity off as science, or to patch it up with science.  But I feel guilty about that entire paragraph, because it pretends that "science", "philosophy", "religion", and "politics" are meaningful categories, and all different from each other.  Before the Axial Age, religion had nothing to do with morality or beliefs; it was more like rooting for your city's football team.  And science, philosophy, and religion all dictate different approaches to politics; does that make them subclasses of politics?  I don't think so; I think we need to think in terms of multiple inheritance hierarchies to get around this issue.  But I haven't got space to do that here.

[zzworldspirit] This helpfully prevented Marxists from studying the relation of Marx to Hegel (Holloway 2002, chapter 7) and thus discovering that Marx's transformation of Hegelianism into Marxism, by simply re-labelling the vertices of Hegel's structure without paying attention to the causal relations between them, had destroyed its structural integrity.  Most notably, by not noticing that Hegel's historicist notion of progress relied on God (sorry; "the world spirit") to manage everything.  Marxism is thus devoid of any explanation of why history should proceed inevitably on its progressive path.

[zz5] "In Logical Investigations (1900-01), Ideas for a Pure Phenomenology (1913), and other works, the German philosopher Edmund Husserl (1859–1939) attempted to reestablish first philosophy—though as a “rigorous science” rather than as metaphysics." (Wolin 2015, section "Phenomenology, hermeneutics, and existentialism", subsection "Husserl")

[zzberger] Berger & Luckmann later admit that "Since the constitution of reality has traditionally been a central problem of philosophy, this understanding has certain philosophical implications."

[zzfails] For instance, (Kuhn 1962), (Campbell 1997), (Kasser 2006).  (Chandler 2001, "The Word is Not the Thing"), discussing nominalism, keeps conflating it with its opposite, realism, using the oxymoronic term "nominative realism".

[zznom] "While we may say that the essentialist interpretation reads a definition ‘normally’, that is to say, from the left to the right, we can say that a definition, as it is normally used in modern science, must be read back to front, or from the right to the left; for it starts with the defining formula, and asks for a short label to it. Thus the scientific view of the definition ‘A puppy is a young dog’ would be that it is an answer to the question ‘What shall we call a young dog?’ rather than an answer to the question ‘What is a puppy?’. (Questions like ‘What is life?’ or ‘What is gravity?’ do not play any rôle in science.) The scientific use of definitions, characterized by the approach ‘from the right to the left’, may be called its nominalist interpretation, as opposed to its Aristotelian or essentialist interpretation. In modern science, only nominalist definitions occur, that is to say, shorthand symbols or labels are introduced in order to cut a long story short. And we can at once see from this that definitions do not play any very important part in science. … Our ‘scientific knowledge’ … remains entirely unaffected if we eliminate all definitions; the only effect is upon our language, which would lose, not precision, but merely brevity. There could hardly be a greater contrast than that between this view of the part played by definitions, and Aristotle’s view. For Aristotle’s essentialist definitions are the principles from which all our knowledge is derived; they thus contain all our knowledge; and they serve to substitute a long formula for a short one. As opposed to this, the scientific or nominalist definitions do not contain any knowledge whatever, not even any ‘opinion’; they do nothing but introduce new arbitrary shorthand labels; they cut a long story short." (Popper 1945/1996, V2, The Rise of Oracular Philosophy, section 11)

[zzindex] Actually you'd take a lot more care to make the counts unambiguous, but I don't have space to get into that.

[zz13] Everything I've ever read that was written about relativity theory by someone in the humanities said that it introduced relativity into physics.  The fact is that it banished relativity from physics.

Recall that the 1887 Michelson–Morley experiment was considered paradoxical not because it showed that observers in different reference frames could find different values for the speed of light, but because it showed that observers in different reference frames could find the same value for the speed of light.  They were supposed to get different values, for the same reason that passengers on a train throwing a ball back and forth would get a different measurement for the speed of the ball than people standing beside the railroad track and looking through the windows.  The movement of reference frames should affect the measurement of movement.  The fact that observers moving in one direction measured the speed of light as being the same as observers moving in the opposite direction seemed to imply that the people standing beside the railroad track should see the ball as moving at the same speed as did the passengers on the train!
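To put the train analogy in symbols (these are the standard textbook formulas, my addition rather than anything in Pinker): under the old Galilean rule, an observer moving at speed v toward a light beam of speed c should measure c + v; under Einstein's velocity-addition rule, the answer comes out as c no matter what v is:

```latex
% Galilean composition of velocities (what Michelson and Morley expected):
u' = u + v
% Einstein's composition (what the experiment actually implied, when u = c):
u' = \frac{u + v}{1 + uv/c^{2}}, \qquad
\text{so for } u = c:\quad u' = \frac{c + v}{1 + v/c} = c .
```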

What Einstein did was to show more completely how moving reference frames affected measurements of motion, re-establishing the non-relativistic measurement of motion.  For measurements to be relativistic would mean that measurements by different observers couldn't be reconciled with each other.  But, using "relativity theory", all observers do in fact agree on what measurement every observer should make.

This is why Einstein hated it when people called his theory "relativity" (though he eventually gave in and called it that himself).  It was Max Planck and Alfred Bucherer who first used the phrase "relativity theory", invoking the principle of relativity, which says that the laws of physics are the same in all inertial reference frames.  This is the opposite of saying that the laws of physics are relative to the observer!

The part that blew the minds of philosophers was that this proved that "time" and "space" are properly understood as being defined via a nominalist (operationalist) epistemology (they're just words we use to denote the results of measurements), rather than a Realist epistemology (time and space somehow Really Exist™).  But this proved that Kant was right when he claimed (Kant 1781) that time and space aren't real, but merely categories our brains use to organize our perceptions!  Not only did Einstein not disrupt reason; he validated a key doctrine of German metaphysics!  Why aren't continental philosophers unbearably smug about it?

Well, because they're all rationalists.  They'd already gotten over Kant.  Just like Pythagoras with his lyre, and Plato with his geometric proof, they took Kant's earth-shaking revelation and drew exactly the wrong conclusion from it:  If pure reason couldn't access the material world, so much the worse for the study of the material world!  A hundred years into this rebellion against reality, they would rather say that there was no such thing as objective reality, than admit that Einstein had proved that they were wrong about it, and the empiricists were right.

[zzmemory] There are many stories of classical scholars, ancient and contemporary, who "had fantastic memories", like G.K. Chesterton.  They could quote long passages of books from memory.  This was an important skill in the days when books were rare, and you might have to spend days traveling to a monastery to read a book, and then either copy it, or hold it in your memory.

This "fantastic memory" could be partly the result of scholars compressing the hell out of what they read.  We know this was so for ancient bards, who recited oral poems that might be hours long.  Those epic poems are, by modern standards, staggeringly repetitive.  Studies of contemporary Serbian epics, which are still told in Homer's dactylic hexameter, string together hundreds of well-worn tropes and scenes in new orders, with new names.

So in the ancient world, it was adaptive to have an aggressive compression algorithm which threw away a lot of information, thus allowing one to remember more of the things one wanted to remember – at the expense of simplifying one's world model and rendering one incapable of remembering or understanding ideas which didn't fit into that model (like G.K. Chesterton).

[zzen] "When the moment of seeing struck me, I fell over laughing and basically didn’t stop laughing for two days, because it was so incredibly stunningly obvious" (Valentine 2018, on Zen).

[zzempty]  I could be wrong, and it may be that different Buddhists mean significantly different things by "emptiness".

[zzborges] (Borges 1942) is the story of a man whose memory is so perfect in detail that he has great difficulty recognizing that a person seen now is the same person he saw yesterday, or that a dog seen from the left side is the same as when seen from the right side, because he's conscious of every minuscule difference, and incapable of abstracting them away.  "Borges and I" (Borges 1962) is an interior monologue of Borges' own struggle to understand which Borges he is, the public one or the private one, at any moment.

[zzmott] I had an argument with one Marxist friend after he said "property is theft".  I replied by describing a case where some workers got together and built a bread oven so that they could bake and sell bread.  After they'd built the oven, they began renting it out to other bakers when they weren't using it, making them capitalists.  Was this theft?  He said no, but then went right back to assuming that all property is theft.  I was unable to get him to notice that he kept switching back and forth between the two ways of quantifying his statement, even by pointing it out.
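Written out formally (my formalization of the exchange, not my friend's), the two quantifications he kept sliding between are:

```latex
% "Property is theft" read as the universal claim the slogan asserts:
\forall x\; \big(\text{Property}(x) \rightarrow \text{Theft}(x)\big)
% versus the much weaker existential claim that the bread-oven example leaves standing:
\exists x\; \big(\text{Property}(x) \wedge \text{Theft}(x)\big)
```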

[zzsimplicity] I think that maybe what happened in the Axial Age was that philosophers figured out clever ways to make religion plausible, thus getting people to take religion seriously for the first time.  The behavior of Western "pagans", notably the Vikings, Romans, Greeks, and Sumerians, was what we today would call very irreligious:  they didn't like most of their gods very much, didn't have any theology, didn't think much about the afterlife, and religion scarcely affected their morals, thoughts, or actions.  Religion was practical, binding communities together and declaring political allegiances; and transactional, exchanging praise and sacrifice for immediate, tangible rewards.  Sumerian hymns to their gods are blatantly dishonest; worshippers praise whichever god they're talking to at the moment as the most-important of all the gods, even though for some gods this was ridiculous, and usually the worshipper was going to give the same praise to some other god the next day.

Before the Axial Age, religion was inherently ridiculous.  Gods were posited as the creators and bringers of order who first rolled that great ball uphill, but any fool could ask, "But who created the gods?", and most of them probably did.  Plato found a very clever and complex way around that problem.  He posited that God was not complex, but perfectly simple – never changing, having no parts – and developed an ontology in which the complex develops from the simple by a process of subtraction.  God's simplicity consists of having all properties and being perfect in all of them.  God is simple in information-theoretic terms in that all one must say or can say to describe God is, "God is perfect."  The universe emanates from God in an iterative process of losing things; beings give rise to lesser beings and things lacking some of their generator's properties and having others to a lesser degree.  Earthly beings thus take much longer to describe.

Plato's process of simplicity giving rise to complexity is, in information-theoretic terms, equivalent to saying that entropy always decreases, while its manifestation in an endless chain of ever-more-degraded beings says that entropy always increases.  Thermodynamics tells us this is incoherent.  Plato said that God was perfectly simple, yet also infinite in knowledge; information theory tells us this is impossible.

The terrible consequence of Plato's error was that, in order to make it plausible that God could have been the first thing, with no need for a creator of God, Plato had to associate simplicity, stasis, and death (which is simple and unchanging) with goodness; and complexity, change, and life with evil.  This inversion of thermodynamics thus inverted the natural value system of pagans, who thought life was good and death was bad.

[zzbeethoven] I suspect the first European widely recognized to be creative was Goethe or Beethoven, but I haven't found a reference.

[zzcreate] Shakespeare's character Polixenes approved of the artificial creation of hybrid plants in The Winter's Tale, but apparently because, though a human provided direction, nature did the work.  Other early quotes on creativity:

He ever invented new waies of more perfect abstinence, and by exercise did daily more & more increase therein : and although he had already attained unto the highest degree of perfection , yet some things alwaies he did as a new beginner innovate ? punishing, with afflictions, his fleshly concupissence [sexual lust]. -- (Bonaventure 1260/1610 p. 46; the '?' seems to be a typo for ';')

What a notorious follie were it to innovate without infallible assurance of the better? -- (Harvey 1593 p. 133)

Men cannot make worlds; nor can their will create goodnesse in acts indifferent, nor can their forbidding will illegittimate or make evil any actions indifferent, and therefore things must be morally good, and so intrinsically good without the creative influence of humane Authority, and from God only are they apt to edifie, and to oblige the conscience in the termes of goodnesse morall. -- (Rutherfurd 1646 p. 204)

If I have read the New Testament aright, it leaves no room for 'creativeness' even in a modified or metaphorical sense… an author should never conceive himself as bringing into existence beauty or wisdom which did not exist before, but simply and solely as trying to embody in terms of his own art some reflection of eternal Beauty and Wisdom. – C.S. Lewis 1939, in Lewis 1967 chapter 1 p. 6-7

In ultimate philosophy, as in ultimate theology, men are not capable of creation, but only of combination. – G.K. Chesterton, 1931 (Chesterton 2012 p. 82).

The one quote that approved of creativity, I could find no source for:

He who can copy, can create. – Attributed to Leonardo da Vinci (1452-1519) in (Matisse 1908 p. 39)

[zzsorokin] Sorokin mistakenly classified modern art as sensate, because he thought the artists themselves were Bohemian sensualists, and because he hated it.  He was a passionate rationalist, so pure that he formed his own "purer" church, which splintered from a church that had splintered from the allegedly even less-pure Russian Orthodox.  You can feel the spittle on your cheeks when you read his venomous denunciations of modernity and "sensate" culture.

[zzmachina] The phrase "Deus ex machina" (god from the machine) came about because the ancient Greeks called the crane for hoisting actors into the air "the machine".  I'm not aware of any equally-complicated machine which the classical Greeks knew of.  The Greek theater used another impressive machine, the ekkyklēma, a rolling or rotating stage used to mark scenes that took place indoors.  So machinery may have developed as much for the arts as for the sciences.

[zzdynamic] If they had only linear motion, they couldn't stay in one place.

[zzpollock] I'm cheating a little here.  Pollock generated some of his paintings using a chaotic process: dripping paint from a spinning paint can swinging from a rope.  So this may be "on the edge of chaos"; but if it is, it's on the edge nearer to randomness than to order.

[zz2] Charles Peirce's original pragmatism was an epistemology, not an ethical system.  I grouped pragmatism with ethical theories here because Pinker seems to give the values of humanism the same sort of faithy free pass that William James' pragmatism gives to Christianity (Campbell 1996).

[zz11] Pinker is obsessed with conspiracy theories.  He uses "conspiracy" and "conspiracies" 43 times.  4 of those 43 are while explaining that "Conspiracy theories, for their part, flourish because humans have always been vulnerable to real conspiracies" (in The Psychology of Apocrypha).  The others all imply that anyone who holds any conspiracy theory at all is necessarily insane.  Is that rational?

I hate this derogatory use of the phrase "conspiracy theories".  I've never understood, for example, why the idea that Covid-19 escaped from the Wuhan Institute of Virology was dismissed as "a conspiracy theory".  (See Bloom et al. 2021 for a more-moderate view.)  We know perfectly well that there was a conspiracy, by some elements in the Chinese government, to keep the outbreak secret as long as possible, and then to obstruct outsider investigations into the outbreak or the lab.  My gripe is that the only argument presented to the public against the theory was that it was a conspiracy theory.  The only good reason to claim that conspiracy theories are crazy is if you're part of some conspiracy.

Pinker could have used the dismissal of Wuhan Lab origin theory as an example of the neglect of priors.  There was only one laboratory in China studying the coronavirus, and the virus broke out in only one city, and it was the city with that one lab.  The prior odds of that under the lab-origin theory are 1.  If we neglect cities with fewer than a million people, the priors under the animal-origin theory are 1/65.  The priors alone put the lab-origin theory over the 98% confidence level.
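Spelled out (this is my reconstruction of the arithmetic above, treating the two origin stories as equally likely before asking where the outbreak happened):

```latex
% Likelihood of "the outbreak starts in Wuhan" under each hypothesis:
P(\text{Wuhan}\mid\text{lab origin}) \approx 1, \qquad
P(\text{Wuhan}\mid\text{animal origin}) \approx \tfrac{1}{65}
% With even prior odds on the hypotheses, the posterior odds and probability are
\frac{P(\text{lab}\mid\text{Wuhan})}{P(\text{animal}\mid\text{Wuhan})}
  = 1 \times \frac{1}{1/65} = 65, \qquad
P(\text{lab}\mid\text{Wuhan}) = \frac{65}{66} \approx 0.985 .
```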

[zzgeom] I should have more citations here, but I never bothered recording them, and it isn't as common as you might expect for people to come out and say it explicitly.  Plato allegedly said that "God was a geometer", and allegedly wrote "Let no man ignorant of geometry enter here" over the entrance to his Academy, but after much searching, I've failed to find a source for either quote.  (As Plato's "Academy" was a large garden, it might not have even had an entrance one could write over.)  And whenever Plato tries to use math in his writings, he either bungles it completely (as in the Meno, in which he fails to understand how the proof he has mechanically recited conveys knowledge of its conclusion, and instead concludes that someone who understands a proof must just be remembering something he already knew), descends into mystical numerology (as in the Republic, notably the discussion of the "marriage number"), or draws ridiculous conclusions from the naive application of combinatorics (also in Republic).

[zzzeno] Zeno used a non-linear discrete function, z_{n+1} = z_n / 2, in his famous paradoxes.

[zzblankslate] You'll still find rationalists accusing empiricists of believing the human brain is a "blank slate" at birth.  That notion has been dead since Darwin.  Ironically, it's now the rationalists who believe in the "blank slate" (Pinker 2003).

[zzbacon]

The [Aristotelian] syllogism is made up of propositions, propositions of words, and words are markers of notions. Thus if the notions themselves (and this is the heart of the matter) are confused, and recklessly abstracted from things, nothing built on them is sound. The only hope therefore lies in true Induction. -- Bacon 1620, Aphorism 14

There are and can only be two ways of investigating and discovering truth. The one rushes up from [past] the sense and particulars to axioms of the highest generality and, from these principles and their indubitable truth, goes on to infer and discover middle axioms [deductive rationalism]; and this is the way in current use. The other way draws axioms from the sense and particulars by climbing steadily and by degrees so that it reaches the ones of highest generality last of all [inductive empiricism]; and this is the true but still untrodden way. —  Bacon 1620, Aphorism 19

The Subjects about which they [the medieval theologians] were most conversant, were either some of those Arts, which Aristotle had drawn into Method, or the more speculative parts of our Divinity. … They began with some generall Definitions of the things themselves, according to their universal Natures: Then divided them into their parts, and drew them out into severall propositions, which they layd down as Problems: these they controverted on both sides: and by many nicities of Arguments, and citations of Authorities, confuted their adversaries, and strengthned their own dictates. But ...  yet it was never able to do any great good towards the enlargement of knowledge: Because it rely'd on generall Terms, [17] which had not much foundation in Nature; and also because they took no other course, but that of disputing.

That this insisting altogether on established Axioms, is not the most usefull way, is not only cleer in such airy conceptions, which they manag'd : but also in those things, which lye before every mans observation, which belong to the life , and passions, and manners of men; which, one would think, might be sooner reduc'd into standing Rules. As for example: To make a prudent man in the affairs of State, It is not enough, to be well vers'd in all the conclusions, which all the Politicians in the World have devis'd, or to be expert in the Nature of Government, and Laws, Obedience, and Rebellion, Peace, and War: Nay rather a man that relyes altogether on such universal precepts, is almost certain to miscarry. But there must be a sagacity of judgement in particular things: a dexterity in discerning the advantages of occasions: a study of the humour, and interest of the people he is to govern: The same is to be found in Philosophy; a thousand fine Argumentations, and Fabricks in the mind, concerning the Nature of Body, Quantity, Motion, and the like, if they only hover a-loof, and are not squar'd to particular matters, they may give an empty satisfaction, but no benefit, and rather serve to swell, then fill the Soul.

But besides this, the very way of disputing itself, and inferring one thing from another alone, is not at all proper for the spreading of knowledge. It serves admirably well indeed, in those Arts, where the connexion between the propositions is necessary, as in the Mathematicks, in which a long train of Demonstrations, may be truly collected, from the certainty of the first foundation: But in things of probability onely, [18] it seldom or never happens, that after some little progress, the main subject is not left, and the contenders fall not into other matters, that are nothing to the purpose: For if but one link in the whole chain be loose, they wander farr away, and seldom, or never recover their first ground again. In brief, disputing is a very good instrument, to sharpen mens wits, and to make them versatil, and wary defenders of the Principles, which they already know: but it can never much augment the solid substance of science itself: And me thinks compar'd to Experimenting, it is like Exercise to the Body in comparison of Meat. – Sprat 1667 p. 16-18

It was therefore, some space after the end of the Civil Wars at Oxford, in Dr. Wilkins his Lodgings, in Wadham College, which was then the place of Resort for Vertuous, and Learned Men, that the first meetings were made, which laid the foundation of all this that follow'd. … Their first purpose was no more, then onely the satisfaction of breathing a freer air, and of conversing in quiet one with another, without being ingag'd in the passions, and madness of that dismal Age.  And from the Institution of that Assembly, it had been enough, if no other advantage had come, but this: That by this means there was a race of yong Men provided, against the next Age, whose minds receiving from them, their first Impressions of sober and generous knowledge, were invincibly arm'd against all the inchantments of Enthusiasm. … [p. 54] ... such spiritual Frensies, which did then bear Rule, can never stand long, before a cleer, and a deep skill in Nature. … [p. 56]  The contemplation of that [Nature], … never separates us into mortal Factions; [it] gives us room to differ, without animosity ; and permits us, to raise contrary imaginations upon it, without any danger of a Civil War. – Sprat 1667 p. 53-56

[zzonce] I did write out and debug a program on paper (a simple text-based arcade game), then type the whole thing into a computer and have it work at the first go – once.  But that was only because I was very, very practiced in debugging code on paper, because

So someone could become good enough at debugging his deductions to draw a valid conclusion rationally, once in a great while, if he first worked as an empiricist, experimentally testing out all of his thoughts and theories.

[zzclt] We expect the probability distributions of features of {natural kinds, such as humans} to have normal distributions on every dimension, because the features of natural kinds are determined by a large number of underlying causes, and the central limit theorem tells us that the sum of N independent random variables approaches a normal distribution as N increases.
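(For reference, the standard statement of the theorem for independent, identically distributed causes – the textbook version, not anything specific to this book:)

```latex
% If X_1, ..., X_N are independent draws from a distribution with mean mu and
% finite variance sigma^2, then as N grows the standardized sum converges in
% distribution to the standard normal:
\frac{\sum_{i=1}^{N} X_i - N\mu}{\sigma\sqrt{N}} \;\xrightarrow{\;d\;}\; \mathcal{N}(0,1).
```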

[zzlewis1] C.S. Lewis claimed that the knowledge of what things are good and what things are bad was obvious, universally agreed-on, and universally and eternally true (Lewis ~1943 p. 54-55, Lewis 1943 chapter 1); and that it was therefore unproblematic to follow Plato's doctrine that we must indoctrinate a child completely "before he is of an age of reason" (Lewis 1943 p. 17).  His only concession was to admit that he happened to live in the only time in the history of civilization when others were so perverse and "corrupt" as not to recognize these universal morals (see, e.g., chapter 7 of Lewis 1961, or chapter 1 of Lewis 1943).

[zzculler] "Essences" means Platonic forms.  De Man and Culler were claiming that metaphor could be effective only if author had divined that the Forms of the things being compared shared an essential property, pointing to it in the programmer's sense of "pass-by-reference".

[zzsartre]

As we shall see … the existentialists will place great significance on such emotions as ‘anguish’ (which Kierkegaard called our awareness of our freedom) and feelings like ‘nausea’ (which Sartre characterized as our experience of the contingency of existence and a ‘phenomenon of being’). – (Flynn 2006 p. 7)

The essential thing is contingency. I mean that one cannot define existence as necessity. To exist is simply to be there; those who exist let themselves be encountered, but you can never deduce anything from them. I believe there are people who have understood this. Only they tried to overcome this contingency by inventing a necessary, causal being. But no necessary being can explain existence: contingency is not a delusion, a probability which can be dissipated; it is the absolute, consequently, the perfect free gift. All is free, this park, this city and myself. When you realize that, it turns your heart upside down and everything begins to float, as the other evening at the “Railwaymen’s Rendezvous”: here is Nausea; here there is what those bastards—the ones on the Coteau Vert and others—try to hide from themselves with their idea of their rights. But what a poor lie: no one has any rights; they are entirely free, like other men, they cannot succeed in not feeling superfluous. And in themselves, secretly, they are superfluous, that is to say, amorphous, vague, and sad. – (Sartre 1932/2013, the third Monday, 6:00 pm; Google's scan of the New Editions 2013 printing has no page numbers)

De trop: it was the only relationship I could establish between these trees, these gates, these stones. In vain I tried to count the chestnut trees, to locate them by their relationship to the Velleda, to compare their height with the height of the plane trees: each of them escaped the relationship in which I tried to enclose it, isolated itself, and overflowed… And I—soft, weak, obscene, digesting, juggling with dismal thoughts—I, too, was de trop. … I dreamed vaguely of killing myself to wipe out at least one of these superfluous lives. But even my death would have been de trop. De trop, my corpse, my blood on these stones, between these plants, at the back of this smiling garden. And the decomposed flesh would have been de trop in the earth which would receive my bones, at last, cleaned, stripped, peeled, proper and clean as teeth, it would have been de trop: I was de trop for eternity. – (Sartre 1932, 1949 New Directions edition, quoted in (Sartre 1943/1956 p. xvi-xvii).  The de trop here is sometimes translated as "superfluous". Lloyd Alexander (the New Directions and Penguin translator) rendered it "in the way"; Hazel Barnes restored the original French when quoting Alexander.)

Sartre's response to the "nausea" of being "superfluous" rather than "necessary" was to invent a morality of freedom, which meant that the only moral action was to act freely.  So he came to praise this very contingency which disturbed him so much.  But his praise of choice and action, for the sake of choosing and acting, has a distinctly nihilist and Nazi[zznn] flavor.

[zznn] For the claim that Nazism was nihilistic:

Franz Borkenau, R. G. Collingwood, Aurel Kolnai, Karl Polanyi and F. A. Voigt … Georges Bataille, Ernst Bloch, Lucie Varga and Eric Voegelin… all concluded that beyond ideology, the fundamental characteristic of Nazism was its quest for power; power as an end in itself, not power as the means for implementing a particular social programme; power as an affective force, an inherent aggression that permanently needed an outlet. Only this belief in life-sustaining struggle, they suggested, managed to hold disparate groups of supporters together, and maintained such a high level of support whether or not the individual promises of the Nazis were fulfilled. – (Stone 2003 p. 18)

Regarding its rationalism, "the Jewish conspiracy runs everything" isn't irrational.  It's unempirical.  It would have been easy to show that Jews did not, in fact, run everything, by listing the people in power and counting how many of them were Jewish.  But listing and counting is empirical, not rational.

[zzanalysis] Empirical and rational math are connected by analysis, a logical, mathematical system to justify the use of infinitesimals and real numbers.  So rationalism does, technically, work in the real world.  But we're running into something I call the problem of levels.  Symbolic logic can encompass connectionist logic, if you take your symbols as being the nodes in a neural network.  But that's very misleading, because those symbols don't represent things on the same level that everyone who uses symbolic logic wants to use.  You want to talk about "dog", but your symbols have meanings like "a circle of angular diameter of 1 arcminute, centered at (angle 1.3 radians, radius .3 radians) in the visual field, is more red than green in the center and more green than red around that center."  In just the same way, rationalist logic can be used to talk about real numbers and infinitesimals, but doing so introduces a new level of abstraction that's so complicated humans can't do it.

[zzutility] Empiricists usually use utility theory, which is "rational" in being mathematical.  Rationalists usually use deontology or virtue ethics, which are even more "rational" in that they don't allow the use of real numbers, or of any numbers or measurement at all.  Measurements apply only to the material world, so using measurements in your value system implies it isn't spiritual, and wrecks the rationalist conflict-resolution method, which is hierarchical: every value has a position in the Great Chain of Being, and higher values always trump lower values, regardless of quantity (e.g., one human life is worth more than all the redwood forests).

[zzbn] Sartre came up with the philosophy he described in Being and Nothingness (Sartre 1943) while a prisoner of the Nazis.  Yet its explication of freedom as the basis of morality (p. 437-438), if one has the patience to decode it, justifies Nazi genocide as well as any other action, if it's done freely, in "good faith":

The nihilation by which we achieve a withdrawal in relation to the situation is the same as the ekstasis [ecstasy] by which we project ourselves toward a modification of this situation. The result is that it is in fact impossible to find an act without a motive but that this does not mean that we must conclude that the motive causes the act; the motive is an integral part of the act. For as the resolute project toward a change is not distinct from the act, the motive, the act, and the end are all constituted in a single upsurge. Each of these three structures claims the two others as its meaning. But the organized totality of the three is no longer explained by any particular structure, and its upsurge as the pure temporalizing nihilation of the in-itself is one with freedom. It is the act which decides its ends and its motives, and the act is  the expression of freedom.

This is probably because he came up with these ideas while studying Heidegger, the most-Nazi of major philosophers.

[zzchandler] (Chandler 2001) explains at length that the word is not the thing it refers to, and is not logically determined by the thing it refers to, at least eleven times in just the first 2 of its 7 chapters (after which point I quit reading in disgust).

[zzbaudrillard]

All of Western faith and good faith was engaged in this wager on representation: that a sign could refer to the depth of meaning, that a sign could exchange for meaning and that something could guarantee this exchange--God, of course.  But what if God himself can be simulated, that is to say, reduced to the signs which attest his existence? Then the whole system becomes weightless, it is no longer anything but a gigantic simulacrum — not unreal, but a simulacrum, never again exchanging for what is real, but exchanging in itself, in an uninterrupted circuit without reference or circumference.

                        -- Jean Baudrillard, "The Precession of Simulacra", in Leitch 2010 p. 1560.

[zzraper] I'd link you to his original post, but it's been taken over by a malware distributor.  Fortunately, you can still find it on archive.org.

[zzwrong] Poor old Epicurus is snuggled up against his opponent Pascal (Guyau 2021 p. 166), because "influenced by" can mean "reacted against".  Paul Feyerabend is a cuckoo's egg planted among the semi-sane people.  Baudrillard is tiny; Murray Rothbard is enormous; and Gautama Buddha and most other Asian philosophers aren't even listed (perhaps owing to the unfortunate habit of Westerners of calling their theologians "philosophers" but calling Asian philosophers "religious").

It's hard to make sense of the Islamic section, other than to note that, according to its color, it clusters together (as it should) with Plato, Aristotle, St. Augustine, the medieval scholastics, Descartes, and Leibniz.  But the Islamic part of the graph is disconcertingly heterogeneous; radical Islamists aren't clustered together, but peppered throughout it.  The gentle poet Rumi is sandwiched between Islamic extremists Ahmad Sirhindi[zzSir] and Abul Ala Maududi.  Muhammad ibn Abd al-Wahhab, founder of the evangelical fundamentalist Wahhabi movement (Al-Malki 2014, Crooke 2015), is grouped with the continental philosophers, between Simone Weil and Albert Camus–apparently due to ties to Nietzsche and Kierkegaard.  Ibn Taymiyyah, the favorite medieval philosopher of Islamic terrorists (Irwin 2001, Islamic Philosopher 2015, Wikipedia entry on al-Qaeda, ISIS 2016 p. 16), appears to be a link between Islam and continental philosophy, sitting just north of Jacques Derrida (and sized much too small for his influence).  I think this is an artifact of squeezing the network into 2 dimensions, because the connections between them travel far away before coming back again.  Yet it's correct, because Ibn Taymiyyah's arguments against Aristotle in (Taymiyya 1309/1993) were repeated, probably unknowingly, by Derrida and other 1960s French postmodernists as arguments against science.  A more-recent founder of radical Islam, Sayyid Qutb, is missing entirely despite there being more references to him on Wikipedia than to Ahmad Sirhindi, Abul Ala Maududi, or Muhammad Iqbal (the largest nodes in the Islamic neighborhood).

[zzfw4] On the other hand, Friedrich Wilhelm IV tried to suppress Hegelianism because he thought it was too liberal.

As Arnold Ruge recalls (see Hoffmeister’s edition of Hegel’s Letters, vol. III, p. 472, footnote to letter #687), the Crown Prince claimed, addressing Hegel directly: “It’s outrageous that Professor Gans wants to transform all our students into Republicans. His lessons on your philosophy of RIGHT, Professor Hegel, are always attended by several hundreds of students, and it is widely known that he gives to your own thought a completely liberal, I would say Republican, color.” – (Froeb & Canfora 2007)

[zzgorgias] The doctrine of "will to power" can be traced all the way back to the Archidamian War, in which it was held at least by some representatives of Athens – an early instance of loss of faith in the gods, combined with a lack of knowledge of empiricism, leading to the opposite extreme of complete relativism.  It's defended by Callicles (and by implication by Gorgias), and attacked by Plato, in Plato's Gorgias (Taylor 1930 p. 370).

[zzhitler] Hitler claimed that he carried a copy of (Schopenhauer 1818) with him all through World War 1 (Spotts 2002).

[zzlewis2] C.S. Lewis wrote several times that the foundation of his Christianity was his sensation of the existence of faerieland.  "Faerieland" was his term for another world bordering our own, yet somehow more real, and the source of everything that happened in our world--the true world, of which ours is a reflection or allegory.  His view of it isn't quite Plato's; Lewis thought that the physical world gained rather than lost stature through its contact with "Faerieland".

It must be more than thirty years ago that I bought -- almost unwillingly, for I had looked at the volume on a dozen previous occasions--the Everyman edition of Phantastes. A few hours later I knew that I had crossed a great frontier.  … the whole book had about it a sort of cool, morning innocence, and also, quite unmistakably, a certain quality of Death, good Death. What it actually did to me was to convert, even to baptise (that was where the Death came in) my imagination. It did nothing to my intellect nor (at that time) to my conscience. … But when the process was complete… I found that I was still with MacDonald and that he had accompanied me all the way and that I was now at last ready to hear from him much that he could not have told me at that first meeting. … The quality which had enchanted me in his imaginative works turned out to be the quality of the real universe, the divine, magical, terrifying and ecstatic reality in which we all live. I should have been shocked in my 'teens if anyone had told me that what I learned to love in Phantastes was goodness. But now that I know, I see there was no deception. The deception is all the other way round — in that prosaic moralism which confines goodness to the region of Law and Duty, which never lets us feel in our face the sweet air blowing from the "land of righteousness," never reveals that elusive Form which if once seen must inevitably be desired with all but sensuous desire — the thing (in Sappho's praise) "more gold than gold."

              -- CS Lewis, Introduction to George MacDonald, Phantastes, reprinted 2000, Grand Rapids MI: Eerdmans.

Lewis also wrote that it didn't really matter to him whether faerieland was real; it just made him feel better:

Suppose we have only dreamed, or made up, all those things--trees and grass and sun and moon and stars and Aslan himself. Suppose we have. Then all I can say is that, in that case, the made-up things seem a good deal more important than the real ones.

        – Puddleglum, in (Lewis 1953)

[zzpenn] And these personal conversations are always a little creepy.  As the stage magician Penn Jillette said,

The question I get asked by religious people all the time is, without God, what’s to stop me from raping all I want? And my answer is: I do rape all I want. And the amount I want is zero. And I do murder all I want, and the amount I want is zero. The fact that these people think that if they didn’t have this person watching over them that they would go on killing, raping rampages is the most self-damning thing I can imagine.

[zzgoetz] (Goetz 2009) made another argument that reason might kill us all, which, if clarified, steel-manned, and restricted, says

An additional complication not mentioned by Goetz is that, if this is a good argument, then reasonable agents may habitually commit local, pre-emptive genocides or cullings, to prevent the inevitable universal conflagration, like fire-fighters who regularly start small forest fires to prevent larger ones from bursting out.  For example:  If you believe it's likely that someone will build an Artificial General Intelligence (AGI) within the next 30 years that will kill us all, shouldn't you be trying to stop them by provoking a global thermonuclear war?  And if rational agents do such things, will one of their small forest fires inevitably grow out of control?

But as I don't think this sort of thinking is a common reason for embracing unreason, I didn't mention it above.

[zzdarwin]

With savages, the weak in body or mind are soon eliminated; and those that survive commonly exhibit a vigorous state of health. . . . We civilised men, on the other hand, do our utmost to check the process of elimination; we build asylums for the imbecile, the maimed, and the sick; we institute poor-laws; and our medical men exert their utmost skill to save the life of every one to the last moment. There is reason to believe that vaccination has preserved thousands, who from a weak constitution would formerly have succumbed to small-pox. Thus the weak members of civilised societies propagate their kind. No one who has attended to the breeding of domestic animals will doubt that this must be highly injurious to the race of man. It is surprising how soon a want of care, or care wrongly directed, leads to the degeneration of a domestic race; but excepting in the case of man himself, hardly any one is so ignorant as to allow his worst animals to breed.  – Charles Darwin, The Descent of Man, and Selection in Relation to Sex, cited in (Wiker 2008 p. 85-86).

If...various checks...do not prevent the reckless, the vicious and otherwise inferior members of society from increasing at a quicker rate than the better class of men, the nation will retrograde, as has occurred too often in the history of the world. – Ibid.

[zzracism] I checked the number of hate crimes reported in the most-recent year available, 2019, on the ADL Hate Crime Map, for these cities – and these were the only cities I checked.  Other stats are from the Census Bureau, except for the 2016 Presidential election votes, taken mostly from Harvard.

I began with the two most-progressive big cities in America, two small cities whose voting slant I didn't know, and what I thought was the Trumpiest big city, Colorado Springs.  The results showed a strong correlation between progressive big cities and hate crimes, so I figured people would say the dataset was just too small, and added a bunch more.  I didn't check any stats before adding a city, and didn't remove any cities after looking up their stats.

| City | Population (Apr 2020) | White alone | Black alone | Bachelor's degree or higher, age 25+ | Per capita income (2020 $) | Pop. per sq. mi. | Fraction voting for Trump | Hate crimes, total | per 100K | Race-based, total | per 100K | Sexual orient., total | per 100K |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Alexandria | 159,467 | 59.80% | 21.90% | 65.10% | 64,835.00 | 9,314.30 | 0.188 | 5 | 3.10 | 2 | 1.30 | 2 | 1.30 |
| Austin TX | 961,855 | 69.40% | 7.80% | 53.40% | 44,829.00 | 2,653.20 | 0.307 | 10 | 1.00 | 5 | 0.50 | 3 | 0.30 |
| Billings MT | 117,116 | 87.80% | 1.20% | 35.60% | 36,709.00 | 2,399.50 | 0.623 | 4 | 3.40 | 2 | 1.70 | 1 | 0.90 |
| Boston MA | 675,647 | 52.10% | 24.20% | 51.30% | 46,845.00 | 12,792.70 | 0.147 | 188 | 27.80 | 113 | 16.70 | 47 | 7.00 |
| Cambridge | 118,403 | 64.30% | 10.40% | 79.10% | 61,036.00 | 16,469.10 | 0.067 | 20 | 16.90 | 9 | 7.60 | 3 | 2.50 |
| Colorado Springs | 478,961 | 77.30% | 6.30% | 39.60% | 35,506.00 | 2,140.60 | 0.624 | 12 | 2.50 | 6 | 1.30 | 4 | 0.80 |
| Detroit MI | 639,111 | 14.40% | 77.10% | 16.40% | 19,569.00 | 5,144.30 | 0.032 | 52 | 8.10 | 30 | 4.70 | 18 | 2.80 |
| DC | 689,545 | 46.00% | 46.00% | 59.80% | 58,659.00 | 9,856.50 | 0.043 | 202 | 29.30 | 107 | 15.50 | 60 | 8.70 |
| Houston TX | 2,304,580 | 51.50% | 22.80% | 34.30% | 33,626.00 | 3,501.50 | 0.758 | 29 | 1.30 | 13 | 0.60 | 9 | 0.40 |
| Lubbock TX | 257,141 | 76.30% | 8.00% | 33.70% | 28,662.00 | 1,875.40 | 0.701 | 3 | 1.20 | 1 | 0.40 | 2 | 0.80 |
| Minneapolis | 429,954 | 62.90% | 18.90% | 51.80% | 40,368.00 | 7,088.30 | 0.128 | 32 | 7.40 | 17 | 4.00 | 6 | 1.40 |
| Philadelphia | 1,603,797 | 39.30% | 41.40% | 31.20% | 29,644.00 | 11,379.50 | 0.157 | 12 | 0.70 | 7 | 0.40 | 1 | 0.10 |
| Raleigh NC | 467,665 | 57.50% | 28.90% | 51.40% | 40,520.00 | 2,826.30 | 0.393 | 14 | 3.00 | 7 | 1.50 | 5 | 1.10 |
| Richmond VA | 226,610 | 47.70% | 46.90% | 41.20% | 35,682.00 | 3,414.70 | 0.177 | 5 | 2.20 | 3 | 1.30 | 2 | 0.90 |
| San Francisco | 873,965 | 44.90% | 5.10% | 58.80% | 72,041.00 | 17,179.20 | 0.099 | 64 | 7.30 | 35 | 4.00 | 22 | 2.50 |
| Seattle WA | 737,015 | 65.80% | 7.10% | 65.00% | 63,610.00 | 7,250.90 | 0.084 | 304 | 41.20 | 178 | 24.20 | 80 | 10.90 |
| Spokane WA | 228,989 | 83.70% | 2.40% | 31.90% | 30,791.00 | 3,526.20 | 0.547 | 9 | 3.90 | 4 | 1.70 | 3 | 1.30 |
| US | 331,449,281 | 76.30% | 13.40% | 32.90% | 35,384.00 | 87.40 | 0.489 | 6406 | 1.90 | | | |

You can tell by eyeballing this that big cities have much higher rates of hate crime, racist and otherwise, than small ones.  But wait – the cities with low rates of racial hate crime seem to be whiter than the cities with high rates.  I thought maybe there just aren't enough people of color in those cities for many racial hate crimes to be committed against them.  But then I checked the FBI's Uniform Crime Reporting statistics on hate crimes for 2019 and saw this:

In 2019, "race" was reported for 6,406 known hate crime offenders. Of these offenders:

52.5 percent were White [including Hispanic].

23.9 percent were Black or African American.

6.6 percent were groups made up of individuals of various races (group of multiple races).

1.1 percent were American Indian or Alaska Native.

0.9 percent (58 offenders) were Asian.

0.3 percent (22 offenders) were Native Hawaiian or Other Pacific Islander.

14.6 percent were unknown.

"Race" was recorded for 85.4% of these offenders.  The US Census Bureau says that America is 76.3% white alone (including Hispanic) and 13.4% black or African American alone.  So the odds of a random black person committing a hate crime are ((23.9/85.4) / 13.4) / ((52.5/85.4) / 76.3) = 2.59 times as great as the odds of a random white person committing a hate crime, and (23.9 / 13.4) / (0.9 / 5.9) = 11.7 times as great as the odds for a random Asian-American.  This makes controlling for "number of race-based targets available" problematic.

I decided to do a multiple linear regression of hate crimes per 100K on education, income, fraction of people who spoke English at home, population density, and voting, mostly because that was the only available data that anybody would argue might be relevant.  (Male vs. female would be relevant, but differed little from city to city.)  Language appeared to be unimportant.  I didn't see any consistent impact of racial demographics in the table, and its impact on the regression didn't make sense – it was telling me that increases in both whites and blacks reduced hate crimes.  Unless we're willing to blame all hate crime on Asians, as offenders or victims, that meant my regression was sucking effect from some other variables into those, and that I didn't have enough data for 6 predictors.  The sum of whites, Hispanics, and Asians (the major races which commit hate crimes at the lowest rate) mostly fell in the 80%-88% range, so that didn't have much variance anyway.  So I dropped race.  Regressing on the 4 remaining predictors showed that population density and voting for Trump are much too correlated with each other to use them both.  So I ran two linear regressions, using the Google Sheets XLMiner Analysis ToolPak, producing these equations (the predictors are all normalized):

racial hate crimes per 100K = 0.84*education + 1.46*income + 1.93*log(pop/mi2) + 5.13, R-squared = 0.29

standard errors of coefficients = (3.84, 4.17, 2.13); p-values of coefficients = (0.83, 0.73, 0.38)

racial hate crimes per 100K = 0.22*education + 2.05*income - 2.49*Trumpiness + 5.14, R-squared = 0.34

standard errors of coefficients = (3.69, 3.74, 1.80); p-values of coefficients = (0.95, 0.59, 0.19)
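If you'd rather reproduce these fits in code than in a spreadsheet, the sketch below does roughly the same thing with statsmodels.  The CSV and column names are placeholders for an export of the table above, and the coefficients you get will depend on exactly how the predictors are normalized, so don't expect them to match the numbers above to the last digit.

```python
# A rough re-creation of the two regressions described above, in Python rather
# than the Google Sheets XLMiner ToolPak.  File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hate_crime_cities.csv")          # hypothetical export of the table above
df["log_density"] = np.log(df["pop_per_sq_mi"])    # use log of population density

def zscore(s):
    """Normalize a column to mean 0, standard deviation 1."""
    return (s - s.mean()) / s.std()

y = df["racial_hate_crimes_per_100k"]

# Regression 1: education, income, log population density
X1 = sm.add_constant(df[["education", "income", "log_density"]].apply(zscore))
print(sm.OLS(y, X1).fit().summary())

# Regression 2: education, income, Trump vote fraction
X2 = sm.add_constant(df[["education", "income", "trump_fraction"]].apply(zscore))
print(sm.OLS(y, X2).fit().summary())
```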

Those R-squared values show these variables account for only about a third of the variance in racial hate crimes.  The regression on voting beat the regression on population density in R2, standard error, and p-value, but not by much.  So I can't say that either low population density or voting for Trump is more-causal in reducing racial hate crimes, but it doesn't matter for my purpose, which was to show that racism really is much worse in big cities than in rural, Trumpy areas.  It's intriguing that both higher education and higher income in a city appear to increase hate crime.  I don't think better-educated and well-off people are more likely to commit hate crimes, but I do think they're more likely to incite hate crimes.  But those standard errors and p-values are lousy, so meh.

It could be that the coefficients on higher income and education are really just showing that they're both correlated with voting against Trump.  So I ran a final regression, with Trumpiness as the only predictor:

racial hate crimes per 100K = 5.14 - 3.48*Trumpiness, R-squared = 0.26

standard error of coefficient = 1.55; p-value of coefficient = 0.04

Then it occurred to me that, given such a great p-value, it seemed likely the R2 was lousy because the model wasn't linear.  The relationship between hate crimes per 100K and anything else was obviously not linear; the rate had some extremely large values.  What if hate crimes interact, each one possibly triggering another, like an avalanche in a sandpile (Bak 1996)?  Then I'd be using a linear model to model a power-law function.

So I plotted that last regression:

Sure enough, it looks like an exponential function.  So this time I regressed the logarithm of the hate-crime rate on the Clinton vote, and got this:

log(racial hate crimes per 100K) = 0.37 + 0.35*Clintonality, R-squared = 0.39

standard error of coefficient = 0.11; p-value of coefficient = 0.01

That's proof at the 99% confidence level that voting for Clinton in 2016 correlated with racial hate crimes.  I'm not saying it's proof in the rationalist sense.  Just proof of correlation at the 99% confidence level.  It could be that people vote for Clinton not because they're racist, but because they see lots of racism.  Nobody polled the hate-crime offenders to see whom they voted for.  But that explanation, too, would decouple Trumpism from racist attitudes: if Trumpism went with racism, we would still expect to see more, rather than fewer, hate crimes in Trumpy places.
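The same caveats as before apply to a sketch of that final, log-linear fit.  "Clintonality" is approximated below as one minus the Trump fraction, standardized; the file and column names are again placeholders, so the fitted numbers are illustrative rather than exact.

```python
# Self-contained sketch of the log-linear fit: regress log(racial hate-crime
# rate) on a standardized Clinton-vote variable.  Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hate_crime_cities.csv")             # hypothetical export of the table above
clintonality = 1 - df["trump_fraction"]               # approximate the Clinton vote share
clintonality = (clintonality - clintonality.mean()) / clintonality.std()

y_log = np.log(df["racial_hate_crimes_per_100k"])     # log-transform the response
fit = sm.OLS(y_log, sm.add_constant(clintonality)).fit()
print(fit.params, fit.rsquared, fit.pvalues)
```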

You could try to attribute the lower level of hate crime in Trumpy cities to the lower rate of crime in general in rural areas, and hope to show that the reduction in hate crimes in rural areas was less than for other crimes.  I don't think it would work; ADL doesn't report hate crimes for cities with fewer than 100,000 people, so all of the cities in that dataset have pretty high crime rates, even Billings, MT.  And most other crimes are motivated by desire or by status, so they should scale differently with poverty or competition anyway.

I later tested racial demographics again, using the (normalized) probability that 2 people chosen at random from a city were of 2 different races out of {white+Hispanic, Asian, black}.  This looks like a significant factor when plotted by itself, with a coefficient of 0.24 and a p-value of 0.09.  But the R2 was only 0.18; and when I added Trumpiness back in, the racial composite variable dropped out completely (coefficient -0.01, p-value 0.93), while the R2 was the same as with Trumpiness alone, and the adjusted R2 was less.  The likelihood of two people of different races meeting seems not to have any impact on the number of racial hate crimes.  Perhaps the effect of a more target-rich environment is cancelled out by the effect of acclimatization to other races.
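Read literally, that variable is a Simpson-style diversity index over the three groups – the chance that two randomly chosen residents fall into different ones.  A minimal sketch, with made-up shares rather than any particular city's:

```python
# One plausible reading of the "two random people differ in race" variable:
# a Simpson-style diversity index over the three groups named in the text.
def race_mixing_probability(white_hispanic, asian, black):
    """P(two randomly chosen residents fall into different groups),
    with the three group shares renormalized to sum to 1."""
    total = white_hispanic + asian + black
    shares = [white_hispanic / total, asian / total, black / total]
    return 1 - sum(p * p for p in shares)

print(race_mixing_probability(0.70, 0.06, 0.22))  # illustrative city: ~0.44
```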

Northern and coastal or waterfront cities seem to have more racial hate crime than southern and interior ones.  I don't have quantitative data for this, but perhaps the coastal factor is related to the fact that coastal cities northeast of Philadelphia, and on the west coast, have a culture descended from the Puritans, while interior ones were established by agrarians, often Quakers (Fischer 1991, Garreau 1992, Woodard 2011).

[zzmedia] I should probably make an annotated list of incidents showing that the major media systematically suppress any information which opposes their narrative, but I don't think it's worth taking the time to do it properly.  It's intrinsically futile, as it would require a survey of a random sample of cases of media censorship.  For that I have no source but the cases I've stumbled across accidentally, since cases of media censorship by definition can't be found in the media doing the censoring, and can usually only be found via opposing groups, which don't give a random sample.  But I can at least list ones that come to mind immediately:

[zzornot] Or not.  The New Testament makes it clear that St. Peter wasn't Pope of anything, not even of the Jerusalem church; which shoots to Hell Catholicism's foundational myth of the Apostolic succession, but Catholics know the texts and don't care.  The number of people killed in Christ's name, from the Battle of the Milvian Bridge to the present, was well over 10 million, and by percentage of population, would give communism a run for its title as the most-lethal ideology in history.  (I've compiled a list of well-substantiated Christian atrocities, but it's about as long as this document.)  But the Christians I've explained this to, like Marxists presented with a history of Marxism, reply with "No true Scotsman."  (Or, in some cases, "They deserved it.")

[zzfunding]

[zzbuffalo] Once, when I lived in Buffalo, I knew the daughter of a billionaire.  She had the habit of referring to Buffalo, grammatically, as if it were a person.  I asked her why.  She said that when rich people say "Buffalo", they don't mean the city, nor the people in the city, but the rich people who control it.  So it makes perfect sense to them to say, "I don't know; let's get on the phone and see what Cleveland thinks about it."

[zzthrall] Keynes allegedly said, "Even the most practical man of affairs is usually in the thrall of the ideas of some long-dead economist."  But no one can say when or where he said or wrote it.

References

Hazard Adams 1992. Critical Theory Since Plato, 2nd edition. Fort Worth, TX: Harcourt Brace Jovanovich.

Abdullah Al-Malki 2014. Wahhabism, the Brotherhood of those who obeyed Allah and ISIS: Has history repeated itself?

American Psychiatric Association, 5ed 2013. Diagnostic and Statistical Manual of Mental Disorders: DSM-5. Wash. DC: Amer. Psychiatric Publishing.

Anonymous 2015. "Ibn Taymiyyah : The Founder of ISIS." Islamic Philosophy, Dec 9 2015.

Thomas Aquinas 1270-1273, transl. Paul Vincent Spade 1997. De mixtione elementorum (On the Mixture of the Elements). Date of composition from Loonan 2008 p. 75.

Aurelius Augustine 397. Confessions.

Joshua Ayerdi 2017. Relative Rates of Success of Students in Calculus 1.  Honors thesis, Western Michigan University, Dept. of Math.

Francis Bacon, 1620. Novum Organum.

Per Bak 1996. How Nature Works: The Science of Self-Organized Criticality. NYC: Springer-Verlag.

William Barrett 1958. Irrational Man: A Study in Existential Philosophy. NYC: Doubleday.

Roland Barthes, 1968. "The Death of the Author." In Leitch et al., pp. 1322-1326.

Roland Barthes, 1971. “From Work to Text.” In Leitch et al., pp. 1326-1331.

Charles Baudelaire, 1859. The Salon of 1859. Translated in (Baudelaire / Hyslop & Hyslop 1964); summarized & excerpted in (Adams 1992: 621-623).

Jean Baudrillard 1981. "The Precession of Simulacra." Extracted in Leitch 2010 p. 1556-1566.

Jean Baudrillard 1984 lecture, published 1987. "The Evil Demon of Images." Sydney: Power Institute of Fine Arts.

Ronald Beiner 2018. Dangerous Minds: Nietzsche, Heidegger, and the Return of the Far Right. U Pennsylvania.

Peter Berger & Thomas Luckmann 1966. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Anchor Books.

William Bingley 1799.  An Examination into the Origin and Continuance of the Discontents in Ireland, and the true cause of the Rebellion: Being a faithful narrative of the particular sufferings of the Irish peasantry; with a plan which, if adopted, cannot fail to bring back the Roman Catholic insurgents to their allegiance; without injury to the Protestant interest; or, What They Never Asked, Emancipation.  London: William Bingley, No. 2 Red Lion Passage, Fleet Street.

Jesse Bloom, Yujia Alina Chan, Ralph Baric, Pamela Bjorkman, Sarah Cobey, Benjamin Deverman, David Fisman, Ravindra Gupta, Akiko Iwasaki, Marc Lipsitch, Ruslan Medzhitov, Richard Neher, Rasmus Nielsen, Nick Patterson, Tim Stearns, Erik Van Nimwegen, Michael Worobey, David Relman 2021. "Investigate the origins of COVID-19." Science V372N6543, May 14, p. 694. DOI:10.1126/science.abj0016

Saint Bonaventure 1260, transl. Aloysius Lipomanus 1610. The Life of the Holie Father S. Francis.

Jorge Luis Borges 1942. "Funes the Memorious". La Nación, June; republished 1944 in Ficciones; translated into English 1954.

Jorge Luis Borges, transl. 1962. Labyrinths. New Directions.

Botz-Bornstein 2012. "What is the Difference between Culture and Civilization? Two Hundred Fifty Years of Confusion." Comparative Civilizations Review N66, Spring 2012.

David Bressoud, editor, 2016. The Role of Calculus in the Transition from High School to College Mathematics: Report of the workshop held at the MAA Carriage House, Wash. DC, Mar 17-19 2016.

David Bressoud, V. Mesa, & C. Rasmussen 2015. Insights and recommendations from the MAA national study of college calculus. Mathematics Association of America.

James Campbell 1996. William James, Charles Peirce, and American Pragmatism. Audio only. Nashville, TN: Knowledge Products.

Robert L. Carneiro 2000. "The transition from quantity to quality: A neglected causal mechanism in accounting for social evolution." PNAS 97(23): 12926-12931.

Daniel Chandler 1994-2022. Semiotics for Beginners.  Online branch of (Chandler 2001), with different and changing text.

Daniel Chandler 2001, 3ed 2017. Semiotics: The Basics. Routledge. Audio version available from Audible.

GK Chesterton, ed. Dale Ahlquist 2012. The Soul of Wit: G.K. Chesterton on Shakespeare. Mineola, NY: Dover.

Alastair Crooke, 2015. "You Can't Understand ISIS If You Don't Know the History of Wahhabism in Saudi Arabia." New Perspectives Quarterly 32. 10.1111/npqu.11504

Jonathan Culler 1981. The Pursuit of Signs: Semiotics, literature, deconstruction. London & NYC: Routledge (2005).

Diana Denny, 2013. "Rockwell’s Favorite Model, Part III." The Saturday Evening Post, March 29, 2013.

Jacques Derrida 1967, transl. 1976. Of Grammatology. Extracted in Leitch 2010 p. 1695-1697.

John Dewey 1929 or 1930 [the date in my copy is smudged]. The Quest for Certainty: A Study of the Relation of Knowledge and Action. London: George Allen & Unwin.

Umberto Eco 1959; transl. Hugh Bredin 1986. Art and Beauty in the Middle Ages. New Haven, CT: Yale.

Chris Ferguson 2007, "Evidence for publication bias in video game violence effects literature: A meta-analytic review". Aggression and Violent Behavior 12(4): 470-482.

David Hackett Fischer 1991. Albion’s Seed: Four British Folkways in America.

Stanley Fish, 1980. "How to Recognize a Poem When You See One." In Is There a Text in this Class? The Authority of Interpretive Communities. Cambridge, MA: Harvard University Press, 1980, p. 322-337.

Jack Flam, ed., 1978. Matisse on Art. NYC: Dutton.

Thomas Flynn 2006. Existentialism: A very short introduction.  Oxford U.

Michel Foucault 1969. "What is an Author?" In Leitch 2010 p. 1475-1490.

Kai Froeb & Maurizio Canfora 2007. Illustrated Hegel Biography V. Retrieved April 2022.

Joel Garreau 1992. The Nine Nations of North America. Avon

Jacques le Goff 1985, transl. Arthur Goldhammer 1988. The Medieval Imagination. U Chicago.

Michael Guillen 2021. Believing Is Seeing: A Physicist Explains How Science Shattered His Atheism and Revealed the Necessity of Faith. Tyndale.

Jean-Marie Guyau 2021. The Ethics of Epicurus and Its Relation to Contemporary Doctrines. London: Bloomsbury.

Daryl Hale 2006. Stoics and Epicureans. Audio only. Nashville, TN: Knowledge Products.

Milton Handelsman 1938. "Sextus Empiricus and the Schools of Medicine During the Time of Galen." The American Journal of Surgery 41(2): 328–335.

James Hannam 2009. God's Philosophers: How the Medieval World Laid the Foundations of Modern Science. London: Icon Books.

Gabriell Harvey 1593. A New Prayse of the Old Asse.  London: John Wolfe.

Adolf Hitler 1926. Mein Kampf, volumes 1 and 2. Translated 1939 by James Murphy. London: Hurst & Blackett.  I’m using a differently-formatted version of this edition, which has an additional Epilogue and a different pagination, running to 525 pages.  I don't have an unexpurgated edition; some of the most-inflammatory passages, such as the one calling the German people "sheep", have been excised from the English versions found free online.

John Holloway 2002. Change The World Without Taking Power: The Meaning of Revolution Today. Pluto Press.

Johan Huizinga 1949. The Waning of the Middle Ages.  Citations from 1954 Garden City, NY: Anchor Books edition.

Robert Irwin, 2001. "Is this the man who inspired Bin Laden?" The Guardian, Wednesday 31 October 2001.

ISIS 2016. Rumiyah #4. Also on archive.org.

David Johnson 2008. "What Does Academic Skepticism Presuppose? Arcesilaus, Carneades, and the Argument with Stoic Epistemology." Lyceum Vol. X No. 1.

Daniel Kahneman, Paul Slovic, & Amos Tversky 1982. Judgment Under Uncertainty: Heuristics and Biases. Cambridge U.

Daniel Kahneman 2011. Thinking, Fast and Slow. NYC: Farrar, Straus & Giroux.

Stuart Kauffman, 1993. The Origins of Order: Self-Organization and Selection in Evolution. Oxford U.

Immanuel Kant 1781, transl. Norman Smith 1958. Critique of Pure Reason. London: Macmillan.

Immanuel Kant 1797, transl. Thomas Abbott 1889. "On a Supposed Right to Tell Lies from Benevolent Motives" ("Über ein vermeintes Recht aus Menschenliebe zu lügen"). In Kant's Critique of Practical Reason and Other Works on the Theory of Ethics, London: Longmans, Green & Co.

Jeffrey Kasser 2006. Philosophy of Science. Audio lecture series. Springfield VA: The Teaching Company.

A. Koyré 1973. The astronomical revolution. Paris: Hermann. I don't have this book.

Thomas Kuhn 1962. The Structure of Scientific Revolutions. U Chicago.

Chris Langton, Charles Taylor, J. Doyne Farmer, & Steen Rasmussen, ed., 1992. Artificial Life II: Proc. of the 1990 workshop on artificial life. Reading, MA: Addison-Wesley.

Vincent Leitch et al., eds. 2nd ed. 2010, The Norton Anthology of Theory & Criticism. New York: Norton.

Ramanathan V. Guha 1992. Contexts: A Formalization and Some Applications. Stanford University PhD thesis about the KR&R system Cyc.

CS Lewis 1939. Christianity and Literature. In Lewis 1967 p. 1-11.

CS Lewis ~1943. "On Ethics." In Lewis 1967 p. 44-56.

C.S. Lewis 1943. The Abolition of Man. Oxford University Press.

CS Lewis 1953. The Silver Chair. London: Geoffrey Bles.

CS Lewis 1961. An Experiment in Criticism. Cambridge U Press.

CS Lewis, 1967 (posthumous). Christian Reflections. Grand Rapids MI: Eerdmans.

Ralph Lillie 1914. "The Philosophy of Biology: Vitalism Versus Mechanism." Science New Series V40 N1041, Dec. 11, 1914, p. 840-846.

Conleth Loonan 2008. "The De mixtione elementorum of Thomas Aquinas." In Simon Nolan, ed., Maynooth Philosophical Papers 5: 75–88.  Maynooth, Ireland: Department  of Philosophy, National University of Ireland.

Jean-François Lyotard 1986. "Defining the Postmodern." In Leitch 2010 p. 1465-1468.

John Gibson Macvicar 1833. Inquiries Concerning the Medium of Light and the Form of Its Molecules. Edinburgh: Adam & Charles Black.

Phil Maguire, Oisin Mulhall, Rebecca Maguire, & Jessica Taylor 2016. "Compressionism: A Theory of Mind Based on Data Compression." In Gabriella Airenti, Bruno G. Bara, Giulio Sandini, & Marco Cruciani, Proceedings of the EuroAsianPacific Joint Conference on Cognitive Science / 4th European Conference on Cognitive Science / 11th International Conference on Cognitive Science, Torino, Italy, September 25-27, 2015: p. 294-299.

Benjamin Martin & David Rumelhart 1999. Cognitive Science.  San Diego: Academic Press.

Henri Matisse, 1908. "Notes of a Painter" (Notes d’un Peintre). In (Flam 1978): 32-40.

Karl Marx 1844. "On the Jewish Question." Deutsch-Französische Jahrbücher, February 1844.

Douglas Medin & Evan Heit 1999. "Categorization." Chapter 3 of Martin & Rumelhart 1999, p. 99-143.  The most-important references in it for our purposes are to works by Eleanor Rosch & Douglas Medin.

Hugo Mercier 2020. Not born yesterday: The science of who we trust and what we believe. Princeton. I don't have this book.

Mark Morris 2017. Hitler: Philosopher King. Bedfordshire, UK: Arlesey Press.

Ernest Nagel 1961. The structure of science: Problems in the logic of scientific explanation. New York: Harcourt, Brace, and World. I don't even have this book.

William R. Newman & Anthony Grafton, eds., 2001. Secrets of Nature: Astrology and Alchemy in Early Modern Europe. Boston: MIT Press.

Richard Nisbett & TD Wilson 1977. "Telling more than we can know: Verbal reports on mental processes." Psychological Review 84: 231–259.

Richard Nisbett 2003. The Geography of Thought: How Asians and Westerners Think Differently... And Why. Free Press.

Steven Pinker 2003. The Blank Slate: The Modern Denial of Human Nature. NYC: Penguin.

Steven Pinker 2018. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. NYC: Penguin.

Steven Pinker 2021. Rationality: What It Is, Why It Seems Scarce, Why It Matters. NYC: Viking, Penguin, Random House.

Karl Popper 1945, revised 5th ed. 1966. The Open Society And Its Enemies. Princeton University.

Karl Popper 1994. The Myth of the Framework. Routledge.

Derek de Solla Price 1976. From Neolithic to Now. Yale lecture series.

Tom Rockmore 1992. Irrationalism: Lukacs and the Marxist View of Reason. Philadelphia: Temple U.

Alex Ross 2020. Wagnerism: Art and Politics in the Shadow of Music. NYC: Farrar, Straus & Giroux.

Samuel Rutherfurd 1646. The Divine Right of Church-Government and Excommunication: Or a Peaceable Dispute for the Perfection of the Holy Scripture in Point of Ceremonies and Church-government; in which the Removal of the Service-book is Justifi'd ...  To which is Added, a Brief Tractate of Scandal; with an Answer to the New Doctrine of the Doctors of Aberdeen, Touching Scandal. London: Printed by John Field for Christopher Meredith at the Crane in Pauls [sic] Church-yard.

Jean-Paul Sartre 1943, transl. Hazel Barnes 1956. Being and Nothingness. New York: Philosophical Library.  I've read only tiny fragments of this book.

Arthur Schopenhauer 1818. The World as Will and Representation.  I haven't read any of this book.

Alfred Schütz 1932. Der sinnhafte Aufbau der sozialen Welt ("The Meaningful Structure of the Social World", published in English as "The Phenomenology of the Social World").  I haven't read any of this book.

Roger Shattuck 1955, revised 1968.  The Banquet Years: The origins of the avant-garde in France 1885 to World War I. NYC: Random House.

Kevin Simler & Robin Hanson 2018. The Elephant in the Brain: Hidden Motives in Everyday Life. Oxford U.

Larry Shiner, 1990. The Invention of Art: A Cultural History. U. of Chicago.

Karl Sigmund 2017. Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science. NYC: Basic Books.

Ari Sihvola 2000. "Ubi materia, ibi geometria." Unpublished revision of an expansion of the presentation "Ubi materia, ibi geometria" which was given at the Bianisotropics 2000 meeting (8th Intl Conf on Electromagnetics of Complex Media) in Lisbon, Portugal, 27–29 September 2000.

Pitirim Sorokin 1937-1941, condensed 1957. Social & Cultural Dynamics, revised & condensed edition, 1 volume. 720 pages. Sargent (Porter), U.S.  An e-book of the 1991 edition is searchable on Google.

Frederic Spotts 2002. Hitler and the Power of Aesthetics. London: Hutchinson (Penguin Random House UK).

Thomas Sprat, 1667 (mostly written in 1663). History of the Royal Society of London, for the Improving of Natural Knowledge. London.

Oswald Spengler 1918, 2ed 1922, transl. Charles F. Atkinson 1926. The Decline of the West. NYC: Knopf.

John H. Simpson 1990. "The Stark-Bainbridge Theory of Religion." Journal for the Scientific Study of Religion V29 N3 (Sep.), p. 367-371.

Jim Stevenson, Edmund Sonuga-Barke, Donna McCann, Kate Grimshaw, Karen Parker, Matt Rose-Zerilli, John Holloway, & John Warner 2010. "The Role of Histamine Degradation Gene Polymorphisms in Moderating the Effects of Food Additives on Children’s ADHD Symptoms." Am J Psychiatry 167:1108–1115.

D. Stone 2003. "The Energy of Nihilism: Understanding the Appeal of Nazism." In Responses to Nazism in Britain, 1933–1939, p. 17-44. London: Palgrave Macmillan.

A.E. Taylor 1930. "Plato and his contemporaries." Mind 39(155): 367-371.

Ibn Taymiyya 1309, transl. Wael Hallaq 1993. Against the Greek Logicians. Oxford: Clarendon Press.

Philip Tetlock 2005. Expert Political Judgment: How Good Is It? How Can We Know? Princeton.

Tristan Tzara 1918. "Dada Manifesto".

Wilhelm Uhde 1911, transl. Ralph Thompson 1949. Recollections of Henri Rousseau. London: Pallas Athene, 2005.

Elise Viebeck, July 28 2019. "How an early Biden crime bill created the sentencing disparity for crack and cocaine trafficking." WashingtonPost.com.

Richard Wagner 1850. "Das Judenthum in der Musik" ("Jewishness in Music").  Neue Zeitschrift für Musik, Sept. 1850. Leipzig.

Richard Wagner 1871. Gesammelte Schriften und Dichtungen.

Robert Waite 1977. The Psychopathic God: Adolf Hitler. NYC: Basic Books. Pagination is from the 1993 Da Capo Press edition.

Elspeth Whitney 2004. Medieval Science and Technology. Westport, CT: Greenwood Press.

Benjamin Wiker 2008. 10 Books That Screwed Up the World: And 5 Others That Didn’t Help. Wash. DC: Regnery.

Thomas Williams 2007. Reason & Faith: Philosophy in the Middle Ages. Audio lectures. Springfield, VA: The Great Courses (formerly The Teaching Company).

WK Wimsatt & MC Beardsley 1946. "The Intentional Fallacy". The Sewanee Review 54 (3): 468–488. Revised and republished in The Verbal Icon: Studies in the Meaning of Poetry, University Press of Kentucky 1954, ISBN 0813128579.

J. Gerard Wolff 1999. "'Computing' as Information Compression by Multiple Alignment, Unification and Search." Journal of Universal Computer Science V5 N11: 777-815.

Richard Wolin 2015. Britannica.com (online), "Continental Philosophy".

Colin Woodard 2011. American Nations: A History of the Eleven Rival Regional Cultures of North America. Viking.