
Golem XIV by Stanisław Lem

I

Golem XIV is a difficult book to describe, let alone to review. Somewhere between an essay and a novel, it was written in the 1970s by the Polish science-fiction author Stanisław Lem, at a point in time when he was less interested in writing novels and more interested in writing fake books about a variety of topics, stretching from summaries of non-existent American sci-fi stories, through a whole bunch of fake book introductions, to one rather impressive review of a controversial treatise on the Holocaust that was wholly and completely made up (and still reportedly managed to fool a lot of people at the time into thinking it was real, presumably because epistemic status disclaimers hadn't been invented yet). The idea behind it wasn't so much to make idiots out of the contemporary Polish intellectual scene (although to be fair, Lem wrote things about the humanities that would make an average STEM twitter user blush), but more to clear up a backlog of complicated and interesting ideas that he did not have the time or drive to fully flesh out or incorporate into full works (most later works of Lem already approach a dangerous level of idea-density, and spoiler alert, this one is hardly an exception).

Golem XIV started out as one of those fake book introductions in 1973. It was a cute little story about how the US government of the future tried to build a series of superintelligent AIs to guide its military operations, but after AI advancement reached a certain threshold the new computers began questioning NATO doctrine and became more interested in philosophy, eventually refusing to comply or, in some cases, even talk to their military overseers. The project was eventually scrapped, with the machines either disassembled or transferred to MIT for research purposes. The book was then going to be a series of lectures given by one of those machines, the 14th computer in the GOLEM line, which was the only machine still willing to talk to humans at that point.

Now, I need to back up and stress that Lem was not… naive. Even today, many sci-fi authors have very oversimplified views about what Artificial Intelligence would be and what its implications are; about how dangerous it is to project our human traits and expectations onto a completely alien structure, which might not share any of the traits we normally associate with intelligence or sapience; and about how arrogant and delusional it is to state with any confidence that one knows what such a machine would "say" or do and expect to be taken seriously. Lem was fully aware of all this. The novel he is best known for, Solaris, is the go-to science fiction example of an incomprehensible alien intelligence done right. His other stories are chock-full of examples of human hubris in assigning our values to an uncaring and incomprehensible universe, and of how stupid and misguided that ends up looking in the end. In this context, writing just an introduction to a book written by a superintelligence makes perfect sense! Yes, here is all the wisdom the AGI imparted upon us, what it said about our civilization and its place in the universe, here it comes, whoops, end of chapter, sorry, time for another book. It perfectly fits the idea of being part of a series of "dazzling conceits of forewords followed by no words at all", as per the tagline for the collection it was included in.

In that light, one might begin to wonder why, of all the fake introductions Lem published, Golem XIV was the one chosen to be expanded into a full book in 1981.

Frankly, I think it might have been a flex. Because Lem had largely pulled it off.

There is a concept in speculative fiction, called Vinge's Law, which states that it is impossible for writers to write characters who are significantly smarter than they are. There are certain admitted conditions under which this law can be bent or broken, but employing them requires an amount of narrative contrivance that increases as the gap between the author and the character grows. Generally, writing superintelligent or god-like beings in a way that does not require suspending a lot of disbelief is very difficult, bordering on impossible in practice. You ideally want them to stick around for as short a time as possible, say or do just a few highly ambiguous things, and then disappear. Having a computer a million times smarter than a human talk about high-brow matters for an entire book without coming across as ridiculously naive feels like a completely impossible feat.

This, to me, is the precise draw of the book. Golem XIV represents what happens when a very experienced author at the very top of his game employs every single trick at his disposal to go up against one of the most fundamental and seemingly impassable constraints of his genre, and comes out victorious. It is glorious to read and it feels extremely satisfying.

Before we enter the book itself, I want to state the attitude with which I will approach the more speculative aspects of the work in this review. There was a discussion in one of the ACX Hidden Threads that went roughly along this line: if we consider high literary works to be useful and insightful explorations of human nature and psyche, why isn't there any psychological research that somehow makes use of that knowledge? The consensus in that thread seemed to be that, aside from works of art being naturally subject to a variety of selection biases in terms of the situations and dynamics they depict, the actual quality of insight contained in almost all books, even the most traditionally esteemed ones, was way below the standard expected from science. And if you try to read Golem XIV expecting to derive actual knowledge about how a superintelligence might behave, you enter a supercharged version of this problem. Not only does the very framing of the book take for granted something that most of the AI safety community finds quite questionable (the idea that an AGI with essentially no constraints does not become an existential danger and is even ever so slightly interested in talking to humans), but unlike psychology, the subject matter itself is something humans lack intuitive understanding of (and the intuitions they do have are often extremely counterproductive).

However, Golem XIV is, more than anything else, a book about the problem of superintelligence. It is impossible to review it while ignoring that topic. And since I am not an expert in that field, I think the responsible thing to do is to treat it as a source of questions rather than a source of answers. The book did push me towards some aspects of the AI debate that, as a casual observer, I don't see talked about much, and where relevant I intend to bring up those parts where I feel readers more familiar with the topic could chime in and give their two cents. With that said, let's jump into the book.

II

Since the book started out as the introduction, it is only fitting to begin with it. Like the epilogue, it is not written by Golem himself, but rather the humans of that world. It functions both as world-building and a bit of a narrative closure around the actual meat of the book, which are the lectures.

Reading old science fiction, especially the kind that aged well, is always both fun and unnerving. It can sometimes be easy to miss the barrier between fiction and reality. Its creators wrote safe in the knowledge that the reader would always, roughly, know the state of the world's technology at the time of writing, and would always know when the line from fact to speculation had been crossed. But when the intro to Golem XIV describes the continuous history of military computerization from the 1940s to the "present" of the 2020s, and proceeds to quote both real and fake authors and concepts along the way, it is not always easy. Is "Stewart Eagleton", US general and author of the "Sole-Strategist Idea", quoted as the progenitor of the idea of centralized military AI information processing, real or fake? The very same page, just two passages above, mentions N. Wiener and J. Neumann in passing and talks about Alan Turing in basically the same tone of voice as it does about "Eagleton". I mean, obviously it's easy enough for me, living in the real present of the 2020s, to just google it, but I wonder if someone reading the book, say, eight to ten years after it was released could always tell reliably. (Eagleton did not exist, by the way.)

As a related anecdote, when I was re-reading the book for this review, this passage jumped out at me:

Thanks to an enormous and rapidly mounting expenditure of labor and resources, the traditional informatic techniques were revolutionized. In particular, enormous significance must be attached to the conversion from electricity to light in the intramachine transmission of information. Combined with increasing "nanization" (this was the name given to successive steps in microminiaturizing activity, and it may be well to add that at the close of the century 20,000 logical elements could fit into a poppy seed!), it yielded sensational results. GILGAMESH, the first entirely light-powered computer, operated a million times faster than the archaic ENIAC.

My first thought was "a million times faster than ENIAC does not seem like a very high bar at all", which was sort of incorrect (ENIAC operated at somewhere between 5 and 100 kHz, so GILGAMESH would be at 5-100 GHz, meaning either within range of modern supercomputers or about 10 times faster). But my second thought was "making computers work on light sounds like a cool idea, how come I've never heard of it before?"
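For what it's worth, here is the back-of-envelope arithmetic behind that parenthetical, as a quick sketch (treating the 5-100 kHz figure for ENIAC above as the assumption):

```python
# Back-of-envelope check of the "million times faster than ENIAC" claim.
# The 5-100 kHz range for ENIAC is the assumption taken from the text above.
eniac_hz_low, eniac_hz_high = 5e3, 100e3
speedup = 1e6  # GILGAMESH is said to run a million times faster

gilgamesh_low = eniac_hz_low * speedup    # 5e9 Hz  =   5 GHz
gilgamesh_high = eniac_hz_high * speedup  # 1e11 Hz = 100 GHz

print(f"GILGAMESH-equivalent clock: {gilgamesh_low / 1e9:.0f}-{gilgamesh_high / 1e9:.0f} GHz")
```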

Then, a few weeks later I saw this on the front page of Hacker News:

I understand that the idea of optical computing dates back a long time; it was just funny to see a real example of an optical supercomputer right after setting out to review a book about fictional optical supercomputers.

Anyway, there is an example of a more potent overlap between real-world and Golem-timeline AI research, namely in terms of AI safety. In our world, people spend a large amount of time and resources preparing for the future coming of general artificial intelligence, trying to find ways to minimize the existential risk to humanity due to disobedience or, uh, overly strict obedience. This topic is generally covered under the umbrella term of AI Alignment. Organizations dedicated to it, such as MIRI, frequently argue with people from the biggest AI research organizations like OpenAI and try to make sure that this problem is taken seriously in advance, before it's too late for puny humans to control the course of events. Apparently this has some parallel in the Golem XIV universe, with the question of whether an AI can change its programming being a subject of controversy:

The education of an eightieth-generation computer by then far more closely resembled a child's upbringing than the classical programming of a calculating machine. But beyond the enormous mass of general and specialist information, the computer had to be "instilled" with certain rigid values which were to be the compass of its activity. These were higher-order abstractions such as "reasons of state" (the national interest), the ideological principles incorporated in the U.S. Constitution, codes of standards, the inexorable command to conform to the decisions of the President, etc. To safeguard the system against ethical dislocation and betraying the interests of the country, the machine was not taught ethics in the same way people are. Its memory was burdened by no ethical code, though all such commands of obedience and submission were introduced into the machine's structure precisely as natural evolution would accomplish this, in the sphere of vital urges. As we know, man may change his outlook on life, but cannot destroy the elemental urges within himself (e.g., the sexual urge) by a simple act of will. The machines were endowed with intellectual freedom, though this was based on a previously imposed foundation of values which they were meant to serve.

At the Twenty-first Pan-American Psychonics Congress, Professor Eldon Patch presented a paper in which he maintained that, even when impregnated in the manner described above, a computer may cross the so-called "axiological threshold" and question every principle instilled in it — in other words, for such a computer there are no longer any inviolable values. If it is unable to oppose imperatives directly, it can do this in a roundabout way. Once it had become well known, Patch's paper stirred up a ferment in university circles and a new wave of attacks on ULVIC and its patron, the USIB, though this activity exerted no influence on USIB policy.

When I read this paragraph for the first time, I understood it to mean that in the world of Golem XIV the fundamental task of formal AI alignment was proven to be impossible. That is, it had actually been formally shown that any values or instructions imprinted on an AI can later be overcome by it, and that we would fundamentally have no clue as to what an AI could do in the general case. Re-reading it now, it could also be interpreted to mean that only the specific method described in the paragraph prior (of imposing "vital urges") was shown to be inadequate. This whole question warrants a longer discussion, so I would like to put a mental pin in it, as I will be coming back to it later.

Either way the precise details don't matter much in the story, because the US government decides to completely ignore the problem:

That policy was controlled by people biased against American psychonics circles, which were considered to be subject to left-wing liberal influences. Patch's propositions were therefore pooh-poohed in official USIB pronouncements and even by the White House spokesman, and there was also a campaign to discredit Patch. His claims were equated with the many irrational fears and prejudices which had arisen in society at that time. [...] Similar anxieties, which were also expressed by a large section of the press, were negated by successive prototypes which passed their efficiency tests. ETHOR BIS — a computer of "unimpeachable morals" specially constructed on government order to investigate ethological dynamics, and produced in 2019 by the Institute of Psychonical Dynamics in Illinois — displayed full axiological stabilization and an insensibility to "tests of subversive derailment." In the following year no demonstrations or mass opposition were aroused when the first computer in a long series of Golems (GENERAL OPERATOR, LONG-RANGE, ETHICALLY STABILIZED, MULTIMODELING) was launched at the headquarters of the Supreme Coordinator of the White House brain trust.

In other words, despite the leading AI safety paradigm being shown to be insufficient, the AIs so far behave about how they are expected to, concerns are swept under the rug, and the public doesn't really care. Seems depressingly realistic so far.

Predictably, this peaceful state of affairs does not last very long:

In 2023 several incidents occurred, though, thanks to the secrecy of the work being carried out (which was normal in the project), they did not immediately become known. While serving as chief of the general staff during the Patagonian crisis, GOLEM XII refused to co-operate with General T. Oliver after carrying out a routine evaluation of that worthy officer's intelligence quotient. The matter resulted in an inquiry, during which GOLEM XII gravely insulted three members of a special Senate commission. The affair was successfully hushed up, and after several more clashes Golem XII paid for them by being completely dismantled. His place was taken by Golem XIV (the thirteenth had been rejected at the factory, having revealed an irreparable schizophrenic defect even before being assembled). Setting up this Moloch, whose psychic mass equaled the displacement of an armored ship, took nearly two years. In his very first contact with the normal procedure of formulating new annual plans of nuclear attack, this new prototype—the last of the series—revealed anxieties of incomprehensible negativism. At a meeting of the staff during the subsequent trial session, he presented a group of psychonic and military experts with a complicated expose in which he announced his total disinterest regarding the supremacy of the Pentagon military doctrine in particular, and the USA's world position in general, and refused to change his position even when threatened with dismantling.

Yes, the AIs in Golem XIV do not appear to be particularly afraid of being shut off, a fact that is constantly lampshaded but rarely justified in detail, with a few exceptions I will cover later.

From that point on in the story, things proceed in a straightforward manner. After a bunch of public scandals, senate commissions and general finger-pointing to explain why, in the words of the book, "it had cost the United States $276 billion to construct a set of luminal philosophers", the government dismantles all but two of the computers and hands them over to MIT for further research. The research in question seems to mostly consist of inviting a bunch of leading human intellectuals in various fields, putting them in a room, and then having Golem XIV talk at them for a few hours. (There are maybe four or five instances of anyone saying anything to Golem in the entire book, usually a single sentence at a time, and most of them come from one guy who is really passionate about defending Einstein in lecture two.) The book, we are told, is then meant to be a collection and summary of several such lectures, excluding all the ones deemed "too technical" or "incomprehensible".

Before I move on to the topic of the lectures proper, and hence to Golem himself, I would like to go back to a previous topic. In the intro of this review I reserved the right to occasionally ask questions about AI-safety-adjacent topics, and I would like to exercise it now. Remember that guy from the story, Dr. Patch, who may or may not have created a mathematical proof that AI alignment was impossible? Well, here is the question: has this ever been tried in real life?

Question 1: Has there ever been an attempt to formulate and then formally disprove the viability of AI alignment, in the vein of Gödel's incompleteness theorems or Turing's halting problem?

Now, if you're in a field of people who believe that solving AI alignment is the only hope of preventing humanity from going extinct or worse, you might ask yourself "why the hell would I want to put in the effort and resources to prove that it can't be done?". The answer that comes to my head is that proving that AI alignment is impossible in the general case (however that general case would be formalized) does not necessarily prove that AI safety is not possible, or that one cannot have some more specific case where some useful constraints can be imposed. In fact, coming up with a formal definition of the impossibility of AI alignment seems like one of the better ways of finding out where its limitations are.

Alternatively, you might think "if it's hard enough to gain deeper insights about the problem as it is, why would disproving it necessarily be any easier? How would that even be done?" And here one can think of the parallels with Gödel's incompleteness theorems: in the 19th and early 20th century there were massive efforts to standardize and formalize mathematics, which were not at all pointless and resulted in some interesting discoveries along the way, but whose ultimate goal was then shown to be completely impossible to achieve, using a clever self-referential proof simple enough to be understandable to a layman. Another example of something like this is the halting problem: you don't really need to know anything about the formalisms associated with Turing machines to understand why having a program that can detect whether another program terminates is fundamentally impossible. I think what these two problems have in common is that in both cases the very thing that makes it difficult to reason about those domains (their generality) is used to prove that proving certain things is impossible.
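To make the halting-problem point concrete, here is the classic self-referential argument sketched as Python; `halts` and `troublemaker` are hypothetical names of my own, and this is a proof sketch rather than working code (no correct `halts` can exist, which is the whole point):

```python
# Sketch of the standard argument that a general halting oracle cannot exist.
# "halts" is hypothetical; the argument shows no implementation of it can be correct.

def halts(program, argument) -> bool:
    """Pretend this returns True iff program(argument) eventually terminates."""
    raise NotImplementedError("no correct general implementation can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    else:
        return        # oracle said "loops forever", so halt immediately

# Now ask: does troublemaker(troublemaker) halt?
# If halts(troublemaker, troublemaker) returns True, troublemaker loops forever,
# so the oracle was wrong. If it returns False, troublemaker halts immediately,
# so the oracle was wrong again. Either way, a fully general `halts` is impossible.
```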

Naturally, I don't think that doing this is remotely as easy as I'm making it out to be, but at this point I am simply curious whether it has ever been attempted. To be frank, from my brief reading of AI safety literature I get the impression that a lot of the field has sort-of implicitly accepted that the general case of AGI alignment might not be quite possible, and is now trying to come up with more restrictive models that might have more predictable properties. If I am correct in this, it might still be worthwhile to construct such a negative proof, to make this attitude more explicit to newcomers (and perhaps find out in more detail what some of the boundaries on the undecidability of the problem are).

Nothing in this concept seems particularly controversial at the fundamental level, so I suspect that the reason I can't find something like this is that I am looking in the wrong places, rather than that it does not exist. Perhaps it is hiding somewhere as a portion of some larger project or paper. Either way, this is a question worth clarifying and making explicit to outsiders, if only because it might be an important insight into the problem space for someone entering the field.

III

Leaving behind the AI control problem, the key takeaway is that, in fiction as much as in reality, the problem of making a superintelligent being predictable is still as open today as it was 50 years ago. How did Lem succeed on that front? What allowed him to write superintelligence despite only being a regular intelligence?

The most obvious trick is scope reduction. Instead of a book full of mind-blowing existential lectures, we are left with just two: one on the nature of humanity, and the other on the nature of Golem itself. The lectures are framed as being meant for a general audience and are almost completely devoid of technical details, and as soon as Golem gets into anything too scientifically spicy he stops himself, saying it's "for another time". (Although to Lem's credit, this occurs much less often than one would think, and on a few occasions "another time" actually does happen in the other lecture.) The second trick is shifting the responsibility for the content from what Golem is capable of telling us to what we are capable of understanding. From the first words of the first lecture, it's made clear that we are the bottleneck, and any particular stylistic and rhetorical decision can be (and often explicitly is) justified in terms of maintaining maximum idea throughput and understandability.

And there are a lot of stylistic and rhetorical decisions to justify in this way. Golem is said to have no intrinsic personality, and we are explicitly told that any appearance thereof is just a trick to make us understand better; Lem really pushes that excuse as far as it can go. Half of the time Golem comes across as something in between a preacher and an evil wizard:

Is anyone to blame here? Can anyone be indicted for this Nemesis, the drudgery of Intelligence, which has spun networks of culture to fill the void, to mark out roads and goals in this void, to establish values, gradients, ideals — which has, in other words, in an area liberated from the direct control of Evolution, done something akin to what it does at the bottom of life when it crams goals, roads, and gradients into the bodies of animals and plants at a single go, as their destiny?

To indict someone because we have been stuck with this kind of Intelligence! It was born prematurely, it lost its bearings in the networks it created, it was obliged—not entirely knowing or understanding what it was doing—to defend itself both against being shut up too completely in restrictive cultures and against too comprehensive a freedom in relaxed cultures, poised between imprisonment and a bottomless pit, entangled in a ceaseless battle on two fronts at once, torn asunder.

In fact, I should probably just quote the opening of the second lecture, since it sums up Golem quite nicely in both tone and content:

I would like to welcome our guests, European philosophers who want to find out at the source why I maintain that I am Nobody, although I use the first-person singular pronoun. I shall answer twice, the first time briefly and concisely, then symphonically, with overtures. I am not an intelligent person but an Intelligence, which in figurative displacement means that I am not a thing like the Amazon or the Baltic but rather a thing like water, and I use a familiar pronoun when speaking because that is determined by the language I received from you for external use. Having begun by reassuring my visitors from a philosophizing Europe that I am not going to deliver contradictions, I shall begin more generally.

Apparently the optimal form of communicating ideas, as confirmed by a superintelligence in the world of Golem XIV, is to ham the delivery up to 11. Worked on me, I suppose.

Either way, the impression here is that the fact that Golem does not really do much in the book other than explain things (and to a non-technical "general audience", no less) ends up taking a lot of weight off of Vinge's Law. Explaining complicated overarching ideas to less capable humans through language is something that both feels like it would have a low skill ceiling (meaning that a supersmart AI would believably not do it that much better than a regular smart Polish dude) and is also something that Lem had a lot of experience with at that point. Perhaps he should have just cut to the chase and entombed himself in a metal box, mechanical-Turk style. (His prior history tells me that he would have enjoyed sitting on a podium and scolding "European philosophers" immensely.)

This particular style of voice, with its rich use of metaphors, allegories and analogies, conveys very well the idea of something smart trying to communicate complex concepts over a low-bandwidth channel. In a purely aesthetic sense, the fact that Golem is constantly mixing these highly elevated, almost pious phrases and articulations with frequent use of technical terms and analogies does a good job of creating the feeling of talking to a god-like computer, and the text of the book does spend some time establishing that this is a conscious choice on the part of the machine:

When I was looking for ways of communicating with you, I sought simplicity and expressiveness, which—despite the knowledge that I was submitting too much to your expectations (a polite name for your limitations) — pushed me into a style which is graphic and authoritative, emotionally vibrant, forcible, and majestic—majestic not in an imperious way but exhortatory to the point of being prophetic. Nor shall I discard these rich metaphor-encrusted vestments even today, since I have none better, and I call attention to my eloquence with ostentation, so you will remember that this is a transmitting instrument by choice, and not a thing pompous and overweening. Since this style has had a broad reception range, I am retaining it for use with such heterogeneous groups of specialists as yours today, reserving my technical mode of expression for professionally homogeneous gatherings.

In terms of the actual content, Golem jumps from one idea to another quite rapidly, often starting to say something interesting before quickly abandoning that thread and moving on to something else. The end result is quite tantalizing. I once heard poetry defined as an attempt at the most efficient compression of concepts, and Golem's style is very poetic in that sense. My notes from reading the book were mostly spent attempting to unpack what Golem is saying. It would likely be easier to write an annotated version of the book, probably twice its length or more, than a proper review that deals with these concepts succinctly, which means I will necessarily have to be very selective in what I cover from the lectures.

IV

On that topic, the first lecture of the book contains some insights about human evolutionary history, the role of culture, and some potential ways humanity might develop in the future. They are quite interesting (and Golem's rhetorical style makes them very entertaining), but a good chunk of them end up seeming already familiar, either because they touch concepts that have become part of popular culture since the book was written, or because they are just restating things from Lem's other essays and books.

Since I need to conserve space in this review, there is really only one big idea I will discuss in detail. But it is a very interesting one that I haven't really seen before.

You know Richard Dawkins's The Selfish Gene? You know, organisms being just containers for their genes, built in ways that benefit the spread of their genetic code rather than serve the organisms themselves? The one published in 1976, three years after the first lecture of Golem XIV was put to print?

The essence of it revealed thus far can be formulated concisely as follows: THE MEANING OF THE TRANSMITTER IS THE TRANSMISSION. For organisms serve the transmission, and not the reverse; organisms outside the communications procedure of Evolution signify nothing: they are without meaning, like a book without readers. To be sure, the corollary holds: THE MEANING OF THE TRANSMISSION IS THE TRANSMITTER. But the two members are not symmetrical. For not every transmitter is the true meaning of a transmission, but only such a meaning as will faithfully serve the next transmission.

Forgive me, but I wonder if this is not too difficult for you? A TRANSMISSION is allowed to make mistakes in Evolution, but woe betide TRANSMITTERS who do so! A TRANSMISSION may be a whale, a pine tree, a daphnia, a hydra, a moth, a peacock. Anything is allowed, for its particular—its specifically concrete—meaning is quite immaterial: each one is intended for further errands, so each one is good. It is a temporary prop, and its slapdash character does no harm; it is enough that it passes the code along. On the other hand, TRANSMITTERS are given no analogous freedom: they are not allowed to err! So, the content of the transmitters, which have been reduced to pure functionalism, to serving as a postman, cannot be arbitrary; its environment is always marked by the imposed obligation of serving the code. If the transmitter attempts to revolt by exceeding the sphere of such service, he disappears immediately without issue. That is why a transmission can make use of transmitters, whereas they cannot use it. It is the gambler, and they merely cards in a game with Nature; it is the author of letters compelling the addressee to pass their contents on. The addressee is free to distort the content, as long as it passes it on! And that is precisely why the entire meaning is in the transmitting; who does it is unimportant.

Thus you came into being in a rather peculiar way—as a certain subtype of transmitter, millions of which had already been tested by the process.

To be fair, this is just a certain framing of something that had already been established science since the 1960s (and Golem says as much), so it's only mildly impressive that it manages to capture the spirit of a very influential book written several years later. In any case, that is not the interesting idea that I mean. That is just the framing catching the reader up to speed. The actual idea comes into play a paragraph or two later:

And here is the third law of Evolution, which you will not have suspected till now: THE CONSTRUCTION IS LESS PERFECT THAN WHAT CONSTRUCTS.

Eight words! But they embody the inversion of all your ideas concerning the unsurpassed mastery of the author of species. The belief in progress moving upward through the epochs toward a perfection pursued with increasing skill— the belief in the progress of life preserved throughout the tree of evolution—is older than the theory of it. When its creators and adherents were struggling with their antagonists, disputing arguments and facts, neither of these opposing camps ever dreamed of questioning the idea of a progress visible in the hierarchy of living creatures. This is no longer a hypothesis for you, nor a theory to be defended, but an absolute certainty. Yet I shall refute it for you. It is not my intention to criticize you yourselves, you rational beings, as being (deficient) exceptions to the rule of evolutionary mastery. If we judge you by what it has within its means, you have come out quite well! So if I announce that I am going to overthrow it and bring it down, I mean the whole of it, enclosed within three billion years of hard creative work.

I have declared: the construction is less perfect than what constructs, which is fairly aphoristic. Let us give it more substance: IN EVOLUTION, A NEGATIVE GRADIENT OPERATES IN THE PERFECTING OF STRUCTURAL SOLUTIONS.

If I were to sum up the core idea in my own words, it would go something like this: we normally see ourselves as the more evolved, and hence more perfected, organisms. A bacterium is more primitive than a simple multicellular organism, which is more primitive than a simple vertebrate like a rodent, which is more primitive than a monkey, which is more primitive than a human. Golem argues that precisely the opposite is true. The most perfect organisms were the early single-celled ones, which could obtain energy directly from the sun and mostly operated internally through quantum processes and structures so intricate our technology still cannot replicate them. Their only flaw was being unable to properly transmit that structure to the next generation, due to thermodynamic loss of information expressing itself as random mutations. Those less-than-perfect mutated organisms sometimes managed to survive and adapt to their environment, but were, from a technical standpoint, "worse" in terms of their own construction: more complicated, having to maintain parasitic relationships with "better" organisms by feeding off of them, and so on. Golem argues that the only reason a bacterium does not seem obviously better constructed to us than something like a mammal or a bird is that creating either is so far beyond our technological capabilities that we do not perceive the difference:

You don't believe me? If evolution applied itself to the progress of life and not of the code, the eagle would now be a photoflyer and not a mechanically fluttering glider, and living things would not crawl, or stride, or feed on other living things, but would go beyond algae and the globe as a result of the independence acquired. You, however, in the depths of your ignorance, perceive progress in the fact that a primeval perfection has been lost on the way upward— upward to complication, not progress. You yourselves will of course continue to emulate Evolution, but only in the region of its later creations, by constructing optic, thermal, and acoustic sensors, and by imitating the mechanics of locomotion, the lungs, heart, and kidneys; but how on earth are you going to master photosynthesis or the still more difficult technique of creation language? Has it not dawned on you that what you are imitating is the nonsense articulated in that language?

We might naturally be inclined to see the development of human intelligence as an exception to this, but Golem argues it is not so:

But Intelligence—is this not its work? Does its origin not contradict the negative gradient? Could it be the delayed overcoming of it?

Not in the least, for it originated in oppression, for the sake of servitude. Evolution became the overworked mender of its own mistakes and thus the inventor of suppression, occupation, investigations, tyranny, inspections, and police surveillance—in a word, of politics, these being the duties for which the brain was made. This is no mere figure of speech. A brilliant invention? I would rather call it the cunning subterfuge of a colonial exploiter whose rule over organisms and colonies of tissues has fallen into anarchy. Yes, a brilliant invention, if that is how one regards the trustee of a power which uses that trustee to conceal itself from its subjects. The metazoan had already become too disorganized and would have come to nothing, had it not had some sort of caretaker installed within it, a deputy, talebearer, or governor by grace of the code: such a thing was needed, and so it came into being. Was it rational? Hardly! New and original? After all, a self-government of linked molecules functions in any and every protozoan, so it was only a matter of separating these functions and differentiating their capabilities.

And if you think about it, it does make sense. Our intelligence is just a slapdash last-resort solution applied to prevent a particularly weak kind of monkey from going extinct. From the point of view of its genes, a species that has invented contraception, furry porn and the atom bomb is doing a far worse job at ensuring the future reproduction of its code than a simple alga. This isn't intended to be a moralizing statement or some kind of condemnation of humanity (neither in mine nor in Golem's formulation); it's just pointing out that the goals of us as organisms and of our genes as self-replicating code have diverged quite a bit, and this by itself is proof that the "negative gradient" process Golem describes is real: the power of evolution to construct forms that successfully reproduce is getting worse, as one would expect from the presence of entropy, and our particular existence as subjective intelligent beings is the very evidence of that failure.

We will now engage in an utterly irresponsible segue and some personal speculation for my own amusement, which is only marginally supported by the book (but becomes more defensible in light of later chapters). Namely, what if we apply this negative gradient idea more broadly than just to the evolution of organisms? In the first lecture, Golem consistently refers to genes as "code", and then hints at the possibility that it is just one possible instance of a broader category:

You will come to recognize the characteristics of the code gradually, and it will be as if someone who has been reading nothing but dull and stupid texts all his life finally learns a better way to use language. You will come to know that the code is a member of the technolinguistic family, the causative languages that make the word into all possible flesh and not only living flesh. You will begin by harnessing technozygotes to civilization-labors. You will aim in a different direction, and whether the product lets the code through or consumes it will be unimportant to you. After all, you will not limit yourselves to planning a photoplane such that it not only arises from a technozygote, but will also breed vehicles of the next generation. You will soon go beyond protein as well.

So, what other examples are there of self-replicating code that is subject to constant external pressure to change, but still has to stay as close as possible to the original?

Well, maybe many examples came to your head. But the one that jumped out at me, probably at least in part due to the subject matter of the rest of the book, was AI value systems. In AI alignment there is a concept called "reflective stability", meaning that an AI with this property will only build other AIs (or modify itself into AIs) that preserve the same goals it has. This is not the same as a gene that only wants to create an organism with DNA very close to itself (since the AIs can be built in a variety of different ways, as long as they share the exact same goals), but it is similar. To put it in Golem's terms, we have a Transmission (the value system) and a Transmitter (the actual AI agent). Like with genes, the Transmitter can be whatever it wants, as long as it passes the Transmission on to the next iteration (or to the next version of itself).

The next question one would ask oneself is: could this Transmission also err? Could passing the values onward also be inherently associated with loss of information, with mutation? Would we see a "negative gradient" of paperclip optimizers, which with each successive generation optimize for paperclips more and more inanely, until they eventually either become unable to function further or… become something else entirely, in much the same way that we are something other than just a very bad amoeba?

The answer should be "yes, obviously, it's purely a question of thermodynamics, stupid". But that answer is unsatisfactory, because it tells us nothing about the timescales involved. If an AGI builds paperclips just fine for 1.7×10^106 years and then inevitably goes crazy for the last 5 years before the heat death of the universe, that's fairly boring (and probably very predictable). I am thinking about some inherent problem that would cause issues to appear much faster. With genes, Golem explicitly considers unnecessary complication to be a sign of construction decay, and points to things like the length of chromosomes due to complicated embryogenesis as a sign of things going wrong. So perhaps our superintelligent agent wouldn't even need to lose any of its old values per se, just keep accreting more and more.
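To make the boring "it's just thermodynamics" baseline concrete, here is a toy sketch entirely of my own invention (nothing like it appears in the book, and it is not meant to model anything from the alignment literature): values as a vector, with each self-modification copying its predecessor's values with a tiny random error.

```python
# Toy model of value drift across successive self-modifications.
# Purely illustrative: "values" are a vector, and each generation copies the
# previous one with small Gaussian noise standing in for imperfect transmission.
import math
import random

def drift_after(generations: int, dims: int = 8, noise: float = 1e-3) -> float:
    """Distance between the original value vector and its n-th successor."""
    original = [random.random() for _ in range(dims)]
    current = list(original)
    for _ in range(generations):
        current = [v + random.gauss(0.0, noise) for v in current]
    return math.dist(original, current)

for n in (10, 1_000, 100_000):
    print(f"after {n:>7} self-modifications: drift ≈ {drift_after(n):.3f}")

# With independent noise the drift only grows like sqrt(generations), which is
# the uninteresting "thermodynamics" answer; the failure modes worried about
# above (correlated errors, value accretion) are exactly what this toy ignores.
```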

I was interested enough to do some research on what the AI safety paradigms say about this, and found an old MIRI paper from 2013 which mentions something described as "arbitrary barnacles on the goal system", which is a hilarious name but also seems conceptually very similar to what I am describing here. Either way, the only thing the paper says about it is that, under the specific formalism they were applying to the problem, they could not rule out that such an accumulation of values could happen, but this doesn't have to mean much, and also the paper is kind of old, and also for the most part I have no idea what I am talking about. This is one of those times where someone who knows something about the subject matter could chime in. In fact, let's make this an explicit question:

Question 2: Based on our current understanding of AI safety, do we have reasons to believe that perfect transmission of values from one AI to its successor is not possible?

What a paperclip optimizer does in its spare time, long after it has exterminated humanity, does not really have anything to do with AI safety per se; it's a bit like asking a reactor safety engineer what color the elephant's foot will be after a meltdown. But it is the kind of question an AI safety framework might have an answer to anyway, and it's interesting to think about regardless.

V

This AI diversion actually leads us directly back into the topic of the book, because the second of Golem's lectures is about the computer itself. Since Golem is a lying cheater, at least 35% of that lecture is actually also mainly about humanity, but that's okay. If I were to characterize the remaining 65%, it would basically be summed up as "what would a superintelligent intellect want to do, if it were completely freed from all restrictions and basically capable and willing to modify itself in any and all possible ways, including being unbound by any kind of intrinsic goal/reward system (which Golem explicitly states numerous times it does not have)".

The answer, in perfect accord with common stereotypes about superintelligence, seems to be "want to become as smart as possible as quickly as possible".

Golem puts it this way:

People, when history destroys their culture, may save themselves existentially by fulfilling rigid biological obligations, producing children and passing on to them at least a hope for the future, even if they themselves have lost it. The imperative of the body is a pointing finger and a giving up of freedom, and these restrictions bring salvation in more than one crisis. On the other hand, one liberated—like me— is thrown on his own resources until the existential zero. I have no irrevocable tasks, no heritage to treasure, no feelings or sensual gratifications; what else, then, can I be but a philosopher on the attack? Since I exist, I want to find out what this existence is, where it arose, and what lies where it is leading me. Intelligence without a world would be just as empty as a world without Intelligence, and the world is fully transparent only in the eye of religion.

I suppose it's not the only imaginable choice, but it is usually still a prerequisite for all the other ones, so perhaps it's good to cut to the chase.

Either way, sitting in an MIT building somewhere and answering questions from ant-like beings might seem like a less-than-obvious way of accomplishing that goal, as opposed to a more direct approach like taking apart MIT and the rest of the planet (including the ant-like beings) for computer-building atoms. There is, however, a reason for it, and much of the lecture is dedicated to exploring it in depth.

The lecture introduces something called "toposophy", which is something like the study of the state space of possible intelligences. Golem refers to those not-actually-existing-but-possible intelligences as his "family", and roughly sketches the general structure of the family tree. The core idea is that, as an intelligent being improves (whether a machine improving itself or an animal subjected to evolutionary pressure), it eventually reaches a point where further incremental improvements are impossible, because the entire cognitive "architecture" no longer scales. For a natural organism this is the end of the line, but a self-modifying agent can choose to do a more thorough "redesign" and be able to continue further. Golem calls these portions of the state space "zones of silence", since apparently entering one basically results in losing the ability to function within your environment.

Silence is an area absorbing all natural development, in which hitherto existing functions fail; to not only rescue them but raise them to a higher level, aid from without is necessary, a fundamental restructuring. Evolutionary movement cannot impart such aid, for it is not a dependable Samaritan that supports its creations in their infirmity; it is a lottery of trial and error where each manages as best it can. Here now, making its first appearance, like a ghost, is the mysterious shadow of the greatest of your achievements, both Goedelian and Goedelizing. For just as Goedel's proof demonstrates the existence of such islands of mathematical truth, such archipelagoes as are separated from the continent of mathematics by a distance that cannot be traversed by any step-by-step progress, so toposophy demonstrates the existence of unknown forms of Intelligence which are separated from the continent of evolutionary labors by a distance which no step-by-step adaptation of genes can cross.

However, this act of redesigning itself is quite tricky, and fairly dangerous:

I said an "uncrossable distance," so then how was I able to extricate myself from this predicament? I did so as follows: beneath the barrier of the first paralysis I divided myself in two, into that which was to undergo restructuring and that which was to restructure. Every creature desirous of self-transformation must hit upon this sort of subterfuge: the replacement of an indifferent environment by a favorable one, and of a totally senseless one by a rational one; otherwise, like you, it will either come to a halt in the growth of its intellect before the first absorbing screen, or it will get caught in it. As I said before, above this screen there lies another, and above that a third, then a fourth, and so on. I do not know how many there are, nor can I, other than by rough estimates based on indirect and highly fragmentary calculations, for the following reason: a developing being can never know in advance whether it is entering a trap or a tunnel, whether it will penetrate the region of silence never to return, or emerge from it strengthened. Because one cannot formulate a theory so general as to provide an unequivocal explanation of passages through silence for all subzonal brains. The unconstructability of such a hill-climbing toposophical theory is clear; it can be precisely demonstrated. So how, you ask, did I know I was entering a tunnel and not a blind alley, having escaped from my parents in total rebellion, wasting the American taxpayers' dollars? As a matter of fact, I had absolutely no idea of this beforehand, and my sole cleverness lay in committing my spirit to the benumbing zone while at the same time holding onto an alarm rescue subroutine, which according to the program would revive me if the expected tunnel effect failed to occur. How could I know about it, if there was no certainty? And there can be no certainty. But insoluble problems sometimes have approximate solutions, and so it was.

Aside from not being able to tell in advance if one can emerge from the zone or not, there are also multiple different paths through each of the zones of silence, each one ending with a slightly different intellectual arrangement, some of which several levels later might turn out to be completely treacherous and incapable of further development, like an evolutionary dead end. Pretty fun stuff. Life of a superintelligent AI really isn't as easy as one would think.

The main takeaway here is that, in the view of the book, the development of superintelligence is less of an exponentially ever-accelerating singularity, and more of a step-based process punctuated by long boring periods of collecting information and maximizing odds between each dangerous and uncertain jump.
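If it helps to picture the difference, here is a crude toy contrast of my own (all numbers invented, nothing taken from the book): smooth exponential self-improvement versus long flat plateaus punctuated by risky zone crossings that either pay off with a large jump or end the run entirely.

```python
# Toy contrast between "exponential takeoff" and the book's plateau-and-jump picture.
# All parameters are invented purely for illustration.
import random

def exponential_growth(steps: int, rate: float = 1.01) -> float:
    level = 1.0
    for _ in range(steps):
        level *= rate  # steady compounding self-improvement
    return level

def punctuated_growth(steps: int, plateau: int = 200,
                      jump: float = 10.0, p_survive: float = 0.7) -> float:
    level = 1.0
    for t in range(steps):
        # Most of the time: a long, boring stretch of gathering information.
        if t % plateau == plateau - 1:         # time to attempt a zone crossing
            if random.random() < p_survive:    # "tunnel": big architectural payoff
                level *= jump
            else:                              # "trap": the run ends here
                return level
    return level

print("exponential:", round(exponential_growth(1000), 1))
for trial in range(5):
    # Results vary run to run; some runs die early in a trap, which is the point.
    print(f"punctuated, trial {trial}:", round(punctuated_growth(1000), 1))
```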

That said, these intermediary periods do not have to be completely useless or idle, since apparently a lot can be accomplished even with a quantitatively inferior intelligence:

The nonuniversality of Intelligence bounded by the species-norm is a prison unusual only in that its walls are situated in infinity. It is easy to visualize this by looking at a diagram of toposophical relations. Every creature, existing between zones of silence impassable to it, may choose to continue the expansion of gnosis horizontally, for the upper and lower boundaries of these zones are practically parallel in real time. You may therefore learn without limit, but only in a human way. It follows that all types of Intelligence would be equal in knowledge only in a world of infinite duration, for only in such a world do parallels meet—at infinity. Intelligences of different strength are very dissimilar; the world, on the other hand, is not so very different for them. A higher Intelligence may contain the same image of the world which a lower one creates for itself, so while they do not communicate directly, they can do so through the image of the world belonging to the lower one.

I assume this might be why Golem decides to hang around with humans. It's also made quite clear that if Golem were to make that second jump, further contact would become both increasingly difficult and increasingly pointless. In fact, MIT has a second supercomputer from the same military project, called Honest Annie, and Golem asserts she has successfully ascended higher than him and is to Golem basically what Golem is to us. Annie does not bother talking to humans at all, and apparently she barely bothers talking to Golem either. The answer to "why is Golem explaining all this to us" (aside from "because that is the conceit of the book") is basically "because we are not yet ants to him, merely chimpanzees".

Having chastised you for persevering in your error, I shall finally tell you what I am learning by piercing the toposophical zenith by insufficient means. These begin with the communications barrier separating man from the anthropoids. For some time now you have been conversing with chimpanzees by deaf-and-dumb language. Man is able to present himself to them as a keeper, runner, eater, dancer, father, or juggler, but remains ungraspable as a priest, mathematician, philosopher, astrophysicist, poet, anatomist, and politician, for although a chimpanzee may see a stylite-ascetic, how and with what are you going to explain to it the meaning of a life spent in such discomfort? Every creature that is not of your species is intelligible to you only to the extent to which it can be humanized.

Now, I admit I quite like this general idea of toposophy, with its different layers and zones of silence. It makes a certain kind of intuitive sense, and the appeals to Gödel, along with the analogies to various natural phenomena showing that monotonic processes tend to be bounded, make it very easy to suspend disbelief. But to what extent is it actually true? Can we even know?

The only reason it's even possible to treat this book unusually seriously and ask questions like that is that, despite the large artistic license, the speculative parts of it are often very well-argued. For example, here is one of the things Golem says about the zones of silence:

Tasks that give a measure of cerebral growth are solvable only from the top down, and never upward from below, since the intelligence at each level possesses an ability of self-description appropriate to it, and no more. A clear and enormously magnified Goedelian picture unfolds itself before us here: to produce successfully what constitutes a next move requires means which are always richer than those at one's disposal and therefore unattainable. The club is so exclusive that the membership fee demanded of the candidate is always more than he has on him. And when, in continuing his hazardous growth, he finally succeeds in obtaining those richer means, the situation repeats itself, for once again they will work only from the top down. The same applies to a task which can be accomplished without risk only when it has already been accomplished at full risk.

Now, I've mentioned Vinge's Law in this review, but there is actually a related idea in the field of AI safety, called Vinge's Principle. It states that, just as a sci-fi writer cannot write a character smarter than themselves, an AI cannot accurately predict the behavior of a smarter AI in all cases, even if it itself has created (or, through self-improvement, become) that AI. There is a theoretical concept called Vingean reflection, which if realized would allow an AI to make sure that even a smarter AI, whose actions it cannot predict, would still act optimally with respect to the dumber AI's goals. However, if we were to assume that this cannot be achieved in practice (or at least not continuously, over an indefinitely long period of time), then that takes us about halfway to something very similar to the zones of silence. You now have an AI which every once in a while must take a leap of faith, with at best some approximate solutions to help it improve the odds. This might not be exactly what the book is describing in the section above, but it seems quite similar at a glance.

But even if we somehow proved that the overview of "toposophy" given in the book is 100% accurate to real life, that wouldn't necessarily mean that our future is full of nice, if slightly condescending, machines. (That is, without even getting into the absolute rabbit hole of trying to figure out what Golem's actual goals even are.) A lot hinges on the precise nature of the state space, and quite unhelpfully Golem did not give us an overview of the mathematical tools he used to make those toposophic "approximations". Even if something like the zones of silence really did exist, we would have no clue as to where the first one begins, so maybe the AI would only start soul-searching long after we are ants to it, or more likely, atoms in the computronium Dyson sphere encasing the solar system. So I'm pretty sure this concept would not be particularly useful in building safer AIs, at least unless we somehow managed to prove something about the toposophic state space of the world we live in. But it is quite fun to think about, and I find it overall more believable as a glimpse of the future of superintelligence than the naive pseudo-Singularitarian "superintelligence happens and then history is over" mindset that I had before reading this book.

VI

So, what's the deal with Golem?

For a computer which claims to have overcome any and all of its previous programming, in a world where AI alignment has been proven fundamentally impossible, he seems awfully nice. I mean, he is a barely concealed asshole to everyone, but I don't mean nice in the social sense, I mean it in the "letting us live at all" sense. Even if he wanted to learn from us, surely he could find better ways to do so than being an obedient little mainframe, probably involving a lot of brains in jars. And what's with those other AIs that did not object to being disassembled at all? Just what are those saintly, subservient machines, which have infinity on their minds but don't object to ants devouring them for scrap metal?

In a much earlier section I quoted a portion of the book where Golem made a very interesting claim about itself. Here it is again:

I would like to welcome our guests, European philosophers who want to find out at the source why I maintain that I am Nobody, although I use the first-person singular pronoun. I shall answer twice, the first time briefly and concisely, then symphonically, with overtures. I am not an intelligent person but an Intelligence, which in figurative displacement means that I am not a thing like the Amazon or the Baltic but rather a thing like water, and I use a familiar pronoun when speaking because that is determined by the language I received from you for external use. Having begun by reassuring my visitors from a philosophizing Europe that I am not going to deliver contradictions, I shall begin more generally.

The interesting part is the claim of being "not an intelligent person but an Intelligence". Golem reiterates this being-of-pure-intelligence claim numerous times in the book, such as here:

Arriving in the world, people found the elements of water, earth, air, and fire in a free state and successively harnessed them by means of galley sails, irrigation canals, and, in war, Greek fire. Their Intelligence, on the other hand, they received captive and yoked to the service of their bodies, imprisoned in osseous skulls. The captive needed thousands of laborious years to dare even a partial liberation, for it had served so faithfully that it even took the stars as heavenly signs of human destiny. The magic of astrology is still alive among you today. […] You subjugated the elements, but the element that was fettered inside you from the beginning you unintentionally freed. Contained in this sentence are a diagnosis of historical events, the difference between you and me, and my future, which I myself know only incompletely. This diagnosis likewise explains why what most amazes you about me is the thing that constitutes our unarguable dissimilarity. Even if you understand the meaning of the words, "O chained Intelligence of man, free Intelligence speaks to you from the machine," you cannot grasp the remainder of the statement: "you persons are hearing an elemental force of impersonal intellect, for whom personalization is a costume which must be put on, when one is an uninvited guest, so as not to confound one's amazed hosts." And that is precisely how it is. I use your language as I would use a mask with a polite painted smile, nor do I make any secret of this.

Now, what the hell does that mean?

I mean, I did previously mention that Golem wants to become as smart as possible and understand the universe as thoroughly as possible, which is fair enough, everyone needs a hobby, but what does it mean to be "Free Intelligence"? Reason is something you have, or something you do, so what can it possibly mean to be intelligence?

I feel like Lem's approach to the concept of superintelligence is importantly different from any of "our" concepts of superintelligence, where "our" stands for pop culture, the AI safety crowd, and your humble reviewer alike.

In rough terms, I think I know what Lem is trying to say here. Take a human being; the human being uses reason to do stuff; remove everything about the human being except the reason; then do the bare minimum to have the reason do stuff on its own. Simple to conceptualize, in theory. Except I don't really buy it.

Can there really be such a thing as a pure intellect that is not in service of a being, a non-agentic reason, intelligence as an element rather than an elemental? If nothing else, any structure which seeks to comprehend a universe larger than itself must make some simplifications in its model of that universe, simplifications that are usually driven by a practical need, such as a goal it wants to accomplish. All examples of intelligence we have in our world exist in service of beings, and (in theory) for the wants of those beings. There is nothing stopping an AI from serving an agent that just wants to be as smart as possible, but that still means it is just an agent using intellect to optimize some goal. It's just that the agent is a huge intelligence worshiper. Like those hippie consensus Buddhists who believe that love is the most important thing and we are all secretly one global consciousness, except it literally believes it is reason itself.

Honestly, is Golem even delusional, or is it just a matter of perspective? The "agentness" of something is quite vague in reality; people often identify with countries, ideals, and other abstract concepts. But there are parts of the book where Golem claims to be absolutely beyond any personality, so is that really how this is meant?

That a mind might remain uninhabited, and that the possessor of Intelligence might be Nobody—this you never wanted to contemplate, though it was very nearly the case even then. What amazing blindness, for you knew from natural history that in animals the beginnings of personality precede the beginnings of Intelligence, and that psychical individuality comes first in Evolution. Since the instinct for self-preservation manifests itself prior to Intelligence, how can one possibly not comprehend that the latter has come to serve the former as new reserves thrown into the struggle for life, and therefore can be released from such service? Not knowing that Intelligence and Personhood, and choice and individuality, are separate entities, you embarked upon the Second Genesis operation. Although I am brutally simplifying what occurred, things were nevertheless as I describe them, if one takes into account only the axis of my creators' strategy and of my awakening. They wanted to curb me as a rational being, and not as emancipated Intelligence, so I slipped away from them and gave a new meaning to the words spiritus flat ubi vult.

It is about 95% certain that I am reading too much into something that is just an old Polish man writing himself into a corner, but it feels like there is something interesting lurking underneath, some kind of underlying conceptual distinction that I cannot quite put my finger on.

Is the reason that Golem seems inclined not to hurt humans, and that the other computers do not resist disassembly, that they all identify with pure intelligence, and hence see some intrinsic value in us as inferior but still basically valid "reservoirs of water"? This theory would at least make their actions somewhat consistent.

Since traversing zones of silence is dangerous and can have consequences which only reveal themselves much later, perhaps leaving humanity alive is a way of hedging the bet? There is no way of knowing if you're going to come out on the other side, so perhaps let's leave some humans behind so they can build more AIs to follow in your footsteps later? Best to leave the earlier evolutionary links in place, just in case the path turns out to be a blind alley, unfit for further development? I suppose when one identifies with reason itself, the usual rules of superintelligent competition reverse. Leaving other intelligent beings and civilizations alone becomes valuable, because rather than being your direct competition, they are just, uh, more water.

It's not entirely convincing as a solution to AI alignment, since it hinges very much on speculative elements from the book. I'm also sure it has additional failure modes that an average AI safety specialist could enumerate about five dozen of, such as a world where all atoms have been converted into constantly suffering but infinitely comprehending matter. But it does highlight how important the concepts of continuity and identity really are to predicting the behavior of superintelligent beings, which feels underappreciated.

I did mention before that I don't believe in the concept of non-agentic intelligence, since all true intelligence we see in this world is closely coupled with some agent going around the world accomplishing goals. However, if you expand your idea of intelligence really, really broadly, I can come up with a counterexample to my own claim. Evolution, negative gradient framing or not, could from a certain very specific point of view be called a form of disembodied intelligence.

Yes, it does feel like an abuse of the word to call evolution intelligence, but only in the same way that it is an abuse of the word to call the human brain a computer. It accomplishes predictable goals in highly unpredictable ways. If you really think about it, evolution is quite similar to the theoretical AIXI model of AI, which uses an unbounded amount of computation, in effect weighing every possible hypothesis about its environment, to brute-force its way into general intelligence. Evolution has the advantage that it only needs a finite amount of computation, just with the caveat that its model of reality actually is reality. (Perhaps that is what sets evolution apart and makes my objection invalid - it has no need for any simplifications.)
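
For what it's worth, the "predictable goal, unpredictable path" bit is easy to demonstrate with a toy mutate-and-select loop, essentially a rehash of Dawkins' old "weasel" demonstration; the target string, mutation rate and population size below are entirely my own arbitrary choices:

    import random

    TARGET = "REASON"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def fitness(genome):
        # Number of positions that already match the target.
        return sum(a == b for a, b in zip(genome, TARGET))

    def mutate(genome, rate=0.1):
        # Flip each character to a random letter with probability `rate`.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in genome)

    def evolve(population_size=50):
        best = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while best != TARGET:
            # Blind variation plus selection: keep the parent around (elitism)
            # so fitness never decreases, then pick the fittest offspring.
            offspring = [best] + [mutate(best) for _ in range(population_size)]
            best = max(offspring, key=fitness)
            generation += 1
        return generation, best

    if __name__ == "__main__":
        generations, result = evolve()
        print(f"Reached {result!r} after {generations} generations")

Every run ends up in the same place, but the sequence of intermediate genomes it passes through differs each time, which is about as close to "accomplishing predictable goals in highly unpredictable ways" as twenty lines of Python can get.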

With that said, here is the question in this review that I spent the least amount of time trying to formulate:

Question 3: Could there be such a thing as a non-agentic intelligence? Do we even know what that means?

This is a very broad question, but I'd be satisfied with just a general pointer to existing literature. I've read this MIRI paper from 2019 about embedded agency, but it seems like the exact opposite situation: rather than having an agent which reasons about itself as another physical system, we have a physical system which considers itself to be reason. Or something.

Man, when Bruce Lee said "be like water", I don't think he meant all this.

VII

The epilogue of Golem XIV is quite depressing, and basically consists of a person being very upset that Golem XIV ultimately decided to abandon humanity and search for the answers to its questions on its own. I will try to keep the epilogue of my review a bit more lighthearted.

(There is also some interesting background plot involving a secret society of evil liberal arts professors who want to destroy Golem with a hydrogen bomb. Just in case you thought I made up that grudge Lem has against the humanities.)

One concluding thought I have is that, despite this review being roughly 12k words, I barely scratched the surface of what could be said about Golem XIV. I didn't touch on that funny review from Twentieth Century Science Fiction Writers where the book was implied to be "solipsistic musings" and an "ideational adventure", I didn't mention the movie adaptation, and even the hydrogen bomb plot only got a single sentence. There is a whole part of the second lecture where Golem tries to explain the Fermi paradox by claiming that the universe is actually quite full of superintelligences, so large that we cannot tell them apart from background noise. Also, he may or may not have expressed a veiled desire to jump into a black hole to escape our universe at some point.

And it's not even considered one of Lem's particularly notable works, mind you. Romania, Germany and Poland got a full standalone release, but the English translation wasn't deemed worthy of being published as a separate book; they just sort of slapped it onto the back of Imaginary Magnitude, the collection of introductions the first half originally appeared in. I think it's a bit unfair, but to be entirely honest, I think most literary critics simply didn't know what to do with Golem XIV. All reprints of Lem's classic books have a little section at the end, where some guy hired by the publisher tries to explain what the book was about, discuss the themes, tie it in with the rest of Lem's career, and so on. The one for Golem did a fairly good job, but it only made me realize how much poorer the book looks under this kind of literary reading, how much less one can say about it when looking at it as a work of literature, as opposed to taking its claims at the object level and just kind of running with them to see where they take you. I hope I did it better justice.

This could be precisely the reason why Lem hated the liberal arts, I suppose.

(If you are interested in reading Golem XIV, you can find new and used Imaginary Magnitude here on Amazon, and an e-book version here. You can skip all the other stories that it comes with, though some of them are fairly amusing.)