
Inventing Temperature: Measurement and Scientific Progress by Hasok Chang

I shall only add, that whereas the usual Thermometers with Spirit of Wine, do some of them begin their Degrees from a Point, which is that whereat the Spirit stands when it is so cold as to freeze Oil of Anniseeds; and others from the Point of beginning to freeze Water: I conceive these Points are not so justly determinable, but with a considerable Latitude: And that the just Beginning of the Scales of Heat and Cold should not be from such a Point as freezes any Thing, but rather from Temperature, such as is in Places deep under Ground, where the Heat of the Summer, or Cold in Winter, have (by the certain Experiment of the curious M. Mariotte, in the Grottoes under the Observatory at Paris) been found to have no Manner of Effect.

Edmond Halley in the “Philosophical Transactions, Giving Some Accompt of the Present Undertakings, Studies, and Labours of the Ingenious in Many Considerable Parts of the World, Volume 17” - Royal Society of London, 1694

Science advances. That seems as incontrovertible a statement as can be (well, in some circles more than others). And gladly so, too, given how strong the correlation is between our knowledge of the natural world and our overall quality of life. And yet, something seems off about the whole thing. The Logical Empiricists’ boisterous claims of an autonomous science, free from all the murky metaphysics of the past, soon met with what appeared to be insurmountable issues of circularity or outright inconsistency. The whole barrage of constructionist thought (what most people are referring to when they speak of ‘post-modernist truth-hating’) that marked the latter half of the twentieth century seems to be built on the failure of these champions of science to set the theoretical edifices of their disciplines on solid ground. The cynical among us might be tempted to dismiss the whole thing as a waste of time: after all, history has shown time and time again that science just works, regardless of whether we can safely say we’re approaching ‘truth’ (whatever that is) or whether the whole thing is a house of cards that’ll inevitably come tumbling down in the future - and that’s more than we could ask for, really. Except when it doesn’t work that well, I guess. “What’s important isn’t being able to make sense of the theory, it’s our ability to derive practical applications from it.” Or not (though, perhaps?)

One can’t help but feel frustrated that, after all the hard work we’ve put into understanding reality, after all the major advances made, we’re still at a loss when it comes to thinking systematically about science and what qualifies as such, especially given the mounting attacks it's been getting from all sides (see "entirety of 2020" for reference). What are we to do? Hasok Chang offers a possible answer.

In his seminal work of virtuoso history-cum-philosophy, "Inventing Temperature: Measurement and Scientific Progress, Oxford Studies in the Philosophy of Science" (New York: Oxford University Press, 2004), Chang paints a fascinating picture of the surprisingly obscure and meandering origins of the simple thermometer, using it as a backdrop for expositing his own thoughts about how (and why) science works, not in spite of the many pitfalls it gets itself into but, in a sense, because of them. It's a refreshingly realist and positive (not 'positivist', the other kind) take on the field, one that doesn't shy away from the criticisms of the likes of Kuhn and Feyerabend - the "big guns" of anti-realism - yet serves as a very conservative extension of operationalist scientific practice overall. Most of all, though, whether it was the author's intention or not (I'd most definitely assume not), it makes a curious case for some very 'out-there' ideas regarding the role of purely theoretical considerations in solving scientific issues when mere empirical observation and exhaustive data-gathering may fail to do so. I'll explore these in the epilogue. But first, let's talk about heat.

It's getting hot in here - or is it?

Chang's book is structured in six chapters. The first four are each devoted to a specific episode in the development of thermometry (respectively: the creation of thermoscopes, the setting up of temperature scales, the extension of these scales beyond their usual domains, and the achievement of an absolute scale on the basis of thermodynamics), with both historical background and analysis segments; the last two are dedicated to his own musings about the ways the whole experience can inform our attitudes toward scientific methodology, and to the merits of epistemic iteration (chapter 5) and of the practice of complementary science (chapter 6) as an auxiliary source of insight for the professional scientist. I'll cover most of the book's themes, but delve more deeply into the "lore" behind chapters 1 and 2 here; not because the others are any less interesting, but because the former address the main conceptual issues that'll form the core of Chang's argument for the entirety of the work, and so naturally demand a bit more background to follow.

The first thing that struck me is how big a challenge it was to establish even the most basic facts we now take for granted when talking about temperature. The list of minds that pored over the challenge of establishing a reliable gauge for heat and cold reads like a roll call of the most eminent thinkers in the West from the 17th to 19th centuries: Galileo, Boyle, Huygens, Hooke, Hutchins, Newton, Halley, Lavoisier, Laplace, Faraday, Rumford, Dalton, Carnot, Joule, Helmholtz, and of course, Thomson (better known as Lord Kelvin). The main issue, as it soon becomes evident, is one of solid foundations: the concept of temperature is such a basic, fundamental one that it's hard to define it without having to resort to something like it in the first place.

Imagine you're an aspiring scientist in pre-modern times. You have an intuitive grasp of warm, cold, and degrees thereof, but know it's wholly subjective and internal, meaning you can't extend it beyond the realm of your particular experience. You'd like something a bit more robust, preferably an objective measure that is independent of any single person's perception, but how do you go about establishing it? The first thing you'll need is a material resistant enough to withstand a significant thermal amplitude without losing its integrity and responsive enough that it'll vary in some physical dimension with every increase in heat content. Another, more subtle point is that this variance should be monotonic with respect to heat: it should only increase with a corresponding increase in heat content and decrease otherwise, with no admissible third behavior. You probably see the issue: isn't that precisely the meaning of temperature? How can we be sure that the heat content being measured is increasing or decreasing without resorting to a thermometer in the first place? We've fallen into our first circularity trap. Another problem we'll soon encounter: there's actually a host of different materials and implementations which provide us with the properties we're looking for, but they all differ in their characteristics and don't usually agree with one another's readings. So, how do we choose among them?
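To make the monotonicity requirement concrete, here's a small sketch of my own (not from the book; the numbers are schematic) that checks whether a candidate thermoscopic property only ever moves in one direction as heat is added. Water's volume, with its well-known contraction between 0 °C and 4 °C, fails the test in exactly the way that makes a substance useless as a working fluid:

```python
def is_monotonic(values):
    """True if the sequence never reverses direction."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    return all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)

# Relative volume of a fixed mass of water at 0, 2, 4, 6, 8 degrees C (schematic values):
water_volume = [1.00013, 1.00003, 1.00000, 1.00003, 1.00012]
# Relative volume of mercury over the same range (schematic values):
mercury_volume = [1.00000, 1.00036, 1.00072, 1.00108, 1.00144]

print("water usable as a thermoscopic fluid?  ", is_monotonic(water_volume))    # False
print("mercury usable as a thermoscopic fluid?", is_monotonic(mercury_volume))  # True
```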

The first stroke of genius was to use noticeable cutoff points - like phase transitions of the materials under consideration - to establish lower and upper bounds of temperature. These would act like fixed-points, or boundary conditions, and provide the most important element of a scale: ordinality. A scale is ordinal if elements follow a natural progression, or order. The natural numbers obey a strict order - if a is bigger than b and b is bigger than c, a will necessarily be bigger than c, because of the way the successor function works. Scientists had reason to believe that temperature followed a similar pattern precisely because of the existence of these fixed-points, represented by phase transitions: water in a solid state will never boil before passing first through a liquid stage, and vice-versa (though the same isn't true for other substances); hence, it's natural to assume that whatever is taking it from one phase to the next is a monotonic progression of heat, which played in favor of the ascendancy of water as the go-to reference material - ie, the one absolute temperature is defined against. Already, though, and despite the resounding success of the approach, we can't help but imagine what if nature had been different: what if temperature didn't increase monotonically with heat content? What if it evolved instead in a closed cycle - like a clock - or according to a partial order with regards to its phase transitions (to the Pokemon aficionados out there: think Eevee's evolutions instead of Pikachu's)? Worse still: temperature might not even be a function of a single variable (in this case, heat content), but a quantity depending on so many parameters that any macroscopic attempt to define it, other than painstakingly measuring all the microscopic 'heat' in a body, would be pointless. You might be thinking this is all baseless, after all we don't live in any of these worlds; and yet, there are many properties in our world that don't scale so nicely from micro to macro as temperature or charge do. Think spin, for instance. Imagine a world in which we could macroscopically determine a compound's "spinness" by simply taking a look at it - what a dream world that would be! Chemistry would be nearly trivial. The thing is: we were lucky that temperature turned out to be so amenable to our methods of inquiry up to that point, as it was in no way guaranteed that it should be.

The best example of that comes from the very difficulty of fixing a fixed-point - as it turned out, they weren't as 'fixed' as the name suggests. To begin with, they depend on more than temperature: it had already been known, for instance, that pressure affects the temperature at which water melts or boils. But the problems didn't stop there: there are at least two substances we need in order to define temperature - a reference one (usually water) and the thermoscopic one (the one we make our thermometers with) - and there were different materials vying for that coveted second spot (such as mercury and "spirit", aka ethyl alcohol). The trouble is, even the reference-thermoscopic combinations seemed to behave completely differently depending on the most random circumstances, with minute details such as the material of the enclosure (whether glass or metal), the cleaning method employed or the presence of air altering the corresponding phase transitions under the same heat source (spoiler alert: it had to do with dust), which dealt a further blow to attempts at establishing a universal gauge. The solution, as it turns out, came from the most surprising of places: a theory of how heat worked that would later turn out to be wrong. Caloric theory was based on the idea that heat was a material substance, called caloric, present in every body in various degrees. What we experience as an increase in temperature was, in reality, the transmission of caloric from one body to another, much like atoms trading electrons in chemical reactions. Of course, we now know that image is false (much as the schoolbook notion of electric current as the "movement of electrons" between bodies is, at the very least, an incomplete picture, since charge is more accurately a conserved but fluid quantity, carried not only by electrons but by ions and many other particles as well); but at that time, with the resources at hand, this was the best theoretical approximation available for understanding temperature in quantifiable terms. The point is, caloric theory had, in both of its major factions (the Irvinists, or "heat-capacity of a body" defenders, and the chemical calorists, or "free vs bound caloric" apologists), support for a notion of 'latent heat': heat that isn't immediately expressed in a change of temperature, but rather employed in the very structural transformation of a body undergoing a phase transition. The solution to the quandary hence couldn't be simpler: instead of focusing on the temperature of the body in question (ie, the water in the container), for which we can't be certain which microscopic mechanics are taking place at any given time, take the more phenomenological approach and measure the product of the transition itself - in this case, water vapor. This obviates the need to establish precisely at what point the water begins to boil, or is fully boiling, amid its many mixed-up states; as was soon verified, the temperature of steam was always constant, even when the water generating it fluctuated wildly (not to mention the even more confounding effects of supercooling and superheating). The fixed-points of water were thus given a solid enough grounding - despite the fact that their theoretical interpretation rested on a deep misunderstanding of the underlying workings of heat.
That goes to show the resilience of fixed-points in general, and why they're so powerful: even though their production is not a sufficient condition for the accuracy of our models, it certainly is a necessary one for the establishment of successful ones.

The next step in the development of the thermometer would be, of course, to grade each specific point on the scale. Easy enough, right? Well, it would be, were it not for the second hidden assumption we tend to impose upon nature: that said scale should not only be ordinal but also cardinal (ie, that the degrees be fungible among themselves). That is the assumption of linearity. Linearity in this context basically means that temperature should be construed so as to progress in proportional (and fixed) amounts relative to the heat content of a system. In other words: if with each added unit of heat our temperature were to grow two, four, or eight times as much as before (or by some other unwieldy amount), that would signal a failure on the metric's behalf, as there'd be no sensible use for such a scale in our day-to-day situations. Just imagine your run-of-the-mill peasant having to work through logarithmic tables in his head to determine the smelting temperature of a metal. No, a most desirable metric is one that our monomial brains can follow without great strain. But then again, who says nature gives a shit what we'd like? As it turns out, not only does the overwhelming majority of things in nature obey some nonlinear logic, most materials respond to heat in some fairly creative ways. Even the most reliable yardsticks tended to vary dramatically in their readings under corresponding progressions of heat content. Fixing upper and lower boundaries wasn't enough, as these fluctuations happened even when there was agreement with regard to the extremes - it's as if there were many different paths available for a given substance to travel from point A to B in 'caloric space'.
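A toy sketch of my own of what that last point means (the response curves are invented, not taken from the book): two thermometers can be calibrated to agree perfectly at the fixed points and still disagree everywhere in between, because their working fluids trace different 'paths' between the same endpoints.

```python
def scale_reading(expansion_fraction):
    """Divide the interval between the two fixed points into 100 equal degrees."""
    return 100 * expansion_fraction

# Fraction of total expansion reached at a 'true' temperature t in [0, 100]:
def mercury_like(t):
    return t / 100              # expands (nearly) proportionally

def spirit_like(t):
    return (t / 100) ** 1.3     # same endpoints, different path in between

for t in (0, 25, 50, 75, 100):
    print(f"true {t:3d}:  mercury-like reads {scale_reading(mercury_like(t)):5.1f},"
          f"  spirit-like reads {scale_reading(spirit_like(t)):5.1f}")
# Both read 0 at the lower fixed point and 100 at the upper one,
# yet their degrees in between are not interchangeable.
```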

The first half of the puzzle came from Jean-André Deluc, a prodigious Swiss geologist, whose frustration grew with every new failure to establish a regular standard for temperature. Deluc then formulated an ingenious plan: instead of trying to divine where the 'true' degrees lie between the extremities of the thermoscope by admittedly fraught estimations of heat volume (which, again, invites circularity), why not check for the 'stability' of temperature within a given substance? The way to implement this idea came almost naturally: if a substance obeys linearity, then a portion of the same substance must do so as well. Why not, then, take the same quantity of a reference substance at each extremity (the liquefying- and steam-points) and mix them? If the thermometric substance read the combined mixture as exactly halfway between the upper and lower limits, then logic would dictate that it is gauging temperature faithfully. Not only does this free the scientist from trying to calibrate temperature against heat directly, it also allows him to derive whatever metric he wishes to make sense of the context of interest. No need to assume that nature uses this or that scale if all of them are ultimately constructions we superimpose on phenomena to make sense of them, anyway. Let us then choose the one most convenient to our own intents - hence the 'centigrade', or 100-degree gradation, used by most of the world (guess the USA didn't get the memo). According to Deluc himself, the mercury thermometer seemed the most reliable in providing consistent and approximately linear readings across a diverse set of circumstances, but as it turned out mercury only behaved nicely in the domain bounded by the usual fixed points (something explored in more depth in chapter 3); the thermometric substance that would prove to most readily fit the bill wouldn't be mercury but rather common air, and we owe this realization to the frankly astonishing and obsessively thorough work of French chemist Henri Victor Regnault.
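Here's a minimal sketch of my own of the logic behind Deluc's mixing test (the two fluids and their numbers are invented for illustration): mix equal masses of water taken at the two fixed points and check whether the thermometer reads the midpoint.

```python
# Two hypothetical thermometric fluids, both calibrated to read 0 at the lower
# fixed point and 100 at the upper one, but with different expansion laws.
def linear_fluid(true_temp):
    return true_temp                        # expands proportionally to temperature

def quadratic_fluid(true_temp):
    return 100 * (true_temp / 100) ** 2     # same endpoints, nonlinear in between

# Mix equal masses of water taken at the fixed points (0 and 100 degrees).
# Ignoring heat losses, and assuming water's specific heat is constant (itself
# a nontrivial assumption historically), the mixture sits at the true midpoint.
mass_cold, temp_cold = 1.0, 0.0
mass_hot, temp_hot = 1.0, 100.0
mixture_temp = (mass_cold * temp_cold + mass_hot * temp_hot) / (mass_cold + mass_hot)

for name, fluid in [("linear", linear_fluid), ("quadratic", quadratic_fluid)]:
    print(f"{name} fluid reads {fluid(mixture_temp):.1f} on the mixture")
# The linear fluid reads 50.0 and passes Deluc's test; the quadratic one reads 25.0 and fails.
```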

Tasked by the French government with devising an evaluation system for steam-engine output, Regnault immediately saw that temperature would play a key part in any such attempt. Problem is, up to that point the thermometric controversy was still a hot-button issue, with passionate discussions and great minds constantly being humbled by their very public shortcomings. The main issue can be said to be one of paralysis: an abundance of theories and unproven assumptions seemed to plague the subject, and an over-reliance on unobservable entities dictating macroscopic behavior appeared to preclude any attempt to study the subject in a practical light. It would take Regnault and his take-no-prisoners approach to empirical analysis to cut through the stalemate. Regnault was a man of democratic sensibilities, in accordance with the spirit of post-revolutionary France: he treated all theories equally, which is to say - they were all garbage. With a passionate belief in experimental science that would leave Hume teary-eyed, he set himself the task of settling the matter the only way he thought possible: by assuming the absolute minimum about the behavior of temperature. He would investigate every possible parameter and setup in a quest to identify the exact domain of validity for each and every thermometric ensemble. His output was prodigious, so much so that researchers would come from all around to visit his workshop, furnished as it was with the utmost bleeding edge of scientific instrumentation. Even a young Thomson would later use Regnault's notebooks as the basis of his own shot at the problem of thermometry. If ever a man clearly embodied the virtues and character of a true experimentalist, that man is certainly Regnault, which makes it all the more surprising that he isn't as well-known outside historical circles as many of his contemporaries. I guess theory always commanded the big laurels. After much toiling and tinkering, and hundreds if not thousands of investigations into every conceivable variable surrounding the topic, he did arrive at the conclusion that the air thermometer was the superior one, but this accomplishment still fell short of Regnault's ambitions. In a way, the apparently random and arbitrary behavior temperature seemed to exhibit proved frustrating even to him, the most rabid anti-theorist; he couldn't, by the end of his long career, shake the feeling that it had somehow all been spent chasing ghosts, and that nothing of substance (in a figurative sense, of course) was behind the apparent messiness of heat phenomena in nature. But Chang would argue that Regnault's success lay exactly in achieving such a sound understanding of temperature without the support of a solid theoretical apparatus; he even goes as far as to suggest that Regnault, in a way, demonstrated the general vacuity of those concepts we cling to so much in our quest to understand nature. The key insight for Chang here was the adoption of an approach as simple as it was revolutionary: instead of assuming any of the purported models obeys the "true" law of thermometry, just assume they're all ultimately relative or contingent, and choose instead the one which behaves most consistently across the most variable set of circumstances. It may not be universal, but this self-consistency provides us with something much more valuable: a benchmark against which to evaluate (and, presumably, improve) our further measurements.
This is the basis for the method of comparability, which will figure prominently in Chang's own musings on the lessons learned from the thermometric odyssey.
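As a rough sketch of my own of what that criterion amounts to in practice (the readings below are invented): measure one and the same state with each instrument under deliberately varied circumstances, and prefer the instrument whose readings scatter the least, without assuming any of them reports the 'true' value.

```python
from statistics import pstdev

# Hypothetical readings of one and the same bath, taken under different
# experimental circumstances (vessel material, ambient pressure, and so on).
readings = {
    "air thermometer":     [49.9, 50.1, 50.0, 50.2, 49.8],
    "mercury thermometer": [49.5, 50.6, 50.2, 51.0, 49.1],
    "spirit thermometer":  [47.8, 52.3, 50.9, 53.0, 46.5],
}

for name, values in readings.items():
    print(f"{name:20s} spread = {pstdev(values):.2f}")

most_consistent = min(readings, key=lambda name: pstdev(readings[name]))
print("most self-consistent instrument:", most_consistent)
```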

Comparability's success itself hinges on the extension of an idea the author refers to as the "single-value principle" - the notion that to study nature is to look not only for fixed-points but, more crucially, for points of convergence, or single-values. What's the difference? Fixed-points are, in a sense, embedded into the structure of a system; they are necessary, salient features of the system itself - think of the vertices of a polygon or the minima/maxima of a curve. They are fixed not in terms of any specific value (remember the boiling point thing and how it actually fluctuates if we scramble the parameters?), but in their relation to other parts of the system, not to mention the very system as a whole. This is a very nuanced point I don't think gets across very often, and it is crucial to understanding what it is that our theories are actually trying to do at the end of the day.

A single-value, on the other hand, is fixed but also definite (as in: it doesn't fluctuate within a set of numbers); but even more crucial is the origin of its 'fixity' - not the result of constraints set forth by any specific system, but rather the verifiable fact that many different and - presumably - independent systems somehow arrive at the same numerical result for a given measurement. It speaks to a higher form of consistency - not system-wise but domain-wise - which is arguably even stronger (it at the very least points to a much larger, underlying system acting on their behalf). So, if your measurements keep returning the same value under a constant setup across experiments performed through several media/contexts/arrangements, you may be in the presence of a fairly general principle at work (or perhaps even a law of nature). Chang adopts this view, with a slight yet critical modification inspired by Regnault's frustrations: instead of a 'holistic' approach (à la Duhem), a much more modest one, which he calls 'minimalist', is preferred, wherein the achievement of single-values doesn't necessarily tell us anything profound about the Universe per se, but more likely about our particular method of interrogating it. Consider: if instead of testing many sorts of thermometers against one another in several circumstances, Regnault had decided to test many different forms of gauging temperature (say, based on pressure or work), would the single-values previously obtained remain? At this point the debate starts to become hideously technical, so let's set that aside. The important thing is, Chang is very cautious about any overextension of what the heat experiments mean within the broader context of 'doing science' in general; but he does use them as an opportunity to make his own case for a particular type of methodological concern, and it's an interesting one.

Moderately warm takes

Chang sees in the history of the scientific enterprise a mountain of corpses - long-lost and abandoned theories - and he seems uneasy about it. Sure, science is all about pushing forward and taking our most cherished notions through the fire of evidence to see which ones pass unscathed (though not everyone would agree). And yet, there's often a profligacy with which we engage in theoretical speculation, most of it ultimately unwarranted, that one gets the feeling he's trying to tell us is a step in the wrong direction.

To understand why, we must understand where he is coming from. Chang is a believer in realism of the scientific kind - that is, that science is (or, at the very least, ought to be) about the real world, where the word 'real' takes a very specific meaning: it's not just a normative judgment on the factual existence of quarks or some such, but also a defense of the preeminence and foundational role of sense data in our understanding of it. In full-blooded scientific realism, like that of Hume, even mathematical or logical concepts are mere constructions, 'habits of the mind', with no objective existence or self-justification. They represent stable judgements we make about the world, but are in no way features of it, much less necessary ones - there could very well be things that don't fit into reason or our "modes of intuition" (like time and space) and that we'll either never know, because we can't even conceive of them with our limited minds, or, if we do get to experience them, we'll be at a loss to make heads or tails of what they're supposed to mean (see, eg, the entirety of particle physics in the last 120 years). It's important to underline the "scientific" qualification of this position, because one can be a realist about several things, and those often cannot be conflated - for example, both string theorists and French literary theorists would probably sit at the far end of the opposite extreme from Hume, though for very different reasons. Anyway, the point is: though he counts himself as a realist and a skeptic, he isn't as radical as some of his peers (Regnault included); though, as we said, he does tend to view with suspicion a lot of the 'bolder' claims often espoused by theorists. The reason is that he thinks most of the theoretical artifacts these people are constantly churning out are either poorly defined, inconsistent, or without proper justification (which isn't identical to "lacking evidence"). And the issue comes back to the point already hammered home in his treatment of Duhem et al: that of the disconnect between many of these new concepts being pushed so vigorously by theorists and the already established ones - in other words, an underdetermination of the models. He is partial rather to the adoption of an operational criterion for concepts (which he borrows from Bridgman) - ie, that whatever abstractions we adopt to advance our understanding of phenomena be grounded directly on measurable outcomes of some operation performed by the scientist (so, in a sense, keeping the whole abstraction thing on a first-order basis). He is a bit less radical than Bridgman, though, allowing the definition of "measurable" to extend to indirect observations - like those made through the use of instruments - and non-immediate forms - like light from a distant star - whereas Bridgman would take anything not un-mediatedly perceived by the senses as already a Bridg too far (sorry for that).

You see, Chang isn't against concepts per se. He just wishes they were less like bubbles in the aether, and more like actual, grounded extensions of our sensible objects. He sums that point up nicely by reminding us that abstractions are not generalizations - while the former consist of taking something concrete apart by tearing all 'superfluous' features away, the latter consist of a much larger "jump" from the contingent to the necessary and universal. Abstractions are a deductive exercise, while generalizations can't help but be inductive - and also a bit reductive. The biggest problem with grounding our concepts on other concepts is that there's no obvious point at which we are justified in making the jump from 'abstract/concrete' (which in Chang are two sides of the same coin, one qualitative and the other quantitative in nature) into 'real-world data' - that is, accordance with our experiments. Suppose that we have one of those awesome, very general reductionist theories that aims to encapsulate an entire domain; the theory has reasonable agreement with reality, but eventually it starts to deviate from our data in a given sector. What should we do now? Do we throw the whole thing out? Of course not, as it's still very useful and accurate in most other scenarios; but how do we deal with the deviation? The point is, the theory itself is worse than useless in helping us figure out where to go next - it actually prevents us from exploring many possible avenues, at the risk of becoming altogether inconsistent (it's enough to consider the apparently insurmountable issue of which dear principles we should abandon in light of quantum theory's success: locality, realism [yet another kind, mind you], or observer independence). Think of it like a Jenga tower: in order to move a block from anywhere other than the top layer, you need to act very carefully; if you remove too big a piece or one in too central a location, the whole thing risks coming tumbling down. In order to keep even a semblance of cogency, you need to move carefully or offset any major changes (like removing bits from opposite sides or replacing a piece with another, more appropriate one - what, what do you mean that move is illegal? What are you, a board-game normativist?). It's funny, actually, that reductionists tend to resort to that old metaphor that if science is "like a building", focusing on the foundations to the detriment of whatever else is the only surefire way to keep moving "up" - like, where would load-bearing columns and walls fit into this analogy? Isn't the 'fourth floor' foundational to the fifth - and all subsequent ones? I guess epistemologists don't play Jenga much. But the biggest issue is that, even in the spirit of the metaphor, the bottom layer can't be said to represent the ultimate foundation, a job that would more appropriately be ascribed to the ground itself, which continues indefinitely below, heavier and darker and messier at every level; for, much like the ancients' infinite tower of turtles at the base of the world, reductionism can't escape its infinite regress of concepts that depend on other concepts and so on, with empirical content only flimsily making an occasional appearance to prop up one bulging slab or another, ad aeternum. But what if there is a better way?

What if we took this over-reliance of concepts on one another not as a bug but as a feature, with just a few restrictions in place to ensure well-foundedness - like 'no vertical chains of concepts allowed, only lateral ones', to avoid circularity issues, and a strict formation rule that ties each concept to a given empirical ground from which it can't easily be borrowed or divorced? Well, that just so happens to be Chang's idea, and he very deftly and elegantly uses the building metaphor to illustrate it: suppose we take the reductionists at their word, and buy the whole "science is an edifice" thing, but instead of stopping there decide to take the analogy all the way to its ultimate implications. In that case, we would clearly see that a "foundation" isn't even a particularly illuminating term: it is, by its very nature, local to a specific location and not absolute in any sense - in fact, if we dig deep enough we'll eventually reach a "maximum point of foundationality" (the gravitational center of the Earth), and if we push on we'll just start climbing back up the abstraction ladder until we reach the other side. In reality, the ground is only a foundation for anything inasmuch as it is a compact stratum of mutually supportive non-fundamental things, bound together by their own (however individually negligible) gravity! And not only that, but it's not the only possible stratum: countless combinations of totally random objects could in principle be brought together so that they, too, would eventually coalesce into new "worlds" with their own "grounds" to base things on. Isn't that the most poetic vision ever? Chang's approach is a form of Coherentism - a tradition in the Philosophy of Science which argues that theories are justified by the self-consistent or mutually supporting character of their constituent elements, which together will always have more descriptive power than any of them individually. In this sense, it obviates (though isn't necessarily against) the notion that said theory must be grounded on some distinct set of self-evident first principles. His particular take on the doctrine rests mainly on a defense of the Principle of Comparability, as we've seen in his treatment of Regnault (that we are better served by the cumulative agreement of several - however individually mistaken or inaccurate - measurements than we are by a single, isolated, highly accurate implementation), and on the operationalist view of concept-formation, as per Bridgman.

But Chang doesn't stop there. He sees science as a highly dynamic activity - one in which, rather than a single, flat curve, we're treated to a roller-coaster-like, back-and-forth ride of crude, aimless, sometimes discontinuous steps that yet, somehow, tend to arrive at the right destination. Once again, he sees this chaotic aspect of actual scientific practice not as a bug but as a feature, one that is illustrative of the self-reflecting and highly exacting character of experimental science, and which he identifies as the process of epistemic iteration (what Peirce would call the "self-correcting character of knowledge"). To understand how it works, start with a reasonable assumption about the value that some variable of interest should take as the result of an experiment; after you perform said measurement, take the returned value and use it as the input for the next version (or iteration) of the experiment, and so forth, continuously. The idea is that, even if your initial assumption turns out to be wrong (an almost-guarantee, actually), the fact that your inputs and outputs belong to the same functional domain bounds the possible results in a way that no amount of up-front adjustment could, and by doing it again and again with the produced outputs as inputs you're effectively collapsing the space of functions that could have produced this specific set of results, until you're left only with the class that most assuredly explains the observations in the first place. And if that sounds like fractal thinking, that's precisely because it is - and no wonder, since there's a very deep connection between fractal phenomena and nonlinear systems (which, as you may remember, are very prolific in the natural world). Chang's argument for this method is based on two principles he's touched on before, though only in a passing manner: the so-called Principle of Respect (that previous observations have relevance to our current ones, even if the reasoning behind them is found to be at fault - though it's also important to remind ourselves of the pluralistic nature of the theoretical landscape (remember the "many worlds and their grounds" of Coherentism), which grants the researcher a certain degree of freedom over which specific tradition to build his own work on, avoiding the Kuhnian dilemma) and the Principle of Progress (that to increase the robustness of a concept we should subject it to an increasingly varied set of circumstances, even ones lying somewhat outside its usual domain of validity, in order to verify whether it's able to produce single-values in accordance with our other, more established models; this progress usually comes in the form of enrichment - ie, the introduction of new insights - or of self-correction - that is, the refinement of previous observations).
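A minimal numerical analogue of my own (not Chang's example) of that iterative loop: start from a crude guess, feed each output back in as the next input, and let the procedure correct the initial error. Here the 'experiment' is just Heron's method for a square root, standing in for any self-correcting measurement procedure.

```python
def refine(guess, target):
    """One 'iteration of the experiment': improve the estimate using its own output."""
    return 0.5 * (guess + target / guess)

target = 2.0
estimate = 10.0                      # a deliberately bad starting assumption
for step in range(6):
    estimate = refine(estimate, target)
    print(f"iteration {step + 1}: estimate = {estimate:.6f}")
# The sequence converges to sqrt(2) even though the first guess was far off:
# the corrective structure of the loop matters more than the starting point.
```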

Last but not least, Chang presents us with his (in his own view) "boldest" idea, one that he hopes might help us avoid the pitfalls that plagued thermometric science over the course of 300 years and, perhaps, even guide us forward in new and exciting directions. He calls it Complementary Science (a complement to the specialist science practiced by professional scientists): an approach to both the History and the Philosophy of Science in which he hopes to systematize and address the production of all the knowledge which science can't or isn't willing to attain by itself, for any variety of reasons (hence 'complementary', rather than descriptive or prescriptive). It does so through three main avenues: the recovery of lost or forgotten knowledge, critical awareness of science's previous "missteps" and their lessons, and new developments that come as a result of this rehabilitation. He's basically saying that philosophy and history can help illuminate issues that scientists themselves seem too hard-headed or too busy to effectively address, which honestly is barely controversial at all, really - unless you're Stephen Hawking. And it's not like science has ever (except for early 20th-century Vienna) had any claim to being the sole purveyor of truth and insight into all of mankind's burning questions, so not much to draw from here, to be honest.

But, I couldn't miss the opportunity to touch on a point, however tangential, that kept pulsating harder and harder at the back of my mind for the entirety of my reading, and which I believe Chang might have considered but is waaay too prudent to ever address out loud.

What if everything's energy? (but no, not like that)

Now, before we get into the juicy bits, let me add a disclaimer that this will be a highly speculative exercise in trying to connect Chang's own ideas and the themes explored in his book with some ruminations I've been having on the whole metascientific enterprise and its significance, so naturally just take everything being shared here with a grain of salt. If you, like me, are part of the Rationalist community, then you most likely harbor some convictions that'll put you at odds with a full-blooded scientific realist position - probably by asserting the need for mathematical-statistical tools and the universality of certain heuristics for an effective understanding of reality, as well as a healthy dose of "Cartesian skepticism" regarding the data we gather through our senses and cognitive processes. And yet, here is Chang opening up a whole new can of worms that for once escapes the usual simulation debate or indispensability arguments for the reality of numbers: what if our sensory data itself encodes the very structure of reality? Let me elaborate:

When we talk about temperature we're really just talking about a way of measuring energy - or at least an aspect of it. And if we put aside (as we must) all new-agey, expansive notions of what energy actually is and treat it at the most bare-bones level, what do we actually get? I'll tell you what: a scalar quantity associated with a system's degrees of freedom - roughly speaking, its "possible states" (sure, it's a scalar only in classical mechanics, becoming a tensor in general relativity, but what is a tensor if not a scalar with extra steps?). Why is that relevant? Because, all alternative drivel aside, energy truly is one of the most fundamental informational 'building blocks' we have at our disposal in any attempt to make sense of empirical reality, together with space and its geometry. And the fact that it's not only represented by a number but that this number is conserved (except in non-static spacetimes) seems to be telling us a great deal - more, perhaps, than we had any right to expect - about the universe and how it's even knowable in the first place. Remember the establishment of fixed-points at the beginning of our journey, and how important they were for us to even start tackling the troublesome issue of thermometry? Well, a conserved quantity is a kind of fixed-point, and as Noether has shown us, there's plenty of insight we can derive from that realization alone. But we needn't stop there. Just as establishing a fixed-point was merely the first stage in the development of the thermometer, we could actually derive a great deal of knowledge from a broader application of the (arguably very grounded) concept of energy.

In a way, we can say that the very idea of temperature was only possible as a practical notion because of how well-behaved heat is in the domain we're most familiar with. It took a lot of experimentation to nail down its mechanism precisely and to be able to produce reliable gauges (thermometers). But there is a very real sense in which a person without recourse to Regnault's painstakingly collected notes, but with enough access to modern mathematical and computational tools - and just a few data points - could have derived all of modern thermodynamic theory, with thermometric theory as an aside. This isn't to diss experimentalists, to impose an anachronistic revisionism, or even to negate the very surgical concerns expressed by Chang on the fruitlessness of blind theorizing, but to point to the inescapable realization that the possibility of our apprehension of heat phenomena is baked into its very nature - in particular, the fact that it displays some level of regularity. I don't think people appreciate the very powerful implication of this statement well enough. It's telling us that whatever we can know, we can know mathematically (though perhaps not deterministically, or linearly, but still comprehensively): we can model and investigate the many possible outcomes of an experiment, and if we can't describe them with certainty, we can at the very least assign sensible probabilities to each. It means that there is something out there that heat phenomena are like, other than heat itself; that they obey certain rules which our theories can capture, either partially or entirely, and that this can't be a product of our mind alone, because our mind itself is its product - so in any case we'd just be pushing the issue further back. What exactly "it" is, whether in this specific case or in a more general sense, is a question far too hairy to deal with here, but if you feel like diving into that rabbit hole, Ladyman et al. offer a great starting point. The bottom line is: for us to even intelligibly conceive of anything, it needs to have some dimension of regularity, which our brains can then pick apart, saving only the more stable features; but in order for it to possess a degree of regularity it must have some structure, which soon translates into amenability to inference (every structure is bound in a certain way that defines it, and this means that information about one part of it can be used to illuminate other parts as well).

Going back to heat: it's not only that it is actually (a form of) energy, but that its being so tells us something important about adjacent phenomena - mechanics, chemistry, and so on. If energy can be transformed from one form into another, there must be some underlying level at which the two are in a way equivalent - think of how you can't sensibly add two apples and two oranges and get a meaningful result, but you can always count four fruits. More astounding still, we soon realize that even those things that fail the regularity test - like heat and its apparent lack of constancy - are telling us something very important about themselves: namely, that this is not the end of the line. If heat is gone, and as a form of energy it should be conserved, it must mean that it was transformed into something other than heat; soon enough, we're interfacing heat with all sorts of other phenomena and arriving at Lavoisier's law. Suddenly, the entirety of chemistry becomes viable thanks not to any particular empirical content, but to an overarching principle governing those specific behaviors. Two things cannot interact unless they share at least a common property, and if they share a common property there must be a regularity that encapsulates this relationship, so they're not two things but two aspects of a single thing, with a quantity attached to moderate their relation. It's Coherentism all over again, but this time supercharged and stripped down to its core: if empirical data is the dirt that got together to form the multiple worlds of natural domains and their 'grounds', it is gravity (the consistency requirement that makes reality knowable) that is really running the whole show from the get-go. No matter how hard you try to push it out of your models, there's just no way to build an ontology out of empirical contents alone, because those very contents carry within them the metaphysical baggage of consistency and its mathematical underpinnings - think of how an infinite, apparently random sequence like the digits of pi is conjectured to contain every possible finite piece of information, if only scrambled.

But where does "everything is (like) energy" fit into this? Well, this is where the implications grow really bizarre: what this all seems to suggest is that even in domains where we're dealing with entities whose dynamics would seem far removed from anything physical, either: A) there will be some energy-like quantity associated with the system, which can be identified and manipulated to provide insight into that system's behavior; or B) the lack of any relevant regularities informs us that the particular 'slicing' used in investigating said domain is not well-founded (usually these end up being amenable to description and not much else), and should probably be understood as a subdomain (or an arbitrary collection of smaller domains) of a "real" one, where the regularities are more pronounced and which is thus more appropriate to deal with. There's a very interesting case to be made, for instance, that in most modern, financial economies, 'money' can be considered a form of energy; and, as a matter of fact, it could be argued that, at the end of the day, what we're doing when we're doing science is ultimately just searching for such regularities in nature, translating them into manageable quantities associated with a system's evolution (energy), and investigating their structure and the landscape they occupy (think Hamiltonian mechanics) in order to extract interesting behaviors or possible applications. Now, of course, 'energy' should be understood here in its most abstract terms, the ones we defined at the beginning of this section - as a quantity standing for a system's degrees of freedom, relating equivalent but distinct states of a system, and defined in terms of the regularities which manifest in its treatment. Most of all: it's one-hundred-percent domain-specific. It's NOT the case that every "energy" measure will be interchangeable or even sensibly subject to comparison, and this is most definitely not a defense of "Cabin" or "Secret" or any such single-word cryptic pseudoscience advocacy of 'how your will can change reality' or things of the like. If you got this far thinking anything of the sort, I have failed you, dear reader, and we should probably part ways.

To the rest of you, on the other hand: at this point you're probably thinking this is a load of "abstract nonsense" at best and utter crap at worst. Well, what if I told you that there's a very neat implementation of the spirit of the very ideas just discussed in the area of machine learning, and that it's been around since the 80s? Welcome to the wonderful world of energy-based models. The link above offers a very comprehensive treatment of the subject; it's certainly one of the most interesting (and promising) research programs out there, especially in the area of generative models. The whole idea, very roughly, consists of treating the semantic space of a given problem as made up of individual states or 'answers', each assigned a number - its 'energy' - that reflects how probable or compatible it is, with lower energy meaning a better fit. The goal is to minimize the energy of the system, thus finding the configuration that most accurately describes the relationships encoded in the base system. I'm probably butchering it in my exposition, but I've seen these models ported back into physics (from which they took much inspiration, naturally) to incredible results, so it's definitely worth a look for those interested.
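As a flavor of the idea, here's a minimal sketch of my own of a Hopfield-style network, one of the 1980s ancestors of modern energy-based models (the numbers are made up, and this isn't taken from the linked treatment): define an energy over states, then let the system descend in energy until it settles on the configuration that best fits what it has stored.

```python
import numpy as np

rng = np.random.default_rng(0)

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # the 'answer' we want the network to store
W = np.outer(pattern, pattern).astype(float)        # Hebbian weights encoding the pattern
np.fill_diagonal(W, 0)                              # no self-connections

def energy(state):
    """Hopfield energy E(s) = -1/2 * s^T W s; stored patterns sit at minima."""
    return -0.5 * state @ W @ state

# Corrupt the stored pattern by flipping two entries, then descend in energy.
state = pattern.copy()
state[[1, 4]] *= -1
print("initial energy:", energy(state))

for _ in range(5):                                  # a few asynchronous sweeps suffice here
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1   # each flip never increases the energy

print("final energy:", energy(state), "| recovered:", np.array_equal(state, pattern))
```

Minimizing the energy is what recovers the stored 'answer' from the corrupted input; the same minimize-the-energy logic, scaled up enormously, is the engine behind modern energy-based generative models.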

In closing, I'd say that "Inventing Temperature" is a must for any aspiring philosopher of science, regardless of their inclinations. The writing is clear and direct, the exposition measured, and the references abundant. Chang is an academic unlike any other in the area, and in many respects seems to operate closer to the ethos of a Regnault than of a Thomson - to our loss, because hearing his ideas in greater detail would warrant a book in itself. But on the question of what, after all, we can learn from the odyssey of the creation of temperature, I'd be remiss not to observe how many subjects nowadays seem lost precisely because they lack their own gauges. What a benefit to civilization it would be if only we had a Regnault or a Lavoisier for the humanities, one could say. To which I might add: hell, I'd take even a Wedgwood.