tis.so

Reply to “Heuristics That Almost Always Work”

by Suspended Reason

I’m almost tempted to write a post called “How I think about dialectic” that says, “Look, all beliefs are theories about the world, which is to say, heuristics for acting. If there are two rival heuristics (‘beliefs’) that people walk around with—say, conflict theory and mistake theory—and each works OK or seems coherent, it would be wild to go, ‘Well, only one of these heuristics can be right; which one is it?’ No, you’d say, how do we drill into particulars to get a more nuanced understanding of when conflict vs mistake theory is more productive a frame? How do we refine our sense of the problem ontology in order to refine our agency (the efficacy of our heuristics)?”

A few weeks back, Collin brought up Scott Alexander’s recent newsletter, “Heuristics That Almost Always Work,” and some of the problems that emerge when you use the Law of the Excluded Middle outside of logic, in the real world.

One thing Scott is good at, in his writing, is setting up a puzzle from starting premises, and then trying to tease out apparent incoherences, inconsistencies, or problems. The issue, however, is that the puzzle’s formulation often takes for granted assumptions and premises which do not, when examined explicitly, hold true in the real world. A puzzle emerges which claims to be about the world, but is instead largely a product of the puzzle’s construction. (In this way, it is like philosophy’s thought experiments.)

In “Heuristics That Almost Always Work,” Scott supplies a list of example scenarios that are defined by having an investigator and a mystery. The mystery is introduced to the investigator by a cue, an external-facing sign of possible trouble. The investigator, in each thought experiment, repeatedly encounters the same cue and must decide how to interpret or act on it.

The investigator can resolve the ambiguity of the cue in two ways: by taking it seriously, or by dismissing it. The former we can label a positive response, the latter a negative. The investigator’s heuristic for interpreting the mystery can yield a true negative, true positive, false negative, or false positive. False negatives, in the examples Scott lists, are very rare but very costly. False positives are rather common, but can be cleared up and identified as false with a minimal amount of work. His agents learn, with repeated false positives, to uniformly dismiss cues (that is, label them negatives), rather than putting in the time to properly investigate. The uniform passivity of their response—never investigating, always dismissing cues—makes their job as investigator fundamentally unnecessary. A spam filter that never filters out spam is not a spam filter, and certainly not worth paying an annual salary.

A very rudimentary way to behave rationally, in this paradigm, would be to perform a cost-benefit analysis, weighing the overhead of false positives (routine inspections and follow-up) against the tail risk of a false negative. A false positive, for a security guard who hears a rustling, might require setting down his crossword puzzle and doing a lap around the building with a flashlight. A false negative—a robbery on his watch—loses him the job. Depending on how badly he needs the job, and how often he expects a robbery to coincide with a rustling, he may or may not find it worthwhile to investigate such cues.
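
To make that calculus concrete, here is a minimal sketch in Python. The probabilities and costs are invented stand-ins for the guard’s situation, not anything from Scott’s post; only the structure of the comparison matters.

    # A minimal sketch of the guard's cost-benefit calculus.
    # All numbers are invented for illustration; only the structure matters.

    p_robbery_given_rustle = 1e-4       # how often a rustle is actually a break-in
    cost_of_investigating = 5.0         # a lap with the flashlight, in arbitrary units
    cost_of_missed_robbery = 100_000.0  # losing the job, in the same units

    # Expected cost per cue under each blanket policy.
    cost_always_investigate = cost_of_investigating
    cost_always_dismiss = p_robbery_given_rustle * cost_of_missed_robbery

    print(f"always investigate: {cost_always_investigate:.2f} per cue")
    print(f"always dismiss:     {cost_always_dismiss:.2f} per cue")

    # Investigating pays off whenever
    # p_robbery_given_rustle * cost_of_missed_robbery > cost_of_investigating,
    # i.e. whenever the tail risk outweighs the overhead of the false positives.

With these made-up numbers the blanket dismissal policy is the costlier one, but drop the base rate an order of magnitude and it wins; the point is only that this comparison, not either answer, is what the rudimentary rational agent computes.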

But the actual way that professionals deal with these situations is to remodel their ontologies: to look for subtle differences in the type of cue (and the context the cue emerges from), and to set up different, more precise heuristics. In some sense, a category just is a heuristic, and the issue with the security guard example is that it presumes there is only one kind of rustling, that there is only one type of stimulus and thus one type of rational response. In reality, each stimulus, each cue, is unique in some way. Being effective as an investigator in such professions typically involves developing an intuitive ontology for which cues ought to be taken seriously, and which ought to be dismissed. This is what actual experts do: they build ontologies from experience.
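
One way to picture that remodeling, again as a sketch with invented cue sub-types and base rates rather than anything from Scott’s post: instead of one policy for “rustling,” the investigator carries separate learned base rates, and therefore separate responses, for the kinds of cue his ontology distinguishes.

    # Sketch: the same decision rule as above, but over a refined ontology of cues.
    # The cue sub-types and their base rates are invented; the point is that the
    # rational response differs once "rustling" is split into kinds.

    cost_of_investigating = 5.0
    cost_of_missed_robbery = 100_000.0

    # Hypothetical learned base rates per cue sub-type.
    p_robbery_given_cue = {
        "wind through the fence": 1e-6,
        "raccoon in the dumpster": 1e-5,
        "metallic scrape at the back door": 5e-3,
        "glass breaking": 0.3,
    }

    def should_investigate(cue: str) -> bool:
        """Investigate when the expected loss of dismissing exceeds the cost of a lap."""
        return p_robbery_given_cue[cue] * cost_of_missed_robbery > cost_of_investigating

    for cue in p_robbery_given_cue:
        print(f"{cue}: {'investigate' if should_investigate(cue) else 'dismiss'}")

The lookup table is a caricature, of course; the expert’s categories are tacit and perceptual rather than enumerated. But it shows why “only one kind of rustling” is the load-bearing assumption in the original puzzle.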

A problem, of course, is that learning the distribution space of phenomena requires adequate sampling. If an agent is never exposed, in training, to some combination of cue and (e.g. disastrous) outcome, they can never develop a proper ontology for noticing and preventing that outcome. This is a problem not with heuristics but with learning, and the nature of tail risk. But it is handled not by developing a full sense of possible tail risk scenarios, but by developing a sense of normalcy. When you’ve heard the rustling of wind ten thousand times, an actual break-in will sound different; it may be unplaceable, it may not be obviously a break-in, but it will not sound identical to the ten thousand winds. Gary Klein’s famous “basement fire” example of firefighter intuition is commonly cited:

It is a simple house fire in a one-story house in a residential neighborhood. The fire is in the back, in the kitchen area. The lieutenant leads his hose crew into the building, to the back, to spray water on the fire, but the fire just roars back at them. “Odd,” he thinks. The water should have more of an impact. They try dousing it again, and get the same results. They retreat a few steps to regroup. Then the lieutenant starts to feel as if something is not right. He doesn’t have any clues; he just doesn’t feel right about being in that house, so he orders his men out of the building—a perfectly standard building with nothing out of the ordinary. As soon as his men leave the building, the floor where they had been standing collapses. Had they still been inside, they would have plunged into the fire below.

A sense of normalcy isn’t perfect, but it’s effective at breaking us out of routine tactics, telling us to pay attention, and getting us to look closely at what feels off.
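
If one wanted to caricature that sense of normalcy in code (a sketch with invented features and thresholds, not a claim about how Klein’s firefighters actually cognize), it looks less like a list of disaster scenarios and more like an anomaly detector: a model of ordinary experience that flags whatever sits far outside it.

    # Sketch: a "sense of normalcy" as a simple anomaly detector.
    # The agent never models break-ins directly; it models ordinary nights
    # and pays attention to anything far outside that experience.
    # Features, units, and the threshold are invented for illustration.

    import statistics

    # Ten thousand ordinary nights of rustle loudness, in arbitrary units.
    ordinary_nights = [10.0 + 0.5 * (night % 7) for night in range(10_000)]

    mean = statistics.mean(ordinary_nights)
    spread = statistics.pstdev(ordinary_nights)

    def feels_off(observation: float, threshold: float = 4.0) -> bool:
        """Flag observations more than `threshold` standard deviations from normal."""
        return abs(observation - mean) / spread > threshold

    print(feels_off(11.0))  # a typical rustle: nothing to notice
    print(feels_off(25.0))  # unplaceable, not obviously a break-in, but not the ten thousand winds

Nothing in the detector knows what a break-in is; it only knows what ten thousand winds sound like, which is exactly the asymmetry the firefighter example turns on.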