Optimization v. empowerment
Last time, I talked about how heuristics are scoped to an expected distribution of possibles (or more accurately, “probables”). If a certain situation pops up 70% of the time, all else equal it’ll be more important that a given heuristic work for that case than for a case that pops up 1% of the time. So all our tactics and technologies, all our inferences, are tailored to work in a predicted future environmental context.
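A minimal sketch of that weighting, with invented situations, probabilities, and scores: a heuristic’s expected value is just its per-situation performance weighted by how often each situation occurs.

```python
# Hypothetical numbers for illustration: how often each situation arises,
# and how well the heuristic performs when it does.
situations = {
    "common_case": {"probability": 0.70, "heuristic_score": 0.9},
    "rare_case":   {"probability": 0.01, "heuristic_score": 0.1},
    "other":       {"probability": 0.29, "heuristic_score": 0.5},
}

expected_value = sum(
    s["probability"] * s["heuristic_score"] for s in situations.values()
)

# Failing badly on the 1% case barely moves the total; failing on the 70%
# case would dominate it. Tactics get tailored to the predicted distribution.
print(f"expected heuristic value: {expected_value:.3f}")
```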
This is how strategy works in game-playing as well. Imagine two variations on the same situation: two armies on a battlefield, facing one another, one preparing an assault on its opponent’s defenses. In the first variation, the defending army knows exactly where the attack will hit. In the second variation, the defending army has no idea where the assault will come. These are limit cases: there is never perfect certainty about the future, and there is never perfect cluelessness. (An army can move only so far in a given time span; there are angles of attack which would be particularly foolish or ineffectual and can likely be ruled out; there are positions whose assault would accomplish nothing militarily; etc.) But these limit cases make clear that a defensive army which knows where in its line the offense will strike is hugely advantaged. It can now pool troops and resources in that area, and put extra effort into fortifying that section of the line.
In other words, the defensive line has been optimized with respect to an anticipated future. On the other hand, this optimization makes it exploitable. There are now weaker sections in the defensive position, from which troops and resources have been pulled, and which the enemy would benefit by attacking (if it believes the defensive army believes the assault will happen elsewhere, and has optimized accordingly, etc.).
If the defensive army has low confidence about where it will be attacked, then it will have to spread its troops and resources evenly across the line. This makes it less exploitable—there are no gaping holes or weak points in the line—and more “empowered.” (Empowerment is an emerging concept from machine learning which says, roughly: all else equal, maximize your degrees of freedom, keeping as many future options open as possible.)
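A minimal sketch of the trade the two armies face, with invented numbers throughout: the defender splits its troops across sections of the line according to its belief about the attack site, and the attacker strikes the weakest section.

```python
import numpy as np

def weakest_point(allocation):
    """The attacker's best target: the least-defended section."""
    return float(np.min(allocation))

troops = 100
n_sections = 5
true_attack_site = 2  # where the assault actually lands

# Optimized defense: high confidence the attack hits section 2.
confident_belief = np.array([0.05, 0.05, 0.80, 0.05, 0.05])
optimized = troops * confident_belief

# Empowered defense: no idea where the attack will land.
uniform = np.full(n_sections, troops / n_sections)

print("optimized: strength at predicted site =", optimized[true_attack_site])
print("optimized: weakest section =", weakest_point(optimized))
print("uniform:   strength at any site =", uniform[true_attack_site])
print("uniform:   weakest section =", weakest_point(uniform))
# If the prediction is right, the optimized line is far stronger where it
# matters (80 vs 20 troops); if it's wrong, it presents 5-troop gaps that
# the uniform line never does.
```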
Empowerment and optimization, then, are descriptions of opposite limit cases. Approaching the limit of perfect empowerment, one becomes increasingly prepared for any possible future. Approaching the limit of perfect optimization, one becomes increasingly prepared for a single predicted future, at the expense of all others.
Subsidy
When a player optimizes against one predicted course of opponent action, he subsidizes all other courses of action by making them more effective and less well defended against. This is a common motif in tennis: if a player consistently hits to his opponent’s forehand, that opponent will slowly shift positions so that the majority of the court is now on his backhand side. This gives the opponent a better position for returning forehands, but makes a backhand hit against him far deadlier, since he must cover more ground, sprinting across the clay to return it.
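A minimal sketch of this positioning logic, assuming the opponent minimizes expected squared running distance along a one-dimensional baseline; all numbers are invented.

```python
# Forehand corner at 0.0, backhand corner at 1.0 along the baseline.

def best_position(p_forehand):
    """Position minimizing expected squared distance to the next shot.

    For squared-distance cost and two targets, the minimizer is the mean
    of the shot distribution: p * 0.0 + (1 - p) * 1.0.
    """
    return (1 - p_forehand) * 1.0

for p in (0.5, 0.8, 0.95):
    x = best_position(p)
    print(f"P(forehand)={p:.2f} -> stands at {x:.2f}, "
          f"sprint to backhand corner: {1 - x:.2f}")

# As the forehand rate rises, the opponent drifts toward the forehand corner,
# and the now-rarer backhand shot buys more open court: it has been subsidized.
```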
Legibility, Illegibility, Pseudolegibility
“Legibility” is a limit case describing the ability of onlookers to infer aspects of reality from the information you produce. “Illegibility” is a limit case describing the inability of onlookers to perform such inference. “Pseudolegibility” occurs when onlookers feel confident in their ability to perform such inferences, but are systematically misled into a false model of reality.
In adversarial-dominant[1] games such as warfare, legibility is undesirable, as it allows opponents to optimize around you, increasing their advantage. Illegibility forces opponents to stay empowered, limiting their ability to optimize. Pseudolegibility, when pulled off, is the most advantageous of all: it leads the enemy to optimize incorrectly, making them more exploitable.
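The troop-allocation sketch above extends naturally to these three regimes; the decoy allocation and all numbers remain invented. Lower defense faced is better for the attacker.

```python
import numpy as np

true_allocation = np.array([5.0, 5.0, 80.0, 5.0, 5.0])

# Legibility: the attacker reads the true allocation and hits the weakest spot.
legible = true_allocation[np.argmin(true_allocation)]

# Illegibility: the attacker can't infer anything and must stay empowered,
# e.g. picking a section at random; on average it faces the mean defense.
illegible = float(np.mean(true_allocation))

# Pseudolegibility: the attacker confidently reads a planted decoy that marks
# the fortified section as the weakest one, and optimizes straight into it.
decoy = np.array([30.0, 30.0, 5.0, 30.0, 30.0])
pseudolegible = true_allocation[np.argmin(decoy)]

print(f"defense faced -- legible: {legible}, "
      f"illegible (expected): {illegible}, pseudolegible: {pseudolegible}")
# legible: 5.0 < illegible: 20.0 < pseudolegible: 80.0 -- the ordering in the
# paragraph above, from the defender's point of view worst case to best.
```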
I explore some of these dynamics with less compressive clarity, but greater detail, here.
[1] Like nearly every term defined in this post, to say a game is “adversarial” or “cooperative” is to describe an impossible limit case. All games, as Schelling convincingly argued in The Strategy of Conflict, are mixed. Any two given agents will share some interests in common, and also have interests which differ or clash.