Torque policy
1.
Previously, I’ve written about torque epistemology and semantics. The concept was inspired by a Bourdieu passage about how and why writers try to “twist the stick” of discourse with strategic representations of the world:
This explains why writers’ efforts to control the reception of their own works are always partially doomed to failure (one thinks of Marx’s ‘I am not a Marxist’); if only because the very effect of their work may transform the conditions of its reception and because they would not have had to write many of the things they did write and write them as they did—e.g. resorting to rhetorical strategies intended to ‘twist the stick in the other direction’—if they’d been granted from the outset what they are granted retrospectively.
To communicate, or to represent, is to act—and actions are always performed with some delta in mind, a transformation of the world as it is, towards how it ought to be. We light the fire to heat the house; we email our colleague to prevent his making an error. What is important with torque utterances is that they diagnose, and attempt to correct, a perceived discursive imbalance: because decision-makers’ perceptions or beliefs are already biased in one direction, they must be de-biased through the presentation of a biased picture in some contrary direction. This tit-for-tat, corrective approach to messaging can erode audience trust in the message source, but also appears more effective at mobilizing partisan enthusiasms.
In torque policy, similarly, some type of regulatory or governing agency establishes behavioral guidelines. These guidelines may be legally enforced, or else enforced socially through judgment, reputation damage, and shame. In some cases, they are relatively private affairs: targeted messaging, for the receiver’s own sake, which assumes some systematic desire or bias (“ought”) in one direction, and thus systematically misrepresents descriptive reality (“is”) in order to counter that ought. For instance, an instruction manual may advise a user to perform an action for 60 seconds when only 30 are needed, because consumer studies have shown that most users severely under-count seconds, and will (if advised at 60 seconds) on average perform the action for only half that long.
Must we really wait 15 minutes after applying waterproof sunscreen before going for a swim? Tooth-brushing (2 minutes) and handwashing (20 seconds) guidelines may similarly be torque. On the other hand, they may not be. Part of the issue with torque policy is that it muddies the epistemic status of the larger precautionary landscape. Some fastidious literalists will always perform the advised action to its full extent, which wastes some energy but is overall relatively harmless. More problematically, the existence of torque policy in some domains can lead non-literalists to interpret non-torque advisements as overstated. Real requirements will be fallen short of; warning labels will not be taken seriously. Torque policy can set off a treadmill of exaggeration.
While many of the above examples assume a ~benevolent body producing guidelines which will result in roughly optimal outcomes across a population, this is by no means necessarily the case. A brand of dishwasher detergent may advise that users pour an unnecessarily large amount of detergent powder, since a higher use rate means higher sales.
Some examples collected by fellow tis.so members:
“best by” dates on food (hyper-legible & idiot-proof, compare with sniff tests, taste tests, visual mold checks as surrogates)
a company sets a productivity goal it doesn’t expect any employee to be capable of reaching, in the hope of “motivating” employees to produce at a higher clip than they otherwise might. A few employees work weekends for months in order to hit the goal.
recommendations as to how often you should replace your water filter, air filter, etc.
a doctor telling you you need to lose 30 pounds when, in actuality, even 15 or even 5 would begin to have positive effects on your (knees, diabetes, whatever)… [if] the doctor says 30, it’s in an attempt to shock you into action; and he thinks results would be better if it were done more like charity
my favourite example in this space has got to be the DHS color-coded terrorism threat levels (low, guarded, elevated, high, severe), which never fell to either of the two lowest levels before the system was retired after 10 years
the American Society of Civil Engineers issuing a “failing grade” for every part of the US’s road and bridge infrastructure every year and putting out a figure of $4.5 trillion in needed investment by 2025
2.
Overall, I’m less and less satisfied with the “torque” concept these days, as it seems to blend more and more into linguistic pragmatism and the ACiM hypothesis. But what is clear, from these examples, is that action and outcome are always the ends of communication, even supposedly neutral “informative” communication, and that representation—far from being something which can only ever be true or false, honest or deceptive—is always tailored toward accomplishing these outcomes. To view communication as something which is foremost “true” or “false” is to fall into the fallacy of the excluded middle, and repeat the errors of the logical positivists.
Utterances are tactics, and every tactic encodes (lossily) its goal. We can’t reliably reverse-engineer a speaker’s goal from a single tactic; this lossiness, together with confounding by “speaker beliefs,”1 means that a given tactic is always ambiguous: it could be in service of a thousand desired outcomes. But as we gather a body of data concerning deployed tactics, we are able to triangulate these tactics into a model of desire.
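To make the triangulation point concrete, here is a minimal sketch, in Python, of goal-inference as a Bayesian update over candidate desires; the goals, likelihood numbers, and observations are stipulated purely for illustration. A single observed tactic leaves the posterior spread across goals, a body of observed tactics concentrates it, and a mistaken world-model (the footnoted confounder) can flip what the same observations indicate.

```python
# Illustrative only: inferring a driver's goal from observed turns via Bayes.
# Goals, likelihoods, and observations are all made up for the sketch.

GOALS = ["head into city", "leave city"]

# P(observed tactic | goal), assuming the driver's sense of direction is accurate.
# If it is mistaken (the confounder), these likelihoods are effectively flipped,
# and the same observations end up supporting the opposite goal.
LIKELIHOOD = {
    "head into city": {"turn city-ward": 0.9, "turn outward": 0.1},
    "leave city":     {"turn city-ward": 0.2, "turn outward": 0.8},
}

def posterior(observations, prior=None):
    """Update a uniform (or supplied) prior over goals with each observed tactic."""
    probs = dict(prior) if prior else {g: 1.0 / len(GOALS) for g in GOALS}
    for obs in observations:
        probs = {g: probs[g] * LIKELIHOOD[g][obs] for g in GOALS}
        total = sum(probs.values())
        probs = {g: p / total for g, p in probs.items()}
    return probs

print(posterior(["turn city-ward"]))       # one tactic: ambiguous (~0.82 vs ~0.18)
print(posterior(["turn city-ward"] * 4))   # a body of tactics: ~0.998 vs ~0.002
```

Run as written, one city-ward turn yields roughly 82% confidence that the driver wants to head into the city, while four consistent turns push that past 99%; flip the likelihoods, as a disoriented driver would, and the identical behavior becomes evidence for the opposite desire.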
1. For instance, a decision to turn city-ward at an intersection may be taken to indicate that the driver desires to head into the city. It may alternatively be a result of both the driver’s desire to leave the city and a mistaken sense of direction: his model of the world as confounder.