tis.so
April 2, 2022

Cat couplings are a way to construct and reinforce types

by Hazard

I love when I realize someone else has done all the leg work for an important aspect of my worldview that I haven’t yet had the time to dig into. I recently had the pleasure of realizing how John Nerst’s Cat Couplings post performs the assist for my Type as social coordination mech post.

From me:

Types are more than patterns. Most patterns we observe never get reified with a phrase or a handle, and most patterns that we individually reify never get ratified by a social network. A Type is a pattern that a group can all recognize and sync up their behaviors on. It creates an object of discourse. It’s an entity that people can have Takes on. It’s A Thing.

I go at the “purpose” or function of types. I also assert that types are decision rules, “If [recognizable features from starter pack meme] THEN [judge in XYZ way]”, a notion I flesh out more in Arguing Definitions to Argue Decisions. But how do these types-which-are-decision-rules come to be? There’s defs going to be a multiplicity of ways; I mean, starter pack memes and bingo cards are themselves explicit social tech for making types. A minimal sketch of the decision-rule framing follows below.
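
To make the decision-rule framing concrete, here is a toy sketch of a Type as an If/Then rule. Everything in it (the features, the judgment, the threshold) is invented for illustration, not taken from either post:

```python
# A Type as a decision rule: IF [recognizable features from a starter pack
# meme] THEN [judge in XYZ way]. Features and judgments are invented.

def techbro_type(observed_features):
    """Fires when enough starter-pack features co-occur; returns a judgment."""
    starter_pack = {"patagonia vest", "standing desk", "says 'disrupt'"}
    if len(observed_features & starter_pack) >= 2:
        return "judge as: overconfident, probably pitching something"
    return None  # pattern unrecognized: no Type invoked, nothing to sync on

print(techbro_type({"patagonia vest", "says 'disrupt'", "flat white"}))
```

The part the code can’t show is the social step: a whole network ratifying the same rule, so that everyone’s judgments sync up.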

Nerst pinpoints a very specific and very common way that casual language is used to construct and reinforce types.

First, a definition:

A cat coupling is a kind of phrasing where it’s unclear whether an attribute is meant as justifiably picking out a subset, or unjustifiably describing the whole, and as a result strengthens the connection between the concept and the attribute.

It’s easier to explain with some examples. The one that made me take notice in the first place was this statement: “Pessimism has its downsides, but is still preferable to naive optimism.”

It was leveled at me during an argument about progress in society and it got me thinking. What does “naive optimism” refer to, exactly? Is it only picking out the optimism that’s naive — i.e. in this context mistaken — and saying pessimism is better than that? Well, true, but that’s not much of an argument if we don’t know how much of optimism is mistaken.

Is it saying that all optimism is naive simply by virtue of being optimism, and therefore that pessimism is more correct? No, that just assumes its own conclusion.

His post is excellent and has many more examples if you feel like you wouldn’t be able to spot a cat coupling in the wild. Later, he explicitly connects Cat Couplings to the reinforcement of Types!

Cat couplings make much more sense and aren’t weirdly ambiguous at all if we understand them in terms of types and not sets. They’re not making logically intelligible claims about physical reality, they’re evoking, constructing and re-articulating types. “Naive optimism” is a type, a concept, a piece of mental machinery, and, if you will, a social construct. So are “rich bosses” and “unsavory foreigners” as well as neurotic introverts and contrarian twats. “Doctors who keep up with the latest science” and “venture capitalists who hollow out our public systems” are arguably types as well, if less succinctly described, and some certainly want to re-articulate and reinforce them.

That is all. Go forth, and may a great multitude of blogs eventually converge on the truth!

typification decision rules knowledge logistics John Nerst erisology

April 1, 2022

Everything bottlenecks with appearances

by Suspended Reason

The basic idea behind opticracy (or “opticratics”) is that humans must—necessarily, inevitably, and unavoidably—proxy for reality with appearances. Metaphysically, this is as banal as it gets. The brain knows only its sensory inputs. But by extension, opticracy points out that, in a non-trivial way, all our decisions, judgments, assessments, and theories rest on appearances. Because of this, appearances are the thing that “really matters” in social life (which includes professional, institutional, romantic, etc. life). We are hired on the basis of optics; we make friends through optics; we are chosen as reproductive mates through optics. Reality matters, in social games, only insofar as it makes certain appearances more robust to interrogation. Nothing more, nothing less. Truth is regularly suspected on the basis of not “looking right”; falsehood is regularly accepted because it does.

One way I’ve illustrated this in the past is to say: imagine if an employer required that you look productive in order to stay employed. (Easy to imagine.) For a while, actually being productive is the best way to look productive, and most employees are actually productive in turn. But something strange happens with the oversight system, and pretty soon, actually being productive has little to no effect on whether an employee appears productive. What happens to employee behavior? Do they keep being Actually Productive, at penalty, for the sake of their employer’s bottom line? (Or some internalized sense of “integrity”?) Do they switch to merely appearing, because it’s what’s rewarded? It seems clear to me the latter is far more likely, especially given that any “high-integrity” holdouts will be systematically fired and replaced by people who appear. In other words, whatever incentives don’t take care of, selection will.
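
A toy simulation of that incentives-plus-selection dynamic, with all numbers invented: retention is judged on optics alone, and replacement hires are drawn from the survivor pool.

```python
import random

# Toy model: employees either optimize for being productive or for appearing
# productive. Oversight has broken, so only appearance affects retention.

random.seed(0)
employees = [{"strategy": random.choice(["be", "appear"])} for _ in range(100)]

for year in range(10):
    survivors = []
    for e in employees:
        # "appear" types always look productive; "be" types only by luck.
        looks_productive = 1.0 if e["strategy"] == "appear" else random.random()
        if looks_productive > 0.5:   # retention is judged on optics alone
            survivors.append(e)
    # The fired are replaced by copies of random survivors: selection.
    employees = survivors + [dict(random.choice(survivors))
                             for _ in range(100 - len(survivors))]

print(sum(e["strategy"] == "appear" for e in employees), "of 100 merely appear")
```

With these numbers the population converges on appearance-optimizers within a few simulated years; the point is the shape of the dynamic, not the rates.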

(The only exception I can think of involves areas like charitable and volunteer work, where individuals are “in it for” the organizational mission, and not a paycheck. But even here, what are individuals optimizing for? The appearance of impact, not impact itself.)

But another tack is to cite Karen Horney’s wonderful Neurosis & Human Growth, where she argues that there are two basic routes available to the narcissist seeking to validate his ego-ideal. He can actually strive towards accomplishment, as a means of securing validation. Or he can enter a world of delusion à la Rupert Pupkin in The King of Comedy. What is crucial is that these are, in some sense, fungible solutions. Both are means of securing the same end. That end is self-image, a proxy for real status (because all we have are appearances…). The deluded narcissist wireheads his way to the same pleasures the accomplished man earns “properly.” But wireheading is a teleological misnomer, and our sense of propriety is strictly social.

This explains part of why I emphasize, in ACiM exegesis, that communication is best described as the manipulation of recipient (“reader”) behavior by the communicator (“writer”). For one, natural selection means that only communicative tendencies which cash out in real effects are selected for, and for communication to “cash out” it must do so via an agent whose behavior changes through the communication. But perhaps more importantly, in the cybernetic training process by which we all culturally learn how to communicate, we can only receive feedback through others’ expressions. That is, even if we think we are attempting to create a cognitive effect (e.g. remorse) in a recipient—and not a behavioral effect—we only know if we are successful by the changes in the recipient’s outward expression. Moreover, the techniques we have learned to use, to create these cognitive effects, have been learned through the feedback of previous interactants’ outward expressions.

opticracy selection games Karen Horney ACiM

March 31, 2022

Coordinating with cups

by Crispy Chicken

“Language” is the solution to a coordination problem.

  1. You put a mug and a glass in front of me, and want to know which I want. I can point to the one I want.1
  2. You’re going to the store to buy me a cup for my birthday. You want to know if I’d like a mug or a glass; you’ll pick something nice out for me. We have abstract classes of things to make this possible. Therefore, I say: “I want a mug.”
  3. You’re going to the store to buy me something for my birthday and you want to know what kind of thing I’d want—I want a cup, of course, but I don’t care if it’s a mug or glass or even if there’s a novelty cup that’s hard to describe. I need an abstract class that I know implies the purpose and basic usability parameters, and since it’s a common niche, we’ve developed one. Therefore, I say: “I want a cup.”
  4. You’re going to the store to buy me something for my birthday and you want to know what would make me happy. I don’t actually know, but I know some abstract properties of it, like: I want it to be uniquely related to my relationship with you. I can explain this to you, because the ideas of “uniqueness” and a “relationship” allow me to describe schemas that you can fill in with your knowledge of our shared narrative. Therefore, I say: “I dunno, but I want something that’s very ‘us’.” You pick me up a cup, because we’re always drinking hot cocoa together, 10/10 great friend.
  5. I have a mug and glass and I want to know which one you want. I want to present the question to you, such that you know to point to one of them. Therefore, I say “Point to the one you want.” You can resolve “one” as referring to the two cups as a class, because we’ve been socialized into a shared notion of saliency that makes the class of objects I’m considering clear, once I’ve specified that I want you to specify a discrete object with the term “one” and made it clear it’s close by in your field of vision with “point”. You point to the mug.
  6. I have a mug and glass and I want to know which one you want. I want to present the question to you, such that you know to point to one of them. Therefore, I say “Point to the one you want.” You can resolve “one” as referring to the two cups, because we’ve been socialized into a shared notion of saliency that makes the class of objects I’m considering clear, once I’ve specified that I want you to specify a discrete object with the term “one” and made it clear it’s close by in your field of vision with “point”. You point to me, playing off the “contract” that I wasn’t in the group of objects I was trying to get you to select from. This is a “joke”: it breaks the expected contract to defamiliarize people into accepting information that’s outside the expectations of normal interaction. We fall in love, start a family, and our hereditary empire conquers the world.
  7. I want to do something for your birthday, but I don’t want to let you in on the fact that I’m poking around for what you’d like. I ask you what you think of some novelty mugs I found on Instagram. You are capable of giving preferences without a specific use case in mind. I may not even know whether you’re thinking of the mug as more decorative or functional—but since you’re capable of communicating abstracted preferences I can hide information from you, for your own good, because you love surprises.
  8. I work at a mug factory and my job is to figure out if a mug is defective as it goes by on the assembly line. My employer wants to make sure that I pay special attention to the grooves on the bottom of the mugs, which are meant to allow water to drain in the dishwasher. I’ve never heard of a “groove” except in music, but I’m familiar enough with language to know that words can have multiple unconnected senses, so instead of detailing my clearly incoherent interpretation of “groove” or explaining it, I’m able to simply ask “What’s a groove?” and my boss picks up a mug and shows me, by pointing to the salient part.


  9. I’m a mug designer and I want to make the top of the mug a certain color. In order to communicate with the draftsmen who will communicate with the manufacturers, I find myself having to point a lot, so I repurpose the word “lip”, as if the opening of the mug were a mouth, to mean “the top region of a mug’s opening” in metaphor to the human mouth and lips.
  10. New type of guy drops in 3000 BC: guy who doesn’t want to burn his hand holding the water he’s boiled to sanitize it. He adds a handle to a normal metal drinking cup. People ask why he made his cup that way when they come to his house. He says “I was tired of burning my hands when I drink hot water.” Everyone has enough experience of the physics of heat to understand that holding a hot cup is painful and dangerous, and they can simulate the heat not transferring to the rest of the mug, because this brand new type of guy was capable of pointing them to the intention through language.

Language solves a coordination problem: “How do I explain something with enough detail to cooperate in a given situation without explaining the entire universe?”
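
The “abstract classes” move in examples 2 and 3 maps neatly onto ordinary type hierarchies. A toy Python sketch, with all names invented, of pitching a request at just the right level of abstraction:

```python
# Toy version of the "abstract classes" move in examples 2 and 3: the request
# is pitched at whatever level of the hierarchy carries just enough detail.

class Cup:               # "I want a cup": purpose and usability, nothing more
    purpose = "drinking"

class Mug(Cup):          # "I want a mug": narrower, when the subtype matters
    handle = True

class Glass(Cup):
    handle = False

def buy_gift(request):
    # The shopper may satisfy the request with *any* instance of the class,
    # which is what lets the speaker underspecify safely.
    candidates = [Glass(), Mug()]
    return next(c for c in candidates if isinstance(c, request))

print(type(buy_gift(Cup)).__name__)   # Glass: any cup will do
print(type(buy_gift(Mug)).__name__)   # Mug: only a mug will do
```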


  1. Wait, how do you understand pointing? Turns out “shared attention” might be a biological primitive that humans have more than most other creatures—see Michael Tomasello’s work for more.↩︎

reference pointing coordination

March 30, 2022

Gaming & entailment

by Suspended Reason

In the past, I’ve argued both (1) that the meaning of information is best described as its pragmatic entailment for the receiver, and (2) that all communication is manipulation. It isn’t obvious how these two theses fit together, and I want to clarify the connection here.

In the frame I’d like to advance, agents are constantly caught, by virtue of having goals and preferences and desires (all roughly synonymous for present purposes), in games. A game is simply any interaction between agent and environment, where the agent’s goals or preferences transform the environment into sets of obstacles and affordances. The environment is composed of other agents and of non-agentic forces. Each constantly emits information; the environment as a whole is characterized by both an informational and a physical layer, or an “extrinsic” and an “intrinsic” layer.

The informational layer “exists” insofar as agents (henceforth “players”) have evolved sensory and computational mechanisms for perceiving the physical layer, and making inferences or predictions about that layer. Because agents base their decisions or actions—in other words, premise their future states—on (information-based) inferences, and because any given player’s environment comprises other agents, whose various states and future states have bearing on the given player’s ability to realize goals, this player (and all other players) will find it in his interest to manipulate the information he emits. Through the production and alteration of information, he manipulates agentic aspects of his environment in a way analogous to his manipulation of non-agentic aspects (e.g. lifting a rock) by physical force.


Meaning is the entailment or “So what?” of received information—it is a subjective, relational property of how information relates to an agent’s pursuit of a goal. The meaning of hoof tracks and blood splatters leading northwest, to a hunter tracking them, is that he must head northwest, where the deer has limped off. The informational layer, scoped to an agent, is always pragmatic insofar as it is tailored by the brain to emphasize goal relevancies, and to de-emphasize non-relevancies. In humans, what we consider the “meaning” of an aspect of the informational layer is its implications for our goal-directed projects. In playing our games, or advancing our projects, we rely inevitably on inference on the informational layer.

Manipulation is the “So what?” of produced information. It is the desired alteration of other agents’ behavior, which is often very approximate and more directional than precise, hence the metaphor of “steering.” The manipulation occurs not because the receiving agents are forced to act in a new way, but because they choose to act a certain way, on account of the entailment (meaning) of the information received. Thus, I may communicate with the hunter by falsifying tracks, in order to lead him in the wrong direction, away from the wounded deer, so that I may seize the prize for myself. I manipulate the hunter by producing information which entails a course of action that is amenable to my own projects, and which he believes to be amenable to his own. Often, when projects (that is to say, goals) are aligned, it is the case that the information I produce, while still technically manipulative, is advantageous to his goals: perhaps he is a friend, who will share the venison with me, and as I have spotted the bloody tracks while walking home, I point him in the true direction so that he will apprehend the deer.

How does this “so what” entailment work for the receiving party? More or less, the information produced alters his calculation of the probabilistic outcomes of different paths he might take—in a word, inference. “Delta” will therefore be a key concept—the meaning of the received signal is the delta it entails for action, and manipulation is the attempt at enacting such a delta in the receiver.
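
A minimal sketch of that delta, using the hunter example; the priors and likelihoods are invented. The signal’s meaning is the change it induces in the receiver’s inference, and thereby in his action; manipulation is producing a signal to engineer that change.

```python
# Meaning as delta: the received signal shifts the hunter's probabilistic
# inference, and thereby his action. All numbers invented for illustration.

def best_heading(p_nw):
    """The hunter heads wherever the deer is more likely to be."""
    return "northwest" if p_nw > 0.5 else "southeast"

prior = 0.4                  # before the signal: deer slightly less likely NW
before = best_heading(prior)

# Signal: tracks and blood splatters lead northwest. Bayes update, with
# invented likelihoods: P(signal | NW) = 0.9, P(signal | SE) = 0.2.
posterior = 0.9 * prior / (0.9 * prior + 0.2 * (1 - prior))
after = best_heading(posterior)

print(f"belief {prior:.2f} -> {posterior:.2f}; action: {before} -> {after}")
# A rival who falsifies tracks is engineering this same delta: producing
# information whose entailment steers the hunter down the wrong path.
```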

games strategic interaction ACiM pragmatism meaning extrinsic-intrinsic

March 29, 2022

Catwoman leaving Batman for Jar Jar Binks because of his implied proficiency at oral sex as a cautionary tale for artificial general intelligence

by Collin Lysford

Machine learning techniques are getting a lot better at generating reams of plausible-sounding text. Since human beings frequently use text to communicate, it’s easy to interpret this as “we’re getting much closer to artificial intelligence capable of human communication.” But human communication relies on a lot more than textual back-and-forth, and I think it’s useful sometimes to flip your thinking the other way around. Instead of focusing on what AI has done that seems very human, focus on what humans do that seems very hard for the AI to replicate.

To that end, I want to share a tweet:

This tweet had over 8000 likes at the time of writing. It was communicating a message, and that message was clearly received and enjoyed by thousands of people. But it was a message for a very specific time and shared context, so I need to start by explaining that. On June 14th, 2021, a Variety interview went viral on Twitter because of this quote:

Justin Halpern: A perfect example of that is in this third season of ‘Harley’ [when] we had a moment where Batman was going down on Catwoman. And DC was like, “You can’t do that. You absolutely cannot do that.” They’re like, “Heroes don’t do that.” So, we said, “Are you saying heroes are just selfish lovers?” They were like, “No, it’s that we sell consumer toys for heroes. It’s hard to sell a toy if Batman is also going down on someone.”

This KnowYourMeme page has more context. Knowing that backstory might help you get the image. If not, then here’s a belabored explanation of the joke:

Catwoman is walking away hand in hand with Jar Jar Binks. Catwoman is waving farewell with her other hand. Jar Jar has an extremely smug expression. Batman is standing alone, looking distraught with his hands out in an incredulous position. This is funny because Jar Jar Binks has an extremely long tongue. The implication of this picture is that Catwoman is dissatisfied by Batman’s unwillingness to perform oral sex, and is going to go out with Jar Jar Binks instead so she can receive superlative oral sex instead of none.

There’s a reason this image went viral while my explanation wasn’t actually that funny. The moment of recognition is the joke, and the small touches like Jar Jar’s sassy gait add a lot of color that the text doesn’t. All the same, this is communication. If an artificial general intelligence is going to reach superhuman levels, then it should be able to “get the joke” in the way that thousands of humans did. But when you look at what it takes to get this joke, it’s a pretty tall order.

Prerequisite 1: Extremely Recent Context

Many machine learning models work by having a team of workers hand-annotate a large volume of “training data”, then using that to train the model to look at new things. This works when you’re dealing with objects that are reasonably stable throughout time. Predicting success on Twitter, with a new main character every day, is a different matter entirely. A classifier guessing whether this tweet would dramatically succeed or get no likes should guess “no likes” for June 13th, 2021, and every single prior day, and then dramatically change its mind after seeing the sudden flood of Batman oral sex content.


So, your model has to be updating on a timescale of ideally minutes, and hours at absolute most. Any learning loop slower than that is unable to handle human social communication.
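
For scale, here is roughly what such a fast loop looks like mechanically, sketched with scikit-learn’s SGDClassifier.partial_fit; the tweets and labels are invented. Note that nothing in the loop addresses the harder problem of anticipating what the new context means before the outcomes roll in.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Sketch of an online loop that can re-update every few minutes. Tweets and
# labels are invented; the model only ever sees text, which is the point.

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, no fit needed
model = SGDClassifier()
CLASSES = ["no_likes", "viral"]

def update(tweets, labels):
    """Called on each fresh minibatch of (tweet, observed outcome) pairs."""
    model.partial_fit(vectorizer.transform(tweets), labels, classes=CLASSES)

# June 13th, 2021: Batman/Catwoman jokes go nowhere.
update(["catwoman leaves batman for jar jar binks"], ["no_likes"])
# June 14th, minutes after the Variety quote spreads: same topic, new outcome.
update(["batman catwoman oral sex jar jar binks"], ["viral"])

print(model.predict(vectorizer.transform(["catwoman jar jar binks tongue"])))
```

Even granting the infrastructure, the loop can only chase outcomes it has already observed; it re-labels the flood, it doesn’t see it coming.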

Prerequisite 2: Recognition of Shared Fictional Information

Batman, Catwoman, and Jar Jar Binks aren’t real. But Batman and Catwoman aren’t real together, in the same fictional universe. Because there are a lot of different Batman stories, it’s not precisely a fact that “Batman and Catwoman are dating”. But clearly, a picture setting up the narrative “Catwoman expects oral sex from Batman and isn’t getting it” is reasonable in a way that wouldn’t be true if Batman were replaced with Santa Claus. So, your model has to be plugged into the shared fictional canon, and know that “romantic relationships where someone could expect oral sex from their partner” is a classifier that can be fairly applied to Catwoman and Batman. Any model that doesn’t incorporate shared fictional canon is unable to handle human social communication.

Prerequisite 3: Recognition of Implied Physicality from Fiction

When I google “Jar Jar Binks tongue length”, my top result is this post from ScreenRant:

Another distinctive trait of the Gungans is their long, agile tongues that seem to have minds of their own. Given that Gungans have an insatiable appetite for everything from shellfish and slimy bugs to a dessert that takes four people to eat, it’s a good thing that they have one-meter-long tongues (over 3 feet) to help grab any and all food in their vicinity.

A neural net trained on a lot of articles from the internet might get lucky and know that Jar Jar Binks has a long tongue because it ingested this article. But I’m certain that almost none of the thousands of people who liked this tweet have read this article. Most people who know that Jar Jar Binks has a long tongue know it because they saw it in the movie. They wouldn’t come up with the “over 3 feet” figure themselves, but would happily tell you “Jar Jar’s tongue? It’s real long.”

Your model has to have “seen” Star Wars Episode 1: The Phantom Menace to know this - you can’t count on someone at ScreenRant encoding everything that ever happens in a movie into a textual form. It also has to know the connection between tongues and oral sex, but interestingly, not in a strictly simulationist way. The claim here isn’t literally that a 3-foot tongue would make you twelve times better at oral sex than the usual 3-inch human tongue in some machine-measurable way that could be proven via simulation. It’s more of a loosey-goosey “You use tongues for oral sex, Jar Jar Binks has a superlative tongue, and so a claim that Jar Jar Binks gives superlative oral sex is reasonably in-bounds for the purposes of telling a joke.” Any model that can’t watch a movie and map what it sees to a concept of physical reality - but not a strict simulation! - is unable to handle human social communication.

Prerequisite 4: Emotional And Positional Interpretations of Inanimate Objects

Let me repost my over-explaining of the joke, but this time with anything emotional or positional removed.

Catwoman, Batman, and Jar Jar Binks are in a picture together. This is funny because Jar Jar Binks has an extremely long tongue. The implication of this picture is that Catwoman is dissatisfied by Batman’s unwillingness to perform oral sex, and is going to go out with Jar Jar Binks instead so she can receive superlative oral sex instead of none.

Do you think the first sentence implies the rest? Of course not. You have to be able to see that Catwoman and Jar Jar Binks are standing together, and that Batman is alone. It’s funny because Catwoman looks flippant and Jar Jar looks smug and Batman looks distraught. Humans constantly write little narratives to themselves when looking at inanimate objects, so a model that’s able to reach superhuman levels of understanding has to be able to write those narratives as well. A model that can’t write social stories based on the position and implied emotion of inanimate objects is unable to handle human social communication.

Words are cool and I like writing with them. Machine learning is getting better at using words, and good for the machines. But even if we figure out how to have a pile of statistical correlations look at Shakespeare so hard that it can write more Shakespeare, that’s only making progress for one particular sub-case of one particular facet of the vast and wild world of human social communication. It’s not making any progress whatsoever towards solving any of the above problems. If you’re imagining societal or existential risk from an AI with superhuman powers of communication, you’re imagining that all of these problems are going to get solved. Otherwise, you’ve just got a guy that can write a good tweet sometimes but has no real theory of mind for which tweets will do well in advance - and buddy, we’ve got those guys already.

machine learning context text generation AI