
Will my uploaded brain whistle?

by Collin Lysford

Compare knowing and saying:

how many metres high Mont Blanc is

how the word “game” is used

how a clarinet sounds.

Someone who is surprised that one can know something and not be able to say it is perhaps thinking of a case like the first. Certainly not of one like the third.

- Ludwig Wittgenstein, Philosophical Investigations, proposition 78

I kind of, sort of, know how to whistle. This is a big improvement from most of my life, when I couldn’t whistle at all. But I constantly felt like whistling, so I would always force out a tuneless bunch of air. Sometime over the last few years, maybe as a result of starting voice lessons, my whistling went from non-existent to merely remedial. If I whistle a tune we both know, you’ll probably be able to guess it. But I certainly don’t place the notes nearly as accurately as with my voice. There’s a huge degree of uncertainty about what kind of sound will come out when I start. I hear the noise and try to make the rest of the whistling work out based on how the first note sounds.

So somewhere in the tangle of my brain is some information about how to make my lips and lungs cooperate to make a noise and adjust it. But that doesn’t mean the “sound” of whistling is encoded anywhere. I’m distributing some of that cognition to my lips themselves. I don’t “think” a sound that is imperfectly outsourced to my body; I “think” a process that cooperates with my body to produce a sound.

This difference becomes quite material when I imagine my life as a brain on silicon. Many people believe it’s inevitable we’ll be uploading our brains into computers - the information encoded in a human brain is on a scale tractable to potential storage capabilities, so what’s stopping us from putting that information on a computer? And some people go even further and think of this as an “upper bound” on the timescale for when we’ll have artificial general intelligence. Even if we can’t generate one from scratch, then surely we can start with a human brain on a computer as a general intelligence and improve from there. But that information encoded in my brain isn’t the abstract idea of whistling; it’s a partnership with my lips. So what does it mean to say that it’s “uploaded”?

One answer is that the brain will live inside a simulation that wires it up to virtual lips so well that I won’t be able to tell the difference. But if you need that sort of simulation for the brain to work, you no longer have the guarantee that you can easily encode everything you need. The entirety of the physical world that touches a brain contains a lot more information than the brain itself does. And the required complexity of the simulation means you’re front-loading a lot of the difficulty. If brain emulation is a stepping stone to AGI, but it requires PerfectlySimulateTheNaturalWorld.exe to make the emulated brain work, then why muck around with artificial general intelligence at all when you could just use PerfectlySimulateTheNaturalWorld.exe to answer all your questions?

So the question is: how much less than perfect can the simulation attached to your brain be and still be useful? Your brain is a tool that’s made to interface with an enormous amount of physical context, but how much of that context can be omitted and still have it work? A chief hope for emulated intelligence is being able to speak textually, and people often have a subjective feeling of being disembodied while speaking textually, so there tends to be a lot of unconscious hope that the “words part” might be cleanly separable from the rest. But is there a reason for that belief?

To be clear, this isn’t a rhetorical question on my part - I genuinely don’t know how you’d even start determining the separability of various brain functions. It all seems pretty goopy and connected to me. But as we sit in the fog of uncertainty, I do think we focus too much on the ideal case of “just split out the thinky-bits from the biology-bits”, when of course our mood influences our thoughts, many people think better on walks, and we only know what we’ll sound like when we start pursing our lips and whistling. I have this horrible image of the world’s first emulated brain, running fine on a hard drive but completely unable to talk to any interfaces we create for it. Eyes, hearing, speech - all of it relying on hidden biological context we didn’t manage to fake well enough. All we can do is watch the brain waves on a computer screen, wondering if anything is “awake” in there, wondering if it’s feeling pain, unable to ask it anything. Before we get too excited about uploading brains, we should make sure we have a well-founded answer for why they won’t end up like this.