The Mongolian meta

by Collin Lysford

Geoguessr is a game about being plonked onto a random Google Street View location and trying to find, without Googling, where in the world you are. Here’s a 173-page document about finding where you are in Mongolia.

What I find fascinating about this is that some of these are genuine truths about slow-changing natural phenomena: biomes, weather, landscapes. And other things are about potentially mutable, but practically stable human artifacts: street signs and the like. But then there’s also tons of stuff about the roof rack of the Google car and camera blur that are functions of these exact photographs. Those correlations will all fail at once whenever Google happens to update its Mongolia pictures.

This is probably what it feels like to be a large language model. All these details are predictive so they’ll all get used. The model is disembodied and can’t tell that some of these details will be more resilient to future change than others. (The humans coming up with the meta do have a grammar of change, but without the ability to take new street view photos themselves, it’s practically irrelevant for the purpose of developing a meta that works right now.) It receives all details as static, take-it-or-leave-it data points; it can’t try tweaking each aspect of the picture to learn that the roof rack is easier to change than the sky. Nor indeed does it need to! Insulated from the ruthless scythe of natural selection, it only needs to give the correct answers for what there is now. It does not matter that it’s brittle and easy to fool, because no one is trying to fool it.

But what if the model did have a predator? For fun, read this document imagining you’re a member of the Google Street View team trying to make everyone’s Mongolia games as low-scoring as possible. You know the meta, so you know what you’re up against. What would you do? Well, obviously “take new photographs”, but more than that. There was a spare tire when they were mapping the west, but not the east. So obviously you’d want to include the spare tire while mapping the east and not the west; that one data point alone will dramatically damage the score of someone following the meta. And there’s all sorts of stuff like this: the tent, the sunset, the camera blur. You can hand-pick all these easy-to-change details to give exactly the wrong impression. Play around with this a bit and you’ll get an intuition of why adversarial counter-models are so devastatingly effective against most predictive models today.