A number of interesting possibilities have arisen recently from the interaction between self-driving cars and the police.
In principle, it seems simple: self-driving cars should just be made to follow the law. In practice, of course, this is laughably reductive, both because the cars are imperfect and because laws are vague. Further questions arise from the mechanisms currently in place to enforce the law.
To take a simple case, consider the question of how fast cars should drive. In principle, there are posted speed limits everywhere, and driving faster than them is technically illegal. In practice, however, on many streets the majority of cars regularly drive faster than the posted limits, and police will almost never stop a car solely for slightly exceeding them (though this could perhaps be used as a pretext). If you go fast enough for long enough, you are extremely likely to eventually have your car pulled over, and possibly impounded, but there is little that anyone can currently do to directly prevent people from driving above a certain speed. Rather, behavior is shaped by the threat of enforcement and collective norms.
Forbes sees this as a case where there is a clear law, and it should simply be broken: “There are cases where robocars could and should violate the law, because the law is wrong. For example, they should move at the speed of traffic, even if that’s above the speed limit”.1 This leaves open how far above the speed limit a car should go, and seems to rely on many human drivers collectively determining the appropriate speed.
A more comprehensive fix might be to just set an actual speed limit (presumably higher than current posted limits) and enforce it programmatically in self-driving cars, such that all companies know what the actual legal limit is, and cars are programmed not to exceed it. With enough self-driving cars on the road, they might begin to play a pace-setter role, such that humans would naturally have to adapt to the speed of surrounding traffic. On the other hand, it’s plausible that self-driving cars might at some point be able to drive safely at a higher speed than typical humans can, which might eventually ratchet things up to the point that humans are unable to safely keep up.
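In software terms, the enforcement mechanism described above could be as simple as a hard cap between the planner and the vehicle controls. The sketch below is purely illustrative — the constant, function name, and interface are hypothetical, not drawn from any real self-driving stack:

```python
# Hypothetical programmatic speed-limit enforcement: whatever speed the
# planning layer requests, the car never commands more than the legal cap.

LEGAL_LIMIT_KPH = 130.0  # the hypothetical statutory cap, set by regulation

def clamp_target_speed(requested_kph: float) -> float:
    """Cap the planner's requested speed at the legal limit.

    Negative requests (invalid input) are floored at zero.
    """
    return max(0.0, min(requested_kph, LEGAL_LIMIT_KPH))
```

The point of such a design is that the limit is a shared, published constant rather than a norm inferred from traffic, so every manufacturer enforces the same ceiling regardless of what the planner asks for.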
As things currently stand, it’s not hard to imagine a self-driving car exceeding what is seen as a practically safe speed (perhaps through a bug, or the car being hacked), such that police would want to stop it. The logical way to do this might be to have the company shut the car down remotely or override the automation, but perhaps that would be impossible in the case of a hack. The question then becomes: what powers do police actually have to stop such a car?
Presumably at some point they could shoot the tires out, or otherwise use the violence of the law to bring about the desired effect, but would this require a series of escalating steps? Would they first have to attempt to pull the car over? And if the car did then pull over without explicit violence, what powers would the police have? Police in many places do seem to have broad discretionary power to stop cars as they choose, but impounding a vehicle might require a certain level of violation.
Obviously without a human operator, certain enforcement mechanisms are simply irrelevant. The law is not likely to be written to allow police to give a company a ticket in the same way that one could be given to a human, nor would we expect that to be effective.
In other words, unless we see a major surge in DIY self-driving car hobbyists (which will presumably be illegal without some sort of extreme licensing requirements), the future of driving seems likely to evolve into a highly cooperative dynamic, in which laws are written to accommodate self-driving cars, cars are programmed to accommodate these laws, and major car companies will increasingly become more like arms of the state.
There is more to be explored here, but I’ll leave it for a future post.