COMMENT
Some years ago, after a road-rage incident that left me seething, I made the conscious decision that if I ever had to choose between collecting a cyclist on the left and moving across to crunch the panels of some idiot hanging around in my blind spot on an otherwise empty section of road – I would definitely choose the latter.
We make these sorts of decisions all the time. Unlike the one in that anecdote, most of them are made on the fly. Rally drivers have been known to joke that if they're losing control of a car, they'll aim for the tree with the left side of the car – the co-driver becomes a collateral crumple zone. As black as that humour is, it does reflect how we think when we're driving.
Unlike Asimov's robots, whose third law puts self-preservation last, our own survival instinct is paramount.
Pundits around the world are currently posing questions about how an autonomous car would react in an event calling for judgement about the relative value to society of other road users. Such an event might go like this: coming around a blind bend at speed, you are left with three choices – plough into the side of a school bus full of kids, mow down Granny on the other side of the road, or turn the car through a fence and over a steep embankment.
Most drivers will exercise judgement based on a cascading series of scenarios, ranging from flat-spotted tyres to some kids injured – but hopefully not killed. Granny should be safe, even in the worst-case scenario. Whether you'll choose the embankment – and put your own life at risk – will depend as much as anything on your experience as a gambler and your confidence that your car will protect you from blunt-force trauma.
All this flashes through our minds in a fraction of a second.
It's worth mentioning at this point that an autonomous car (and an autonomous bus) wouldn't have put you in this situation in the first place.
Every time we're faced with a life-or-death decision, our minds narrow down the coin-tossing to the 'least worst' outcome – and aim for that. Are we expecting too much if we ask an on-board computing system to do any more than that? Particularly given that the probability of such an event arising with 'George' in control is much reduced.
We expect perfection from a system that could be faced with imperfect operating parameters. That system, however, will reduce the level of imperfection to a barely calculable risk.
Death through misadventure is never acceptable, and we should be able to minimise its likelihood in our modern, technology-savvy society. But there's a danger that, in fixating on the shortcomings of future systems, we'll blind ourselves to their benefits.
To paraphrase Voltaire, the perfect is the enemy of the good. That remains true today, and certainly in respect of autonomous cars. Progress, imperfect as it is, inevitably leads to better outcomes down the track. We can't wait for OH&S committees, bureaucrats and legal inertia to sort out all the possible pitfalls of this new technology. It needs to start happening now.
To the credit of many – politicians, toll-road operators and the car companies – there's a groundswell of support for self-driving cars, which will take much of the risk out of private transport.
Arguments will continue to ebb and flow, not least among driving enthusiasts and humanists, but really… ask yourself this: how much enthusiasm do you feel for driving in a daily morass of shiny metal boxes, all funnelled through the same overtaxed road system?
Let's just leave it to our silicon overlords, I say…
… but then, I can afford to say that – I won't be around to see it.