THE first person has been killed by their driverless car. Super bad news. Not just for the family but for the cause of driverless cars in general.
An American man was using the Tesla Autopilot feature on a highway in Florida when his car failed to see a white truck crossing the highway against a sky described as “brightly lit”. The car did not react and ploughed under the truck, tearing off its roof. The car spun across the road and the man, Joshua Brown, 40, died at the scene.
What have we learned from this? Two things.
First: driverless cars will crash. That was obvious: there is no computer system that doesn’t fail and no car that doesn’t crash. Anyone clinging to the idea of a computer-controlled car that never crashes was trapped in a fantasy. But now we have the actual horror of a crash to deal with.
Second: when a Tesla on autopilot crashes, the story whips across the internet like a blaze through dry grass.
This is not good news for people who want driverless cars to succeed. And that should probably be all of us — they are likely to be much safer on average, and could save a lot of time, reduce traffic, etc.
Just a few more high profile crashes could mean driverless vehicles are stopped in their tracks.
THE GHOST IN THE MACHINE
Human society is a delicate dance of science and politics. Any number of reasons — good or bad — can keep an invention on the sidelines. Humans control inventions, not the other way round.
In the case of driverless cars, that matters a lot. Humans are super-duper bad at judging risk because of two big screw-ups in our brain.
1. We do relatively unsafe things if we think we are in control.
One study shows people go faster in simulated driving if you tell them they are the driver than if you tell them they are the passenger.
For another example, horse riding kills eight people each year in Australia and sends 4000 to hospital. But because we hold the reins, we feel that’s a risk worth taking.
Sharks kill just three people a year, but because we can’t control them we fear them far more. (Notably, sharks also make the news way more often when they are the cause of death.)
People react weirdly to risks they can’t control. There’s a reason sci-fi dystopias so often feature futures where we put our lives into the hands of robots — it makes people very uncomfortable.
2. We judge things as more likely if we can visualise them.
This is called the availability bias. If we’ve seen a video of a Tesla smash into something without stopping, we are more likely to be petrified of them.
All this means consumers are likely to distrust self-driving cars more than they did before. But the bigger risk to the industry is regulatory change.
THE YOUTUBE PROBLEM
The fact the latest Tesla crash is not on video — yet — is surprising. Dash-cams are proliferating and YouTube is full of Tesla videos of various kinds.
The man killed in the Tesla crash actually posted a video of his Tesla saving him from a crash in April, just weeks before he died.
The big risk to the future of driverless vehicles is not a video where the owner is hurt. The risk is videos where the driverless car and its occupants are fine.
A dog being killed by a driverless car would be a local outrage. A child being killed by a driverless car would be a national outrage. A child being killed by a fleet of driverless trucks belonging to some sort of faceless transport corporation would be a global outrage.
In that environment, expect a reaction. It won’t take much for parents to get the government to ban driverless vehicles near schools, for example, even if the driverless cars are statistically safer. (Remember how WA went on a shark cull after recent attacks? People demand action and, even if it makes little sense, politicians deliver.)
The power of images to get policy made on emotion is proven in the live cattle trade, refugee deaths at sea, wars in Asia, etc, etc. Transport policy will be no different.
DON’T BET ON EVERY INVENTION
A future where a good idea is feared and delayed is not ideal. But it is important to predict the future realistically. Driverless cars could end up alongside, say, democracy, the theory of evolution and pollution controls as things we didn’t, as a society, accept straight off.
Seeing the future realistically may not be fun, but it matters.
It matters for car-makers. It shows they don’t just need to be better than the competition to win; they need to be near-perfect.
It matters for investors. You can try betting against the natural conservative streak in human nature, but you need to do so carefully.
It matters for infrastructure makers. People are saying robot cars will take over by 2050, or 2030, or even 2018. What does that mean for what we need to build now? The answer is to take the most excitable estimates with a major grain of salt.
A future of driverless cars is possible, and maybe in our lifetimes. But it will be a slow and painful process to get there.
Jason Murphy is an economist. He publishes the blog Thomas The Think Engine. Follow him on Twitter @jasemurphy.