The biggest risk posed by autonomous cars is not the technology, but the blind faith users will place in it. Sunday’s fatality in Arizona will be the first of many, and expectations should be lowered.

The technology that permits cars to drive themselves is truly a modern miracle. However, it also has many weak points.

Firstly, the system relies on IT, and despite all the mantras we’ve heard about “garbage in, garbage out” and “computers just do what they’re told; it’s the humans programming them that make the errors”, the truth is that sometimes systems fail under extreme conditions. Most computer systems are tested across a very wide temperature range, for example, but even then, no system can be expected to work equally well at -30°C and at +55°C. Factor in programmer error or, as Intel showed, chip designer error, and it’s no wonder we seem to spend much of our lives rebooting laptops and downloading updates.

The second weak point of autonomous cars is the heavy reliance on sensors. Even if the sensors were 100% accurate, they get dirty and lose communication with the systems governing them. If you have ABS on an older car, you’ll know that many warning lights are down to nothing more than dirty sensors and can be rectified with a good clean. The bigger issue, however, is that sensors are a weak point for hackers: anyone wishing to seize control of a vehicle will enter through the sensor array. Sadly, this is inevitable and unavoidable; no sensors, no autonomy.
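To make this concrete, here is a minimal sketch of the kind of plausibility check a sensor-handling layer might apply before trusting a reading. Every name and threshold below is invented for illustration; real systems use far more elaborate filtering.

```python
STALE_AFTER_S = 0.2           # hypothetical: distrust readings older than this
VALID_RANGE_M = (0.1, 120.0)  # hypothetical range limits for the sensor, in metres

def reading_is_trustworthy(distance_m, timestamp_s, now_s):
    """Reject stale or physically implausible sensor readings."""
    if now_s - timestamp_s > STALE_AFTER_S:
        return False  # lost or delayed communication with the sensor
    if not (VALID_RANGE_M[0] <= distance_m <= VALID_RANGE_M[1]):
        return False  # out-of-range value, e.g. a dirty or faulty sensor
    return True
```

The hard part is not the check itself but what the car does when checks like this start failing; a vehicle that distrusts enough of its sensors has no autonomy left to fall back on.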

The third weak point is the environment in which the system operates. Lane departure warnings won’t work if the lane markings are excessively worn, and road signs can’t be read by computers if they are partially obscured by branches or snow. As this is a fairly common state of affairs, many systems will be designed to work with imperfect data, but there are always limits. Even then, the mapping software needs to be updated with real-time road layouts, or your car could get confused by a new junction layout (as anyone trying to use Google Maps to negotiate Lewisham town centre will testify).
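The stale-map problem can be pictured as a simple consistency check between what the on-board map says and what the sensors actually see. The figures below are made up purely to illustrate the idea:

```python
def layout_matches(mapped_exit_count, observed_exit_count):
    """Compare the junction layout on the on-board map with what the sensors report."""
    return mapped_exit_count == observed_exit_count

# A fifth exit has been built since the map was last updated (invented figures):
if not layout_matches(mapped_exit_count=4, observed_exit_count=5):
    print("map out of date: fall back to cautious behaviour and request an update")
```

Detecting the mismatch is the easy half; deciding how the car should drive through an unmapped junction is the hard half.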

The fourth weak point is the driving software. This is really the weakest element, and Sunday’s collision demonstrates how easily a system can misread a situation. I’m sure the system had been well designed to recognise other cars, but even Autonomous Emergency Braking systems of the sort now required by EuroNCAP still struggle to see cyclists and pedestrians, and aren’t even expected to work with motorcycles. Sadly, the victim, Elaine Herzberg, was pushing her bicycle across a poorly lit road and looked neither like a bicycle nor a pedestrian to the system, and I’m sure this will be seen as a factor.
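A crude way to picture that failure is a per-class confidence score with an acceptance threshold. The numbers below are invented, but they show how an object with features of two classes can fail to clear the bar for either:

```python
THRESHOLD = 0.8  # hypothetical: minimum confidence before a label is accepted

def classify(scores):
    """Return the highest-scoring class if it clears the threshold, else None."""
    label, best = max(scores.items(), key=lambda kv: kv[1])
    return label if best >= THRESHOLD else None

# A person pushing a bicycle shows features of both classes,
# so neither score is decisive and the object goes unclassified.
ambiguous = {"pedestrian": 0.45, "cyclist": 0.40, "vehicle": 0.05}
print(classify(ambiguous))  # prints None
```

Real perception stacks are vastly more sophisticated than this, but the underlying dilemma is the same: an object that falls between the categories the system was trained on is exactly the object it handles worst.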

The biggest weak point of a self-driving car, though, is the expectations humans have of it, and this is best illustrated by the footage released by the Tempe, Arizona police department. In this footage, you can clearly see that the human who is supposed to be the failsafe in an emergency is not really watching the road at all. We could speculate as to what she is doing, but that is largely irrelevant: the natural reaction to sitting in a car that is driving itself at night in a quiet part of town is not to sit there waiting for the worst to happen.

And this is the problem in a nutshell: there is no system in the world that is going to work well if it relies on a bored, daydreaming human to suddenly take control in an unexpected emergency.

In this respect, it might make more sense for there to be no human driver during tests. This way, designers would have to be 100% confident that their system would behave in a safe manner before allowing it on the road instead of relying on a human to sort out any failings at short notice (which we evidently can’t do).

Will designers ever come up with a perfect system? No: as illustrated above, there are too many potential problems, and no system could plan for them all. Will designers create a robot car that is better than a human driver? Absolutely. The question is really what level of error we can accept as a society, and when we therefore consider the system safe enough for widespread adoption. The debate seems to be whether that point is 5, 10 or 30 years away. For now, it’s an open question.