Self-driving cars: the revolution that wasn’t?

Researchers have been trying to build autonomous vehicles since the 1960s – first for military use, and more recently for civilians. 

To their proponents, these vehicles will revolutionise the way we get around, allowing drivers to relax behind the wheel as artificial intelligence, guided by cameras and sensors, leads them to their destinations.

Motoring and tech companies such as Google’s Waymo and General Motors (GM) have invested vast sums (a combined £78.9 billion between 2010 and 2021 alone, according to some estimates) in trying to make self-driving a reality. 

Yet for all the grand promises made by the likes of Tesla’s Elon Musk, who in 2016 claimed that autonomous driving was “basically a solved problem”, self-driving cars and lorries are not yet a common sight on our roads; and as yet, no self-driving system has achieved the longed-for status of “full automation”.

What has made this so difficult?

There is a whole host of reasons that the dream of a self-driving revolution hasn’t yet come to pass: cost, the slow evolution of the required technology, and a lack of public trust, to name but a few. Perhaps the biggest, however, is safety. Humans are generally good at dealing with the unexpected; but machines must be “taught” to behave in certain ways in certain situations. How should a car with no driver respond if it sees, say, an object in the road that might be a rock or just a paper bag? What happens if it snows and the white road markings are obscured? Safety concerns have plagued the industry since Google began developing self-driving cars in 2009.

Are these concerns justified?

The industry says that self-driving cars are up to seven times less likely to get into crashes leading to injury than normal cars. After all, AI isn’t susceptible to the temptation to drink and drive, and won’t get tired. But a series of high-profile accidents has dented confidence in such claims. In 2018, Elaine Herzberg became the first pedestrian to be killed by a self-driving vehicle, when she was hit by an Uber test car in autonomous mode in Tempe, Arizona. It seems that the car got confused when Herzberg stepped into the road while pushing a bicycle with bags on its handlebars – an array of objects it couldn’t interpret. Similarly, a woman in San Francisco was seriously injured when she was hit by a human-driven car and knocked into the path of one of GM’s Cruise “robotaxis”, which dragged her 20ft because it was programmed to pull over when confronted by an unknown situation.

Have there been other issues?

The crashes involving Uber and Cruise vehicles have done massive reputational damage to those firms (Uber initially cancelled its driverless taxi trials; Cruise has lost its licence in California). But US trials have also thrown up other issues: driverless cars have been accused of everything from preventing emergency services from reaching crime scenes to “braking inappropriately” and failing to stop at red lights. Last year, Tesla – which offers “Autopilot”, driver-assistance technology that falls well short of full automation – had to recall two million vehicles in the US after regulators found that measures intended to ensure drivers paid attention when using it were inadequate; the US National Highway Traffic Safety Administration had looked at 956 Tesla collisions in which such technology was reported to be involved.

Has there been any progress?

Tech companies have overcome some of the difficulties in training self-driving technology by limiting the areas in which the cars operate, thereby reducing the unknowns they face: “robotaxis” are now available for hire in cities such as San Francisco, Phoenix and Wuhan in China, where a 500-strong fleet recorded 730,000 trips in the past year alone. Highly automated vehicles are also arriving on British roads, where Ocado and Asda are trialling the use of self-driving vans for grocery deliveries – albeit with a “safety operator” behind the wheel. Last year, it also became legal in the UK for drivers of Ford’s Mustang Mach-E to take their hands off the steering wheel on the motorway – the first step of its kind in Europe.

Are more destined for British roads?

Mark Harper, the Transport Secretary, thinks so: he predicts that people will legally be able to drive “with your hands off the wheel, doing your emails” by 2026. The Government is now trying to firm up the UK’s regulatory framework via the Automated Vehicles Bill, which would ban misleading advertising so that vehicles can only be marketed as “self-driving” if they meet certain safety standards, and confirm that manufacturers, not owners, are responsible for crashes while a vehicle is driving itself. The bill could become law later this year. Other countries, and the EU, are also grappling with how to regulate driverless vehicles; but establishing a coherent set of safety rules has proved challenging.

What will happen next?

That depends on who you ask. In January, Bloomberg reported that Apple had become the latest US firm to scale back its autonomous driving programme, delaying the internal launch of its long-rumoured Project Titan to 2028. Other US companies such as Waymo and Cruise are said to be scaling back plans after suffering huge losses – leading some analysts to predict a bleak future for the sector. “You’d be hard-pressed to find another industry that’s invested so many dollars in R&D and that has delivered so little,” says the engineer Anthony Levandowski, a prominent critic of the technology. Others take a longer view: the McKinsey Centre for Future Mobility estimates that the market for autonomous-driving systems could be worth $400bn by 2035. For now, it isn’t just public trust that’s the problem – it’s the fact that lots of people really quite like driving. Will they want to hand control of their cars to a computer when they get behind the wheel?