The recent crash of a Tesla car in the United States, in which two people died, has reignited debate about the capabilities and safety of today’s “self-driving” technologies.
Tesla cars include an “autopilot” feature that monitors surrounding traffic and lane markings. The company is currently rolling out a more advanced “full self-driving” system that promises automatic navigation, stopping at traffic lights, and more.
Investigators say it appears nobody was in the driver’s seat of the vehicle when it crashed. Tesla chief executive Elon Musk has said no self-driving features were in use at the time.
Nonetheless, the tragic incident has raised questions over self-driving technology: how safe is it, and how much attention does it require from drivers?
What do we mean by ‘self-driving’?
Experts talk about six levels of autonomous vehicle technology, ranging from level 0 (a traditional vehicle with no automation) to level 5 (a vehicle that can independently do anything a human driver can).
Most automated driving solutions available on the market today require human intervention. This puts them at level 1 (driver assistance, such as keeping a car in a lane or managing its speed) or level 2 (partial automation, such as steering and speed control).
These capabilities are intended for use with a fully attentive driver prepared to take control at any moment.
Level 3 vehicles have more autonomy and can make some decisions independently, but the driver must remain alert and take control if the system is unable to drive.
In the past few years, several fatal crashes involving level 2 and level 3 vehicles have occurred. These crashes were largely attributed to human error, including drivers mistaking these levels of automation for full self-driving capability.
Vehicle manufacturers and regulators have been criticized for not making these systems more resilient to misuse by inattentive drivers.
The path towards higher levels of automation
For higher levels of automation, a human driver won’t necessarily be involved in the driving task. The AI self-driving software would effectively replace the driver.
Level 4 is a “self-driving” vehicle with a bounded scope of where and when it will drive. The best-known example of a level 4 vehicle is Google’s Waymo robotaxi project. Other companies are also making significant progress in developing level 4 vehicles, but none are commercially available to the public.
Level 5 represents a truly autonomous vehicle that can go anywhere, at any time, just as a human driver could. However, the transition from level 4 to level 5 is orders of magnitude harder than the transitions between lower levels, and may take years to achieve.
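For readers who like to see things concretely, the six levels can be summarised as a simple enumeration. This is an illustrative Python sketch paraphrasing the descriptions above, not the formal SAE J3016 definitions:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Driving-automation levels, paraphrased from the descriptions above."""
    NO_AUTOMATION = 0            # traditional vehicle; the human does everything
    DRIVER_ASSISTANCE = 1        # lane keeping or speed management, one at a time
    PARTIAL_AUTOMATION = 2       # steering and speed control together
    CONDITIONAL_AUTOMATION = 3   # some independent decisions; driver stays alert
    HIGH_AUTOMATION = 4          # self-driving within a bounded area and conditions
    FULL_AUTOMATION = 5          # can drive anywhere, any time, like a human

def driver_must_supervise(level: AutomationLevel) -> bool:
    # At levels 0-3, a human must be ready to take control at any moment.
    return level <= AutomationLevel.CONDITIONAL_AUTOMATION
```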
While the technologies required to enable higher levels of automation are advancing rapidly, producing a vehicle that can complete a journey safely and legally without human input remains a big challenge.
Three key barriers must be overcome before these vehicles can be safely introduced to the market: technology, regulation, and public acceptance.
Machine learning and self-driving software
Self-driving software is a key differentiating feature of highly automated vehicles. The software is based on machine learning algorithms and deep neural networks comprising millions of virtual neurons that mimic the human brain.
The neural nets do not include any explicit “if X happens, then do Y” programming. Rather, they are trained to recognize and classify objects using millions of examples of videos and images from real-world driving conditions.
The more diverse and representative the data, the better the networks recognize and respond to different situations. Training a neural net is like holding a child’s hand when crossing the road: it learns through constant experience, replication and patience.
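To make that idea concrete, here is a minimal sketch of such a supervised training loop, written in Python with PyTorch. The random tensors below are stand-ins for the millions of labelled road images a real system would use, and the tiny network is purely illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: in reality, millions of labelled frames from real driving.
images = torch.randn(256, 3, 64, 64)    # random RGB "camera frames"
labels = torch.randint(0, 4, (256,))    # e.g. pedestrian / cyclist / car / sign

# A toy classifier: production perception networks are vastly larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The "constant experience, replication and patience": show examples,
# measure mistakes, and nudge the virtual neurons toward better answers.
for batch_images, batch_labels in DataLoader(
        TensorDataset(images, labels), batch_size=64, shuffle=True):
    optimizer.zero_grad()
    loss = loss_fn(model(batch_images), batch_labels)  # penalise wrong labels
    loss.backward()                                    # compute gradients
    optimizer.step()                                   # update the weights
```

Note there is no explicit “if X happens, then do Y” rule anywhere in that loop; the behaviour emerges entirely from the data.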
While these algorithms can detect and classify objects accurately, they still can’t mimic the intricate complexities of driving. Autonomous vehicles must not only detect and recognize humans and other objects, but also interact with, understand and react to how those things behave.
They also need to know what to do in unfamiliar circumstances. Without a large set of examples covering every possible driving scenario, the task of managing the unexpected will remain relatively resistant to deep learning and training.
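One common, though far from sufficient, engineering response to the unfamiliar is to have the software flag predictions it is not confident about and hand over to a fallback behaviour. Continuing the sketch above (the 0.9 threshold is an arbitrary illustration):

```python
import torch
import torch.nn.functional as F

def classify_or_fallback(model, frame, threshold=0.9):
    """Return a predicted class index, or None to signal an unfamiliar scene.

    A toy illustration only: real systems fuse many sensors and redundant
    checks rather than relying on a single softmax confidence score.
    """
    with torch.no_grad():
        probs = F.softmax(model(frame.unsqueeze(0)), dim=1)
    confidence, label = probs.max(dim=1)
    if confidence.item() < threshold:
        return None   # low confidence: trigger a safe fallback, e.g. slow down
    return label.item()
```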
Policy-makers and regulators around the world are struggling to keep pace. Today, the industry remains mostly self-regulating, particularly in determining whether the technology is safe enough for open roads. Regulators have largely failed to provide criteria for making such determinations.
While it is necessary to test the performance of self-driving software under real-world conditions, this should only happen after comprehensive safety testing and evaluation. Regulators should come up with a set of standard tests and make companies benchmark their algorithms on standard data sets before their vehicles are allowed on open roads.
In Australia, current laws do not support the safe commercial deployment and operation of self-driving vehicles. The National Transport Commission is spearheading efforts to develop nationally consistent reforms that support innovation and safety to allow Australians to access the benefits of the technology.
A graduated approach to certification is needed, in which a self-driving system could first be evaluated in simulations, then in controlled, real-world environments. Once the vehicles pass specific benchmark tests, the regulators can allow them on open roads.
The public must be involved in decisions regarding self-driving vehicle deployment and adoption. There is a real risk of undermining public trust if self-driving technologies are not regulated to ensure public safety. A lack of trust will affect not only those who want to use the technology but also those who share the road with them.
Finally, this incident should act as a catalyst for regulators and industry to establish a strong and robust safety culture to guide innovation in self-driving technologies.
Without this, autonomous vehicles would go nowhere very fast.