Autonomy is a society-changing technology. It is more than the linear progression of the information and digital technologies we have grown used to over the last 60 years. Simply put, autonomy brings the digital world into the physical world. As a result, we must think differently about autonomous technologies from societal, engineering, and economic perspectives.
The first industrial revolution, the mechanization of farm machinery and the creation of powered mills, did not just change food and cloth production; it fundamentally altered the physical and economic landscape of countries. Farms that had once supported multiple families of workers needed fewer people thanks to machines, and those workers migrated to industrial towns to work in factories. Land that had once been a symbol of power and wealth became less economically viable, and grand estates, and the families that owned them, fell into economic decline.
Autonomy is no less potent an agent of change. Robots have already transformed factories and warehouses. The most modern of these employ just a few engineers to maintain the robots where, in the past, the buildings would have been teeming with skilled and unskilled workers.
So far, factory and warehouse walls have served as a dam holding back the spread of autonomous technologies. Industrial robots are safely caged in yellow steel enclosures to protect human workers from heavy, high-speed robotic arms, with rudimentary safety systems shutting down the robot before a human can venture inside the cage. Remove the barrier between humans and robots, and the complexity of the safety technologies grows rapidly and expensively. The result has been a two-fold inhibitor: economic viability (for example, with self-driving cars) and public concern and resistance (in the case of pilotless planes).
COVID-19 has shown us that global changes to what is normal can happen in a matter of weeks. The impact on the workforce has been stark. Essential workers operate in “higher-risk” environments that require close daily interaction with potential virus carriers. Knowledge workers have retreated to home offices and become “virtual” office workers. Many jobs simply ended, no longer viable in this period of economic retrenchment.
The new economic and social norms favor solutions with increased autonomous technology that interacts directly with people. We would happily accept drone delivery options if it meant we could still get the items we need. Taxis, ride-hailing services such as Uber, and trains have suddenly become “high-risk”, especially for their operators. The chronic failings of elderly care, especially in care homes, have been cruelly exposed in every economically developed country in the world. All of these are areas where significant autonomous technology investment is focused.
With the economic and social constraints on drones, caring robots, and driverless transportation systems lifted, the remaining obstacle is the technology itself. At its heart, the challenge is our ability to engineer complex, adaptable autonomous systems that can consistently operate safely. Artificial intelligence is the driver of autonomous decision-making, and it forces different approaches to implementing safety. As engineers of these technologies look to move from prototypes and small fleets to large-scale deployment, the scale of the safety problem increases dramatically. The magnitude of that problem demands a marriage between traditional safety engineering and statistical approaches. AI safety frameworks, built on agreed standards and proven techniques, are emerging but remain fragmented. A consolidated, agreed approach does not yet exist, though it would benefit many industries, including automotive, IoT, and aviation.
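To make the scale of the problem concrete, a standard statistical "demonstration test" argument shows why proving safety by road testing alone becomes impractical at fleet scale. The sketch below is illustrative, not from the original text: it assumes failures follow a Poisson process, and the benchmark rate used (roughly the US human-driver fatality rate of about 1.09 fatalities per 100 million miles) is an assumed figure for illustration.

```python
import math

def failure_free_miles_needed(max_failure_rate_per_mile: float,
                              confidence: float) -> float:
    """Miles that must be driven with ZERO failures to show, at the given
    confidence level, that the true failure rate is below the bound.

    Derivation: under a Poisson model, P(0 failures in n miles) = exp(-rate * n).
    Requiring that probability to be at most (1 - confidence) if the true rate
    equals the bound gives n >= ln(1 / (1 - confidence)) / rate.
    """
    return math.log(1.0 / (1.0 - confidence)) / max_failure_rate_per_mile

# Assumed benchmark: ~1.09 fatalities per 100 million miles for human drivers.
human_rate_per_mile = 1.09e-8

miles = failure_free_miles_needed(human_rate_per_mile, confidence=0.95)
print(f"~{miles / 1e6:.0f} million failure-free miles required")
```

Under these assumptions, demonstrating merely human-level fatality rates at 95% confidence requires on the order of hundreds of millions of failure-free test miles, which is why statistical arguments must be combined with traditional safety engineering rather than relying on mileage alone.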
System capability, economic viability, and social acceptance of drones, helper robots, and autonomous vehicles grow daily, and we now run the risk of hitting a safety wall. A lack of proven safety could prevent autonomous technologies from ever truly escaping the “yellow steel cages” that limit their operation.
As part of VW’s Silicon Valley research team, Burkhard Huhnke was project leader when Stanford University won the DARPA Grand Challenge in 2005, with a VW Touareg driving itself across a 132-mile desert course. At a conference in 2019, he shared his surprise that, almost 15 years after the first successful autonomous vehicle, we still did not have self-driving cars. His conclusion as to why: safety proved much harder and more expensive than we had imagined. Burkhard highlighted aviation as a possible model for the ground-up development of safe autonomous vehicles. It is noteworthy that a growing number of second-wave autonomous vehicle projects are leveraging the safety experience of the avionics industry as a foundation for solving the problem.
The technology industry has barely begun to address, or even understand, the safety requirements of autonomy at scale, and may be decades from doing so. What is now clear is that safety cannot be an afterthought bolted onto an autonomous system; it must be a fundamental design element, present from day one.