Securing your robotics stack: what happens when robots go haywire?

As Tesla and other tech companies rush to release humanoid service robots, how can businesses ensure their autonomous assistants are safe for human interaction?


The unexpected consequences of marauding robots have long been a mainstay of science fiction. In Katsuhiro Otomo’s 1991 animation Roujin Z, an elderly patient in palliative care is strapped to a robotic nursing bed designed to improve his quality of life. But the bed takes on a mind of its own – or rather, the personality of the man’s ex-wife – and rampages through downtown Tokyo, the patient still fixed to the machine.

Thankfully, nursing machines running amok and Terminator-like cyborgs remain in the realm of make-believe. But the robots that do exist can still pose a very real threat. Since 1979, when a one-tonne production-line machine crushed Robert Williams to death – the first person known to be killed by a robot – machines and people have had unfortunate, sometimes fatal, encounters.

Today, many robots are smaller and smarter than the five-storey parts-retrieval system that caused Williams’ death, but the dangers persist. In 2023, a man in South Korea was killed by a robotic arm that mistook him for a box of vegetables. While such incidents are rare, 41 people were killed by industrial robots in the US between 1992 and 2017.

Robots offer businesses one answer to rising labour costs and worker shortages. From a company’s perspective, robots don’t strike, fall sick or quit. Heavy industry will continue to use robots in industrial settings, but machines are increasingly leaving the factory floor and entering public spaces. What’s stopping these new additions to our public lives from going haywire?

Here come the robots

You might have already spotted cleaning robots roving through airport terminals or gliding across a restaurant floor. Some companies are using robots to deliver parcels, and automatons can even be found interacting with humans at hotel receptions.

Although the tasks of these machines are simple, the uncontrolled nature of public environments can lead to accidents. In 2016, a security robot ran over a child in a Californian shopping centre. In January this year, a delivery robot crashed into a parked car in Helsinki and then fled the scene. Sometimes, robots collide with one another, such as when a self-driving Tesla crashed into a Promobot, a humanoid service robot. 

Although these accidents are infrequent, they demonstrate that companies have yet to find a completely safe way to integrate robots into public spaces. How can businesses ensure the next wave of robotics is safe, when even simple machines occasionally cause havoc?

Industry has had far longer to figure out safety standards than the consumer sphere, says Roberta Nelson Shea, global technical compliance officer at robot-arm manufacturer Universal Robots. Businesses deploying industrial robots use guardrails to ensure people are exposed to as few hazards as possible. That is a relatively simple task in a highly controlled environment such as a factory; it is much harder to achieve in a public space.

“There are people of all ages in a mall or an airport,” says Nelson Shea. “You’ll have babies in arms and toddlers squirming out of reach and making a run for it but we do not yet have safety devices designed to detect children reliably. Robots need to be able to factor in the child’s age, a person’s limb size and how big an object is. We don’t have the same data for that as we do in the industrial space.”

Consumer-facing robots are designed to be sturdy, to prevent them toppling onto unsuspecting victims, with no sharp edges and various safety features, such as physical bump strips, vision-based detection software and force-limiting mechanisms. Manufacturers are legally required to satisfy themselves that a product is safe before shipping it.
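In practice, a force-limiting mechanism amounts to a monitoring loop that halts the machine the moment contact forces exceed a set threshold. The sketch below illustrates the idea in Python; the sensor and stop functions are hypothetical stand-ins rather than any real robot’s API, and the threshold is an illustrative figure, not a value drawn from a standard.

```python
import random
import time

FORCE_LIMIT_N = 140.0  # illustrative threshold in newtons, not a figure from any standard


def read_contact_force() -> float:
    """Hypothetical stand-in for a force-torque sensor reading, in newtons."""
    return random.uniform(0.0, 200.0)


def protective_stop() -> None:
    """Hypothetical stand-in for the robot's protective-stop command."""
    print("Contact force limit exceeded: protective stop engaged")


def monitor(cycles: int = 100) -> None:
    """Poll the sensor and stop the robot the moment force exceeds the limit."""
    for _ in range(cycles):
        if read_contact_force() > FORCE_LIMIT_N:
            protective_stop()
            return
        time.sleep(0.01)  # roughly a 100Hz monitoring loop


if __name__ == "__main__":
    monitor()
```

A real controller would run this check in certified safety hardware at a far higher rate, but the principle is the same: measure, compare against a limit, stop.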

But a spokesperson for the British Standards Institution asks: “How does one configure a robot to be safe?” The safety data that designers rely on is based on evidence from experiments on fit, healthy, working-age volunteers. “Is this a valid dataset for the safety of robots in public locations?”

B2C robotics: setting the standards

There are not yet universal international standards for B2C robotics, although guidance is available. The International Organization for Standardization’s ISO 13482 specifically addresses personal-care robots, including guidance for mobile robots operating in public spaces.

The 79-page document covers several risks and encourages manufacturers to make robotics “inherently safe” by design. “Holes or gaps in the accessible part of the robot shall be designed so that the insertion of any part of the human body is prevented,” it says. Protective measures for shutting robots off and information for using and operating the machines are also included.

Additional safety guidance can be found in ISO 10218 and ISO/TS 15066, which cover industrial robots. In Europe, the EN 1525 standard for automated guided vehicles in public spaces provides advice on collision avoidance and safer navigation.

But standards-setters are struggling to stay ahead of the latest developments in robotic technology. Although some aspects of a robot’s design can be covered by existing standards, they “are not keeping up with new developments such as humanoid robots”, says a spokesperson from the British Standards Institution’s AMT/10 committee for robotics standards.

The lack of standards doesn’t change legal safety requirements. Standards are not mandatory, and the designer or manufacturer of a humanoid robot must satisfy themselves that their robots are safe. That assessment must be recorded in a technical file before the machines can be approved for a UKCA or CE mark.

Mark Brown, the managing director of BSI Digital Trust Consulting, says security and privacy by design for emerging consumer robots will “need to be enshrined through new standards development”. This will likely progress to regulation, he adds.

Securing your robotics stack: can you hack a robot?

On-board failsafes are a common safety mechanism, but they remain vulnerable to cyber attackers. Consumer robots could become a target for hackers, in the same way that internet of things (IoT) and smart devices have. Robots, however, potentially pose a greater threat.

“The difference between hacking robots and hacking a computer is that computers have no physical manifestation,” says Alex Ivkovic, CIO at CDF Corporation, a US packaging manufacturer that uses robotics in its facilities. “Robots can cause actual physical damage and harm people,” he adds. “You certainly need to consider that greatly.”

The cyber risk profile of robots can be more easily managed in industrial environments such as factories, where businesses control the entire ecosystem and often use private 5G networks. “Robots are a lot harder to breach than in an open scenario,” says Abu Bakkar, chief innovation officer at consultancy firm HLB International.

Companies should scrutinise their robotics cybersecurity as they would any other software or IoT environment. Organisations must secure their networks, guard their data and remain watchful over their supply chains.

“Robotics are made of many parts and, if any of these components are compromised, you have issues with the supply chain,” Bakkar says. “Everything we know about cybersecurity is just as valid in robotics.”
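One practical expression of that supply-chain vigilance is verifying the integrity of software before it ever reaches a robot. The minimal Python sketch below checks a firmware image against a digest published by the vendor before allowing installation; the file name and the expected digest are hypothetical placeholders, not any real vendor’s process.

```python
import hashlib
import hmac
from pathlib import Path

# Hypothetical placeholder for the digest a vendor publishes alongside a release
EXPECTED_SHA256 = "0" * 64


def firmware_is_trusted(image: Path, expected: str = EXPECTED_SHA256) -> bool:
    """Return True only if the image's SHA-256 digest matches the published one."""
    digest = hashlib.sha256(image.read_bytes()).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(digest, expected)


if __name__ == "__main__":
    image = Path("robot_firmware.bin")  # hypothetical file name
    if image.exists() and firmware_is_trusted(image):
        print("Digest matches: firmware can be installed")
    else:
        print("Digest mismatch or missing file: reject the update")
```

Production systems would typically go further, with cryptographically signed firmware and a software bill of materials, but even a simple digest check stops tampered images from sliding in unnoticed.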

What’s next for robotics?

Technology companies, including Nvidia and Tesla, plan to integrate generative AI into humanoid robots. Tesla CEO Elon Musk claims the company’s Optimus robots will be capable of doing chores, walking the dog or babysitting children.

According to Nvidia CEO Jensen Huang, robotics are the “next wave of AI”. “One of the most exciting developments is humanoid robots,” Huang said, as Nvidia announced a suite of services, models and computing platforms for developing, training and building robots.

Ongoing international work towards enshrining generative AI safety standards might well apply to consumer robotics, especially as GenAI is built into the machines. But establishing a global framework is likely to face challenges, Bakkar says, particularly because AI is developing so quickly. Competing interests mean vendors are pursuing their own governance models and there is still no unified approach to how specific AI models should function.

“AI is going to be the foundation for robotics and how to control them so, once we have an acceptable framework for AI, robotics will be a lot more practical for everybody to use,” Bakkar adds. “But without better frameworks and more testing it’s going to be very difficult for us to control, especially on the security side.”

Organisations should specify the use cases for their robots and consider how the machines might reasonably be misunderstood or misused, Nelson Shea advises. They should then integrate protections against those risks into the design and document them thoroughly, she adds. Users should be clearly warned about any foreseeable risks.

Standards always lag behind frontier technology, and the full range of safety implications is often unclear until something goes wrong. Nelson Shea cites, for example, the labels on US lawnmowers warning users not to lift the machines to trim hedges.

While standards groups continue to iterate and improve safety, organisations should do their best to design against harms. Further data will emerge in time. “We’re trying to move ahead and make things better,” Nelson Shea says. “No matter what, we’ll learn more with each and every instance along the way.”