
How Do Isaac Asimov's Laws Of Robotics Hold Up 75 Years Later?

In the future, robots will become more human than ever. But there are concerns about how they'll make ethical decisions.

Getty Images / Yoshikazu Tsuno

Imagine sitting in a self-driving car that's about to crash into a crowd. The car has to choose between hitting everyone and running off the road, putting your life at risk. So how does it make that decision?

For simple bots, Isaac Asimov's "Three Laws of Robotics" might help. But for more complex machines, researchers aren't so sure the 75-year-old set of rules will work.

According to Asimov's laws, a robot can't injure a human or allow one to come to harm; it has to obey the orders humans give it; and it must protect its own existence. But there's a caveat: if the laws conflict, the earlier law takes precedence.
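
In programming terms, that hierarchy works like a priority filter: a lower-ranked law only matters once the higher-ranked ones are satisfied. Here's a minimal sketch of that ordering in Python; the Action fields and the candidate actions are made up for illustration, not taken from any real robotics system.

```python
# A toy encoding of the Three Laws as a priority filter (the Action
# fields and candidate actions are hypothetical, for illustration).
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # violates the First Law?
    disobeys_order: bool   # violates the Second Law?
    endangers_self: bool   # violates the Third Law?

def violations(action: Action) -> tuple:
    # Tuples compare element by element, so a First Law violation
    # always outweighs a Second, and a Second outweighs a Third.
    return (action.harms_human, action.disobeys_order, action.endangers_self)

def choose(options: list) -> Action:
    # Pick the action whose highest-priority violation is least severe.
    return min(options, key=violations)

options = [
    Action("obey the order, harm a human", True, False, False),
    Action("refuse the order, stay safe", False, True, False),
    Action("refuse the order, sacrifice itself", False, True, True),
]
print(choose(options).name)  # -> refuse the order, stay safe
```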

Single-function robots — something with a straightforward job, like a Roomba — could in theory follow those laws. But with some of the robots engineers are working on, like the U.S. military's robot army, it gets complicated.

Robots may not function properly — even if they're built to follow the laws. In one experiment, for example, researchers programmed a rescue robot to save other bots that got too close to a "danger zone."

Saving one robot was easy, but when two were in danger, the rescue bot got confused. In about 40 percent of trials, it couldn't decide which to save and did nothing.
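
That kind of deadlock is easy to reproduce in a toy simulation. The sketch below is a hypothetical model, not the researchers' actual setup: two endangered bots drift toward the edge while a greedy rescuer keeps re-targeting whichever is currently in the most danger, and every switch throws away the progress it has made. How often it saves neither depends entirely on the made-up parameters, but the failure mode itself shows up reliably.

```python
# Toy model of the rescue dilemma (hypothetical parameters, not the
# researchers' actual code). Two bots drift toward a danger zone; a
# greedy rescuer re-targets whichever is in most danger each tick.
import random

def run_trial() -> int:
    dist = {"A": 5.0, "B": 5.0}      # each bot's distance from the edge
    saved, current, progress = 0, None, 0
    while dist:
        # Unsaved bots drift toward the edge, with a little noise.
        for bot in list(dist):
            dist[bot] -= 0.2 + random.uniform(-0.1, 0.1)
            if dist[bot] <= 0:       # too late: this bot is lost
                del dist[bot]
        if not dist:
            break
        # Greedy policy: head for the bot currently in most danger.
        target = min(dist, key=dist.get)
        if target != current:        # switching targets wastes progress
            current, progress = target, 0
        progress += 1
        if progress >= 8:            # enough uninterrupted ticks to reach it
            del dist[current]        # rescued
            saved += 1
            current, progress = None, 0
    return saved

trials = [run_trial() for _ in range(1000)]
print("saved neither:", trials.count(0) / len(trials))
```

Because both bots start equally endangered, noise makes the "most urgent" target flip back and forth, and the rescuer can burn all of its time commuting between the two.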

So while Asimov's laws might help maintain some order between humans and robots, it doesn't seem like our future will line up with his vision of mostly subservient machines — at least for now.