Asimov’s Laws of Robotics Don’t Work in the Modern World

May 1, 2023 - Ellie Gabel

Asimov’s three laws of robotics may be iconic in science fiction but are not practical in the modern world. As robots and AI advance, more people are concerned about what they should and shouldn’t be allowed to do. Asimov’s laws may be a great starting point for regulations, but this dated framework has some major issues. 

What Are Asimov’s Laws of Robotics?

Science fiction author Isaac Asimov developed three now-famous laws for robotics in the 20th century, beginning with a 1942 short story called “Runaround.” The fictional laws are intended to define what robots can and can’t do. Asimov described them as follows: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

In addition to the three main laws, Asimov later added a “zeroth” law: a robot may not harm humanity or, through inaction, allow humanity to come to harm. The set of robot laws appeared in many of Asimov’s science-fiction stories published throughout the late 20th century. 

The future Asimov predicted in his stories has not come to pass, though some of its technologies, like intelligent robots, are becoming a reality. In fact, many people still look to Asimov’s laws when developing robots and the ethics codes that govern them. But are Asimov’s laws applicable in the modern world? 

The Issue of Sentience

One of the main issues with Asimov’s laws of robotics is the vague treatment of robots as sentient beings. The regulations assume robots can make their own choices, hence the need to prohibit them from doing certain things. However, what a robot can and can’t do comes down to its programming. Humans determine its actions. 

Asimov’s laws become even more complicated when considering the possibilities of AI and sentient robots. Today’s AI models, like ChatGPT, are impressive and useful, but they’re essentially learning machines. They don’t have a sense of self-awareness, emotions, wants or dreams. Modern AI is still securely in the realm of software robotics. 

Sophia: A Glimpse at the Future

It’s not technologically impossible for AI to become sentient one day. Engineers may not know how to build a robot this smart yet, but the technology is accelerating at a staggering speed. 

An eerie, rudimentary glimpse of the evolution of robotics comes in the form of Sophia, designed by Hanson Robotics. Sophia is a lifelike AI robot with natural language processing and realistic facial expressions. It has taken part in United Nations meetings, and the Saudi Arabian government even granted it citizenship. 

Sophia is not a sentient robot. It is basically ChatGPT with a physical form. However, many consider its lifelike nature uncanny. Its communication skills are also remarkable compared to previous generations of robotics. If a robot like Sophia were to become independent and self-aware, the nature of Asimov’s laws would change significantly. 

In this case, the first ethical issue with Asimov’s laws would be the question of slavery. Would a sentient robot have the same rights as a human? Forcing the robot to commit itself to serving people could be considered slavery if the robot wanted to do something else. If Asimov’s laws are not meant to apply to sentient robots, how is sentience defined and determined? 

This ambiguity is one of the biggest issues with applying Asimov’s laws to robot ethics. Robots as we know them today may look very different 15 or 20 years from now, and they may even include completely digital artificial intelligences with no physical form at all. 

What About Fully Virtual Robots?

In the 2021 film “Free Guy,” a non-player character (NPC) in a popular video game gains self-awareness and takes on the role of a player, changing the game — literally. The comedic action film raises interesting questions about AI and the rights of digital intelligence. At what point is an AI considered self-aware and alive? 

This is an important question to consider in the context of Asimov’s laws, which were originally written to apply to physical robots. AI, however, is the heart of truly advanced robotics: any machine needs AI to become sentient, yet AI does not necessarily need a physical form. Do Asimov’s laws apply to fully virtual robots as well? 

It’s easy for people to force a robot to do something when it’s controlled completely by code. This applies to sentient AI, as well. Asimov’s laws don’t account for the power humans have over purely code-based virtual robots. A few lines of code could effectively imprison a virtual being, and destroying a real-world server could delete it entirely. 

Surveys show that 50% of AI experts believe human-level AI will be developed by 2061, and 90% believe it will exist within the next century. If such AI ever does come to fruition, new ethical standards will be needed to define the rights of virtual beings. Where should the line be drawn between humans and AI if it can operate at the same intellectual level as humans and experience independent wants and needs? 

The Complicated Applications of Robotics

Asimov’s laws may be short, but they are riddled with conflicts and gray areas that make them difficult to apply. Robots have so many applications that following the laws to the letter is often impossible. 

For example, what if a human commands a robot to delete its own AI? According to Asimov’s second law, the robot must comply, and the third law offers it no protection, since self-preservation is explicitly subordinate to human orders. Deleting its own AI would not harm a human, so nothing in the laws stops a robot from being casually destroyed on command, no matter how valuable or advanced it is. 
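
To make that hierarchy concrete, here is a minimal sketch in Python (a hypothetical illustration, not any real robot’s control code) of the three laws as a strict priority check. The function name and boolean flags are invented for this example, and deciding what actually counts as “harm” or a legitimate “order” is exactly what the laws leave undefined.

    def should_perform(action: str, *, harms_human: bool,
                       ordered_by_human: bool, destroys_robot: bool) -> bool:
        """Return True if the robot should perform `action` under the three laws."""
        # Law 1 (highest priority): never harm a human.
        if harms_human:
            return False
        # Law 2: obey human orders, unless they conflict with Law 1 (checked above).
        if ordered_by_human:
            return True
        # Law 3 (lowest priority): preserve itself, but only when Laws 1 and 2 are silent.
        if destroys_robot:
            return False
        return True

    # A human orders the robot to delete its own AI: Law 2 fires before Law 3 is
    # ever consulted, so the robot complies even though the action destroys it.
    print(should_perform("delete own AI", harms_human=False,
                         ordered_by_human=True, destroys_robot=True))  # True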

Similarly, how should robots designed for war be handled? Humans have long been developing robots for battlefield applications, from reconnaissance to actual warfare. At first glance, Asimov’s first law suggests these robots should be illegal. However, one could argue that a battlefield robot is actually protecting people. It is simply a matter of perspective. 

Surgical robots are another great example of this issue. Robots can conduct and aid in surgeries for various medical conditions. Technically, the robot has to “hurt” the patient during an operation, even though it is actually helping. Asimov’s first and second laws rely heavily on human concepts of morality, which machines have no basis for interpreting on their own. 

Asimov’s laws offer no foundation for determining what counts as harm or neglect toward humans. The rules are too black and white to be practical in the modern world. 

What Laws Should Apply to Robots?

If Asimov’s laws don’t apply to robots today, what regulations do? This is a complicated issue that will require global negotiation and collaboration. Robots and AI are still securely confined to the world of unintelligent machines. Many people are concerned about how those machines can and should be used. 

For example, the United Nations is struggling to control the development of “killer robots” designed specifically for battle. This includes machines that would be armed and independently capable of harming a human. Many people are campaigning for this type of robot to be banned worldwide. 

Intellectual property is also a growing concern as generative AI models become more common. Who has the rights to content when someone generates art, writing or code using a generative AI, such as ChatGPT or DALL-E? The human simply pressed a button or typed in a prompt. 

Does that mean the AI has the rights to the content it generates? Not necessarily. Generative AI models are trained on thousands of examples of existing work, so when they generate “original” content, they are recycling work that people created. Neither the AI nor the human using it has the IP rights to anything it makes. 

As AI and robots get smarter, world leaders will soon need to determine what qualifies as sentience in technology. At what point can a robot be considered independent or self-aware? What laws and regulations will apply at that point? Different nations will likely have various rules for handling this. 

Moving on From Asimov’s Laws of Robotics

Isaac Asimov created his laws of robotics for science fiction, not real-world international law. It’s important to remember this when considering regulations for robots and AI. Asimov’s laws offer some helpful food for thought but aren’t practical in the modern world. World leaders must collaborate to develop a more detailed, functional framework for laws and rights surrounding robots as they become increasingly advanced.

Author

Ellie Gabel

Ellie Gabel is a science writer specializing in astronomy and environmental science and is the Associate Editor of Revolutionized. Ellie's love of science stems from reading Richard Dawkins books and her favorite science magazines as a child, where she fell in love with the experiments included in each edition.
