
What are the Risks of Artificial Intelligence?

February 13, 2023 - Ellie Gabel


Have you ever seriously considered the risks of artificial intelligence?

Humankind is in the midst of several technological revolutions taking us to destinations we’ve never seen and may not fully comprehend. Artificial intelligence may be the most consequential of these.

AI – and the risks of artificial intelligence – is one of the most widely and hotly debated topics today. Are the conversation and the concern justified? We won’t fully appreciate the risks until we explore concrete examples of how they might manifest in our lives and livelihoods. Let’s take a deeper look.

1. Diffusion of Responsibility

The biggest risk of artificial intelligence is also one of the least-discussed ones. Having an outside intelligence to which we can offload our decision-making could make us less conscientious about our actions. We may also become less in tune with why we make the decisions we do.

There is a sociopsychological phenomenon known as the diffusion of responsibility. This is where people become less likely to take action when they believe other people are present. We should reasonably expect that having “an AI for that” could have this effect as well.

One concrete area where this may manifest is cybersecurity. AI is a known ally in cybersecurity – it’s able to flag and isolate unauthorized network events automatically. If people know an AI is handling certain aspects of cybersecurity, they may grow lax in others, like ignoring red flags in a phishing email.
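To make that idea concrete, here is a minimal sketch of the kind of anomaly detection such tools often rely on, assuming Python with scikit-learn. The event features, numbers, and thresholds are invented for illustration and aren’t taken from any real product.

```python
# Rough sketch of automatically flagging unusual network events with an
# anomaly detector (scikit-learn's IsolationForest). All values are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical log features: [bytes transferred, login hour, failed auth attempts]
normal_events = rng.normal(loc=[500, 14, 0.2], scale=[150, 3, 0.5], size=(1000, 3))
suspicious_events = np.array([
    [50_000, 3, 9],   # huge transfer at 3 a.m. with many failed logins
    [45_000, 4, 7],
])

# Learn what "normal" traffic looks like, then score new events against it
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# -1 means the event looks anomalous and gets flagged for review
print(model.predict(suspicious_events))  # likely: [-1 -1]
```

The point isn’t the specific model but the division of labor it creates: the software flags the statistical outliers, and it is still up to people to stay alert to everything the model never sees, like that phishing email.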

This is already fertile ground for academic inquiry. There are multiple types of “responsibility gaps” – all of which AI might exacerbate if we become too reliant on artificial decision-making. Experts identify “moral accountability gaps” and “culpability gaps” as two of the most worrisome.

A culpability gap describes a situation where a person did not adequately understand the consequences of taking a certain action. When it’s AI making most of our decisions, the sense that we’re culpable for the actions undertaken gets fuzzy. If we don’t know why a decision was made, are we culpable for the results – both for good and for ill?

2. Job Replacement

We work for a living – but what happens when artificial intelligence and automation come for our jobs? It’s not a matter of whether AI will displace workers, but rather how many. The Brookings Institution published findings in 2019 showing that 25% of American workers face “high exposure” to automation, meaning machines could perform more than 70% of their current tasks.

This is a potential risk because our social contracts and safety nets are not prepared for mass unemployment or the re-skilling of replaced workers. Moreover, research from McKinsey shows that Black workers could be the most widely impacted by automation. When it comes to automation, the risks to the working class currently mirror the inequities built into our society.

Companies designing or adopting robots and cobots would do well to seek a middle ground for AI that supplements human decision-making without fully replacing the labor involved. Examples include robots that follow human order-pickers in warehouses to direct their pick path and carry oversized items, or robotic arms that lift or shift heavy workpieces in manufacturing settings, like automotive components.

3. Fraud and Social Engineering

The National Institute of Standards and Technology defines social engineering as an attempt to fool a target into handing over credentials, revealing personal details, incriminating data, or intellectual property, making a payment, or taking some other action they would not otherwise take.

AI is a natural fit in the current cybersecurity, fraud, and social engineering landscape.

Artificial intelligence is uniquely capable of identifying and mimicking the social signals that might successfully trick somebody into providing sensitive information. Moreover, AI can do this far faster and more accurately than human “social engineers” ever could.

Another way AI is proving a fruitful investment for cybercriminals is through deepfakes. Deepfakes are a primary risk of artificial intelligence because they’re even more convincing than most other types of social engineering.

Criminals make deepfakes by training algorithms that almost seamlessly combine existing video footage or still images with the likeness of the target, somebody in their family, or a figure they otherwise trust.

Deepfakes have already proven themselves convincing in several high-profile examples – like a widely circulated Volodymyr Zelenskyy deepfake. Unfortunately, women face a disproportionate amount of the risk in other situations, like cases where their likenesses may be stolen for pornographic purposes. Until we have reliable tools to identify deepfakes, these risks will persist.

4. Intelligent Automated Weapons

There are massive risks associated with handing control of weapons systems over to artificial intelligence. In fact, those risks are present whether or not the AI functions as designed.

Why? Fundamentally, it’s because AI makes its decisions in a mysterious black box that essentially nobody on Earth fully comprehends. The lines of code comprising the “thought patterns” and “decision-making capacity” of an AI – as we’ve seen with driverless cars – are far from infallible.

The other reason using AI in autonomous weapons systems constitutes a gigantic risk is because human beings don’t need more ways to wage war. We certainly don’t need new ways to wage war in which human beings aren’t culpable or accountable, and where battlefield decisions aren’t fully understood, communicated about, and properly deliberated beforehand.

This is such a grave threat that researchers in the robotics and AI fields published an open letter in 2015 expressing their fears to regulators and the public; it has since gathered more than 34,300 signatures.

5. Institutional Prejudice and Inequality

We know that human-directed financial institutions frequently engage in institutional bias and discrimination. “Racialized patterns” have been observed throughout the banking industry – a situation where people belonging to minorities are deliberately targeted with predatory practices or systemically denied access to financial products, like mortgages and small-business loans.

Today, the banking industry is full-steam ahead on building AI to administer customer service, recommend, approve, or deny banking products, and generally pass the kinds of judgments on a person’s creditworthiness and sense of responsibility that human banking agents used to pass.

Using AI this way could make the process more efficient, but at the risk of duplicating the human prejudices that have defined the banking and similar industries for decades. Engineers develop AI platforms using machine learning – a process “trained” using existing data sets. If those data sets reflect institutional racism and other types of bias, so will the AI.
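To see how that happens mechanically, here is a toy sketch, assuming Python with scikit-learn. The data, the group labels, and the “approval” rule are all fabricated for illustration and don’t describe any real bank’s system.

```python
# Toy sketch: a model trained on historically biased approval decisions
# reproduces that bias. Every number and label here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)      # 0 = majority, 1 = minority (hypothetical)
income = rng.normal(60, 15, size=n)     # identical qualification distribution

# Historical decisions: same income cutoff, but minority applicants were
# denied far more often -- this is the "garbage" the model learns from.
approved = ((income > 55) & ~((group == 1) & (rng.random(n) < 0.6))).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two applicants with identical income, different group membership
print(model.predict_proba([[70, 0]])[0, 1])  # approval probability, majority
print(model.predict_proba([[70, 1]])[0, 1])  # noticeably lower, minority
```

Nothing in that code says “discriminate.” The model simply learns that group membership predicted denial in the historical data and carries the pattern forward.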

AI: An Example of GIGO

The risks of artificial intelligence are, rightfully, hotly contested. You may have heard the term “GIGO,” or “garbage in, garbage out.” It’s a phrase that applies readily to AI. AI has the potential to outstrip human beings in both the depth of its analytical capacity and the speed with which it can draw conclusions. But without a steady grasp of ethics, and clean and unbiased data to draw from, we’ll keep training AI on garbage data and keep getting garbage results.


