Why We Should Worry About Deep Fake Content

December 2, 2021 - Revolutionized Team

Revolutionized is reader-supported. When you buy through links on our site, we may earn an affiliate commission. Learn more here.

People often say you can’t believe everything you read, and they have a good point. It’s important to be a thoughtful media consumer and look for signs of a person’s authority instead of automatically trusting what they say or write. But what if the content you see and hear seems real and isn’t? What if it comes from someone you’ve known for years? That’s the emerging situation surrounding deepfakes, or deep fake content. 

Advanced artificial intelligence (AI) algorithms can create material that appears entirely genuine and is fully fabricated. Let’s take a look at some of the potentially dangerous implications of this technological progress. 

Deep Fake Scams Could Affect You

Many individuals think scams won’t happen to them, until they suddenly find themselves caught up in some kind of trickery. Those misleading acts often have some telltale signs, such as pressuring people to act with urgency. However, scammers who work with deep fake content are taking things to a whole new level. 

An Instagram Takeover With a Doctored Video

More specifically, they’re banking on the fact that most people trust those they know. Consider the case of Daniel Higgins, a music producer who goes by the name D Higgins and uses his Instagram account as his primary platform for promoting his work. He reportedly has thousands of followers there, including music industry contacts whose phone numbers he doesn’t have. 

One day, he couldn’t log into the social media site. After checking his feed’s recent content, he was astounded to see a video of someone who looked and sounded just like him, encouraging followers to invest in cryptocurrency. Some of his followers recognized what was up, however, and texted him to find out what was really happening. 

Higgins went through Instagram’s online account recovery process, but all the associated emails went to the party that had taken over his profile. After a week of attempts, a spokesperson for the social media site said the company was looking into the matter. However, Higgins decided to bring the problem to his local news station to get more traction with his case. 

This example of a working professional affected by a deep fake video shows cybercriminals don’t limit themselves to targeting people with tremendous levels of fame. As deep fake tools become more widely available and easy to use, hackers will likely use them more frequently. 

After all, their goal has always been to fool people with material as realistic and believable as possible. Deep fake content certainly fits that definition. 

Deep Fake Videos Can Frame People

Social media sites harvest data from their users, usually for advertising reasons. In other cases, hackers use it for malicious purposes.

Cybersecurity researchers have created numerous examples showing how easy it is to make a deep fake clip of a famous person, such as a world leader, apparently saying things they never said. People have used soundbites taken out of context for years, editing them to make it seem like the affected parties uttered often-controversial statements. 

The fear is that deep fakes could be used to blackmail famous people, damage their reputations and erode public trust in them. However, as the case above shows, someone need not be extremely well-known to be affected by this threat. 

An Attempt to Smear Cheerleaders With a Deep Fake?

A mother in Pennsylvania allegedly created deep fake videos showing rival cheerleaders on her teenage daughter’s team drinking and smoking. She then brought the content to the coach, hoping to get them kicked off the squad. 

However, more recently, the plot of this story thickened when several deep fake experts asserted it was very unlikely the mother created a convincing video of this sort. Instead, they believe the people in the video actually were drinking and smoking, but they used the “I was deep faked” excuse to get out of the spotlight. 

In any case, this convoluted story shows how deep fake material could get someone attention for all the wrong reasons. When the content people see is increasingly questionable, technology can blur the line between the manufactured and the real. 

Deep Fakes Represent a New Type of Cybercrime

Deep fake content is not only video-related. In 2018, Google demonstrated its Duplex technology. It showed the possibilities of using an AI assistant that could speak over the phone to someone, such as to book an appointment. That technology does not try to impersonate anyone, but it showed the potential of realistic-sounding AI. 

More Than a Dozen People Collaborate in a Deep Fake Bank Transfer Plot

News recently broke of the second-known case of cybercriminals using voice AI in sinister ways. A bank manager in Hong Kong received a phone call from a company director. The business leader wanted to authorize $35 million worth of bank transfers concerning a recent acquisition. They also clarified that a certain lawyer was on board to coordinate the procedures. 

The bank manager had spoken to the company director before. They also looked in their inbox and saw messages from the named lawyer. Since they believed everything was legitimate, they began moving the money. However, an investigation showed it was all a deep fake audio scam, with at least 17 individuals involved in carrying it out. 

Jake Moore formerly worked as a police officer in the United Kingdom and is now a cybersecurity expert at ESET. He commented, “We are currently on the cusp of malicious actors shifting expertise and resources into using the latest technology to manipulate people who are innocently unaware of the realms of deep fake technology and even their existence.”

He continued, “Manipulating audio, which is easier to orchestrate than making deep fake videos, is only going to increase in volume and without the education and awareness of this new type of attack vector, along with better authentication methods, more businesses are likely to fall victim to very convincing conversations.”

Deep Fake Content Could Make Satellite Images Less Reliable

Satellite imagery gives people crucial information that helps them understand the world. For example, it can show sea surface temperatures and ocean colors, both of which can indicate the types of fish in the area. 

Satellite feeds can also reveal early signs of widespread famine or help activists monitor human rights violations and illegal activities, ranging from sex trafficking to forced measures to prevent births within specific groups. 

Faking Satellite Imagery in Washington State

Researchers have also identified a problem they call “deep fake geography” or “location spoofing”: the creation of falsified satellite images. They tested their ideas by creating satellite pictures of a fake version of Tacoma, Washington, consisting of geographic features and urban structures from Beijing, China, and Seattle, Washington.

The team warned that untrained individuals might have trouble differentiating between actual Tacoma images and the manufactured ones. In such cases, people might reach the point where they can no longer take satellite images at face value and must take extra steps to verify their validity. 

Could Deep Fake Satellite Imagery Pose a National Security Risk?

People interested in government security warn that deep fake content could make satellite imagery show things that aren’t there. 

More specifically, Todd Myers, who works as the automation lead of the CIO-Technology Directorate at the National Geospatial-Intelligence Agency, worries about generative adversarial networks (GANs), which he says the Chinese are already adept at using. 

“The Chinese have already designed; they’re already doing it right now, using GANs… to manipulate scenes and pixels to create things for nefarious reasons,” he said. Myers then gave an example, saying, “So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you.”
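The adversarial dynamic behind GANs can be illustrated with a toy sketch. This is not real imagery work — just a minimal one-dimensional example (all parameters here are illustrative) in which a generator learns to imitate “real” data while a discriminator tries to tell the two streams apart, the same tug-of-war that powers deep fake generation at scale:

```python
import numpy as np

# Toy 1-D GAN sketch: "real" data is drawn from N(4, 1). The generator
# G(z) = a*z + b starts far from that distribution; a logistic-regression
# discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.02, 64

for _ in range(5000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Generator step: nudge a and b so the fakes fool the discriminator.
    df = sigmoid(w * fake + c)
    dfake = -(1.0 - df) * w   # gradient of -log D(fake) w.r.t. each fake
    a -= lr * np.mean(dfake * z)
    b -= lr * np.mean(dfake)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(f"generated mean is roughly {fake_mean:.2f} (real mean is 4.0)")
```

After training, the generated samples cluster near the real data’s mean — the generator has learned to produce output the discriminator can no longer flag as fake. Real deep fake systems apply this same contest to millions of pixels instead of a single number.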

High Awareness Is a Must as Deep Fake Content Becomes More Common

Everyday people don’t have the tools to quickly spot deep fake material online or anywhere else. Unfortunately, the signs of fabrication are not easy to detect. For example, research to curb deep fakes centers on picking up minute details, such as how light reflects in someone’s eyes. 

However, a good place for concerned individuals to start is by being aware that such content exists. If they see a video of a friend, celebrity or anyone else saying something, it’s no longer necessarily safe to assume it’s true.

Similarly, anyone who receives a call from a superior asking them to transfer large amounts of money or take a similar action may want to hang up the phone and contact the requester themselves before proceeding. 

Finally, it’s a good idea to stay abreast of new deep fake efforts, keeping up with the latest approaches people try. Many internet researchers are highly interested in seeing what deep fake content could do, so they frequently make proofs of concept to highlight the possibilities.

