Here's Everything You Need to Know about Deepfakes (2024)

In light of recent deepfakes, the 1997 film Wag the Dog feels like foreshadowing. In the movie, Robert De Niro plays a spin doctor who sways an election by fabricating a war to distract voters from a presidential scandal. He pulls it off with the help of a Hollywood producer, actors and a studio.


Today, creating deceptive content requires much less. Thanks to AI, all it takes is one person with a computer, some images and free software to convince the public of something that never happened.

As deepfakes get more advanced, will content made or modified by AI become indistinguishable from reality? And what would happen if this technology gets into the wrong hands in 2024, the biggest election year in history?

Let’s dive into the dark, and sometimes funny, world of deepfakes to see what they’re capable of now and in the future—and what’s being done to curtail their abuse.

What is a deepfake?

A deepfake refers to media, including video, photo and audio, generated or manipulated by AI to impersonate someone or fabricate an event that never happened.

The word stems from the term “deep learning,” a subset of AI, and appears to have originated in 2017 thanks to a Reddit user. The terms “face swapping” and “voice cloning” are also used when talking about deepfakes.

Famous deepfakes include:

As the above examples show, deepfakes can range from harmless entertainment to sinister manipulation — and it’s the latter that has politicians and the public on guard.

The Growing Popularity of Deepfakes

From 2022 to 2023, the number of deepfake fraud incidents globally rose tenfold, according to research by identity verification platform Sumsub. The biggest surge was in North America, where deepfakes were used to power fake IDs and account takeovers.

And now, experts warn that deepfakes are being used more than ever in attempts to spread election disinformation.

In September 2023, just days before a tight parliamentary election in Slovakia, an audio recording was posted online that sounded like progressive candidate Michal Šimečka discussing how to rig the election.

Fact-checkers warned the audio was a deepfake – but that didn’t stop its spread. It’s impossible to say if and how the falsified recording affected the outcome, but in the end, Šimečka’s opponent won.

In January 2024, in what NBC News calls the “first-known use of an AI deepfake in a presidential campaign,” a robocall impersonating President Joe Biden urged New Hampshire voters not to participate in their state’s presidential primary.

Just weeks later, the FCC outlawed robocalls featuring AI-generated voices.

How easy is it to create a deepfake?

Easy is subjective – let’s begin with a better question: How convincing do you want the deepfake to be?

If you just want to create a funny video depicting you as an Avenger, apps like iFace can take a single selfie of you and put your face into famous movie scenes in seconds.

But creating the kind of sophisticated deepfake that the world is really worried about, the kind that could potentially sway elections, takes much more effort than that.

As the two election examples above show, audio is easier to fake than video. To see just how much effort is required to create a persuasive deepfake video, we first need to understand how deepfakes work.

To create a convincing digital forgery of another person’s likeness, the AI model must first be trained.

In other words, it needs to study faces, learning what the source and the target look like, before it can replicate them in highly realistic scenarios.

To do this, you must feed it images, audio and video to study – the more, the better. That can take hours upon hours, plenty of GPU power and, of course, the technical skill to make it happen.
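
To make that training process concrete, here's a minimal sketch, in PyTorch, of the shared-encoder, two-decoder autoencoder idea behind classic face-swapping tools like DeepFaceLab. This is not DeepFaceLab's actual code, and the random tensors below stand in for the thousands of aligned face crops a real run would need.

```python
# Minimal sketch of the classic face-swap setup: one shared encoder learns a
# generic "face" representation, while each decoder learns to reconstruct one
# specific person. The swap happens at inference time: encode person A's face,
# then decode it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Placeholder data: real training uses thousands of aligned, cropped face frames.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real runs take many hours on a GPU
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": person A's face reconstructed through person B's decoder.
swapped = decoder_b(encoder(faces_a))
```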

YouTuber Mike Boyd, who had no apparent prior experience in special effects or machine learning, set out to create a “reasonably convincing” deepfake of himself in famous movies.

He achieved his mission by using open-source software DeepFaceLab, studying multiple tutorials and feeding the AI model thousands of photos of himself.

It took him 100 hours, and here’s one of the deepfakes all of that work produced:


As AI advances, however, reasonably convincing deepfakes are becoming both easier to create and higher in quality.

For context, back in March 2023, an AI-generated video depicting Will Smith eating spaghetti traumatized the world after it was posted to the subreddit r/StableDiffusion.


The original poster, a user named “chaindrop,” says the video was created using ModelScope, an open-source text-to-video AI tool.
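
For a sense of how little code that now takes, here's a rough sketch of generating a clip with the open ModelScope text-to-video model through Hugging Face's diffusers library. This isn't chaindrop's exact workflow, the model ID below is the publicly hosted checkpoint as I understand it, and the output handling can differ between diffusers versions.

```python
# Rough sketch: text-to-video with the open ModelScope checkpoint via diffusers.
# Assumes a CUDA GPU and that the model ID below is still the hosted checkpoint.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A couple of seconds of low-resolution video from a one-line prompt.
result = pipe("Will Smith eating spaghetti", num_inference_steps=25)
frames = result.frames[0]  # on older diffusers versions this may just be result.frames
export_to_video(frames, "will_smith_spaghetti.mp4")
```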

A year later, I wanted to see how far this technology had come since the original “Will Smith eating spaghetti” video.

I used Vercel’s text-to-video AI tool to generate a two-second video based on the simple prompt of “Will Smith eating spaghetti.” This is what popped out:


Erm, not quite right. But I will say that this deepfake Will Smith’s pasta-eating movements are clearer and a lot less frantic than in the ModelScope version.

PikaLabs produced a more realistic version of the actor, but it didn’t fulfill the prompt.


If the outputs of Sora, OpenAI’s yet-to-be-released text-to-video model, end up being anything like the realistic and elaborate videos OpenAI displays on its landing page, then we might actually have something to worry about in the future.

What’s Being Done to Fight Harmful Deepfakes

As you can imagine, regulating technology that is increasingly deceptive and constantly evolving is a challenge. But in the U.S. and abroad, entities in both the public and private sectors are taking action.

As of March 13, 2024, 43 U.S. states have either introduced or passed legislation regulating the use of deepfakes in elections, according to nonprofit consumer advocacy organization Public Citizen. The Associated Press also reports that at least 10 states have already enacted laws related to deepfakes.

Across the pond, the UK enacted the Online Safety Act in October 2023, which, among other things, outlaws the nonconsensual sharing of photographs or film of an intimate act — including images “made or altered by computer graphics or in any other way.”

Additionally, in February, the UK's Home Secretary James Cleverly met with leaders at Google, Meta, Apple and other tech companies to discuss how to safeguard upcoming elections.

In December 2023, European Union lawmakers approved the AI Act, a first-of-its-kind legal framework that attempts to identify and mitigate risks associated with artificial intelligence.

Under this law, deepfakes must be labeled as artificially generated so the public can remain informed.

In the private sector, businesses involved in developing AI tools are putting their own guardrails in place.

Last year, Google launched a watermarking tool, SynthID, to make it easier for software to detect AI-generated images. In February, 20 tech companies including Google, Meta and OpenAI signed a tech accord to fight AI election interference.

In it, the companies committed to detecting and labeling deceptive AI content on their platforms so that it’s clear that it’s been generated or manipulated by artificial intelligence.

Last month, Google joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member, alongside Adobe, Microsoft and Sony, to help develop "content credentials,” which are essentially virtual badges attached to digital content that, when clicked, show details on how AI was used to make or modify it.
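
To get a feel for what a content credential records, here's a heavily simplified, illustrative sketch of the kind of provenance data involved. This is not the actual C2PA manifest format – real credentials are cryptographically signed structures embedded in or linked from the file – and every field name below is indicative only.

```python
# Illustrative only -- NOT the real C2PA schema. A content credential roughly
# records who produced an asset, with what tool, and whether generative AI was
# involved, and the whole record is cryptographically signed by the publisher.
content_credential = {
    "title": "spaghetti_clip.mp4",               # hypothetical file name
    "claim_generator": "ExampleVideoTool/1.0",   # hypothetical tool name
    "produced_by": "Example Studio",             # hypothetical publisher
    "actions": [
        {
            "action": "created",
            "source_type": "trained_algorithmic_media",  # i.e., AI-generated
            "when": "2024-03-01T12:00:00Z",
        }
    ],
    "signature": "<publisher's cryptographic signature over the fields above>",
}
```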

OpenAI, the maker of DALL-E and ChatGPT, already has mitigation efforts in place, including prohibiting the use of a public figure’s likeness in its AI-generated images.

When I prompted ChatGPT to generate an image of Will Smith eating spaghetti, it wouldn’t comply.
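
The same guardrail shows up if you go through the API instead of the chat interface. Here's a minimal sketch using OpenAI's Python library; the exact exception raised for a policy refusal is my assumption and may differ in practice.

```python
# Minimal sketch: asking DALL-E 3 for a public figure's likeness via the API.
# Assumes an OPENAI_API_KEY environment variable; the specific error type for a
# policy refusal is an assumption and may vary.
from openai import OpenAI, BadRequestError

client = OpenAI()

try:
    result = client.images.generate(
        model="dall-e-3",
        prompt="Will Smith eating spaghetti",
        size="1024x1024",
        n=1,
    )
    print(result.data[0].url)
except BadRequestError as err:
    # Prompts requesting a real person's likeness are typically rejected
    # before any image is generated.
    print("Request refused:", err)
```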


Because OpenAI’s existing models reject prompts requesting a celebrity’s likeness, its text-to-video model Sora will likely follow the same policy.

From consumer fraud to attempted election interference, deepfakes are clearly already doing damage.

When truth can be manufactured at the click of a button, it’s on each individual to stay skeptical – until governments and companies catch up.



FAQs

What do you need to know about deepfakes?

A “deepfake” is an image, a video, voice, or text created by AI. The “deep” is from “deep learning,” a method of training computers on massive amounts of data to perform human-like tasks. The “fake” indicates that it's computer-generated and difficult to distinguish from human-generated media.

Is it illegal to watch deepfakes?

Watching deepfakes is not illegal in itself, except in cases where the content involves unlawful material, such as child pornography. Existing legislation primarily targets the creation and distribution of deepfakes, especially when these actions involve non-consensual pornography.

What are the dangers of deepfakes?

Not only has this technology created confusion, skepticism, and the spread of misinformation, but deepfakes also pose a threat to privacy and security. With the ability to convincingly impersonate anyone, cybercriminals can orchestrate phishing scams or identity theft operations with alarming precision.

Are deepfakes really a security threat?

Even scarier are the AI-generated deepfakes that can mimic a person's voice, face and gestures. New cyber attack tools can deliver disinformation and fraudulent messages at a scale and sophistication not seen before. Simply put, AI-generated fraud is harder than ever to detect and stop.

How to tell if an image is a deepfake?

The Poynter journalism website advises that if you see a public figure doing something that seems “exaggerated, unrealistic or not in character,” it could be a deepfake. For example, would the pope really be wearing a luxury puffer jacket, as depicted by a notorious fake photo?

Where does the term “deepfake” come from?

The term was coined in 2017 by a Reddit user, and has since been expanded to cover any videos, pictures, or audio made with artificial intelligence to appear real – for example, realistic-looking images of people who do not exist.

Can deepfakes be detected?

For images and video files, deepfakes can still often be identified by closely examining participants' facial expressions and body movements.

Can you sue someone for a deepfake?

The Disrupt Explicit Forged Images and Non-Consensual Edits Act, or DEFIANCE Act, allows victims to sue those who created the deepfakes if they knew, or “recklessly disregarded,” that the victim did not consent to their creation.

Do you need permission to deepfake someone?

The Copyright Act provides copyright protection for works, including films, music, and creative content. Individuals who infringe copyright by creating deepfakes from copyrighted works without permission can face legal action under this act.

Should I be worried about deepfakes?

Deepfakes are creating havoc across the globe, spreading fake news and pornography, being used to steal identities, exploiting celebrities, scamming ordinary people and even influencing elections.

Can deepfakes be tracked?

As these generative artificial intelligence (AI) technologies become more common, researchers are now tracking their proliferation through a database of political deepfakes.

Why are deepfakes scary?

The real danger of deepfakes is how convincing they've become. Anyone can use deepfake tools to create fake videos or images that spread misinformation and propaganda, harass and intimidate others, commit identity theft and fraud, and ultimately ruin people's lives.

Can you get in trouble for looking up deepfakes?

Is it illegal to download deepfakes? Downloading deepfakes isn't universally illegal but becomes so when the content violates laws, such as pornographic deepfakes created without the consent of the individual featured. Furthermore, downloading copyrighted material can lead to accusations of copyright infringement.

In what states are deepfakes illegal?

Georgia, Hawaii, Texas and Virginia have laws on the books that criminalize nonconsensual deepfake porn. California and Illinois have given victims the right to sue those who create images using their likenesses. Minnesota and New York do both. Minnesota's law also targets using deepfakes in politics.

How can we protect against deepfakes?

Limit the amount of data available about yourself, especially high-quality photos and videos, that could be used to create a deepfake. You can adjust the settings of social media platforms so that only trusted people can see what you share.

What should be done about deepfakes?

Protecting creators

Policymakers should hold people accountable for producing unauthorized deepfakes of creator performances and hold platforms accountable if they knowingly disseminate such unauthorized content.

What is the need for deepfake detection?

Deepfakes are highly realistic digital creations made by combining “deep learning” with fake imagery. Deepfake detection is crucial because deepfakes pose a significant threat to the authenticity of information, potentially leading to misinformation and manipulation on an unprecedented scale.

What are the good things about deepfakes?

Deepfakes can also be used to mask the identity of people's voices and faces to protect their privacy. Individuals can use deepfakes to create avatar experiences for self-expression on the internet. Individuals can gain autonomy and expand their purpose, ideas, and beliefs by using a personal digital avatar.
