What are deepfakes, their risks and how to spot them (2024)

Ever since the term “deepfake” appeared online in 2017, it has gained traction, both for the novel way the technology creates artificial videos and for the dangers it poses. More recently, the term came into the mainstream after fake nude photographs of American singer Taylor Swift proliferated on X (formerly known as Twitter), which led to calls in Congress for new legislation.

These AI-generated images of real people, which appear authentic, have garnered significant attention in light of the targeting of Swift. Some states have already enacted laws targeting deepfakes, while others are considering measures to combat their proliferation. Efforts include deepfake detection algorithms and embedding codes in content to identify misuse. Model legislation proposed by the American Legislative Exchange Council focuses on criminalizing possession and distribution of deepfakes depicting minors and allowing victims to sue for nonconsensual distribution of sexual content.

However, ensuring effective enforcement and navigating free speech concerns remain significant challenges. Federal legislation has also been introduced to provide individuals with property rights over their likeness and voice, allowing them to sue for misleading deepfakes. States such as Indiana and Missouri are pushing for legislation criminalizing the creation and distribution of sexually explicit deepfakes without consent.

But deepfake pornography is just the tip of the iceberg when it comes to the risks this technology poses. Deepfakes have several potential uses that can cause different kinds of harm, such as fake news, hoaxes, financial fraud and other forms of pornography, including revenge porn and child sexual abuse material.

What are deepfakes?

Deepfakes are videos, photos, or audio recordings of real-life people that seem authentic but have been manipulated with artificial intelligence, according to the U.S. Government Accountability Office (GAO). The name comes from deep learning, the type of machine learning used to generate this kind of media.

The GAO says deepfakes are tools that can be used for exploitation and disinformation. They could influence elections and cause damage to public and private figures, “but so far have mainly been used for non-consensual pornography”, as was the case with Taylor Swift.

How do deepfakes work?

Deepfakes utilize advanced AI techniques like autoencoders and generative adversarial networks (GANs) to create realistic synthetic media. Both are examples of deep learning, in which a model takes a certain type of data and learns to produce new media that resembles the examples it was trained on.

An autoencoder is an artificial neural network (designed to replicate how the human brain learns information) trained to recreate its input from a compressed representation, which means it can reconstruct an image or a video from a much simpler encoding. A GAN consists of two competing artificial neural networks: one tries to produce a fake version, while the other tries to detect it. The two are trained against each other repeatedly, resulting in an increasingly “realistic” or “accurate” portrayal. According to the GAO, “GANs create more convincing deepfakes, but are more difficult to use”.
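
To make the GAN idea concrete, the following is a minimal sketch in Python, assuming PyTorch and toy vector data rather than real face images (neither is specified by the GAO, and real deepfake systems use far larger networks trained on faces). A small generator learns to turn random noise into fake samples while a discriminator learns to tell them apart from real ones.

```python
# Minimal, illustrative GAN sketch (PyTorch assumed; not a production deepfake system).
# A generator maps random noise to fake samples; a discriminator scores real vs. fake.
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM = 64, 16  # stand-ins for flattened image pixels and latent noise

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw score; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, DATA_DIM) * 2 - 1  # placeholder for real training data

for step in range(200):
    # 1) Train the discriminator to separate real samples from generated ones.
    fake_batch = generator(torch.randn(32, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake_batch = generator(torch.randn(32, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial loop is what the GAO quote refers to: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones.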

Improvements to these technologies are making deepfakes harder to detect. In the past, viewers could easily spot fraudulent content, but this may no longer be the case, considering how realistic some images, videos and audio recordings now seem.

Risks of deepfakes

As mentioned, deepfake technology could be used to create several types of content, such as pornography using a celebrity’s or any other person’s face without their consent, or fake news with altered videos of politicians saying things they never said or doing things they never did.

A report by the Department of Homeland Security states that “the threat of Deepfakes and synthetic media comes not from the technology used to create it, but from people’s natural inclination to believe what they see”, and highlights that deepfakes and synthetic media are effective in spreading misinformation or disinformation even when they are not “advanced or believable”.

The department also highlights how divided expert opinion is on the urgency of the threat that synthetic media and deepfakes pose, noting that the spectrum of concern ranges from “an urgent threat” to “don’t panic just be prepared”.

How to detect deepfakes

Technological detection of deepfakes relies on extensive and diverse datasets for training detection tools, but current datasets are insufficient and require constant updates to effectively detect manipulated media. Automated detection tools are still under development, with ongoing research aiming to automatically identify deepfakes and assess the integrity of digital content. However, detection techniques often spur the development of more sophisticated deepfake methods, so regular updates to detection tools are necessary.
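
As a rough illustration of why dataset coverage matters for these tools, here is a small sketch in Python, assuming scikit-learn and entirely synthetic detector scores (no real detector or dataset is implied). It evaluates a hypothetical real-vs-fake classifier overall and per manipulation method, the kind of breakdown that reveals blind spots an aggregate score hides.

```python
# Illustrative evaluation of a deepfake detector per manipulation method
# (synthetic scores stand in for a real detector's outputs).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical test set: label 1 = fake, 0 = real, plus the method used to create each fake.
methods = np.array(["real"] * 200 + ["face_swap"] * 100 + ["lip_sync"] * 100)
labels = (methods != "real").astype(int)

# Placeholder detector scores: strong on face swaps, weak on the newer lip-sync fakes.
scores = np.where(methods == "face_swap", rng.normal(0.8, 0.1, methods.size),
          np.where(methods == "lip_sync", rng.normal(0.55, 0.15, methods.size),
                   rng.normal(0.3, 0.1, methods.size)))

print("overall AUC:", round(roc_auc_score(labels, scores), 3))
for m in ("face_swap", "lip_sync"):
    mask = (methods == "real") | (methods == m)
    print(m, "AUC:", round(roc_auc_score(labels[mask], scores[mask]), 3))
```

Per-method numbers like these are why detection datasets need constant refreshing as new generation techniques appear.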

Even with effective detection, disinformation spread through deepfake videos may still be impactful due to audience unawareness or lack of verification. Social media platforms have inconsistent standards for moderating deepfakes, and proposed legal regulations raise concerns about freedom of speech, privacy rights, and enforcement challenges.

As for human detection, in the past it could be easy to spot a fake video, since there were common visual mistakes like inconsistent eye blinking or a lack of definition in certain areas of the image. As the technology advances, it is becoming ever harder to spot fake content by eye.

FAQs

What are the risks of deepfakes?

Threat actors are already creating deepfake images, audio, and video content with lifelike facsimiles of real people. Celebrities, the public, and businesses are being targeted. Fake imagery is being used to cause reputational harm, exact revenge, and carry out fraud.

What are the threats of deepfakes?

Celebrity and nonconsensual pornography. A major threat that deepfakes pose is nonconsensual pornography, which accounts for up to 96% of deepfakes on the internet. Most of this targets celebrities. Deepfake technology is also used to create hoax instances of revenge porn.

What is meant by deepfakes?

A deepfake (plural: deepfakes) is an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.

Is there a way to detect deepfakes?

Facial and body movement

For images and video files, deepfakes can still often be identified by closely examining participants' facial expressions and body movements.
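
One classic automated stand-in for this kind of visual check, which neither the article nor the FAQ prescribes, is the eye aspect ratio (EAR): the ratio drops sharply when an eye closes, so a clip whose EAR never dips may indicate missing or inconsistent blinking. The sketch below assumes six eye landmarks per frame coming from some external face landmark detector (for example dlib or MediaPipe), which is not shown.

```python
# Illustrative blink check via eye aspect ratio (EAR); landmark extraction is assumed
# to come from an external face landmark detector and is not shown here.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered corner, top x2, corner, bottom x2."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ears, closed_threshold=0.2):
    """Count open-to-closed transitions in a sequence of per-frame EAR values."""
    closed = np.asarray(ears) < closed_threshold
    return int(np.sum(closed[1:] & ~closed[:-1]))

# Usage with synthetic data: an open eye gives a ratio around 0.3 or higher,
# and a clip whose EAR never dips reports zero blinks, which is suspicious
# for a video longer than a few seconds.
open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
print("open-eye EAR:", round(eye_aspect_ratio(open_eye), 2))
steady_clip = [0.31 + 0.01 * np.sin(i / 5) for i in range(300)]
print("blinks detected:", blink_count(steady_clip))
```

As the article notes, this cue is becoming less reliable as generators learn natural blinking, so it should be treated as one weak signal among many.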

How do deepfakes affect people?

Non-consensual deepfake videos can cause significant harm to individuals by exploiting and manipulating their likeness for explicit or damaging content. This can lead to severe harm, including loss of employment opportunities, public humiliation, or damage to personal relationships.

Why should we be worried about deepfakes?

Deepfakes are creating havoc across the globe, spreading fake news and pornography, being used to steal identities, exploiting celebrities, scamming ordinary people and even influencing elections.

What are the challenges of deepfakes?

Regulatory Challenges:
  • Regulating deepfakes in electoral campaigns is challenging due to rapid technological advancements and the global nature of online platforms.
  • Governments and election authorities struggle to keep pace with evolving AI techniques and may lack expertise in regulating AI-driven electoral activities.

What are the potential problems that deepfakes can cause?

Deepfakes (realistic AI-generated audio, video, or images that can recreate a person’s likeness) are one of the most pressing challenges posed by generative AI, given the potential for bad actors to use them to undermine democracy, exploit artists and performers, and harass and harm everyday people.

How can we protect against deepfakes?

Limit the amount of data available about yourself, especially high-quality photos and videos, that could be used to create a deepfake. You can adjust the settings of social media platforms so that only trusted people can see what you share.

Are deepfakes good or bad?

A “deepfake” is fabricated hyper-realistic digital media, including video, image, and audio content. Not only has this technology created confusion, skepticism, and the spread of misinformation, but deepfakes also pose a threat to privacy and security.

What is an example of a deepfake?

Deepfake technology examples range from convincing voice recordings and “filter-type” videos all the way to fully fabricated yet highly lifelike content.

Are deepfakes illegal?

There’s no federal law specifically addressing deepfake pornography, although if the images depict a minor, federal child pornography laws may apply.

How do people make deepfakes?

Creating a deepfake is a complex process that relies on the use of artificial intelligence algorithms, specifically those focused on deep learning. These algorithms analyze thousands of images and videos to learn how to mimic a person's facial expressions, movements, and voice.
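
One widely used face-swap recipe, offered here as an illustrative assumption rather than the method any specific tool uses, trains a single shared encoder with two decoders, one per person; swapping then means encoding person A’s face and decoding it with person B’s decoder. Below is a minimal PyTorch sketch of that structure with placeholder dimensions and random stand-in data.

```python
# Minimal shared-encoder / two-decoder face-swap sketch (PyTorch assumed, toy data).
import torch
import torch.nn as nn

FACE_DIM, LATENT_DIM = 64 * 64, 256  # placeholder sizes for flattened face crops

encoder = nn.Sequential(nn.Linear(FACE_DIM, 512), nn.ReLU(), nn.Linear(512, LATENT_DIM))
decoder_a = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                          nn.Linear(512, FACE_DIM), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                          nn.Linear(512, FACE_DIM), nn.Sigmoid())

opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder_a.parameters()) +
                       list(decoder_b.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.rand(16, FACE_DIM)  # placeholders for aligned face crops of person A
faces_b = torch.rand(16, FACE_DIM)  # ...and of person B

for step in range(100):
    # Each decoder learns to reconstruct its own person from the shared latent space.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()

# The "swap": encode person A's face, then decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a[:1]))
```

In real systems the encoder and decoders are convolutional, the faces are detected and aligned first, and the swapped output is blended back into the original frame.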

How can we solve deepfakes?

To achieve this, deepfake detection solutions typically use a combination of deep learning algorithms, image, video, and audio analysis tools, forensic analysis, and blockchain technology or digital watermarking—all of which help the solution to identify inconsistencies undetectable to the human eye.
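
As a simplified stand-in for the watermarking and provenance side of such solutions (real schemes, such as cryptographic content credentials, are considerably more involved), the sketch below signs a media file’s bytes at publish time and later checks whether they have been altered; the key and byte strings are hypothetical placeholders, and the code uses only the Python standard library.

```python
# Simplified content-integrity check: sign media bytes at publish time, verify later.
# A stand-in for real watermarking/provenance schemes, not a replacement for them.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the content publisher

def sign_media(data: bytes) -> str:
    """Return a hex signature bound to the exact bytes of the media file."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, signature: str) -> bool:
    """Check that the media bytes still match the signature issued at publish time."""
    return hmac.compare_digest(sign_media(data), signature)

original = b"...raw video bytes..."        # placeholder for the published file
tag = sign_media(original)

tampered = original + b"extra frame"       # any edit breaks the signature
print(verify_media(original, tag))         # True
print(verify_media(tampered, tag))         # False
```

A check like this only proves a file is unchanged since it was signed; spotting manipulations in unsigned content still falls to the learned detectors described earlier.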

Are deepfakes identity theft?

By leveraging artificial intelligence, deepfakes enable fraudsters to clone your face, voice, and mannerisms to steal your identity.

What are the malicious uses of deepfakes?

The impact of deepfakes on society

Deepfakes can be used to harm reputations, manipulate public sentiment, sway elections, and erode democratic processes. Beyond politics, deepfakes present a threat to personal security. The technology can be used for malicious purposes, such as blackmail, fraud, and cyberbullying.

What are the privacy concerns of deepfakes?

Deepfakes represent a significant threat to personal privacy as they can be manipulated for personal and financial gain. With the ability to convincingly alter digital media to depict individuals saying or doing things they never did, malicious actors can exploit deepfakes for various purposes.
