Deepfakes and Deception: A Framework for the Ethical and Legal Use of Machine-Manipulated Media

On March 6, 2023, journalist Sam Biddle published an article in The Intercept describing the pursuit of a deepfake capability by United States Special Operations Command (USSOCOM). The article confirmed, for the first time, that the US military is actively exploring the use of the controversial technology to influence and deceive certain foreign audiences and adversaries. Biddle’s report relies on a publicly available USSOCOM procurement document that solicits commercial bids to fill future capability gaps. His article implies that the military’s interest in employing deepfakes demonstrates hypocrisy, as the US government has “spent years warning deepfakes could destabilize democratic societies.” The implication deserves closer analysis.

The fact that USSOCOM is examining these next-generation influence capabilities is not entirely scandalous. The US military conducts a broad array of research, development, and testing. Further, USSOCOM is the Department of Defense's joint proponent and coordinating authority for internet-based information operations and typically acts as the US military's technological pathfinder. The joint force relies on USSOCOM's progressive attitudes, creativity, and innovative acquisitions. But it is not clear how the US military will balance the potential advantages of deepfakes against their well-documented harmful consequences. This article examines that dilemma and suggests a framework for further analysis.

Deepfakes Defined and Decried

The term deepfake is not explicitly defined in United States law or policy, but recent legislation defines a closely related term that encompasses deepfakes. Section 5709 of the Fiscal Year 2020 National Defense Authorization Act (NDAA) requires a comprehensive report on the foreign weaponization of “machine-manipulated media,” which Congress defined as “video, image, or audio recordings generated or substantially modified using machine-learning techniques in order to falsely depict events, to falsely depict the speech or conduct of an individual, or to depict individuals who do not exist.”

Debates over deepfakes within the US government have revolved almost exclusively around the dangers presented by foreign-made forgeries. In June 2019, the US House Intelligence Committee held a hearing during which several artificial intelligence experts described threats posed by deepfakes and other generative AI applications. The committee issued a statement expressing concern about how deepfakes will contribute to a “post-truth future.” Since the hearing, lawmakers, the Congressional Research Service, the FBI, and various think tanks have issued dire warnings about the harms caused by foreign deepfakes and have urged government agencies to invest in deepfake detection technologies to guard against adversarial information operations.

Additionally, Congress has directed various parts of the US national security enterprise to take steps toward countering deepfakes. Section 5724 of the FY2020 NDAA established a competition “to stimulate the research, development, or commercialization of technologies to automatically detect machine-manipulated media.” Section 5709 required the intelligence community to conduct a study on the foreign weaponization of deepfakes. The FY2020 NDAA also established a requirement for the intelligence community to notify Congress whenever “there is credible information or intelligence that a foreign entity has attempted, is attempting, or will attempt to deploy machine-manipulated media or machine-generated text aimed at the election or domestic political processes of the United States.” The notification requirement is now codified at Title 50, US Code, Section 3369a. There is no doubt that foreign-made deepfakes pose a threat to democracy and will likely complicate future US elections.

What is less clear, however, is to what extent US government agencies are developing deepfake technology for future operational use or whether senior leaders fully grasp the legal and policy issues associated with the use of generative AI to influence foreign audiences. There has been little open debate about whether the US military should employ deepfakes in support of peacetime information initiatives, gray-zone information operations, or wartime military information support operations and military deception campaigns.

Very few commentators have considered whether the US military might use deepfakes to achieve operational effects in competition or conflict. One example is a 2019 California Law Review article in which Professors Bobby Chesney and Danielle Keats Citron briefly surmised that “the U.S. military... might use deep fakes to undermine the credibility of an insurgent leader by making it appear that the person is secretly cooperating with the United States or engaging in immoral or otherwise hypocritical behavior.” The remainder of their discussion of deepfakes in the military context, however, focused on targeting enemy deepfake producers and imposing costs against “foreign individuals or entities who may make harmful use of deep fakes.”

Not all deepfakes are exploitative (i.e., a subject may have consented) or particularly harmful, and there may be great utility in employing deepfake technology as a military information-related capability. For example, deepfakes could impede recruiting efforts for terrorist groups that rely on the internet to radicalize young men and women. In a future armed conflict, commanders might use deepfakes to confuse the enemy and protect forces maneuvering to an objective.

That said, if the US military intends to utilize deepfakes in support of operations, it should do so with a clear picture of the potential risks and rewards. In a January 2023 Brookings Institution report entitled “Deepfakes and International Conflict,” the authors noted, correctly in my view, “The decision to generate and use deepfakes should not be taken lightly and not without careful consideration of the trade-offs.” They recommended establishing “a broad-based, deliberative process” as “the best route to ensuring that democratic governments use deepfakes responsibly.”

Deepfake Categories, Military Use Cases, and the Pitfalls to Navigate

Deepfakes are diverse in many ways. As discussed above, some feature video or still images, while others feature audio alone. Some are generated for nefarious purposes without the subject's consent, while others are made with total transparency. Regardless of format or the maker's intent, almost every deepfake falls into one of five major categories. Understanding these categories, how they might be employed by a military force in an armed conflict, and some of the legal, policy, and ethical considerations involved offers a framework for determining when and how deepfakes can, or should, be employed in the future; a short schematic of the taxonomy appears after the final category below.

The first category, Living Person/No Consent (“LPNC”), likely includes the bulk of existing deepfakes in circulation. For good reason, this category also attracts the most attention from media organizations and scholars. It includes deepfakes like the Russian-made video of Ukrainian President Volodymyr Zelenskyy urging his countrymen to surrender after the Russian military invaded in February 2022. The LPNC category also includes pornographic deepfakes and other fake videos and audio files that exploit the images and likenesses of celebrities. Recent LPNC deepfakes include fake audio recordings of the rapper Jay-Z performing Billy Joel's “We Didn't Start the Fire” and video of actor Bruce Willis appearing in a commercial for the Russian mobile phone company MegaFon. In May 2022, a deepfake of Elon Musk emerged online purporting to show him endorsing the cryptocurrency platform BitVex.

In the military context, LPNC deepfakes might carry significant advantages. For example, US military planners working on military information support operations could create a deepfake of an influential Islamic State, al-Qaeda, or al-Shabaab leader saying or doing things that he would not typically say or do, with the intent of confusing terrorists in the field or discouraging new recruits. Considering how many ISIS fighters became radicalized online before traveling to Iraq, Syria, and elsewhere to fight, sophisticated deepfakes could impede terror networks or exploit disagreements among factions within them. Commanders and their servicing legal advisors would need to consider whether deepfakes like these trigger the covert action statute, although Title 10, US Code, Section 394(c), which categorizes clandestine military activities or operations in cyberspace as traditional military activities, likely exempts deepfakes employed in support of military campaigns from that statute's requirements.

The second category of deepfakes, Living Person/Consent (“LPC”), includes many videos made for entertainment purposes. For example, in 2018 a group of doctoral students at the University of California, Berkeley created a series of deepfake videos depicting themselves dancing like professionals. Their paper, “Everybody Dance Now,” remains one of the more influential, and entertaining, works of scholarship in the area of computer vision.

In the military context, LPC deepfakes might be used as lawful tools for deception. As Professor Eric Talbot Jensen and Summer Crockett explained in their 2020 article for the Lieber Institute for Law and Warfare, “A deepfaked video including inaccurate intelligence information might significantly impact the conduct of military operations.” Further, as Professor Hitoshi Nasu explained, the law of armed conflict does not prohibit deception. The US military might release a series of focused deepfakes as part of an approved military deception campaign to trick an adversary into believing that a battlefield commander is in multiple locations at once, for example.

The third category of deepfakes, Deceased Person/No Consent (“DPNC”), includes projects like a fake video of deceased Islamic State leader Abu Mohammed al-Adnani produced by Northwestern University's Security and AI Lab in 2020. Northwestern researchers produced the DPNC deepfake video of al-Adnani to raise public awareness of the technology's proliferation. For similar reasons, also in 2020, researchers at MIT created a fake video of President Richard Nixon delivering a contingency speech, drafted but never given, telling the American people that the 1969 Apollo 11 mission to the moon had failed and that the astronauts had died. MIT leveraged media coverage of its fake moon landing speech video to raise public awareness of manipulated internet-based media. It remains one of the most compelling examples of a deepfake online today.

In the military context, a fake video of a deceased enemy combatant (e.g., Anwar al-Awlaki or Osama bin Laden) criticizing the performance of al-Qaeda's rank-and-file fighters might be useful in confusing and countering violent extremists. Commanders would need to take great care to ensure that DPNC deepfakes comply with customary international law prohibiting disrespectful and degrading treatment of the dead.

The fourth category of deepfakes, Deceased Person/Consent (“DPC”), includes an emerging group of videos intended to allow living people to interact with their deceased loved ones. The company MyHeritage, for example, creates AI-manipulated videos and photos of deceased loved ones. While some observers find this service to be “creepy,” MyHeritage and other companies are apparently thriving. In May 2023, actor Tom Hanks told podcaster Adam Buxton that he is open to future filmmakers using deepfake video and audio of his image, likeness, and voice after he is deceased.

In the military context, a country or command might consider obtaining an influential leader's permission to generate deepfake videos or audio messages designed to confuse an adversary about whether a particular target is deceased. One could imagine a scenario in Ukraine in which President Zelenskyy is killed but continues to appear on the news, as if he were still alive, to issue guidance, comment on battlefield developments, and congratulate his soldiers on recent gains.

The final category of deepfakes, Fake Person/Event (“FPE”), is worthy of special attention. It includes videos, audio recordings, and images of totally fabricated people and fake events. In 2022, Russian hackers established a pro-Kremlin website, Ukraine Today, to flood the information environment with deceptive news and opinions. The hackers created a troupe of fake bloggers, using AI to generate profile photographs of Ukrainians who do not actually exist. A man named Vladimir Bondarenko, for example, published a series of articles intended to distort the public's understanding of Russia's aggressive war in Ukraine. In reality, Bondarenko is not a real person; images of his face and his propaganda were all computer generated. Similarly, in February 2023, China published a series of videos for the so-called Wolf News network purporting to show American reporters praising China's contributions to geopolitical stability. In another video, a news anchor criticizes the United States government for its shameful inaction on gun control. In June 2023, the social media platform Douyin (the Chinese version of TikTok) suspended the account of Russian soldier Pavel Korchatie after his four hundred thousand followers finally realized that he was a deepfake controlled by a Chinese user.
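To make the taxonomy easier to apply, the short Python sketch below encodes the five categories and the two questions that distinguish them: is the subject real and living, and did he or she consent? It is a minimal, hypothetical illustration of the framework described above, not any actual DoD planning tool, and every name in it is invented for this example.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Category(Enum):
    # The five deepfake categories described in this article.
    LPNC = "Living Person / No Consent"
    LPC = "Living Person / Consent"
    DPNC = "Deceased Person / No Consent"
    DPC = "Deceased Person / Consent"
    FPE = "Fake Person / Event"

@dataclass
class ProposedDeepfake:
    # Facts a planner would record about a proposed product.
    subject_is_real: bool                      # False for wholly fabricated people or events
    subject_is_living: Optional[bool] = None   # None when the subject is fabricated
    consent_obtained: bool = False

def categorize(d: ProposedDeepfake) -> Category:
    # Map a proposed deepfake to the category that frames its legal and policy review.
    if not d.subject_is_real:
        return Category.FPE
    if d.subject_is_living:
        return Category.LPC if d.consent_obtained else Category.LPNC
    return Category.DPC if d.consent_obtained else Category.DPNC

# Example: a fabricated persona used to publish propaganda falls into FPE.
print(categorize(ProposedDeepfake(subject_is_real=False)).value)

Each category then points the planner toward the distinct questions discussed above, such as the covert action statute for LPNC products or the customary rules on respect for the dead for DPNC products.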

Conclusion and Recommendations

The US military must decide now whether it will continue to explore deepfake technology for operational use or whether it should focus its investments on deepfake detection technology alone. It must also acknowledge the fundamental distinction between employing deepfakes in armed conflict scenarios like the use cases described above and doing so in competition short of conflict. The potential utility of these technologies is significant, yet leaders must also recognize that the US military's embrace of deepfakes could contribute to information chaos. In the context of competition, that risk outweighs any benefit, and the US military should continue to do what General Laura Richardson vowed to do in the US Southern Command area of operations: tell the truth. Credibility is currency in the world, and the use of deepfakes below the threshold of armed conflict will threaten US credibility.

However, the United States must not forgo the opportunity to develop a deepfake capability as a tool for deception in armed conflict. Lawmakers and defense policymakers should explore and develop deepfake technology for use at the tactical and operational levels of warfare. Deepfakes could give warfighting commands advantages over enemy forces and enable protection for maneuvering forces. Deepfakes are certainly dangerous, particularly for democratic societies. But they are not inherently immoral, unethical, or illegal. If developed and deployed responsibly, they could advance military objectives and even save lives.

Major John C. Tramazzo is an active duty Army judge advocate and military professor at the US Naval War College's Stockton Center for International Law, where he co-teaches a course on the law of armed conflict. John previously served as the regimental judge advocate for the Army's 160th Special Operations Aviation Regiment (Airborne) at Fort Campbell, Kentucky. He has also served as a legal advisor within the Joint Special Operations Command at Fort Bragg, North Carolina, and the Army's 10th Mountain Division at Fort Drum, New York. He has deployed to Afghanistan and to Jordan multiple times and has traveled to the EUCOM and AFRICOM areas of responsibility for temporary duties.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: kremlin.ru, via Wikimedia Commons (adapted by MWI)
