Abstract
The dissemination of deep fakes for nefarious purposes poses significant national security risks to the United States, requiring the urgent development of technologies to detect their use and strategies to mitigate their effects. Deep fakes are images and videos created by or with the assistance of AI algorithms in which a person’s likeness, actions, or words have been replaced with someone else’s in order to deceive an audience. Often created with the help of generative adversarial networks, deep fakes can be used to blackmail, harass, exploit, and intimidate individuals and businesses; in large-scale disinformation campaigns, they can inflame political tensions around the world and within the U.S. Their broader implication is a deepening challenge to truth in public discourse. The U.S. government, independent researchers, and private companies must collaborate to improve the effectiveness and generalizability of detection methods that can stop the spread of deep fakes.
Recommended Citation
Dunard, Nick. “Deep Fakes: The Algorithms That Create and Detect Them and the National Security Risks They Pose.” James Madison Undergraduate Research Journal 8, no. 1 (2021): 42–52. http://commons.lib.jmu.edu/jmurj/vol8/iss1/5.
Included in
Artificial Intelligence and Robotics Commons, Communications Law Commons, Data Science Commons, National Security Law Commons, Privacy Law Commons, Science and Technology Law Commons