Are humans or artificial intelligence better at spotting fake videos?

Source: Olya Kopruseva / Pexels

Techniques for creating realistic fake videos with AI are becoming increasingly sophisticated, making it difficult, if not impossible, to determine whether audio, images, or videos are real. Can humans or machines tell whether a video is original, AI-generated, or altered? Has technology reached the point where there is no foolproof way to identify AI-modified videos?

Manipulated videos are not a new problem; they can be created without AI at all. But advances in artificial intelligence, particularly deep neural networks and generative adversarial networks (GANs), have produced sophisticated tools for making realistic fake videos. A deepfake is one such type of AI-modified video.
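
To make the adversarial idea concrete, here is a minimal, illustrative sketch in Python. All numbers and names here are invented for illustration; real deepfake generators are deep networks trained on images, not this toy. A one-parameter “generator” learns to mimic a simple real-data distribution while a “discriminator” learns to tell real samples from generated ones, each improving against the other:

```python
import numpy as np

# Toy GAN on 1-D data (illustrative only; real deepfake systems are
# far larger, and GAN training can oscillate rather than converge).
rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data the generator must learn to mimic.
    return rng.normal(4.0, 1.25, n)

# Generator: a linear map of noise, G(z) = g_w * z + g_b.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(d_w * x + d_b).
d_w, d_b = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch = 0.01, 64
for step in range(5000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = g_w * z + g_b
    d_real = sigmoid(d_w * x_real + d_b)
    d_fake = sigmoid(d_w * x_fake + d_b)
    # Gradient ascent on log D(real) + log(1 - D(fake)).
    d_w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    d_b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    z = rng.normal(0.0, 1.0, batch)
    x_fake = g_w * z + g_b
    d_fake = sigmoid(d_w * x_fake + d_b)
    # Gradient ascent on log D(fake), chain rule through G.
    g_w += lr * np.mean((1 - d_fake) * d_w * z)
    g_b += lr * np.mean((1 - d_fake) * d_w)

print(f"fake samples now look like N({g_b:.2f}, {abs(g_w):.2f}) "
      f"vs. real N(4.00, 1.25)")
```

The same tug-of-war, scaled up to deep networks and video frames, is what pushes deepfakes toward output that fools both the discriminator and, increasingly, human eyes.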

There are several types of deepfake videos. In a “face swap,” one person’s face is grafted onto another person’s body in a video, making that person appear to say or do things they never said or did. In “lip sync” videos, a speaker’s mouth movements are altered to match an audio recording. In “puppet-master” videos, footage of a person is animated to follow the facial movements and expressions of another person sitting in front of a camera. (See this example.)

AI models must be trained on large amounts of image and video data, so the targets of deepfakes are usually celebrities and politicians with many publicly available images.

Actor Bruce Willis recently made headlines after he was reported to have sold the rights to his face to the Russian company Deepcake, although Willis’ agent denied the reports. Willis’ face had reportedly been used in a Russian commercial created with “face swap” deepfake technology.

Val Kilmer worked with the software company Sonantic to use artificial intelligence to create an emotional, lifelike model of the speaking voice he had before his treatment for throat cancer.

Respeecher, an AI voice-cloning startup, has built an algorithm to recreate the 1977 voice of Darth Vader.

As photos and videos of individuals become more widely available online, deepfakes may become a more pervasive problem for public figures and private individuals alike. In the wrong hands, deepfakes can violate privacy rights, spread misinformation, and cause financial instability and political turmoil.

Researchers and technologists are working on algorithms that automate the detection of fake visual content. AI models now outperform human experts across a wide range of tasks, from chess to medical diagnosis, so AI has the potential to help solve this problem.

The need for accurate, automated deepfake detection was pressing enough that major companies, including Facebook, Microsoft, and Amazon, offered $1,000,000 in prize money for the most accurate deepfake detection models in the Deepfake Detection Challenge, which ran from 2019 to 2020. The top-performing model was 82.56 percent accurate on the video data set publicly available to participants. However, on a “black box” data set of 10,000 unseen videos that was not available to participants, the same model achieved only 65.18 percent accuracy.
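
That accuracy drop is a classic generalization gap. The following is a minimal sketch that assumes nothing about the challenge’s actual models or data: a toy one-feature “detector” trained on one kind of fake scores well on held-out data from the same distribution, but much worse on a shifted, unseen distribution, loosely mirroring the public-versus-black-box gap.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_set(n, real_mu, fake_mu):
    """Toy one-feature 'videos': label 1 = deepfake, 0 = real."""
    x = np.concatenate([rng.normal(real_mu, 1.0, n),
                        rng.normal(fake_mu, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Public" data: the fakes the detector was trained to spot.
x_train, y_train = make_set(2000, real_mu=0.0, fake_mu=2.0)
x_public, y_public = make_set(2000, real_mu=0.0, fake_mu=2.0)
# "Black box" data: unseen fakes whose artifacts look different (shifted).
x_blackbox, y_blackbox = make_set(2000, real_mu=0.0, fake_mu=0.8)

# Fit a logistic-regression detector by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = sigmoid(w * x_train + b)
    w -= 0.1 * np.mean((p - y_train) * x_train)
    b -= 0.1 * np.mean(p - y_train)

def accuracy(x, y):
    return np.mean((sigmoid(w * x + b) > 0.5) == y)

print(f"public holdout accuracy: {accuracy(x_public, y_public):.2%}")
print(f"black-box set accuracy:  {accuracy(x_blackbox, y_blackbox):.2%}")
```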

A new study found that ordinary human observers and leading computer-vision deepfake detection models are similarly accurate but make different types of errors. People with access to the models’ predictions were more accurate, suggesting that AI-assisted decision-making may be useful, though it is unlikely to be foolproof.

The researchers also found that when the AI made incorrect predictions and people saw those predictions, they often revised their own answers to the wrong one. This shows that machine predictions can sway human judgment, an important consideration when designing human-AI collaborative systems.
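
A small simulation can illustrate why. In this sketch, the accuracy figures and the deference behavior are invented for illustration and are not taken from the study: a simulated observer who sometimes defers to a stronger model gains accuracy overall, yet on every trial where the model is wrong and the human was right, deferring flips a correct answer to an incorrect one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000  # simulated real-vs-deepfake judgment trials

# Illustrative accuracies only (not figures from the study).
human_acc, model_acc = 0.65, 0.80
human_right = rng.random(n) < human_acc
model_right = rng.random(n) < model_acc

def assisted_accuracy(defer):
    """Observer sees the model's answer and, on disagreement,
    switches to it with probability `defer`."""
    switch = rng.random(n) < defer
    agree = human_right == model_right
    final_right = np.where(agree | ~switch, human_right, model_right)
    return final_right.mean()

for defer in (0.0, 0.5, 1.0):
    print(f"defer to model {defer:.0%} of the time -> "
          f"accuracy {assisted_accuracy(defer):.2%}")

# Deferring helps when the model is right but hurts when it is wrong:
hurt = human_right & ~model_right  # human was right, model was wrong
print(f"trials where deferring flips a correct answer: {hurt.mean():.2%}")
```

Under these invented numbers, access to the model raises overall accuracy, but on roughly one trial in eight the observer’s correct answer is at risk of being overwritten by a machine error, the pattern the study warns about.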

The fake media problem existed long before these AI tools arrived. As with any technological advance, people find both positive and negative applications. AI has opened exciting new possibilities in the creative and film industries while, at the same time, increasing the need for reliable detection, protection of privacy rights, and management of the risks posed by malicious use.

Current research indicates that both humans and machine models are imperfect at detecting AI-modified videos. One answer may be a collaborative approach that pairs AI with human detection to offset the weaknesses of each. Since no form of detection is likely to be foolproof, education about deepfake technology can help us remember that seeing is not always believing, a reality that was true long before deepfake AI tools arrived.

Copyright © 2022 Marlynn Wei, MD, PLLC. All rights reserved.
