
High-quality Deepfake Videos Made with AI Seen as a National Security Threat

Deepfake videos so realistic that they cannot be detected as fakes have the FBI concerned that they pose a national security threat. (GETTY IMAGES)

By AI Trends Staff

The FBI is concerned that AI is being used to create deepfake videos that are so convincing they cannot be distinguished from reality.

The alarm was sounded by an FBI executive at a WSJ Pro Cybersecurity Symposium held recently in San Diego. “What we’re concerned with is that, in the digital world we live in now, people will find ways to weaponize deep-learning systems,” stated Chris Piehota, executive assistant director of the FBI’s science and technology division, in an account in WSJ Pro.

The technology behind deepfakes and other disinformation tactics is enhanced by AI. The FBI is concerned national security could be compromised by fraudulent videos created to mimic public figures. “As the AI continues to improve and evolve, we’re going to get to a point where there’s no discernible difference between an AI-generated video and an actual video,” Piehota stated.

Chris Piehota, executive assistant director, FBI science and technology division

The word “deepfake” is a portmanteau of “deep learning” and “fake.” It refers to a branch of synthetic media in which artificial neural networks are used to generate fake images or videos based on a person’s likeness.

In a test lab, the FBI has created its own deepfakes, which have produced artificial personas capable of passing some measures of biometric authentication, Piehota stated. The technology can also be used to create realistic images of people who do not exist. And 3D printers paired with AI models can be used to copy someone’s fingerprints; so far, FBI examiners have been able to tell the difference between real and artificial fingerprints.

Threat to US Elections Seen

Some are quite concerned about the impact of deepfakes on US democratic elections and on the attitudes of voters. AI-enhanced deepfakes can undermine the public’s confidence in democratic institutions, even if they are later proven false, warned Suzanne Spaulding, a senior adviser at the Center for Strategic and International Studies, a Washington-based nonprofit.

“It really hastens our move towards a post-truth world, in which the American public becomes like the Russian population, which has really given up on the idea of truth, and kind of shrugs its shoulders. People will tune out, and that is deadly for democracy,” she stated in the WSJ Pro account.

Suzanne Spaulding, senior adviser, Center for Strategic and International Studies

Deepfake tools rely on a technology called generative adversarial networks (GANs), a technique invented in 2014 by Ian Goodfellow, a Ph.D. student who now works at Apple, according to an account in Live Science.

A GAN pits two neural networks against each other: a generator that produces content such as photo images, and an adversary, the discriminator, that tries to guess whether the images are real or fake. The generator starts at a disadvantage, meaning its adversary can easily distinguish real from fake photos. But over time, the generator gets better and begins producing content that looks lifelike.
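To make that adversarial loop concrete, here is a minimal sketch in PyTorch (the article itself includes no code); the network sizes, learning rates, and the toy Gaussian “real data” distribution are illustrative assumptions, not details of any production deepfake system.

```python
# Minimal GAN sketch: a generator learns to mimic "real" data while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 16   # size of the generator's random input (assumed)
data_dim = 2      # toy "real" samples are 2-D points (assumed)

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # outputs P(input is real)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(5000):
    # Stand-in for real photos: points drawn from a fixed Gaussian.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # 1) Discriminator step: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Generator step: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

Early in training the discriminator wins easily; as the generator improves, its outputs become hard to distinguish from the real samples, which is the same dynamic that makes GAN-generated faces so convincing.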

For an example, see www.thispersondoesnotexist.com, which uses a GAN developed by NVIDIA to create completely fake, yet completely lifelike, photos of people.

Example material is starting to mount. In 2017, researchers from the University of Washington in Seattle trained a GAN to alter a video of former President Barack Obama, so that his lips moved in sync with words taken from a different speech. That work was published in the journal ACM Transactions on Graphics (TOG). In 2019, researchers demonstrated a deepfake that could generate realistic video of the Mona Lisa talking, moving and smiling in different positions. The technique can also be applied to audio files, to splice new words into a video of a person talking, making it appear they said something they never said.

All this will cause attentive viewers to be more wary of content on the internet.

The tech industry is trying to field a defense against deepfakes.

Google in October 2019 released several thousand deepfake videos to help researchers train their models to recognize them, according to an account in Wired. The hope is to build filters that can catch deepfake videos the way spam filters identify email spam.

The clips Google released were created in collaboration with Jigsaw, an Alphabet subsidiary focused on technology and politics, and feature paid actors who agreed to have their faces replaced. Researchers can use the videos to benchmark the performance of their filtering tools. The clips show people doing mundane tasks, or laughing or scowling into the camera. The face-swapping is easy to spot in some instances and not in others.
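For a sense of how researchers might use such labeled clips, here is a rough benchmarking sketch in Python; the directory layout, the `load_frames` helper, and the placeholder `fake_probability` detector are hypothetical stand-ins, not part of Google’s release.

```python
# Hypothetical harness: score a deepfake detector against a folder of
# real clips and a folder of face-swapped clips (illustrative only).
from pathlib import Path
import cv2           # OpenCV, for decoding video frames
import numpy as np

def load_frames(path: Path, max_frames: int = 16) -> np.ndarray:
    """Read up to max_frames evenly spaced frames from a video."""
    cap = cv2.VideoCapture(str(path))
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in np.linspace(0, max(total - 1, 0), max_frames, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, (224, 224)))
    cap.release()
    return np.stack(frames) if frames else np.empty((0, 224, 224, 3))

def fake_probability(frames: np.ndarray) -> float:
    """Placeholder: a real detector would run a trained model here."""
    return float(np.random.rand())   # stand-in score in [0, 1]

def benchmark(real_dir: Path, fake_dir: Path, threshold: float = 0.5) -> float:
    """Accuracy of the detector over labeled real (0) and fake (1) clips."""
    correct = total = 0
    for label, folder in ((0, real_dir), (1, fake_dir)):
        for clip in sorted(folder.glob("*.mp4")):
            pred = int(fake_probability(load_frames(clip)) >= threshold)
            correct += int(pred == label)
            total += 1
    return correct / max(total, 1)
```

A real evaluation would swap the placeholder for a trained classifier and report more than raw accuracy, but the harness shape stays the same: score every labeled clip and compare predictions against ground truth.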

Some researchers are skeptical this approach will be effective. “The dozen or so that I looked at have glaring artifacts that more modern face-swap techniques have eliminated,” stated Hany Farid, a digital forensics expert at UC Berkeley who is working on deepfakes, to Wired. “Videos like this with visual artifacts are not what we should be training and testing our forensic techniques on. We need significantly higher quality content.”

Going further, the Deepfake Detection Challenge competition was launched in December 2019 by Facebook, along with Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and State University of New York at Albany, according to an account in VentureBeat.

Facebook has budgeted more than $10 million to encourage participation in the competition; AWS is contributing up to $1 million in service credits and offering to host entrants’ models if they choose; and Google’s Kaggle data science and machine learning platform is hosting both the challenge and the leaderboard.

“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,” noted Facebook CTO Mike Schroepfer in a blog post. “Yet the industry doesn’t have a great data set or benchmark for detecting them. The [hope] is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.”

The data set contains 100,000-plus videos and was tested through a targeted technical working session in October at the International Conference on Computer Vision, stated Facebook AI Research Manager Christian Ferrer. The data does not include any personal user identification and features only participants who have agreed to have their images used. Access to the data set is gated so that only teams with a license can access it.
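Competitions of this kind typically ask entrants to submit a probability that each clip is fake, scored against ground-truth labels with binary log loss; the snippet below is an illustrative computation of that metric, not the challenge’s official scoring code.

```python
import numpy as np

def log_loss(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-15) -> float:
    """Binary cross-entropy; y_true is 1 for fake, y_pred is P(fake)."""
    p = np.clip(y_pred, eps, 1 - eps)   # guard against log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# Example: three clips labeled fake, fake, real, with detector scores.
labels = np.array([1, 1, 0])
scores = np.array([0.9, 0.6, 0.2])      # detector's P(fake) per clip
print(log_loss(labels, scores))         # lower is better
```

Log loss penalizes confident wrong answers heavily, which discourages detectors that simply output near-certain guesses.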

The Deepfake Detection Challenge is overseen by the Partnership on AI’s Steering Committee on AI and Media Integrity. It is scheduled to run through the end of March 2020.

Read the source articles in WSJ Pro, Live Science, Wired and VentureBeat.

Source: https://www.aitrends.com/security/high-quality-deepfake-videos-made-with-ai-seen-as-a-national-security-threat/
