
Deepfake Detection Poses a Problematic Technology Race

Experts hold out little hope for a robust technical solution in the long term.

With disinformation concerns increasing as the US presidential election approaches, industry and academic researchers continue to investigate ways of detecting misleading or fake content generated using deep neural networks, so-called “deepfakes.”

While there have been successes — for example, focusing on artifacts such as unnatural eye blinking has yielded high accuracy rates — a key problem remains in the arms race between attackers and defenders: The neural networks used to create deepfake videos are automatically tested against a variety of techniques intended to detect manipulated media, and the latest defensive detection technologies can easily be folded into that testing loop. This feedback loop is similar in approach — if not in technology — to the fully undetectable (FUD) services that automatically scramble malware to dodge signature-based detection.
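
For illustration, here is a minimal sketch of that feedback loop, assuming PyTorch; the tiny fully connected networks are toy stand-ins, not a real deepfake architecture. A published detector is frozen and dropped into the generator's training loop, and the generator is optimized until its output slips past it.

```python
# Minimal sketch (PyTorch assumed; toy architectures) of training a
# generator against a frozen detector until the detector is evaded.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 3 * 64 * 64), nn.Tanh())
detector = nn.Sequential(  # stand-in for any published detection model
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 1))
for p in detector.parameters():
    p.requires_grad_(False)  # detector is fixed; only the generator adapts

opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1_000):
    z = torch.randn(32, 100)
    fake = generator(z)
    # Reward the generator when the frozen detector labels its fakes
    # "real" (label 1): this is the evasion loop described above.
    loss = bce(detector(fake), torch.ones(32, 1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Any newly released detector can be swapped in as the frozen model, which is why artifact-specific defenses age so quickly.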

Detecting artifacts is ultimately a losing proposition, says Yisroel Mirsky, a post-doctoral fellow in cybersecurity at the Georgia Institute of Technology and co-author of a paper that surveyed the current state of deepfake creation and detection technologies.

“The defensive side is all doing the same thing,” he says. “They are either looking for some sort of artifact that is specific to the deepfake generator or applying some generic classifier for some architecture or another. We need to look at solutions that are out of band.”

The problem is well known among researchers. Take Microsoft’s Sept. 1 announcement of a tool designed to help detect deepfake videos. The Microsoft Video Authenticator detects possible deepfakes by finding the boundary between inserted images and the original video, providing a score for the video as it plays.
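
Microsoft has not published the model, but the "score the video as it plays" idea can be illustrated with a toy per-frame heuristic, assuming OpenCV (cv2) is installed; the Laplacian-variance measure below is a crude stand-in for the authenticator's learned blending-boundary features, not the real technique.

```python
# Toy per-frame scoring loop; the heuristic is illustrative only.
import cv2

cap = cv2.VideoCapture("suspect.mp4")  # hypothetical input file
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Blending a synthetic face into a frame can leave subtle sharpness
    # discontinuities; Laplacian variance is a crude proxy for that signal.
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    print(f"frame {frame_idx}: blending score {score:.1f}")
    frame_idx += 1
cap.release()
```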

While the technology is being released as a way to detect issues during the election cycle, Microsoft warned that disinformation groups will quickly adapt.

“The fact that [the images are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology,” said Tom Burt, corporate vice president of customer security and trust, and Eric Horvitz, chief scientific officer, in a blog post describing the technology. “However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”

Microsoft is not alone in considering current deepfake detection technology a temporary fix. In its Deepfake Detection Challenge (DFDC), which concluded in early summer, Facebook found that the winning algorithm accurately detected fake videos only about two-thirds of the time.

“[T]he DFDC results also show that this is still very much an unsolved problem,” the company said in its announcement. “None of the 2,114 participants, which included leading experts from around the globe, achieved 70 percent accuracy on unseen deepfakes in the black box data set.” 

In fact, calling the competition between attackers and defenders an “arms race” is a bit of a misnomer, because advances in technology will likely make realistic fake videos that cannot be detected by any tool a reality in the not-too-distant future, says Alex Engler, the Rubenstein Fellow in governance studies at the Brookings Institution, a policy think tank.

“We have not seen a dramatic improvement in deepfakes, and we haven’t really seen a super-convincing deepfake video, but am I optimistic about the long-term view? Not really,” he says. “They are going to get better. Eventually there will not be an empirical way to tell the difference between a deepfake and a legitimate video.”

In a policy paper, Engler argued that policymakers will need to plan for a future in which deepfake technology is both widespread and sophisticated.

On the technical side, like the anti-malware industry, there are two likely routes that deepfake detection will take. Some companies are creating ways of signing video as proof that it has not been modified. Microsoft, for example, unveiled a signing technology with a browser plug-in that the company said can be used to verify the legitimacy of videos.  
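
Microsoft's signing format is its own, but the underlying hash-then-sign pattern is generic. Here is a minimal sketch using the Python cryptography package (the file name is hypothetical): the publisher signs a digest of the video, and any later modification makes verification fail.

```python
# Generic hash-then-sign sketch; not Microsoft's actual format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import hashlib

def sha256_file(path: str) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

private_key = Ed25519PrivateKey.generate()   # held by the publisher
public_key = private_key.public_key()        # shipped to verifiers,
                                             # e.g., via a browser plug-in

digest = sha256_file("broadcast.mp4")        # hypothetical file name
signature = private_key.sign(digest)

# Raises cryptography.exceptions.InvalidSignature if even one byte
# of the video has changed since signing.
public_key.verify(signature, digest)
print("video is unmodified")
```

The strength of this approach is that it sidesteps the arms race entirely: it proves provenance rather than trying to spot manipulation after the fact.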

“In the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media,” Burt and Horvitz wrote. “There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

Another avenue of research is to look for other signs that a video has been modified. With machine-learning algorithms capable of turning videos into a series of content and metadata — from a transcription of any speech in the video to the location of where the video was taken — creating content-based detection algorithms could be a possibility, Georgia Tech’s Mirsky says. 
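
As a sketch of that out-of-band idea, the structure might look like the following; the extraction step wraps hypothetical speech-to-text and scene-recognition components, which are assumptions for illustration rather than a real API. The point is checking extracted claims against independently known facts instead of hunting pixel artifacts.

```python
# Content-based checking sketch; extract_claims() is a placeholder for
# hypothetical ML components, not a real library.
from dataclasses import dataclass

@dataclass
class VideoClaims:
    transcript: str   # what is said in the video
    location: str     # where the video appears to be shot
    timestamp: str    # when the video appears to be shot

def extract_claims(video_path: str) -> VideoClaims:
    # Placeholder: in practice, run speech-to-text and scene/landmark
    # recognition over the video and normalize their outputs.
    raise NotImplementedError

def is_consistent(claims: VideoClaims, record: VideoClaims) -> bool:
    # Flag the video when its extracted claims contradict independently
    # verified facts, e.g., the speaker was verifiably elsewhere that day.
    return (claims.location == record.location
            and claims.timestamp == record.timestamp)
```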

“Just like malware, if you have a technique that can look at the actual content, that is helpful,” he says. “It is very important because it raises the bar for the attacker. They can mitigate 90% of attacks, but the issue is that [with] an adversary like a nation-state actor, who has plenty of time and effort to refine the deepfake, it becomes very, very challenging to detect these attacks.”

Veteran technology journalist of more than 20 years. Former research engineer. Has written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five journalism awards, including Best Deadline ... View full bio

Source: https://www.darkreading.com/analytics/deepfake-detection-poses-problematic-technology-race/d/d-id/1338953?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple
