Facebook detection challenge winners spot deepfakes with 82% accuracy

Partners in the Deepfake Detection Challenge — including Facebook, Partnership on AI, and others — announced the contest winners today. The top-performing model achieved 82.56% deepfake detection against a public data set of 100,000 videos created for the project. More than 2,000 participants contributed over 35,000 models to the competition, which started in December and concluded May 31. Top-performing teams split $1 million in prize money.

“The first entries were basically 50% accuracy, which is worse than useless, and the first real ones were like 59% accuracy, and the winning models were 82% accuracy,” Facebook CTO Mike Schroepfer told reporters.

Schroepfer said Facebook intends to use the findings to improve the deepfake detection technology it already has in production. Deepfake detection is an area of particular concern ahead of the U.S. presidential election in November.

All of the winners built their models on the EfficientNet network architecture. Facebook engineers found that top-performing models tended to rely on data augmentation, including augmentations that blend fake and real faces.
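
To make those two ideas concrete, here is a minimal, illustrative sketch in PyTorch of an EfficientNet-based real/fake classifier paired with a mixup-style augmentation that blends fake and real face crops. It assumes torchvision's EfficientNet-B0; the tensor shapes, blend ratio, and soft-label scheme are illustrative assumptions, not the winning teams' actual pipelines.

# Illustrative sketch only; not the competition-winning code.
import torch
import torch.nn as nn
from torchvision import models

# Repurpose EfficientNet-B0 as a binary deepfake classifier:
# replace the 1,000-class ImageNet head with a single logit.
model = models.efficientnet_b0(weights=None)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)

def blend_faces(real_face, fake_face, alpha):
    # Mix pixels of a fake face crop into a real one; the soft label
    # reflects how much fake content the blend contains (1.0 = fully fake).
    mixed = alpha * fake_face + (1.0 - alpha) * real_face
    return mixed, torch.tensor([alpha])

# Random tensors stand in for aligned 224x224 face crops pulled from video frames.
real = torch.rand(3, 224, 224)
fake = torch.rand(3, 224, 224)
x, y = blend_faces(real, fake, alpha=0.4)

logit = model(x.unsqueeze(0))  # shape (1, 1)
loss = nn.functional.binary_cross_entropy_with_logits(logit, y.unsqueeze(0))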

The Deepfake Detection Challenge data set will be open-sourced, with details shared next week at the Computer Vision and Pattern Recognition (CVPR) conference. CVPR was originally scheduled to be held in Seattle but will now take place entirely online starting Sunday.

“Honestly, prior to all of this, if I just wanted to download a good deepfake detector from GitHub, it didn’t really exist like nine months ago — I think that’s a problem. And so just actually having a baseline system that works reasonably well, that gives people a starting point, I think is probably at this point more important than worrying about … adversarial examples,” Schroepfer said about open-sourcing the data set.

Competing teams used the Deepfake Detection Challenge data set to train their models. The data set is a collection of 100,000 videos made with more than 3,500 actors who signed consent agreements, amounting to 38 days of footage.

Facebook launched the Deepfake Detection Challenge last fall alongside a group of partner organizations that includes the BBC, the New York Times, several academic institutions, and the Partnership on AI’s Steering Committee on AI and Media Integrity. AWS contributed $1 million in cloud credits, while Facebook put roughly $10 million into the project, Schroepfer said.

With a similar goal of using AI to better moderate content, last month Facebook launched the Hateful Memes Challenge.

The deepfake detection and hateful meme news comes as Facebook faces challenges on several fronts over its record of profiting from hate. Facebook CEO Mark Zuckerberg recently defended President Trump’s right to post language suggesting the military could shoot looters during large-scale protests against white supremacy and racism following the killing of George Floyd. Twitter labeled the same language in a tweet as “glorifying violence.”

Facebook employees held a virtual walkout in response to the company’s position, and two senior employees reportedly threatened to resign, according to the New York Times. A Wall Street Journal report late last month asserted that Facebook knowingly profits from a recommendation algorithm that divides people and promotes extremism and hate, and that the company has avoided changes partly to head off potential backlash from conservative politicians.

Source: http://feedproxy.google.com/~r/venturebeat/SZYF/~3/oivM3vjuMiQ/
