Meta to try ‘cutting edge’ AI image detection on platforms

Meta is building tools to detect and label AI-generated images posted on its social media platforms, and is testing large language models to automatically moderate content online.

On Tuesday, Meta’s president of global affairs (and former UK Deputy Prime Minister) Nick Clegg announced plans to incorporate symbols, watermarks, and metadata for fake pictures created by text-to-image models on Facebook, Instagram, and Threads.

Clegg said engineers at Meta are currently developing tools to tag AI-made images with the caption “Imagined with AI” on its social media apps, and will begin applying the label over the coming months. The software needed to automatically identify and embed invisible watermarks in other types of synthetic content, such as audio and video, is still under development, we’re told.
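Meta hasn’t described how its invisible watermarking will work. Purely as an illustration of the general idea, here is a minimal least-significant-bit (LSB) sketch in Python that hides and recovers a short marker string in an image’s pixel data; the function names are hypothetical, and production watermarking schemes are far more robust than this.

```python
# Minimal LSB watermarking sketch -- illustrative only, not Meta's scheme.
# Assumes Pillow and NumPy are installed.
from PIL import Image
import numpy as np

def embed_marker(img: Image.Image, marker: str) -> Image.Image:
    """Hide a UTF-8 marker in the least significant bits of the red channel."""
    bits = "".join(f"{b:08b}" for b in marker.encode("utf-8"))
    pixels = np.array(img.convert("RGB"))
    flat = pixels[..., 0].flatten()
    if len(bits) > flat.size:
        raise ValueError("image too small for marker")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    pixels[..., 0] = flat.reshape(pixels[..., 0].shape)
    return Image.fromarray(pixels)

def extract_marker(img: Image.Image, length: int) -> str:
    """Read back `length` bytes hidden by embed_marker."""
    flat = np.array(img.convert("RGB"))[..., 0].flatten()
    bits = "".join(str(p & 1) for p in flat[: length * 8])
    data = bytes(int(bits[i : i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

marked = embed_marker(Image.new("RGB", (64, 64), "white"), "AI-GEN")
print(extract_marker(marked, 6))  # -> "AI-GEN"
```

An LSB mark like this is invisible to the eye but trivially destroyed by re-encoding, which is why real deployments lean on more resilient signal-level and metadata-based schemes.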

Instead, Meta will begin asking users to disclose whether they have posted AI-generated footage or sounds. “We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” Clegg said.

Meta’s tools support standards set by the Coalition for Content Provenance and Authenticity (C2PA) and the International Press Telecommunications Council (IPTC). These are separate industry initiatives, backed by technology and media groups, that aim to make it easier to identify machine-generated content.

Users will eventually be able to see a symbol on synthetic images generated by Meta’s tools, with details of when each image was made, by which model, and by whom recorded in its associated metadata. The biz also wants to detect and classify synthetic images generated by other AI tools that comply with C2PA or IPTC guidelines.
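The IPTC photo-metadata standard signals AI generation through its Digital Source Type property, whose value `trainedAlgorithmicMedia` marks an image as produced entirely by a generative model. As a rough sketch of how a platform might check for that signal (assuming Pillow 8.2+ with defusedxml installed; this is not Meta’s detection code, and the helper name is our own):

```python
# Check an image's XMP packet for the IPTC Digital Source Type value that
# marks AI-generated media. A simplified sketch, not Meta's detector.
from PIL import Image

# IPTC NewsCodes URI for "trained algorithmic media", i.e. fully AI-generated.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(path: str) -> bool:
    """Return True if the image's XMP metadata declares it AI-generated."""
    with Image.open(path) as img:
        xmp = img.getxmp()  # parsed XMP packet as nested dicts, or {}
    # Crude string search over the parsed packet; real tooling would walk
    # the RDF structure for Iptc4xmpExt:DigitalSourceType specifically.
    return AI_SOURCE_TYPE in str(xmp)

if __name__ == "__main__":
    print(looks_ai_generated("picture.jpg"))
```

The obvious weakness, and the reason Meta also wants watermarks, is that metadata like this is easily stripped when an image is screenshotted or re-saved.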

Meta’s latest strategy for tackling AI content comes just after its Oversight Board, a panel of independent experts that scrutinizes the company’s content moderation policies, complained that the current rules on manipulated media were “incoherent”. The board launched a probe last year into Meta’s decision to allow a fake video of President Biden that had been digitally altered to claim he was a pedophile.

In addition to the C2PA and IPTC-backed tools, Meta is testing the ability of large language models to automatically determine whether a post violates its policies.

The social media biz is training these systems on its own data, and believes the software could cut down the amount of content that needs to be assessed by human reviewers, freeing them up to focus on trickier cases.
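Meta hasn’t said which models it is testing or how they are wired into its pipeline. The general pattern can be sketched with an off-the-shelf zero-shot classifier, such as Meta’s own BART-MNLI model via Hugging Face; the policy labels and threshold below are hypothetical, not Meta’s actual taxonomy or moderation system.

```python
# Zero-shot policy triage sketch using Hugging Face transformers.
# Illustrative only: labels and threshold are assumptions, not Meta's.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification", model="facebook/bart-large-mnli"
)

POLICY_LABELS = ["hate speech", "violent threat", "spam", "benign"]

def triage(post: str, threshold: float = 0.7) -> str:
    """Route a post: auto-flag, escalate to a human, or let it through."""
    result = classifier(post, candidate_labels=POLICY_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label != "benign" and top_score >= threshold:
        return f"flag ({top_label}, {top_score:.2f})"
    if top_label != "benign":
        return "escalate to human reviewer"
    return "allow"

print(triage("Totally ordinary post about my lunch."))
```

The design idea is triage, not replacement: confident non-benign scores get flagged automatically, borderline ones go to a person, which matches Clegg’s framing of reviewers focusing on the trickier cases.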

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg said.

“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. We’re taking this approach through the next year, during which a number of important elections are taking place around the world.” ®
