Meta to try ‘cutting edge’ AI image detection on platforms

Meta is building tools to detect, identify, and label AI-generated images shared via its social media platforms. It is also testing large language models to automatically moderate content online.

On Tuesday, Meta’s president of global affairs (and former UK Deputy Prime Minister) Nick Clegg announced plans to flag up material on Facebook, Instagram, and Threads created not only by the US giant’s own text-to-image AI models but also stuff crafted by outside machine-learning applications. The idea is to warn netizens that stuff online may not be what it seems, and may have been invented using AI tools to hoodwink people, regardless of its source.

Clegg said engineers at Meta are right now developing tools to tag photo-realistic AI-made content with the caption, “Imagined with AI,” on its apps, and will show this label as necessary over the coming months.

Photo-realistic images created by the built-in Meta AI assistant are already automatically labeled as such, using visible and invisible markers, we’re told. It’s the high-quality AI-made stuff that’s submitted from the outside that also needs to be detected in some way and marked up as such in the Facebook giant’s empire of apps.

In the short term, Facebook intends to rely on watermarks inside submitted files that declare the source of the content – which requires external AI content generators to include those markings or metadata, and for them not to be stripped out – and also on asking folks to honestly declare stuff as AI-generated when sharing media.
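To make that brittleness concrete, here’s a minimal sketch of this kind of metadata check in Python. The IPTC DigitalSourceType property and its trainedAlgorithmicMedia value are real parts of the IPTC standard that generators can embed in a file’s XMP metadata; the naive byte scan, and treating the marker’s presence as proof of AI origin, are our simplifications for illustration.

```python
# Minimal sketch: scan a file's raw bytes for the IPTC "trained algorithmic
# media" source-type marker that compliant AI generators embed in XMP metadata.
from pathlib import Path
import sys

# IPTC NewsCodes value meaning "fully AI-generated" (real IPTC identifier)
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file carries the IPTC AI-generated marker.

    This only works while the metadata survives: re-encoding, cropping, or
    screenshotting the image strips it -- the fragility the article notes.
    """
    return AI_SOURCE_TYPE in Path(path).read_bytes()

if __name__ == "__main__":
    for image in sys.argv[1:]:
        verdict = "AI-declared" if looks_ai_generated(image) else "no AI marker found"
        print(f"{image}: {verdict}")
```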

In the long term, Meta intends to use classifiers that can automatically discern whether material was made by a neural network or not, thus avoiding the reliance on user-submitted labeling and on generators including supported markings. Needing users to ‘fess up when they share faked media – if they’re even aware it is faked – and relying on outside apps to correctly label stuff as computer-made, without those labels being stripped away, is, as they say in software engineering, brittle. Those automated classifiers, if they ever work as well as desired, are what’s needed most.
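Meta hasn’t said how its classifiers are built, but the general shape of such a detector is a supervised image classifier trained on examples labeled real or AI-made. The sketch below is purely illustrative: it fine-tunes an off-the-shelf ResNet-18 from torchvision and assumes a hypothetical data/train folder with ai and real subdirectories – not Meta’s actual architecture or training data.

```python
# Illustrative sketch: fine-tune a small CNN to separate real photos from
# generator output, so no cooperation from the generator is needed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps subdirectory names ("ai", "real") to class indices.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights; swap the head for two classes: ai vs real.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```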

For now at least, people uploading audio and video will definitely need to declare whether that material is deepfaked or real, as Meta says there isn’t yet a consistent way for it to identify AI-made footage and sound. Pictures, meanwhile, are either auto-marked by Meta AI, include watermarks that reveal their true source, or presumably can be flagged up manually by people. Later this may all be automated.

“We’ll require people to use this disclosure and label tool when they post organic content with a photo-realistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” Clegg said.

As for the watermarking Meta supports, that includes the schemes from the Coalition for Content Provenance and Authenticity (C2PA) and the International Press Telecommunications Council (IPTC). These are industry initiatives backed by technology and media groups trying to make it easier to identify machine-generated content. They are not foolproof and can be defeated, as we’ve previously reported.

Meta’s latest strategies for tackling AI content come just after its Oversight Board, a panel of independent experts scrutinizing its content moderation policies, complained that the current rules on manipulated media were “incoherent.” The board launched a probe last year into why Meta decided to let a fake video of President Biden, digitally altered to claim he was a pedophile, stay up on its social network empire.

In addition to the C2PA and IPTC-backed tools, Meta is testing the ability of large language models to automatically determine whether a post violates its policies.

The social media biz is training these systems on its own data, and believes the software could cut down on the amount of content that needs to be assessed by human reviewers, allowing them to focus on trickier cases.
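Meta hasn’t published details of these moderation models, but the triage pattern it describes can be sketched. Below is an illustrative Python example assuming an OpenAI-compatible chat API as a stand-in for Meta’s in-house LLMs; the policy text, model name, and one-word verdict scheme are our inventions for demonstration.

```python
# Illustrative sketch: ask an LLM to grade a post against a policy snippet
# and only escalate uncertain cases to human reviewers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POLICY = "Posts must not contain hate speech, threats, or sexual content."

def triage(post: str) -> str:
    """Return ALLOW, REMOVE, or ESCALATE for a post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; Meta would use an in-house model
        messages=[
            {"role": "system",
             "content": f"You are a content moderator. Policy: {POLICY} "
                        "Reply with exactly one word: ALLOW, REMOVE, or ESCALATE."},
            {"role": "user", "content": post},
        ],
        temperature=0,
    )
    verdict = response.choices[0].message.content.strip().upper()
    # Anything unexpected goes to a human reviewer rather than auto-action.
    return verdict if verdict in {"ALLOW", "REMOVE", "ESCALATE"} else "ESCALATE"

print(triage("Lovely sunset over the bay tonight!"))
```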

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg said.

“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. We’re taking this approach through the next year, during which a number of important elections are taking place around the world.” ®
