Meta is working with industry partners on common technical standards for identifying AI content, including video and audio.

Nick Clegg, president of global affairs at Meta, says that in the coming months, images that users post to Facebook, Instagram and Threads will be labelled when the company can detect industry-standard indicators that they are AI-generated.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg writes on the company’s blog. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.

“So it’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying ‘Imagined with AI’ labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too.”

He says Meta has been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI.

“Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads. We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app.

“We’re taking this approach through the next year, during which a number of important elections are taking place around the world. During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.

“What we learn will inform industry best practices and our own approach going forward.”

When photorealistic images are created using the Meta AI feature, several measures help to ensure people know AI is involved: visible markers on the images themselves, plus invisible watermarks and metadata embedded within the image files.
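The metadata signal referred to here is, in practice, a “digital source type” value written into an image’s XMP metadata, alongside any invisible watermark. As a rough sketch of the embedding step, the following Python (standard library only) inserts that marker into a JPEG as an XMP packet; the file handling and function names are illustrative rather than Meta’s implementation, and a robust writer would place the segment after any existing APP0/APP1 segments rather than immediately after the start-of-image marker.

```python
# Minimal sketch: tag a JPEG as AI-generated via IPTC's DigitalSourceType
# property, carried in an XMP packet inside a JPEG APP1 segment.
import struct

# IPTC's controlled-vocabulary value for fully AI-generated media.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

# Namespace header that identifies an APP1 segment as XMP.
XMP_HEADER = b"http://ns.adobe.com/xap/1.0/\x00"


def xmp_packet(source_type: str) -> bytes:
    """Build a minimal XMP packet declaring Iptc4xmpExt:DigitalSourceType."""
    xml = f"""<?xpacket begin="\ufeff" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"
    Iptc4xmpExt:DigitalSourceType="{source_type}"/>
 </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>"""
    return xml.encode("utf-8")


def tag_jpeg(src: str, dst: str) -> None:
    """Insert an APP1/XMP segment right after the JPEG SOI marker (simplified)."""
    data = open(src, "rb").read()
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    payload = XMP_HEADER + xmp_packet(TRAINED_ALGORITHMIC_MEDIA)
    # Segment length field counts its own two bytes, per the JPEG spec.
    segment = b"\xff\xe1" + struct.pack(">H", len(payload) + 2) + payload
    with open(dst, "wb") as f:
        f.write(data[:2] + segment + data[2:])
```

Because this marker is plain metadata, it survives ordinary re-sharing but is trivially removed by stripping metadata, which is why the standards pair it with invisible watermarks.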

“Since AI-generated content appears across the internet, we’ve been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI). The invisible markers we use for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices,” Clegg adds.

Meta is building tools that can identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards – so it can label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as they implement their plans for adding metadata to images created by their tools.
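On the detection side, a platform can check each upload for those standard signals before deciding whether to apply a label. The sketch below is only a byte-level heuristic assuming the IPTC digital source type URIs; a production pipeline would parse the XMP properly and cryptographically verify C2PA manifests rather than pattern-match raw bytes, and the uploads directory is hypothetical.

```python
# Rough illustration of scanning uploads for the IPTC "AI generated" signal.
from pathlib import Path

AI_SIGNALS = (
    # DigitalSourceType URI written into XMP for fully AI-generated media.
    b"digitalsourcetype/trainedAlgorithmicMedia",
    # IPTC's value for composites that include AI-generated elements.
    b"digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
)


def looks_ai_generated(path: Path) -> bool:
    """Return True if the file carries a known AI-provenance byte signature."""
    data = path.read_bytes()
    return any(sig in data for sig in AI_SIGNALS)


for image in Path("uploads").glob("*.jpg"):  # hypothetical upload directory
    if looks_ai_generated(image):
        print(f"label as AI-generated: {image.name}")
```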

But while companies are starting to include these signals in their image generators, they haven’t yet done so at the same scale in AI tools that generate audio and video. “So we can’t yet detect those signals and label this content from other companies,” he adds. “While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it.

“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.”

It’s not yet possible to identify all AI-generated content, and there are ways to strip out invisible markers, so Meta is working on classifiers that can automatically detect AI-generated content even when those markers are absent. At the same time, it is investigating ways to make invisible watermarks more difficult to remove or alter.
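To illustrate why invisible watermarks are harder to remove than metadata, here is a toy spread-spectrum scheme in Python with NumPy: a keyed noise pattern is spread faintly across every pixel and detected by correlation, so there is no single field an adversary can strip without knowing the key. This is purely illustrative – Meta has not published its watermarking method, and real schemes are considerably more robust.

```python
# Toy spread-spectrum watermark: embed a secret keyed pattern across the
# whole image; detect it later by correlating against the same pattern.
import numpy as np


def keyed_pattern(key: int, shape: tuple) -> np.ndarray:
    """Deterministic +/-1 noise pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)


def embed(pixels: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add the keyed pattern faintly to a grayscale image (float, 0-255)."""
    return np.clip(pixels + strength * keyed_pattern(key, pixels.shape), 0, 255)


def detect(pixels: np.ndarray, key: int) -> float:
    """Correlation score: near the embed strength if marked, near zero if not."""
    pattern = keyed_pattern(key, pixels.shape)
    centered = pixels - pixels.mean()
    return float((centered * pattern).mean())


rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (256, 256))            # stand-in for a real image
marked = embed(img, key=42)
noisy = marked + rng.normal(0, 1, marked.shape)  # mild tampering survives
print(detect(noisy, key=42), detect(img, key=42))
```

Even so, sufficiently aggressive cropping, filtering or regeneration can degrade such marks, which is the adversarial dynamic Clegg goes on to describe.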

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead. People and organisations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.”