As the lines between human and AI-generated content get blurrier by the day, tech companies are taking measures to keep users informed of what type of content they're dealing with. For instance, every photo manipulated through Samsung's new Generative Edit feature and every Generative wallpaper on the Galaxy S24 series has a small watermark and metadata to confirm its AI origins.
For some time now, Meta has had a similar tool in place for photos created with the Meta AI image generator. This tool lets Meta users know when they're seeing AI-generated content through visible labels, visible watermarks, and invisible watermarks.
This week, Meta announced a new initiative to label AI-generated images on Facebook, Instagram, and Threads, including images generated by AI systems from other companies.
Meta wants higher AI transparency on social media
Although AI companies embed signals in the output of their AI generators, people can strip out the invisible markers, so it is not yet possible to identify all AI-generated content. Meta is therefore working on tools that can automatically detect AI content even when it lacks invisible markers.
Meta announced that it is working alongside industry partners to establish common standards for identifying photos, videos, and audio that were synthesized using AI and posted on social media platforms.
The company says it is building industry-leading tools capable of detecting AI-generated content at scale. With these new systems in place, in the coming months Meta will label AI content on Facebook, Instagram, and Threads whenever it can detect industry-standard indicators of AI-generated content.
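One of the industry-standard indicators Meta has pointed to is metadata embedded in image files, such as the IPTC "Digital Source Type" field, whose controlled vocabulary includes a value for algorithmically generated media. As a hedged sketch of how a platform might check such a marker, the snippet below inspects a metadata dictionary (a stand-in for fields an actual EXIF/XMP parser would extract; Meta's real detection pipeline is not public) for the IPTC values that flag AI-generated content:

```python
# Sketch only: checks parsed image metadata for the IPTC
# DigitalSourceType values that identify AI-generated media.
# The dicts below stand in for fields a real EXIF/XMP parser
# would extract from an image file.

# URIs from the IPTC digital source type controlled vocabulary
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def looks_ai_generated(metadata: dict) -> bool:
    """Return True if the metadata's digital-source-type field marks
    the image as (partly) generated by a trained algorithm."""
    source_type = metadata.get("Iptc4xmpExt:DigitalSourceType", "")
    return source_type in AI_SOURCE_TYPES

# Hypothetical metadata, as an AI image generator might embed it:
generated = {
    "Iptc4xmpExt:DigitalSourceType":
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
}
# Hypothetical metadata from an ordinary camera capture:
camera = {
    "Iptc4xmpExt:DigitalSourceType":
        "http://cv.iptc.org/newscodes/digitalsourcetype/digitalCapture",
}
```

Because such metadata can be stripped or rewritten, a check like this can only confirm honestly labeled content, which is why Meta is also pursuing detection that does not rely on embedded markers.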
Until the industry can adopt these new AI detection tools at scale, Meta will be adding a feature that lets social media users disclose when the content they're sharing is generated by AI. Users who fail to label their AI content accordingly may be penalized.
Meta says, “If we determine that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.”