Meta to label AI-generated images on Facebook and Instagram

Meta Platforms is set to roll out a system in the coming months to detect and label images created by third-party artificial intelligence services. The company will look for invisible markers embedded within image files and apply labels to content on Facebook, Instagram, and Threads that carries them. Nick Clegg, Meta’s president of global affairs, explained in a blog post that the move aims to alert users to the digital origins of images that often closely resemble authentic photos. Meta already labels content produced with its own AI tools.

Once the system is in place, Meta will extend this labeling to images generated by services from OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Alphabet’s Google. The announcement offers a glimpse of the emerging standards tech companies are adopting to manage the risks of generative AI, which can produce realistic yet fabricated content from simple prompts.

Clegg expressed confidence in the reliability of labeling AI-generated images but acknowledged that tools for marking audio and video content are harder to build. While those technologies are not yet fully mature, Meta plans to encourage industry-wide adoption. In the interim, the company will require users to label their own altered audio and video content and will impose penalties for non-compliance, though it did not specify what those penalties would be. Clegg also noted that there is no viable mechanism yet for labeling written text generated by AI tools such as ChatGPT.

Meta’s independent oversight board recently criticized the company’s policy on doctored videos, arguing that such content should be labeled rather than removed. Clegg agreed with the critique, saying the current policy is inadequate in an environment with a growing volume of synthetic and hybrid content. He presented the new labeling partnership as a step toward aligning with the board’s recommendations.