Meta has announced changes to its rules on AI-generated content and manipulated media, following criticism from its Oversight Board. Starting next month, the company will label a wider range of such content, including applying a “Made with AI” badge to deepfakes. Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.
The move could lead to the social networking giant labelling more potentially misleading content, which is especially important in a year when elections are taking place around the world. However, Meta will only apply deepfake labels where the content in question has “industry standard AI image indicators,” or where the uploader has disclosed that it’s AI-generated. AI-generated content that falls outside those bounds will, presumably, go unlabelled.
The policy change is also likely to mean more AI-generated content and manipulated media remains on Meta’s platforms, since the company is shifting to an approach focused on “providing transparency and additional context” as the “better way to address this content,” rather than removing manipulated media, given the associated risks to free speech. So, for AI-generated or otherwise manipulated media on Meta platforms like Facebook and Instagram, the playbook appears to be: more labels, fewer takedowns.
Meta said it will stop removing content solely on the basis of its current manipulated video policy in July. That timeline gives users time to become familiar with the self-disclosure process before Meta stops removing this smaller subset of manipulated media. The change of approach may also be a response to rising legal demands on Meta around content moderation and systemic risk, such as the European Union’s Digital Services Act.
read more > techcrunch.com