The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms handle explicit AI-generated images. On Tuesday, it announced investigations into two separate cases over how Instagram in India and Facebook in the United States handled AI-generated images of public figures after Meta’s systems fell short in detecting and responding to the explicit content.
In both cases, the platforms have since taken the media down. The board is not naming the individuals targeted by the AI images, “to avoid gender-based harassment,” according to an email sent to TechCrunch.
The Oversight Board takes up cases concerning Meta’s moderation decisions. Users must first appeal to Meta over a moderation decision before approaching the Oversight Board. The board is expected to publish its full findings and conclusions soon.
Cases
Describing the first case, the board said a user reported an AI-generated nude image of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively shares AI-generated images of Indian women, and the majority of users who react to these images are based in India.
Meta failed to remove the image after the first report, and the ticket was automatically closed after 48 hours when the company did not review the report further. When the original complainant appealed the decision, the report was again closed automatically, without any oversight from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.
The user ultimately appealed to the Oversight Board. Only at that point did the company act, removing the objectionable content for violating its community standards on bullying and harassment.
The second case involves Facebook, where a user posted an explicit AI-generated image resembling an American public figure in a group focused on AI creations. In this case, the social network took the image down, as it had been posted by another user earlier and Meta had added it to a Media Matching Service bank under the category “derogatory sexualized Photoshop or drawings.”
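Meta has not published the internals of its Media Matching Service, but banks of this kind generally work by storing fingerprints (hashes) of images already judged to violate policy and comparing new uploads against them. The sketch below is a minimal illustration of that general technique using the open-source imagehash library; it is a simplified stand-in under those assumptions, not Meta’s actual system.

```python
# Minimal sketch of hash-based media matching. Meta's Media Matching
# Service is proprietary; this only illustrates the general idea.
from PIL import Image
import imagehash

# A "bank" of fingerprints for images already judged to violate policy.
violating_bank = set()

def add_to_bank(path: str) -> None:
    """Fingerprint a known violating image and store it in the bank."""
    violating_bank.add(imagehash.phash(Image.open(path)))

def matches_bank(path: str, max_distance: int = 5) -> bool:
    """Check whether a new upload is a near-duplicate of a banked image.

    Perceptual hashes tolerate small edits such as resizing or
    re-encoding, so we compare Hamming distance instead of equality.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - banked <= max_distance for banked in violating_bank)
```

A scheme like this would also explain the asymmetry between the two cases: the Facebook copy matched an existing bank entry and was removed immediately, while the Instagram image had no banked fingerprint to match against.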
When TechCrunch asked why the board selected a case in which the company successfully removed an explicit AI-generated image, the board said it selects cases “emblematic of broader issues across Meta’s platforms.” It added that such cases help it examine the global effectiveness of Meta’s policies and processes on various topics.
“We know that Meta is faster and more effective at moderating content in some markets and languages than in others. By taking one case from the United States and one from India, we want to verify whether Meta protects all women around the world fairly,” Oversight Board Co-Chair Helle Thorning-Schmidt said in a statement.
“The Board believes it is important to examine whether Meta’s policies and enforcement practices are effective in addressing this issue.”
The problem of deepfake porn and online gender-based violence
Some – though not all – generative AI tools developed in recent years enable users to generate porn. As TechCrunch previously reported, groups like Unstable Diffusion are trying to monetize AI porn with murky ethical lines and bias in data.
In regions like India, deepfakes have also become a matter of concern. Last year, a BBC report noted that the number of deepfake videos of Indian actresses has skyrocketed recently. Data suggests that women are more commonly the subjects of deepfake videos.
Earlier this year, India’s Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies’ approach to combating deepfakes.
“If a platform thinks it can get away with not removing deepfake videos, or simply maintain a casual approach, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said at a press conference at the time.
While India has weighed introducing deepfake-specific rules into law, nothing is set in stone yet.
The country has legal provisions for reporting online gender-based violence, but experts note that the process can be tedious and that there is often little support. In a study published last year, the Indian advocacy group IT for Change noted that Indian courts need robust processes to address online gender-based violence and not trivialize these cases.
Aparajita Bharti, co-founder of The Quantum Hub, a public policy consulting firm based in India, said there should be limits on AI models to prevent them from creating explicit content that causes harm.
“The main risk of generative AI is that the volume of this type of content increases because it is easy to generate such content and with a high degree of sophistication. Therefore, we must first prevent the creation of such content by training AI models to limit production in case the intention to harm someone is already clear. We should also introduce default labeling for easy detection,” Bharti told TechCrunch via email.
Only a few laws worldwide currently address the production and distribution of porn generated with AI tools. A handful of American states have laws against deepfakes. The United Kingdom introduced legislation this week that would criminalize the creation of sexually explicit AI-generated imagery.
Meta’s response and next steps
In response to the Oversight Board’s cases, Meta said it removed both pieces of content. However, the social media company did not address the fact that it failed to remove the content on Instagram after initial reports from users or how long the content remained on the platform.
Meta said it uses a mix of artificial intelligence and human review to detect sexual content. The social media giant said it doesn’t recommend this type of content in places like Instagram Explore or Reels recommendations.
The Oversight Board has requested public comments – with a deadline of April 30 – that address the harms of deepfake porn, contextual information on the proliferation of such content in regions like the United States and India, and the possible pitfalls of Meta’s approach to detecting explicit AI-generated images.
The board will review the cases and public comments and post its decision on its site in a few weeks.
These cases indicate that large platforms are still grappling with older moderation processes at a time when AI-powered tools have enabled users to create and distribute different types of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content generation, with some effort underway to detect such imagery. In April, the company announced that it would apply “Made with AI” badges to deepfakes if it could detect the content using “industry-standard AI image indicators” or user disclosures.
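Meta has not specified which indicators it reads, but one widely used industry signal is the IPTC “Digital Source Type” property that some generators embed in an image’s XMP metadata, with the value trainedAlgorithmicMedia denoting AI-generated content. The snippet below is a rough, hypothetical sketch of checking for that marker; it is not Meta’s detection pipeline, and because metadata can be stripped in seconds, it also hints at why such labeling is easy to evade.

```python
# Rough sketch: scan a file's raw bytes for the IPTC "trainedAlgorithmicMedia"
# digital source type URI, one industry-standard marker of AI generation.
# This is NOT Meta's detector; a byte scan is a crude stand-in for real
# XMP parsing, and stripping the metadata defeats the check entirely.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_metadata_marker(path: str) -> bool:
    """Return True if the image file declares itself AI-generated via IPTC/XMP."""
    with open(path, "rb") as f:
        return AI_SOURCE_TYPE in f.read()
```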
However, perpetrators are constantly finding ways to evade these detection systems and post problematic content on social platforms.