Meta Admits to Flagging Images as Deepfakes Based on 'Media Reports'

MENLO PARK, Calif. — Meta has told its Oversight Board that the company relies on “media reports” when designating images as nonconsensual sexual content or deepfakes and adding them to its permanent database of banned content.

The disclosure came in a statement issued this week by Meta’s Oversight Board, which criticized the company for its inconsistent handling of deepfakes, one of several categories of images, some legal and some illegal, that Meta flags as violating its platforms’ terms of service.

Responding to questions about two specific cases of deepfakes, one involving an Indian public figure and another an American public figure, Meta acknowledged that it checks explicit images against a Media Matching Service (MMS) bank.

MMS banks “automatically find and remove images that have already been identified by human reviewers as breaking Meta’s rules,” the board explained.
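The board’s summary does not describe how MMS banks work internally. As a rough illustration of the general pattern it describes, matching new uploads against a bank of images previously judged violating by human reviewers, here is a minimal sketch. It assumes a toy exact-hash lookup; production media-matching systems typically rely on perceptual hashing (Meta has open-sourced the PDQ algorithm for that purpose), so the MediaMatchingBank class, its method names and the use of SHA-256 below are illustrative assumptions, not a description of Meta’s actual system.

```python
# Illustrative sketch only: a toy "media matching bank" that stores hashes of
# images already judged violating and checks new uploads against them.
# Real systems generally use perceptual hashes (e.g. PDQ) so that re-encoded
# or slightly altered copies still match; SHA-256 is used here purely to show
# the lookup pattern.
import hashlib


class MediaMatchingBank:
    def __init__(self) -> None:
        # Hashes of images that human reviewers have already flagged.
        self._banned_hashes: set[str] = set()

    def add(self, image_bytes: bytes) -> None:
        """Add a reviewed, violating image to the bank."""
        self._banned_hashes.add(hashlib.sha256(image_bytes).hexdigest())

    def matches(self, image_bytes: bytes) -> bool:
        """Return True if an uploaded image exactly matches a banked image."""
        return hashlib.sha256(image_bytes).hexdigest() in self._banned_hashes


# Usage: once an image is banked, identical re-uploads are caught automatically.
bank = MediaMatchingBank()
reviewed_image = b"placeholder bytes of an image a reviewer found violating"
bank.add(reviewed_image)
print(bank.matches(reviewed_image))          # True: exact copy is removed
print(bank.matches(b"some other image"))     # False: no match in the bank
```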

When the board noted that the image resembling an Indian public figure “was not added to an MMS bank by Meta until the Board asked why,” Meta responded by saying that it “relied on media reports to add the image resembling the American public figure to the bank, but there were no such media signals in the first case.”

According to the board, this is worrying because “many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance. One of the existing signals of lack of consent under the Adult Sexual Exploitation policy is media reports of leaks of non-consensual intimate images. This can be useful when posts involve public figures but is not helpful for private individuals. Therefore, Meta should not be over-reliant on this signal.”

The board also suggested that “context” should be considered as a potential signal that nude or sexualized content may be AI-generated or manipulated and therefore nonconsensual, citing hashtags and where content is posted as examples of such context.

Meta has been repeatedly challenged by sex workers, adult performers and many others to shed light on its widespread shadow-banning policies and practices, but access to the specifics of those processes has been scant. Meta’s answer to its own Oversight Board is a rare instance of lifting the veil of secrecy about its arbitrary and often-confusing moderation practices.

As XBIZ reported, the Oversight Board has previously criticized Meta for its policies regarding content it considers sexual, although its recommendations do not appear to have had a meaningful impact on the company's still-opaque moderation practices.

The Oversight Board made nonbinding recommendations that Meta clarify its Adult Sexual Exploitation Community Standard by using clearer language in its prohibition on nonconsensual manipulated media, and more generally “harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated.”

The board also recommended that AI-generated or -manipulated nonconsensual sexual content should not need to be “non-commercial or produced in a private setting” to be in violation of Meta’s terms of service.

