Meta Admits to Flagging Images as Deepfakes Based on 'Media Reports'

MENLO PARK, Calif. — Meta has told its Oversight Board that the company relies on “media reports” when designating images as nonconsensual sexual content or deepfakes and adding them to its permanent database of banned content.

The disclosure came in a statement issued this week by Meta’s Oversight Board criticizing Meta for its inconsistent handling of deepfakes, which constitute one of several categories of images — some legal and some illegal — that Meta flags as violating its platforms’ terms of service.

Responding to questions about two specific deepfake cases, one involving an Indian public figure and another an American public figure, Meta acknowledged its practice of checking explicit images against a Media Matching Service (MMS) bank.

MMS banks “automatically find and remove images that have already been identified by human reviewers as breaking Meta’s rules,” the board explained.

When the board noted that the image resembling an Indian public figure “was not added to an MMS bank by Meta until the Board asked why,” Meta responded by saying that it “relied on media reports to add the image resembling the American public figure to the bank, but there were no such media signals in the first case.”

According to the board, this is worrying because “many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance. One of the existing signals of lack of consent under the Adult Sexual Exploitation policy is media reports of leaks of non-consensual intimate images. This can be useful when posts involve public figures but is not helpful for private individuals. Therefore, Meta should not be over-reliant on this signal.”

The board also suggested that “context” should be considered as a potential signal that nude or sexualized content may be AI-generated or manipulated and therefore nonconsensual, citing hashtags and where content is posted as examples of such context.

Meta has been repeatedly challenged by sex workers, adult performers and many others to shed light on its widespread shadow-banning policies and practices, but access to the specifics of those processes has been scant. Meta’s answer to its own Oversight Board is a rare instance of lifting the veil of secrecy about its arbitrary and often-confusing moderation practices.

As XBIZ reported, the Oversight Board has previously criticized Meta for its policies regarding content it considers sexual, although its recommendations do not appear to have had a meaningful impact on the company's still-opaque moderation practices.

The Oversight Board made nonbinding recommendations that Meta should clarify its Adult Sexual Exploitation Community Standard policy by using clearer language in its prohibition on nonconsensual manipulated media, and generally “harmonize its policies on non-consensual content by adding a new signal for lack of consent in the Adult Sexual Exploitation policy: context that content is AI-generated or manipulated.”

The board also recommended that AI-generated or -manipulated nonconsensual sexual content should not need to be “non-commercial or produced in a private setting” to be in violation of Meta’s terms of service.

Copyright © 2025 Adnet Media. All Rights Reserved. XBIZ is a trademark of Adnet Media.
Reproduction in whole or in part in any form or medium without express written permission is prohibited.
