COLUMBIA, S.C. — South Carolina Attorney General Alan Wilson, along with 25 other attorneys general largely from Republican-controlled states, sent an open letter to Pornhub parent company Aylo on Friday demanding the company address a supposed “loophole” that could allow users to post CSAM.
The letter was prompted by a controversial hidden-camera report promoted on X.com by a person formerly affiliated with conservative undercover journalism outlet Project Veritas. The report purported to show an employee of Aylo, then called MindGeek, being filmed without his knowledge in a social setting at an unspecified time, and making claims about the company’s moderation policies.
Basing their query on that questionable source, the 26 attorneys general wrote to Aylo VP of Payments, Trust and Safety Matt Kilicci and Ethical Capital Partners VP of Compliance Solomon Friedman “to inquire about a possible ‘loophole’” in Aylo platforms’ moderation practices, which the AGs contend “potentially permits content creators to publish child sexual abuse material (CSAM).”
The attorneys general expressed concern that “content creators and performers must produce a photo ID to open an account with Pornhub to upload content, but they are not required to show their faces in the content they upload to the site, so there is no way to confirm that the content actually features the performer/content creator that uploads the content” and that this “loophole” could enable dissemination of CSAM.
“Please provide us with an explanation of this ‘loophole;’ whether Aylo and its subsidiaries do, in fact, permit content creators and performers to obscure their faces in uploaded content; and, if so, whether Aylo is taking measures to change this policy to ensure that no children or other victims are being abused for profit on any of its platforms,” the 26 conservative AGs wrote, also adding a query about what steps the companies are taking to prevent AI-generated CSAM from being broadcast on their platforms.
Aylo was asked to reply to those inquiries within 30 days of receiving the letter.
Disregard for Section 230 and the Privacy of Adults
The letter from the attorneys general makes no mention of Section 230 protections, which shield companies from liability for user-generated content.
Disregarding both First Amendment protections and the privacy of adult performers who choose not to show their faces, the attorneys general also imply that private companies, under pressure from the state, should tell consenting adults how to shoot erotic content, and that any content deemed “pornographic” according to unspecified standards should be required to include performers’ faces.
Among the sources cited in the letter is an article by the National Center on Sexual Exploitation (NCOSE), the religiously inspired anti-porn crusading lobby formerly known as Morality in Media.
XBIZ contacted an Aylo spokesperson who offered the following statement:
We are committed to transparency and will respond to the Attorneys General within the required timeline. We are aware that an employee erroneously points to the existence of a supposed ‘loophole’ in the company’s moderation practices. What is being referenced is that Aylo platforms let their content creators and performers choose whether to show or to hide their face in their content.
There are a number of reasons why a verified model may choose to upload content that does not include their face, including their right to privacy. For this reason, we take extra precautions to ensure this content can be uploaded safely.
When a piece of privacy-preserving content is uploaded, an evaluation process is carried out by the moderator to determine if by reviewing the uploader’s other content and ID or other verification documents, each of the performers can be identified. If the performers can be identified, the content may be approved, and if not, the content is escalated to a senior member of the moderation team for a secondary review and to determine whether the content can be approved, rejected, or whether additional documentation is required.
We recognize the evolving challenges posed by user-generated content online. As such, we are constantly evolving and improving safety and security measures, which include, among others, Upload Verification Program, banning downloads, human moderation of all uploaded content, and continuous additions to our suite of automated moderation tools (CSAI Match, Content Safety API, PhotoDNA, Vobile, Safer, Safeguard, and more recently, NCMEC Take It Down and StopNCII).
Main Image: South Carolina AG Alan Wilson