AI-Generated Adult Content and the Law

In the near future, more and more adult content creators will stumble across realistic reproductions of their image and likeness posted on platforms or sold on membership sites — content never produced or authorized by the creator. In these reproductions, the models may be depicted engaging in activities to which the original creator never would have consented, or speaking words — in their own voice — that they never would have uttered.

Can anything be done in these circumstances? As with many legal situations, it depends. Did the creators unintentionally authorize the creation of derivative works from a prior production? Did the applicable model release contain provisions restricting AI reproductions or disparaging uses? These issues are becoming top of mind for adult creators, producers and online platforms.

In recent years, artificial intelligence has gone from the science-fiction section of bookstores to being readily available in the app store on every cellphone. Businesses are onboarding and adopting artificial intelligence applications at an accelerating rate. It is now common for the average person to have at least some firsthand experience using a generative AI tool like ChatGPT or Lensa to create so-called “synthetic media.” Adult sites are becoming populated with AI content that depicts no actual human being, as well as with reproductions of established creators made with or without their permission.

As generative AI rapidly expands, it is essential that adult performers, producers and publishers anticipate and prepare for the pros and cons of both synthetic media and its evil twin, the “deepfake” — a subset of synthetic media which misleadingly depicts an individual engaged in some activity that did not actually occur.

A few early adopters have begun to explore the benefits that these new technological advances can bring to the industry, making it easier and more cost-effective to edit existing content as well as to generate entirely new content. Meanwhile, others have experienced the negative consequences of the proliferation of synthetic media. Content creators’ media can be fed into an AI program to create new works without their authorization. Users can be tricked by a phishing scam that uses a voice clone to access their private accounts. And anyone can be the victim of an extortionist who uses nonconsensual sexual deepfakes in a blackmailing scheme.

In this article, we look at the few laws on the books that directly address artificial intelligence, as well as laws unrelated to artificial intelligence that may nonetheless be applied to those who use artificial intelligence to generate adult content.

Existing Artificial Intelligence Laws

There is no direct regulation of artificial intelligence at the federal level in the United States. However, Congress has included funding in recent defense spending bills to study the use of deepfakes by foreign entities intent on spreading disinformation or otherwise damaging national security.

A handful of states have passed laws directly regulating deepfakes in certain contexts. Texas, California and Washington prohibit deepfakes created or spread for the purpose of election interference. Virginia, Georgia, California, New York and Florida prohibit nonconsensual sexual deepfakes. New York also explicitly prohibits nonconsensual deepfakes of celebrities for commercial purposes, regardless of whether such a deepfake is sexual in nature.

It should be noted that these state laws are not immune to criticism. Organizations like the Electronic Frontier Foundation have warned that deepfake laws may face significant First Amendment challenges when applied to Americans, as such laws may capture material that is not harmful and chill protected speech. Likewise, domestic deepfake laws may face significant jurisdictional issues when applied to individuals and entities abroad.

Internationally, governments are taking different approaches to limiting the adverse effects of deepfakes. For example, China has adopted rules requiring deepfakes to bear a watermark and requiring their creators to offer ways to “refute rumors” related to their creations.
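
As a purely technical illustration of what a visible watermark disclosure might look like, the sketch below stamps a label onto a generated image using the Pillow imaging library. The label text, placement and file names are assumptions made for illustration; the Chinese rules do not prescribe any particular implementation.

    # Minimal sketch: stamping a visible "AI-generated" disclosure onto an image.
    # Assumes the Pillow library (pip install Pillow); file names are hypothetical.
    from PIL import Image, ImageDraw

    def label_synthetic_image(src_path: str, dst_path: str,
                              label: str = "AI-generated content") -> None:
        """Open an image, draw a disclosure label in the corner, save a copy."""
        img = Image.open(src_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Draw a dark backing box near the bottom-left corner so the
        # label stays legible on light images, then draw the label text.
        x, y = 10, img.height - 24
        width = draw.textlength(label)
        draw.rectangle([x - 4, y - 4, x + width + 4, y + 16], fill=(0, 0, 0))
        draw.text((x, y), label, fill=(255, 255, 255))
        img.save(dst_path)

    label_synthetic_image("generated.png", "generated_labeled.png")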

Applying Other Laws to Artificial Intelligence

• CSAM Laws

Federal laws against child sexual abuse material (CSAM) do not directly reference artificial intelligence, synthetic media or deepfakes. However, existing CSAM laws at the federal level have long been held to prohibit digitized depictions of actual minors in a sexual context. Florida and Maryland have even amended their state-level CSAM laws to explicitly prohibit sexual deepfakes of identifiable children. Given the increasing difficulty of discerning whether a suspected CSAM image depicts an actual minor or some fictitious character, publishers, regulators and service providers will struggle to determine whether a particular image or video is legal or contraband. Making the wrong call can have disastrous consequences for all involved.

• Copyright Law

The Copyright Office currently maintains that synthetic media is not eligible for copyright protection, because such works are not works of human authorship. Thus, creators of synthetic media can neither register their works nor use the DMCA notice-and-takedown procedure to remove their works from pirate sites and social media accounts. However, works created by humans do not lose their protection if mixed with AI-generated materials.

The law has not developed sufficiently to determine the extent to which such mixed works will be subject to DMCA notices or eligible for copyright registration. The Copyright Office issued new guidance in March 2023, offering its views on how and when such works may be registered. The Copyright Office has also launched an initiative to further explore these issues and accept input from relevant stakeholders on the use and impact of generative AI in creative works. Lobbyists have begun to suggest various amendments to federal copyright law which, if enacted, could either advance or hinder the development of new generative AI technologies. In short, changes in how copyright law is applied to AI content are on the horizon.

• Section 2257 Documents and Consent Forms

Federal law requires that certain information be obtained from individuals involved in the production of adult content, and credit card payment processors require the collection of identification information and consent forms from individuals whose explicit images are uploaded by users to online platforms. Typically, synthetic media depicts imaginary persons created by a computer rather than actual, identifiable human beings, and it would be impossible to provide identification documents or consent forms for fictional characters.

However, to create synthetic media, AI programs are fed thousands of images and videos of actual, identifiable human beings. A single piece of synthetic media depicting an imaginary nude person therefore arguably depicts not one but thousands of actual human beings, none of whom remain identifiable in the result. Expecting content creators to provide identification documents and consent forms for every person on whom each piece of synthetic media is based is unreasonable at best, and makes little sense when the image depicts no identifiable human being. Unfortunately, the law, the rules of the payment processors and the policies of various adult platforms have not caught up to the technology or been tested in this context, so it remains unclear whether and how adult AI content creators will be expected to comply with these requirements.

• Name, Image and Likeness Laws

Unlike purely synthetic AI content, deepfakes intentionally depict an actual, identifiable human being. As described above, more and more states are passing laws to explicitly prohibit such activity when it is sexual or political in nature. However, even without such laws, deepfakes may violate state right-of-publicity laws, which prohibit the use of a person’s name, image or likeness for commercial purposes without consent. Website operators and adult content creators can face legal liability for creating or publishing deepfakes unless a sufficient First Amendment defense exists, such as when the deepfake parodies a celebrity or a work in which they appear, or offers commentary on a politician’s policies.

• Age Verification Laws

A small but growing number of states have passed laws that require adult website operators to verify the age of all users within that state or be subject to civil liability for any damages caused by providing a minor with access to adult content. These laws define an adult website as a website on which at least 33.3% of the content is sexual in nature. Typically, these laws explicitly state that such content should be “counted” regardless of whether it depicts “actual, simulated or animated displays” of sexual content. Alternatively, a statute may use broad language that requires counting “any description or representation of nudity.” As such, sexual synthetic media will likely be included in any calculation when determining whether such state laws apply to a particular site. Whether such laws are constitutional and enforceable is a question left for another day.
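
To make the threshold arithmetic concrete, here is a minimal sketch that counts how much of a hypothetical site catalog is sexual in nature and compares the share against the one-third cutoff. The catalog structure and field names are assumptions for illustration; the statutes define what must be counted, and how, in far more detail.

    # Minimal sketch: checking a content catalog against a one-third
    # "sexual material" threshold like those in state age verification laws.
    # The catalog layout and flat per-item counting rule are assumptions.

    THRESHOLD = 1 / 3  # the "33.3%" figure used in several state statutes

    def exceeds_threshold(catalog: list[dict]) -> bool:
        """True if at least one-third of catalog items are flagged as sexual.

        Items count regardless of whether they are "actual, simulated or
        animated" depictions, so synthetic media is included.
        """
        if not catalog:
            return False
        sexual = sum(1 for item in catalog if item["is_sexual"])
        return sexual / len(catalog) >= THRESHOLD

    # Hypothetical usage: a mixed catalog of conventional and synthetic media.
    catalog = [
        {"id": 1, "is_sexual": True,  "synthetic": False},
        {"id": 2, "is_sexual": True,  "synthetic": True},  # AI content still counts
        {"id": 3, "is_sexual": False, "synthetic": False},
    ]
    print(exceeds_threshold(catalog))  # True: 2 of 3 items are sexual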

• Unfair and Deceptive Trade Practices

The Federal Trade Commission (FTC) has warned businesses that they may violate the prohibition on deceptive trade practices under Section 5 of the FTC Act if they use an AI tool to deceive customers. For example, a sex toy manufacturer should not use computer-generated consumer reviews, nor use a celebrity deepfake to advertise its goods.

Conversely, businesses may use potentially deceptive AI tools in certain circumstances, provided adequate disclosures are made so as to avoid misleading the consumer. For example, a dating site may be populated with computer-generated profiles, and a paysite may sell monthly subscriptions to computer-generated adult content, if consumers understand that they are engaging with synthetic media. Likewise, the FTC recently required Facebook to provide users with notice that their content was being used by the site’s AI programs to power its facial recognition feature.

Because the term “artificial intelligence” is ambiguous and may have many different meanings, the FTC has warned businesses against misusing it as a marketing tool. The FTC has specifically warned businesses against falsely claiming that a good or service is AI-powered when it does not utilize artificial intelligence, or when it merely uses an AI tool in the development process. The FTC has also told businesses to ensure that their AI goods and services work as advertised. For example, businesses should not exaggerate what an AI good or service is capable of doing, and should not make claims that lack scientific support or that hold true only for certain users or under certain conditions.

The FTC has also warned businesses to be aware of the reasonably foreseeable risks associated with their AI goods and services. Businesses should not release AI goods or services that do more harm than good; offering a primarily harmful AI good or service is considered an unfair trade practice under Section 5 of the FTC Act.

Finally, the FTC has warned businesses about the risks of AI tools that produce discriminatory outcomes based on race, gender and other legally protected classes. The FTC notes that AI tools that use biased data or data that lacks diversity are likely to produce deceptive or discriminatory results, which may lead to an enforcement action. To prevent this, the FTC suggests businesses test their algorithms for inequities against protected classes, conduct and publish independent audits, and offer outside inspectors access to their data and source code.
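
To make the suggestion of testing algorithms for inequities concrete, the sketch below computes one common fairness measure, the demographic parity gap, over a model’s decisions. The metric choice, the data layout and the review threshold are illustrative assumptions; a real audit would examine many metrics under expert guidance.

    # Minimal sketch: a demographic parity check across protected groups.
    # The data layout, single metric and 0.10 review threshold are
    # assumptions for illustration, not an FTC-specified methodology.

    def positive_rates(records: list[dict]) -> dict[str, float]:
        """Rate of favorable outcomes per group."""
        totals: dict[str, int] = {}
        positives: dict[str, int] = {}
        for r in records:
            g = r["group"]
            totals[g] = totals.get(g, 0) + 1
            positives[g] = positives.get(g, 0) + (1 if r["approved"] else 0)
        return {g: positives[g] / totals[g] for g in totals}

    def parity_gap(records: list[dict]) -> float:
        """Largest gap in favorable-outcome rates between any two groups."""
        rates = positive_rates(records)
        return max(rates.values()) - min(rates.values())

    # Hypothetical model decisions tagged with a protected attribute.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
    ]
    print(f"parity gap: {parity_gap(decisions):.2f}")  # 0.50; > 0.10 may warrant review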

Recommendations

Content creators concerned about misuse of their image and likeness through AI technology should ensure that any model release specifically addresses the issue before execution. Traditional model releases grant broad rights to the producer, which may allow creation of derivative works using the creator’s image and likeness in ways the creator never anticipated. The widespread availability of AI tools means that creators must be cautious when granting these rights, to ensure any resulting depictions are consistent with the intent of the parties to the release.

Publishers that use AI content on their websites, profiles or platforms should clearly disclose whether the depictions are synthetic media and whether the media depicts identifiable human beings. Failure to do so could create liability for deceptive trade practices or result in confusion regarding the need for Section 2257 documentation and consent forms.
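
One way a publisher might operationalize this recommendation is with a per-item compliance record like the hypothetical sketch below, which drives both the on-page disclosure and the question of whether documentation is needed. The field names and logic are assumptions for illustration, not a legal or card-brand standard.

    # Minimal sketch: per-item compliance flags a publisher might track.
    # Field names and logic are illustrative assumptions, not a standard.
    from dataclasses import dataclass

    @dataclass
    class ContentRecord:
        item_id: str
        is_synthetic: bool                  # generated in whole or part by AI
        depicts_identifiable_person: bool   # e.g., a deepfake of a real performer

        def disclosure_label(self) -> str:
            """Text a site might display alongside the content."""
            if self.is_synthetic and self.depicts_identifiable_person:
                return "AI-generated depiction of a real person"
            if self.is_synthetic:
                return "AI-generated content; no real person depicted"
            return ""  # conventional content; standard disclosures apply

        def may_need_records(self) -> bool:
            """Identifiable humans arguably trigger documentation and consent
            requirements; purely synthetic depictions arguably do not,
            though as discussed above the law remains unsettled."""
            return self.depicts_identifiable_person or not self.is_synthetic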

Lawyers representing adult industry clients involved with AI should be sensitive to the potential issues triggered by this technology and stay informed about the rapid developments in the law surrounding synthetic media. Clients should ask their lawyers how their content, websites or platforms may be impacted by AI technology, and ensure that their interests are being properly protected.

AI technology has the potential to change the future of content production and distribution in unpredictable ways. By remaining attentive to these emerging issues, people working in the adult industry can avoid the potential pitfalls and protect their content and businesses from liability or exploitation.

Lawrence Walters heads up Walters Law Group, which represents clients worldwide in all facets of the adult entertainment industry. For more information, visit Walters Law Group online at FirstAmendment.com and @walterslawgroup on social media.
