The FAIR Act: A new right to protect artists in the age of AI


Let's talk about fairness.

In the world of AI, it is easier to create than ever before, for everyone, at all skill levels. Even for me, a person whose kindergarten teacher threatened to hold me back for my inability to cut and paste! If you haven't had a chance to test out the capabilities of AI image generation for yourself, go check out Adobe Firefly, type in a prompt like “astronaut walking through a space jungle on a starry night,” and watch the magic of AI unfold before your very eyes!

Creative professionals, too, are harnessing the power of this new technology. For them, generative AI represents an amazing first step in their creative process. It automates the repetitive parts of their work and allows them to focus their time on their true differentiator: their ideas.

Yet one aspect of generative AI is troubling to some, including members of Adobe’s creative community. If an AI model is trained on all the images, illustrations, and videos that are out there, it can learn to generate new works in the exact style of the original creators of those works. Now, of course, style and art imitation have long been part of the physical art world. Today, if an artist slavishly copies another artist’s work, they could be found guilty of copyright infringement. But copyright doesn’t cover style. That makes sense: in the physical art world, it takes a highly skilled artist to incorporate specific style elements into a new work. And when they do, the effort and skill they invest usually make the resulting work more their own than the original artist’s.

In the generative AI world, however, it can take only a few words and the click of a button for someone with no training to produce something in a certain style. This creates the possibility for someone to misuse an AI tool to intentionally impersonate the style of an artist, and then use that AI-generated art to compete directly against them in the marketplace. That could have serious economic consequences for the artist whose original work was used to train the AI model in the first place. That doesn’t seem fair.

Adobe trained our generative AI model Firefly only on our own licensed Adobe Stock images, other works in the public domain, moderated generative AI content, and work that is openly licensed by the rightsholder. A training set like this minimizes the risk of the style impersonation described above. But other tools out there are trained primarily on content from the open web, so the intentional misuse of AI tools could be a real issue for our creative community, and it is certainly a legitimate concern right now.

That's why Adobe has proposed that Congress establish a new Federal Anti-Impersonation Right (the "FAIR" Act) to address this type of economic harm. Such a law would give artists a right of action against those who intentionally and commercially impersonate their work or likeness through AI tools. It would give artists a new mechanism to protect their livelihood from people misusing this new technology, without having to rely solely on copyright and fair use. The principle behind the law is simple: intentional impersonation using AI tools for commercial gain isn’t fair.

A few key points about this approach:

  1. This right applies specifically to AI-generated work (it does not extend to, or alter, any existing rights in the physical world).

  2. This right creates liability for the misuser of the AI tool, allowing the artist to go after the misuser directly.

  3. The right requires intent to impersonate. If an AI tool generates work that is accidentally similar in style, no liability is created. Likewise, if the person using the AI tool had no knowledge of the original artist’s work, no liability is created (just as in copyright today, where independent creation is a defense).

  4. This anti-impersonation right would also protect someone’s likeness (similar to rights of publicity you find in some states such as New York, California, or Tennessee) to prevent an AI model trained on images of you or me from making likenesses of us for commercial gain without our permission. Normal model releases would still apply.

  5. This right should be enacted at the federal level to avoid multiple conflicting state laws.

  6. This right should include statutory damages that award a preset amount for each violation, to minimize the burden on the artist of proving actual economic damages.

Supporting the evolution of style and creativity

All style innovation – whether physical or digital – involves a certain amount of derivation. Throughout history, artists have honed their style and spurred new artistic movements by building off the works and styles of earlier generations of artists. Had I pursued a career in art (which would have required a more encouraging kindergarten teacher!), I probably would have trained on earlier styles before coming up with one that is uniquely Dana Rao. Even Vincent van Gogh, famous for a very particular style, is said to have learned much of his early painting technique by imitating impressionist artwork before evolving it into a new style dubbed post-impressionism. So, while it’s essential that an anti-impersonation right protect artists from economic harm caused by the misuse of AI tools, we also need to protect style innovation in the digital world so that art can progress and evolve.

That’s why the FAIR Act is drafted narrowly to focus specifically on intentional impersonation for commercial gain. For example, if you type an artist’s name into a prompt and pass off the output as that artist’s work for your own financial benefit, you’re hardly learning from or evolving their style. If you explore someone’s style, build upon it in a unique way, and find a commercial outlet for your own work in your own name, that’s a different use case altogether, and one we think should be able to continue in order to foster creativity and the progression of art.

Collaboration is key

There are still lots of questions out there about AI and how it can be trained, and in some cases it will be a while before we have answers. Adobe will continue to advocate for protections and policies that support artists, and we know that solutions work best when they build on input from the community they are intended to protect. That’s why we encourage our creative community to engage with us, join our town halls, and tell us how this technology can better meet their needs and how industry and government can address their concerns. Collaboration is the key to ensuring this technology is developed and deployed in the right way, fairly, for everyone.