Building safe, secure, and trustworthy AI: Adobe’s commitments to our customers and community

Image of a butterfly generated by Adobe Firefly

Today, Adobe announced our support for the White House Voluntary AI Commitments to promote safe, secure, and trustworthy AI. These commitments represent a strong foundation for ensuring the responsible development of AI and are an important step in the ongoing collaboration between industry and government that is needed in the age of this new technology.

Adobe has been investing in AI for over a decade, using it to allow our customers to unleash their creativity, perfect their craft and power their businesses in a digital world. As we bring transformational technologies to market, we have always sought to pair innovation with responsible innovation. We believe that placing thoughtful safeguards around AI development and use will help it realize its full potential to benefit society.

With millions of creative customers ranging from aspiring digital artists to wartime photographers, fashion designers, marketers, and more, Adobe also strongly believes AI must be developed with creators at the forefront. AI can unlock incredible new opportunities for creators, allowing them to be more productive than ever and to design entirely new experiences bounded only by their imagination. But doing so with a technology as powerful as AI requires a thoughtful and comprehensive approach. That’s why, along with ensuring the safety, security, and trust of AI systems, Adobe is also sharing our additional commitment to develop AI that empowers creators.

We thank the Biden-Harris administration for their leadership in this important space and are proud to share the ways we will continue to apply these commitments. With a rapidly evolving technology, it is critical to have thoughtful and deep conversations between industry and government to ensure AI legislation and regulation support innovation, while providing essential safeguards for society and the public.


Safety

Adobe is committed to ensuring the safety of our AI models. All of our AI products are designed in line with our principles of accountability, responsibility, and transparency.

In 2019, Adobe implemented a comprehensive AI program that includes training, testing, and review by our AI Ethics Review Board. As part of this program, every product team developing AI must complete an AI impact assessment. This multi-part assessment allows us to evaluate the potential impact of AI features and products and identify potential risks, so we can remediate concerns before a product launches. We also provide feedback mechanisms so that users can report potentially harmful outputs, allowing us to address issues that surface after release. We are committed to continuing this rigorous testing and review process to ensure our AI is developed safely and responsibly.

In addition, Adobe is a contributor to the NIST AI Risk Management Framework, which these commitments build upon, and a member of the Partnership on AI. We are committed to continuing to share our learnings with our peers and with government, and to establishing industry best practices, so that we can ensure a unified approach to responsible AI.


Security

As with other Adobe products and features, our AI models are considered the intellectual property of Adobe and are subject to strict security and IP protection measures.

Our robust security program includes red-teaming, third-party penetration testing, a bug bounty program, and a comprehensive incident response program. We are committed to leveraging it to ensure the security of our intellectual property and of our products and systems, including our AI models and product features.

We also commit to regularly sharing information, best practices, and learnings with the wider AI community about how we test and protect the security of our AI models.


Trust

Adobe is committed to advancing provenance and watermarking solutions to help bring more transparency and trust to digital content in the age of AI. AI makes it increasingly easy to generate convincing content in seconds. We need to give people the resources to understand where a piece of content came from and what happened to it along the way.

As a leader in the image-editing space, Adobe has focused on this challenge for the past four years. That’s why we developed a technology called Content Credentials – which act like a nutrition label for content, showing information such as a creator’s name, the date an image was created, what tools were used to create it, and any edits that were made. By capturing which tools were used in the digital creative process, Content Credentials can show when something was created with AI and prove when something was human-created or captured in the physical world by a recording device.
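As an illustration only, the kind of provenance record described above can be sketched as a small data structure. The field names and the `used_generative_ai` check below are hypothetical simplifications for clarity; the real schema is defined by the C2PA specification, not by this sketch.

```python
# Illustrative sketch of a simplified "nutrition label for content".
# Field names are hypothetical; the actual manifest format is defined
# by the C2PA open standard.
from dataclasses import dataclass, field


@dataclass
class ContentCredential:
    creator: str                                 # creator's name
    created: str                                 # date the image was created
    tools: list = field(default_factory=list)    # tools used to create it
    edits: list = field(default_factory=list)    # edits that were made

    def used_generative_ai(self) -> bool:
        # Flag content whose tool list records a generative AI tool.
        return any("generative" in tool.lower() for tool in self.tools)


credential = ContentCredential(
    creator="Example Artist",
    created="2023-07-21",
    tools=["Adobe Photoshop", "Adobe Firefly (generative AI)"],
    edits=["crop", "color adjustment", "generative fill"],
)

print(credential.used_generative_ai())  # True: a generative AI tool was recorded
```

The point of the sketch is the reasoning a verifying application performs: it does not inspect pixels, it reads the attached, cryptographically signed record of tools and edits.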

We’ve brought together a global coalition called the Content Authenticity Initiative that now counts more than 1,500 members from across industries – including the Associated Press, the New York Times, the Wall Street Journal, Microsoft, NVIDIA, Nikon, Leica, Universal Music Group, and Stability AI – all united behind Content Credentials. Content Credentials are built on an open standard developed by the Coalition for Content Provenance and Authenticity (C2PA), so anyone can implement them in their own tools and platforms.

We have seen a lot of momentum around the need for transparency in AI-generated content. So far, many conversations have focused on watermarking, which is important and is included in the C2PA open standard. Imperceptible watermarking and cryptographic provenance are complementary, and together they provide a strong measure for establishing an objective understanding of where content originates. As AI becomes more prevalent in the content we consume, we believe secure metadata about how an image came to be (also known as image provenance) will be critical to letting people see content with context.

Adobe is committed to advancing the Content Credentials solution across our tools and platforms and maintaining any Content Credentials on content we display. Content Credentials are available for our customers to use in flagship Adobe tools like Photoshop and Lightroom. We also automatically attach Content Credentials to content created in our own generative AI tool Adobe Firefly, and in generative AI features in Photoshop, Illustrator and Adobe Express to indicate that generative AI was used.

We will continue to develop this technology and the corresponding standard in an open way so that everyone can benefit from it. And we will continue to work together with industry and government to drive implementation of this standard, especially as we head into critical moments such as the 2024 US elections, where trust in digital content is more important than ever.

Finally, we are committed to helping develop educational resources that teach the public about the importance of media literacy and how to use tools like Content Credentials.

Empowering creators

In addition to ensuring the safe, secure, and trustworthy development of AI, Adobe is committed to taking important steps to ensure that creators benefit from the power of AI.

Among the key concerns we have heard from our creative community are control over how their data is used and ensuring ownership of their work in the digital age. At Adobe, we trained the first model of our generative AI tool Adobe Firefly only on licensed images from our Adobe Stock collection, openly licensed content, and public domain content where copyright has expired. Beyond our own model, we are also committed to helping creators protect their work across the entire AI ecosystem.

This is a pivotal moment. We are at the forefront of an incredible technological transformation, and it’s essential that we take this opportunity to thoughtfully address the implications of AI as we build our future together.

Adobe looks forward to continuing to collaborate with the White House and other policymakers, as well as industry, creators, and the public to uphold commitments that will ensure AI is developed and deployed in the right way for everyone.