Looking Ahead to the AI Action Summit: Seizing the AI Opportunity

The upcoming AI Action Summit, taking place in Paris next week, represents a pivotal moment for the public and private sectors to come together in shaping a common approach to responsible and ethical Artificial Intelligence (AI). Building upon previous Summits at Bletchley Park and in Seoul, this year’s Summit will place a new focus on seizing the opportunities that AI presents through the responsible development and deployment of these technologies.

Ahead of the Summit, we see several key areas where governments from around the world and industry can come together to ensure AI innovation thrives in the right way for everyone.

Helping creators protect their style and livelihoods

At Adobe, we believe AI should advance creative careers, not threaten them. One of the main concerns we’ve heard from creators relates to style: someone using AI to mimic an artist’s style and then passing off the AI-generated work as the artist’s own. In the age of AI, it takes just a few clicks for a bad actor to use an AI model to reproduce artworks in a particular style, essentially forcing artists to compete against themselves in the marketplace.

Adobe has been advocating for a federal anti-impersonation right that protects creators against bad actors who use an AI model explicitly to impersonate their style for commercial gain. The recent U.S. legislative effort, the Preventing Abuse of Digital Replicas Act (PADRA), is a critical step towards ensuring the responsible use of AI-generated content.

Creators using AI tools also want to ensure they can obtain copyright protection for their work in this new era. Copyright law is designed to protect human creativity – therefore an AI output alone may not be copyrightable, but we believe the combination of human creativity and AI can and should be. The U.S. Copyright Office recently issued guidance on this topic, concluding similarly that works people create or edit with assistance from AI can be copyrighted, but that works generated entirely by an AI system or based purely on text prompts are not eligible. We encourage governments around the world to establish clear guidance on how creators can secure copyright protection for works that include AI as part of the creative process.

“We all share a responsibility to ensure that AI innovation develops in a way that benefits everyone. Protecting artists from commercial, AI-based exploitation is in all of our best interests, and we’re committed to advocating for policies that support artists and empower them to thrive in the age of AI.”

Karen Robinson, Senior Vice President and Deputy General Counsel, Adobe

Legislation can be a vital deterrent to bad actors, and technological solutions can further strengthen creators’ options. The recently announced Adobe Content Authenticity web app gives creators the option to attach Content Credentials to their work, both to gain attribution and to signal to generative AI models that they do not want their content used for AI training. Adobe is also actively working to drive industry-wide adoption of this preference.
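For readers curious about the mechanics: the do-not-train preference travels as a machine-readable assertion inside the Content Credential itself, which downstream AI systems can read and honour. The sketch below, in Python purely for illustration, shows roughly what such an assertion looks like based on the C2PA specification’s training and data mining assertion; treat the exact labels and schema here as an assumption to be checked against the current spec.

```python
# Illustrative sketch only: a "do not train" preference as it might appear
# inside a Content Credential, modelled on the C2PA specification's training
# and data mining assertion. Exact labels and values should be verified
# against the current spec at https://c2pa.org before relying on them.
do_not_train_assertion = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            # Each entry states whether a given use of the content is permitted.
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}
```

Because the assertion is cryptographically bound to the content along with the rest of the credential, the preference stays attached to the file as it moves through the content supply chain.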

Combatting the dangers of deepfakes

With advancements in AI, content creation and editing have become more powerful for creators and more accessible than ever for consumers. But the power of anything good can also be used for bad. AI tools can be weaponised to create deepfakes that undermine national security, sow discord, and even influence elections.

One of the most critical guardrails needed in the age of AI is ensuring we have the right tools and policies to combat harmful deepfakes and deceptive digital content. Without tools and standards to guard against these threats, we risk losing the ability to tell the difference between fact and fiction.

To combat deepfakes and build trust and transparency in digital content, governments can leverage industry standards such as Content Credentials, which essentially serve as a “nutrition label” for digital content. Developed by the Coalition for Content Provenance and Authenticity (C2PA), Content Credentials are a free, open-source technical standard that can provide detail on who made a piece of content, as well as how, where, and whether AI was used. This empowers consumers to verify content before trusting it, navigating the digital ecosystem with greater clarity.
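As a concrete illustration of that “verify” step, here is a minimal sketch of how a developer might read a file’s Content Credentials using the C2PA project’s open-source c2patool command-line utility. It assumes c2patool is installed and on the PATH; the JSON field names shown reflect its typical output but may vary by version.

```python
# Minimal sketch: reading a file's Content Credentials with the open-source
# c2patool CLI (https://github.com/contentauth/c2patool).
# Assumptions: c2patool is installed and on PATH; the JSON field names
# ("manifests", "active_manifest", "claim_generator") may vary by version.
import json
import subprocess

def read_content_credentials(path):
    """Return the file's C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],  # by default, c2patool prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no Content Credentials, or an unsupported file format
    return json.loads(result.stdout)

store = read_content_credentials("photo.jpg")
if store:
    active = store["manifests"][store["active_manifest"]]
    # The claim generator identifies the app or device that signed the content.
    print("Signed by:", active.get("claim_generator"))
else:
    print("No Content Credentials found.")
```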

The Adobe-led Content Authenticity Initiative (CAI) is a global, cross-industry coalition, supported by over 4,000 members, that is working to advance free, open-source implementations of Content Credentials provenance technology. Already integrated into Adobe apps that creators know and love, including Photoshop, Lightroom, Premiere Pro and Adobe Express, Content Credentials are also being adopted by CAI and C2PA members including Leica, the BBC, Agence France-Presse (AFP), Google, Microsoft, LinkedIn, and most recently, Samsung in its new Galaxy S25 smartphone. Widespread adoption of Content Credentials across the content supply chain will play a vital role in realising a more trustworthy and transparent digital ecosystem.

Governments also have a crucial part to play in driving adoption of this standard by including provenance requirements in AI legislation for those that make, use, and distribute content – from manufacturers of cameras and smartphones to social media platforms and news websites. Governments should seek to make content provenance technology available to users, as well as ensure that online platforms and social media companies implement Content Credentials capabilities and do not strip away any metadata associated with content published on their platforms.

Governments can also build trust with their citizens by incorporating content provenance technology into the content they publish. By using Content Credentials, governments provide citizens with a way to verify the authenticity of the content they are viewing. This can be a critical tool to leverage when policymakers need a trusted way to communicate with the public. Finally, industry, academia, and government need to work together to raise awareness about the risks of deepfakes and teach the public how to use tools like Content Credentials.

Commitment to Responsible Innovation

At Adobe, we have long been committed to responsible innovation. Our approach to developing generative AI is guided by our longstanding AI Ethics principles of accountability, responsibility and transparency, and rooted in support for the creative community. As an example, when we started developing Adobe Firefly, our family of creative generative AI models, we made the decision to train it exclusively on content that Adobe has permission to use – including licensed content from Adobe Stock and public domain content – and never on user content.

Our deliberate approach to training Firefly ensures that it is commercially safe, giving everyone – from creators to enterprise customers – confidence in using it. This is evidenced by the incredible adoption of Firefly, which has been used to generate over 16 billion images globally. With Firefly, we have demonstrated that it is possible to develop best-in-class generative AI models without infringing on the rights of creators.

It’s also important to note that AI is only as good as the data it is trained on: more data likely means more accurate and less biased outcomes. As governments around the world develop their approaches to AI, it is important that they support access to quality data so that AI innovation can continue to thrive, both accurately and responsibly.

Collaboration for a Responsible AI Future

We stand at a transformative moment with AI, and collaboration between policymakers, creators, industry and the public will be vital to ensuring that AI accelerates creativity and opportunity while the right guardrails are in place.

In this spirit, as part of the official AI Action Summit Business Day on 11 February, we will be leading a discussion on the adoption of Content Credentials to restore trust and transparency in the era of synthetic media alongside Microsoft, Google, Imatag, AFP, and French lawmaker Violette Spillebout.

As the AI Action Summit approaches, we look forward to continuing to work with governments globally to ensure AI is developed and deployed in the right way for everyone.