Adobe’s EU AI Pact pledges: Driving responsible innovation and transparency in an AI-powered world
Today, Adobe announced our commitment to uphold the pledges of the EU AI Pact, which aims to ensure AI systems used in the EU are developed and deployed responsibly. Throughout our four-decade history, Adobe has been committed to responsible innovation, and we believe that by working together with global governments, we can establish guardrails that allow AI innovation to thrive in the right way for everyone.
One of the most critical guardrails is ensuring we have the right tools and policies to combat harmful deepfakes and deceptive digital content. From scams using fabricated audio of European Commission President Ursula von der Leyen, to fake videos of former British Prime Minister Rishi Sunak, to AI-generated images misrepresenting politicians from former U.S. President Donald Trump to Indian Prime Minister Narendra Modi, deepfakes are sowing confusion around the world. And once we’re fooled by a deepfake, we begin to doubt everything we see or hear online, even if it is true. That could have a profound impact on democracy.
As AI becomes more powerful and prevalent, now more than ever, we need a global technological standard that provides transparency around digital content to foster a more trustworthy digital space. Adobe pledges to continue leading work in this space, and we urge governments to drive adoption of this standard across the entire content ecosystem.
The power of provenance
Since 2019, the Adobe-led Content Authenticity Initiative (CAI) has been focused on restoring trust in digital content through provenance: information about where a piece of content came from and what happened to it along the way. The CAI has been driving widespread adoption of the provenance tool Content Credentials, a free, open standard created by the Coalition for Content Provenance and Authenticity (C2PA). Content Credentials are essentially a “nutrition label” for digital content that can show information like the date and time the content was created, how it was edited, and whether and how AI was used. This level of transparency is crucial for dispelling doubt and giving consumers a way to verify the authenticity of the content they are consuming.
Take, for example, a photojournalist who takes a picture of a political candidate shaking hands with voters. The photojournalist turns on Content Credentials and captures information about the image, such as the date it was taken, the location, and the camera she used to take it. Her editor then crops the photo and uses an AI tool to blur license plate numbers in the photo, and those edits are added to the photo’s Content Credentials metadata. Finally, when the photo appears on a news site and is reshared on social media, viewers can see information about its origin and edit history. The credential not only shows that AI was used, but it shows how it was used — in this case, a simple blurring tool rather than a wholly AI-generated image. It also shows changes that were made that didn’t use AI, which are just as important for understanding how a photo may have been edited or, in some cases, manipulated. With this context alongside a piece of content, people can make more informed decisions about whether to trust it.
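The photojournalist workflow above can be sketched as a simple provenance record that grows with each edit. The snippet below is an illustrative approximation only: the field names, helper functions, and camera model are hypothetical stand-ins, not the actual C2PA manifest schema or any Content Credentials API.

```python
# Illustrative sketch of a provenance record for the photojournalist example.
# NOTE: field names and structure are simplified stand-ins, not the real
# C2PA manifest schema.

def new_credential(creator, captured_at, location, device):
    """Start a provenance record at the point of capture."""
    return {
        "creator": creator,
        "captured_at": captured_at,
        "location": location,
        "device": device,
        "actions": [],  # edit history is appended here over time
    }

def record_action(credential, action, used_ai=False, tool=None):
    """Append one edit to the record, noting whether AI was involved."""
    credential["actions"].append(
        {"action": action, "used_ai": used_ai, "tool": tool}
    )
    return credential

# Capture, then two edits: a crop (no AI) and an AI-assisted blur.
cred = new_credential("photojournalist", "2024-06-01T09:30:00Z",
                      "Brussels, BE", "hypothetical camera")
record_action(cred, "cropped")
record_action(cred, "blurred_license_plates", used_ai=True,
              tool="generative blur")

# A viewer inspecting the record can answer: was AI used, and how?
ai_edits = [a for a in cred["actions"] if a["used_ai"]]
print(len(cred["actions"]), len(ai_edits))  # prints: 2 1
```

The key design point this illustrates is that the record distinguishes *how* AI was used (a targeted blur) from wholly AI-generated content, and preserves non-AI edits like the crop alongside it.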
From vision to reality: crucial next steps
Today, the CAI counts over 3,300 members — including the BBC, AFP, Axel Springer, Reuters, Wall Street Journal, Microsoft, NVIDIA, and Sony — all united behind this vision. But for this solution to work, we need to implement Content Credentials everywhere: from the point of capture or creation, through editing and publication, all the way to consumption.
Some camera manufacturers, like Leica and Nikon, have already built this capability into certain camera models in line with the C2PA standard. This is a good start. We need to continue driving adoption across all camera makes and models so that photographers can access this important tool, no matter what camera they are using. In addition, it is essential that Content Credentials be implemented on smartphones, where the vast majority of people capture content today. Anyone taking a photo or video — not just professional photographers — should be able to attach this important contextual information to the content they capture.
Furthermore, even if every edit and piece of information about a particular piece of content is captured, Content Credentials do little good if the public cannot actually see them. Today, social media platforms are among the main channels where people consume content, so it’s critical that these platforms carry and display Content Credentials in a way that is easy for users to engage with.
Today, we call on governments around the world to include provenance requirements in AI legislation for everyone who makes, uses, and distributes content. Governments should seek to ensure that devices that enable users to capture content, such as smartphones, make provenance technology available so that users can choose to turn this capability on for any content captured or created on the device. Governments should also work to ensure that online platforms and social media companies implement Content Credentials capabilities and do not strip away any metadata associated with content published on their platforms. Finally, industry and academia should work together with government to educate the public about the dangers of deepfakes and how tools like Content Credentials can help combat them.
A collaborative approach to bring AI to the world responsibly
AI has already begun to transform our lives, and we are at a pivotal moment. The EU AI Pact is an important step toward establishing the right guardrails that will bring this technology to the world responsibly, and Adobe looks forward to continuing to collaborate with governments as well as industry, creators, and the public to create a more trustworthy digital space.