Adobe’s role in election integrity


The power and promise of artificial intelligence are upon us. At Adobe, we believe AI can supercharge and democratize creativity to transform how we work, play, and learn. With generative AI foundation models like our own Adobe Firefly, if you can dream it, you can make it.

However, any powerful tool can also be turned to harm. We have all seen AI deepfake images, audio, and video insidiously insert themselves into our everyday lives, whether intended to disparage a candidate in an election, commit sophisticated financial fraud, or intentionally inflict emotional pain. Bad actors are finding ways to disrupt our lives by disrupting our ability to trust what we see and hear online.

The implications for democracy are profound. As we saw recently with AI-generated audio of President Biden being used to steer voters away from the polls in New Hampshire, even shoddy digital fakery holds the power to sow confusion. And the danger of deepfakes is not just the deception; it is also the doubt they cause. Once we know we can't trust what we see and hear digitally, we won't trust anything, even if it is true. Trust has never mattered more than in 2024, with more than four billion voters expected to participate in more than 40 elections around the world. Facts matter more than ever, and so does restoring trust in digital content.

That's why I'm thrilled that Adobe is in Munich today to sign the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, alongside like-minded signatories including Google, Microsoft, Meta, TikTok, OpenAI, IBM, Amazon, Anthropic and others. The Accord is a commitment to take concrete, immediate steps to strengthen election integrity. With this group of participants, and the many stakeholders across government and civil society joining the effort, we are optimistic that we can give people critical context about the things they see and hear online and ultimately restore trust in digital content.

The steps outlined in the Accord are important and comprehensive. Here at Adobe, we are especially excited about the collective commitment to developing and implementing technological solutions like provenance, watermarking, and classifiers to label and provide metadata about AI-generated content. Transparency builds trust, and building an end-to-end chain of content authenticity will blunt the power of bad actors to manipulate content by giving good actors a way to be believed. Bad actors will try to flood the internet with deepfakes and misinformation, but if we give good actors the tools they need to prove what's true, the public will have a verifiable way to distinguish fact from fiction.

We are also very supportive of the Accord's commitment to foster public awareness through education campaigns on the risks of deepfakes and the ways the public can protect themselves, an important step for all of us to take together. We believe these media literacy efforts are the most critical step we can take immediately to counter the dangers of deepfakes. The public needs to know that they can't trust everything they see and hear online, and they need to know what tools are available to help them understand what's true.

This agreement builds on years of work that reflects Adobe's commitment to responsible innovation. As a world leader in digital media content creation tools, we have been thinking hard about the dangers of bad actors manipulating media since long before deepfakes first hit the internet. In 2019, we founded the Content Authenticity Initiative to bring together tech companies, media organizations, NGOs, academics and more to design and coalesce around a solution based on transparency and provenance. Then, with a group of like-minded companies, we co-founded the Coalition for Content Provenance and Authenticity (C2PA), the standards body behind the open technical specification for content provenance called Content Credentials. Content Credentials have fast become the industry standard for provenance implementation, serving as a nutrition label for digital content. Just this month, we were thrilled to welcome announcements from OpenAI, Meta and Google that they will adopt the C2PA standard to help label AI-generated content across their platforms and further provenance efforts.
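The core idea behind provenance metadata can be sketched in a few lines of code. The example below is a deliberately simplified toy, not the actual C2PA format: real Content Credentials are structured manifests defined by the C2PA specification and are cryptographically signed, while this sketch only binds a hypothetical edit history to a hash of the content so tampering can be detected.

```python
import hashlib

def make_manifest(content: bytes, tool: str, actions: list[str]) -> dict:
    """Build a toy provenance manifest binding an edit history to a content hash.

    Illustrative only: real Content Credentials follow the C2PA spec and
    carry cryptographic signatures, not a bare dictionary like this one.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claim_generator": tool,   # hypothetical name of the creating application
        "actions": actions,        # e.g. ["created", "ai_generated"]
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content on hand matches the hash the manifest vouches for."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

image = b"example image bytes"
manifest = make_manifest(image, "ExampleEditor 1.0", ["created", "ai_generated"])
print(verify_manifest(image, manifest))                # True: content unchanged
print(verify_manifest(image + b"edit", manifest))      # False: content altered
```

Even this toy version shows why provenance helps good actors be believed: anyone holding the content and its manifest can independently recompute the hash and confirm the history attached to it, rather than taking the content at face value.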

The Accord sets a new precedent in the comprehensive and global effort to combat misinformation. It’s going to take all of us working together to ensure AI can realize its promise and potential for good.