The road ahead for responsible innovation
The road to responsibility is filled with choices. As a leading technology company, Adobe’s products touch billions of people, and the decisions we make about how we ship — not just what we ship — matter. Just a year ago, we launched Adobe Firefly, our first generative AI foundation model, and throughout the development process, we thought deeply about commercial safety, creator protection and AI policy ideas that foster innovation while supporting safety, and addressing the implications of AI on deepfakes. That’s a big agenda, but AI is a transformational technology that requires transformational thinking.
Prioritizing commercial safety
We chose to train our first foundation model for Firefly on licensed content to offer our customers a solution that avoids the legal copyright battles over how generative AI models are trained. That was a strategic tradeoff and a noticeable departure from the prevailing market approach to training AI models, as we know that more data can help make AI more accurate and lessen bias. But we took on the technical challenge of making Firefly best-in-class while trained on licensed content, and we are pleased that the marketplace has responded positively by affirming the value of a commercially safe offering that doesn’t infringe on third-party IP rights.
I had dinner with a General Counsel of a Fortune 500 company recently who told me she was only considering generative AI options that were copyright-safe, as she didn’t want to be trapped building workflows around a model that may have to be ripped out later. Building models in a responsible way has become more important as we learn about the implications of open training. We’ve all seen stories by now of people making harmful images of celebrities or politicians that cause pain and reputational damage; of models that were trained on unsafe images; and of models trained on copyrighted images. AI is simple: what goes in is what comes out. If you don’t train on unsafe images, you minimize the risk that your model will produce them in the first place. When you choose what model to use in your workflow, you are choosing what risk you are willing to take on. That’s why last year we announced the first of its kind generative AI indemnification policy to stand behind our commercially safe models. Given how we train Firefly, we have already minimized the legal risk of using it, but we are happy to provide the extra legal protection for those that want it.
Championing creator rights
Responsible AI innovation goes deeper than solving technical problems. We also have to think about broader societal implications of the technology, both good and bad. For example, how do we help creators protect their economic livelihood in a world where AI needs access to data to thrive? Creators are concerned that AI trained on their work can create competing works in their unique style. To address this, we are working with Congress on establishing a new right in the age of AI to offer creators protection from style misappropriation. Why shouldn’t creators be able to go after people using AI to impersonate them and undercut them in the marketplace for their own work? The Federal Anti-Impersonation Right Act we proposed is one approach to protecting those rights and helping to level the playing field as we navigate a new age of AI-assisted creativity.
AI ethics & standardized safety
Ethics is a critical aspect of responsible innovation. Adobe established an AI Ethics engineering team four years ago to map our engineering practices to our AI Ethics principles of Accountability, Responsibility, and Transparency. Our announcement today of the new AI Assistant in Adobe Experience Platform (AEP) is the latest example of our AI Ethics engineering process to understand and address the AI implications before we ship. Our engineers completed an AI Impact Assessment to help us evaluate the use cases and added technical constraints to improve reliability and accuracy and reduce the potential for harm and bias. With our Privacy team, we ensured that the AEP AI Assistant was architected to respect every customer’s data preferences before training. Using our practical experience in building and governing AI for ourselves and thousands of our clients, we have taken these lessons to governments and companies around the world to drive the right balance between innovation and responsibility.
We signed the voluntary White House AI commitments and provided input on the White House Executive Order on AI, worked with the European Commission to provide feedback on the recently passed AI Act, and recently joined leading technology companies including Microsoft, Google, OpenAI, TikTok and others in signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” In order to deliver a successful AI strategy, companies must build governance internally but also collaborate externally, to ensure that the AI they are delivering will meet the evolving criteria set out in regulations globally. With expected government action aimed at combating misinformation, including placing new requirements on platforms and AI model providers, being prepared and ready to respond has never been more important.
Our latest policy proposal, which focuses on accelerating AI innovation while enabling responsibility at scale, is to establish a government-administered, standardized set of test prompts to provide clear and achievable standards for bias, safety, and harm. This approach is informed by the practicality we have learned administrating our own AI ethics engineering program over the last four years. Such a “test prompt bank” would have pre-approved prompts designed to elicit different safety and ethical issues in any model. In addition, it would have an approved set of benchmarks for each ethical dimension to test against. With the prompt bank and the benchmark, model makers could objectively know how their models are performing against government-approved safety standards.
We believe industry and government should come together to create global prompt banks to establish global standards for safety. This approach gives companies clear standards and accountability. Meet the standards, and you can ship. Don’t meet the standards, you should fix it. Certainty drives investment, investment drives innovation, and this is a great place for governments to lead the way in building an AI economy.
The fight against harmful deepfakes
Finally, with generative AI making it easy for anyone to create and disseminate deceptive content in mere seconds, the problem of deepfakes is rising, threatening elections and causing consumer confusion every day. We are proud of the work Adobe is doing in leading an industrywide approach to address deepfakes by promoting transparency and provenance with Content Credentials. Content Credentials are like a nutrition label for digital content. By providing important context such as who, where, when, and what tool was used to create or edit a piece of content, we can give consumers the information they need to understand what’s true. We are very excited to see momentum build — from Leica shipping the first camera with Content Credentials embedded within it to OpenAI, Meta, Google and BBC announcing their support of the standard just a few weeks ago. With global elections looming, everyone, from industry to governments to the public, needs to work together to restore trust in digital content.
Customer choice
At Adobe, we are committed to providing our customers choice: do they want a commercially safe model to use without fear? Do they want to use a third-party large language model (LLM) to add text to their AI-assisted workflows? We are committed to providing them with choice, best-in-class technology, and responsible innovation. The road ahead will be an exhilarating journey, marked by both groundbreaking innovation and difficult, unexpected choices. We’re excited to take this journey with you.