Putting principles into practice: Adobe’s approach to AI Ethics
Image source: your123, Adobe Stock.
Today, companies are increasingly infusing artificial intelligence (AI) and machine learning (ML) into their products to better serve their customers. At Adobe, hundreds of features powered by Adobe Sensei, our AI engine, are helping enterprises deliver more personalized experiences, empowering people to take their creativity to the next level, and allowing users to achieve in seconds what otherwise could take hours. With more than a decade of experience integrating AI and ML across our tools, we have seen that AI done right can enhance the creation, delivery, and optimization of digital experiences at scale.
It's also clear that today’s consumers see the value of emerging technologies: According to a recent Adobe survey, 70 percent of consumers say they trust AI to improve their experiences with a brand. But in order to harness AI in the most valuable way, we must recognize the unique challenges this technology brings. AI is only as good as the data on which it’s trained. And even with good data, you can still end up with biased AI, which can unintentionally discriminate or disparage and cause people to feel less valued.
At Adobe, we are constantly striving to make AI better for everyone. That’s why we’ve established a comprehensive AI Ethics program to ensure we’re developing AI technologies in an ethical, responsible, and inclusive way for our customers and communities. As one of the world’s most innovative companies for nearly 40 years, Adobe takes the impact of our technology as seriously as the development of the technology itself.
More than just a buzzword
Two years ago, when we set out to create our AI Ethics program, we took a deliberative and thoughtful approach. This began with establishing the ethical principles we follow when developing AI-powered technologies.
We set up an AI Ethics Committee composed of a cross-functional group of Adobe employees with diverse gender, racial, and professional backgrounds, from research engineers to product developers to legal teams and more. Having a diverse committee is important for evaluating innovations from various perspectives, and it can help identify potential issues that a less diverse team might not see. Together, the Committee came up with Adobe's AI Ethics principles of accountability, responsibility, and transparency.
With accountability and responsibility, we are saying that it is no longer sufficient to deliver the world's best technology for creating digital experiences. We want to ensure our technology is designed for inclusiveness and respects our customers, communities, and our Adobe values. Specifically, that means developing new systems and processes to evaluate whether our AI is creating harmful bias. Transparency means being open with our customers about how we use AI and providing feedback mechanisms to report concerns about our AI-powered tools or our AI practices. We want to bring our customers into the journey and work with our community to design and implement AI responsibly.
Having a concise, simple, and actionable set of principles that align to our specific corporate values is key to operationalizing them throughout our engineering structure.
Practicing what we preach
So what exactly does "operationalizing our principles" look like? We created standardized processes spanning design, development, and deployment that include training, testing, and a review process overseen by a diverse AI Ethics Review Board.
As part of this process, engineers complete a multi-part AI Ethics Assessment to capture the potential ethical impact of AI-powered innovations. For most products and features, the impact assessment shows no major ethical risk. Take, for example, an Adobe font selector tool that uses AI to help customers choose different typefaces. After an initial assessment, the product met our standards for approval. In other cases, products undergo further review.
Prior to its release in fall 2020, we reviewed the Photoshop feature Neural Filters, which uses AI/ML to let users apply non-destructive, generative filters that create elements not previously present in an image. Think: adding color to black-and-white photos, changing someone's expression from sad to happy, swapping out hairstyles, etc. We knew that Neural Filters would empower creators to make compelling adjustments and speed up the image editing process. But we wanted to make sure the filters produced results that closely reflected real human characteristics and didn't perpetuate harmful biases. Thanks to a rigorous, technical review process, Neural Filters reflects and respects human characteristics and has become a major hit among creators around the world.
Collaboration is key
While different companies may have their own unique AI considerations, the need for ethical standards when it comes to AI is universal. As we continue to evolve Adobe's AI Ethics program, we're sharing our learnings and best practices with industry peers. We contributed to BSA | The Software Alliance's AI Ethics industry code of practice so that other companies can leverage work that has already been done in the space. And as a charter member of the Partnership on AI, we collaborate with academic, civic, industry, and media organizations to address the most important and difficult questions concerning the future of this emerging technology.
We recognize that AI development and ethical review of AI is an ongoing process. As we continue to learn and grow, we will work together with our employees, customers, and communities to deliver innovations that reflect our Adobe values and make good on our commitment to the responsible development of technology.
Learn more about Adobe's AI Ethics principles here.
This story was originally published on CNBC.