Adobe unveils new AI ethics principles as part of commitment to responsible digital citizenship

The promise of artificial intelligence (AI) is limitless. AI has the power to take human creativity and intellect to new levels through deeper insights, accelerated tasks and improved decision-making that will truly change life and business as we know it.

But, like all new and powerful technologies, AI comes with its own challenges. Whether in its datasets or its algorithms, AI is a technology developed by humans, and humans are biased by nature because of our experiences, backgrounds, education and many other factors. Humans are not perfect, so AI will not be perfect either. But humans can learn and improve, and so can the AI we build.

At Adobe, as we innovate and harness the power of AI in our tools, we are dedicated to addressing the harms that biased training data can introduce into our AI. AI Ethics is one of the core pillars of our commitment to responsible digital citizenship, a pledge from Adobe to address the consequences of technology innovation as part of our role in society.

That’s why we have spent the last two years thoughtfully building out our Adobe AI Ethics Principles: responsibility, accountability and transparency. These three principles reflect who we are and serve as our guiding light for the thoughtful deployment of AI in our products:

Responsibility: We will approach designing and maintaining our AI technology with thoughtful evaluation and careful consideration of the impact and consequences of its deployment. We will ensure that we design for inclusiveness and assess the impact of potentially unfair, discriminatory, or inaccurate results, which might perpetuate harmful biases and stereotypes. We understand that special care must be taken to address bias if a product or service will have a significant impact on an individual’s life, such as with employment, housing, credit, and health.

Accountability: We take ownership over the outcomes of our AI-assisted tools. We will have processes and resources dedicated to receiving and responding to concerns about our AI and taking corrective action as appropriate. Accountability also entails testing for and anticipating potential harms, taking preemptive steps to mitigate such harms, and maintaining systems to respond to unanticipated harmful outcomes.

Transparency: We will be open about and explain our use of AI to our customers, so they have a clear understanding of our AI systems and their application. We want our customers to understand how Adobe uses AI, the value AI-assisted tools bring to them, and what controls and preferences they have available when they engage with and use Adobe’s AI-enhanced tools and services.

Since AI lives at the intersection of technology and human insight, we needed a range of perspectives to help us create our principles and determine our approach. As part of our commitment to AI Ethics, we launched an AI Ethics Committee and Review Board, which includes experts from around the world with diverse professional backgrounds and life experiences, and we’re confident in their ability to guide our efforts. These groups are tasked with making recommendations that help guide development teams and with reviewing new AI features and products to ensure they meet Adobe’s three guiding principles above.

Last fall, our AI Ethics Review Board worked with Adobe development teams on Neural Filters, AI-powered filters that let Photoshop users change someone’s age or expression, or even turn a black-and-white photo into color, simply by applying a filter. With new technologies like Neural Filters, we are carefully studying their use and impact in the real world. We know that perfect AI is not possible, since data comes from humans, and humans are biased. But it is possible, and important, to get better, and we can do that with the assistance of our millions of users. That is why, every time a filter is applied in Neural Filters, Photoshop gives users the opportunity to report whether the AI filter has produced content that makes them feel less valued. The user can even send before-and-after images to Adobe engineers for further inspection. This constant feedback loop with our user community is the best way to ensure our tools minimize bias and uphold our values as a company.
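To make the idea of this feedback loop concrete, here is a minimal, hypothetical sketch in TypeScript of what an in-product bias report might look like. The `BiasFeedbackReport` shape, the `submitBiasFeedback` function and the endpoint URL are illustrative assumptions only; they do not describe Adobe’s actual Neural Filters implementation.

```typescript
// Hypothetical sketch of an in-product bias-feedback report.
// All names and the endpoint below are illustrative assumptions,
// not Adobe's actual implementation.

interface BiasFeedbackReport {
  filterName: string;            // which filter was applied
  feltMisrepresented: boolean;   // user felt the result made them feel less valued
  comment?: string;              // optional free-text description
  beforeImage?: Blob;            // optional, shared only with user consent
  afterImage?: Blob;             // optional, shared only with user consent
}

// Assumed submission function, shown only to illustrate the feedback loop.
async function submitBiasFeedback(report: BiasFeedbackReport): Promise<void> {
  const form = new FormData();
  form.append("filterName", report.filterName);
  form.append("feltMisrepresented", String(report.feltMisrepresented));
  if (report.comment) form.append("comment", report.comment);
  if (report.beforeImage) form.append("before", report.beforeImage);
  if (report.afterImage) form.append("after", report.afterImage);

  // A review team could triage these reports and feed confirmed issues
  // back into model evaluation and retraining.
  await fetch("https://example.com/ai-feedback", { method: "POST", body: form });
}
```

In a design like this, the optional before-and-after images would only be sent with explicit user consent, keeping the report itself lightweight while still giving engineers enough context to investigate a potential bias issue.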

Neural Filters was the first AI feature that the committee reviewed against Adobe’s AI Ethics Principles, and with that milestone we are now ready to scale the review process to the hundreds of AI-powered features we develop each year. We are very excited about our progress over the past two years and are confident in where we are going.

Please join us in helping shape the future of AI into something good for all, by sharing our commitment to responsible, accountable, and transparent management of AI.