The building blocks of Microsoft’s responsible AI program
The pace at which artificial intelligence (AI) is advancing is remarkable. As we look out at the next few years for this field, one thing is clear: AI will be celebrated for its benefits but also scrutinized and, to some degree, feared. It remains our belief that, for AI to benefit everyone, it must be developed and used in ways that warrant people’s trust.
Over the past few years, principles for developing AI responsibly have proliferated, and there is broad agreement on the need to prioritize issues such as transparency, fairness, accountability, privacy, and security. Yet principles alone are not enough. The hard and essential work begins when you endeavor to turn those principles into practices, and that is the work we and many of our customers and partners are engaged in now. Below we share some of the decisions we have made along the way, as well as the lessons we have learned, in the hope that they benefit others and shed light on our thinking.
Governance as a foundation for compliance
Microsoft’s approach, grounded in our AI principles, focuses on proactively establishing guardrails for AI systems so that their risks are anticipated and mitigated and their benefits are maximized.
Our responsible AI governance model borrows from what we learned in successfully integrating privacy, security, and accessibility into our products and services.
At the center, we have three teams working together to set a consistent bar for responsible AI across the company:
- The Aether Committee, whose working groups draw on top scientific and engineering talent to provide subject-matter expertise on the state of the art and emerging trends
- The Office of Responsible AI, which sets our policies and governance processes
- The Responsible AI Strategy in Engineering (RAISE) group, which enables our engineering groups to implement our responsible AI processes through systems and tools.
We have also come to rely heavily on our Responsible AI Champs, who sit in engineering and sales teams across the company. They raise awareness about Microsoft’s approach to responsible AI and cultivate a culture of responsible innovation in their teams.
Developing rules to enact our principles
Our Responsible AI Standard sets out the requirements that teams building AI systems must follow. Preparing the Standard has been an iterative process; we have worked closely with our engineering and sales teams to learn what works and what does not. Over time, we will build out a set of implementation methods that teams can draw on to meet each of the Standard’s requirements. We expect this to be a cross-company, multi-year effort and one of the most critical elements of operationalizing responsible AI across the company.
Drawing red lines and working through the grey areas
In the fast-moving and nuanced practice of responsible AI, it is impossible to reduce all the complex sociotechnical considerations to an exhaustive set of pre-defined rules.
Our sensitive uses review process has helped us navigate the grey areas that inevitably arise, and in some cases it has led to new red lines: we have declined opportunities to build and deploy specific AI applications because we were not confident that we could do so in a way that upheld our principles.
For example, our sensitive uses review process helped us determine that a local California police department’s real-time use of facial recognition on body-worn cameras and dash cams in patrol scenarios was premature, and we turned down the deal. That same process helped us form the view that there needed to be a societal conversation about the use of facial recognition and that laws needed to be established. A red line was drawn for this use case, and in 2018 Microsoft called for governments to regulate facial recognition.
Evolving our mindset and asking hard questions
Another key lesson we have learned is how important it is for all of our employees to think deeply about, and account for, the sociotechnical impacts of the technology they are building. That is why we have developed company-wide training and practices that help our teams build the muscle of asking ground-zero questions such as, “Why are we building this AI system?” and, “Is the AI technology at the core of this system ready for this application?”
Some of our teams have experienced galvanizing moments that accelerated progress, such as triaging a customer report of an AI system behaving in an unacceptable way. We have also seen teams wonder whether being “responsible” would be limiting, only to realize later that a human-centered approach to AI results not just in a responsible product, but in a better product overall.
Pioneering new engineering practices
Our experience with privacy, and with GDPR in particular, taught us the importance of engineered systems and tools for enacting a new initiative at scale and for ensuring that key considerations are baked in by design.
Tooling, particularly in its most technical sense, cannot do the deep, human-centered thinking that must go into conceiving an AI system. Still, we think it is important to develop repeatable tools, patterns, and practices wherever possible, both to drive consistency and to free the creative thought of our engineering teams for the most novel and unique challenges.
In recognition of this need, we are embarking on an initiative to build out the “paved road” for responsible AI at Microsoft: the set of tools, patterns, and practices that help teams easily integrate responsible AI requirements into their everyday development practices.
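To make this concrete, below is a minimal sketch of the kind of repeatable check such a paved road might standardize. It uses Fairlearn, an open-source fairness-assessment toolkit that grew out of Microsoft’s responsible AI work; this post does not prescribe any particular tool, and the data, groups, and metrics in the sketch are purely illustrative.

```python
# A minimal sketch of a repeatable fairness check of the kind a "paved
# road" might standardize. Fairlearn is a real open-source toolkit, but
# the labels, predictions, and sensitive attribute below are hypothetical
# placeholders for illustration only.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Hypothetical ground truth, model predictions, and a sensitive attribute.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
group  = ["A", "A", "A", "B", "B", "B", "B", "A"]

# Disaggregate standard metrics by group to surface performance gaps.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(mf.overall)       # metrics computed over the whole dataset
print(mf.by_group)      # the same metrics broken down per group
print(mf.difference())  # the largest between-group gap for each metric
```

The value of a check like this lies less in the particular metrics chosen than in making the disaggregated view a routine, low-friction part of everyday development, so that potential gaps are surfaced early rather than after deployment.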
Sharing our efforts to develop AI responsibly
We are acutely aware that, as the adoption of AI technologies accelerates, new and complex ethical challenges will arise. While we recognize that we do not have all the answers, the building blocks of our approach to responsible AI at Microsoft are designed to help us stay ahead of these challenges and enact a deliberate and principled approach. We are committed to sharing what we learn and working closely with customers and partners to make sure we all understand how to build and use AI responsibly.
Along those lines, on April 27th and 28th, Microsoft chief digital officer Andrew Wilson will address Adobe Summit attendees in his session, “How to Delight Customers and Increase Market Share Through AI.” We look forward to continuing the discussion and learning together.