Forrester Analyst Provides 3 Principles For ‘Ethical AI’


As businesses ramp up their use of artificial intelligence (AI), marketing and CX leaders must ensure their teams are building unbiased, equitable, and ethically responsible AI systems that don’t unwittingly cross any regulatory lines or, perhaps worse, turn customers off.

“The world of the future is going to be awash in artificial intelligence,” said Brandon Purcell, principal analyst at Forrester, during a session at the firm’s CX North America virtual conference last month. “Artificial intelligence is increasingly going to be making some very critical decisions about people’s freedom, livelihoods, health, and access to credit, and so we need to be very deliberate in the way that we create these systems.”

A number of different entities, from corporations to governments to industry groups, already have “their own ethical AI frameworks that denote the principles that they’d like to strive toward in creating AI,” Purcell added. These frameworks share three key principles, he said: fairness and bias prevention, trust and transparency, and accountability.

Principle #1: Fairness and bias prevention

Forrester research shows that people will walk away from a brand if they learn its AI system is making biased decisions or discriminating against anyone, Purcell said.

It’s not that Purcell thinks companies are deliberately building AI systems with bias. So how does a machine learning algorithm become biased in the first place?

Two ways, Purcell explained. The first, algorithmic bias, occurs when a dataset is not representative of the entire population. For example, a facial-recognition training dataset of only people with brown hair would not operate effectively when analyzing people with blond or red hair.

“The solution here is for data scientists and developers to try to find what they call ‘IID data’ – independent and identically distributed data – that reflects the distribution and diversity of the real world,” he said.
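In practice, a representativeness check can be as simple as comparing a training sample’s group shares against known population shares. The sketch below is a minimal, hypothetical Python example (the hair-color groups and population figures are illustrative, not from Forrester’s research):

```python
from collections import Counter

def group_shares(labels):
    """Return each group's share of the dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def representation_gaps(sample_labels, population_shares):
    """Compare a training sample's group shares to known population shares.

    Large negative gaps flag groups the model will rarely see in training.
    """
    sample_shares = group_shares(sample_labels)
    return {
        group: sample_shares.get(group, 0.0) - pop_share
        for group, pop_share in population_shares.items()
    }

# Hypothetical hair-color example echoing the facial-recognition scenario.
training_hair = ["brown"] * 940 + ["blond"] * 50 + ["red"] * 10
population = {"brown": 0.75, "blond": 0.20, "red": 0.05}

for group, gap in representation_gaps(training_hair, population).items():
    print(f"{group}: {gap:+.2%} vs. population share")
```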

The second way is human bias, and it’s “a more pernicious problem,” Purcell said. “As a species, we have not been particularly fair to each other over the course of our history, and this inequity is actually codified in data. When we use this data to teach a machine learning algorithm to do its job, it’s going to end up not just making the same unjust decisions but doing it at a scale that wasn’t before possible when humans were making those decisions.”

The solution, he said, is to “modify the training data or modify the deployment engine in a way that reflects a more just and equitable outcome.”

That in itself requires a multifaceted approach that relies on a diversity of organizational viewpoints – from data scientists, to lines of business, to the executive level – as well as customers.
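One widely used way to “modify the training data” along these lines is reweighing: giving each example a weight so that, in the weighted data, group membership and outcome are statistically independent. The sketch below is a hypothetical illustration of that technique (the lending numbers are invented), not a method Purcell prescribes:

```python
from collections import Counter

def reweighing_weights(groups, outcomes):
    """Compute per-example weights that decouple group membership from outcome.

    Each (group, outcome) pair is weighted by expected_count / observed_count,
    so favorable outcomes are no longer correlated with any one group.
    """
    n = len(groups)
    group_counts = Counter(groups)
    outcome_counts = Counter(outcomes)
    pair_counts = Counter(zip(groups, outcomes))

    weights = []
    for g, y in zip(groups, outcomes):
        expected = group_counts[g] * outcome_counts[y] / n
        observed = pair_counts[(g, y)]
        weights.append(expected / observed)
    return weights

# Hypothetical lending data: group A is approved far more often than group B.
groups   = ["A"] * 80 + ["B"] * 20
outcomes = [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15  # 1 = approved

weights = reweighing_weights(groups, outcomes)
print(f"weight on approved A: {weights[0]:.2f}, approved B: {weights[80]:.2f}")
```

Examples from the over-favored group are weighted down and examples from the under-favored group are weighted up, nudging the model toward a more equitable outcome without touching the raw records.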

“[There are] 22 different definitions of fairness, and so hopefully you appreciate the difficulty in just saying, ‘OK, we want to be fair,’” Purcell pointed out. “You really need to define fairness standards for each use case for your data scientists and developers.”
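Those definitions can genuinely conflict, which is why a per-use-case standard matters. The hypothetical sketch below scores the same set of credit decisions against two common definitions, demographic parity and equal opportunity, and they disagree (all groups and numbers are invented for illustration):

```python
def positive_rate(preds, groups, g):
    """Share of group g receiving a positive (approve) prediction."""
    selected = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(selected) / len(selected)

def true_positive_rate(preds, labels, groups, g):
    """Share of truly qualified members of group g who are approved."""
    hits = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
    return sum(hits) / len(hits)

# Hypothetical credit decisions: both groups are approved at the same overall
# rate, yet far more qualified applicants are missed in group B.
groups = ["A"] * 10 + ["B"] * 10
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0] * 2          # 1 = actually creditworthy
preds  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0] \
       + [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]              # model approvals

parity_gap = positive_rate(preds, groups, "A") - positive_rate(preds, groups, "B")
opportunity_gap = (true_positive_rate(preds, labels, groups, "A")
                   - true_positive_rate(preds, labels, groups, "B"))

print(f"demographic parity gap: {parity_gap:+.2f}")      # 0.00 -> looks 'fair'
print(f"equal opportunity gap:  {opportunity_gap:+.2f}")  # +0.40 -> not fair
```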

Principle #2: Trust and transparency

Many AI projects fail because the AI was created as a black box, and the business stakeholder who’s going to use the system doesn’t trust it, Purcell said. “Using interpretable or explainable AI is a way of evoking trust,” he said.

But organizations face a trade-off between accuracy and transparency, Purcell said. In a nutshell, the more accuracy a use case demands, the trickier the model becomes to explain. He used the example of determining a person’s credit approval with a conventional machine-learning algorithm (which learns to make predictions from data) versus a neural network (in which algorithms process signals via interconnected nodes, much like our brains).

“In some cases, especially when you have enough data, neural networks are more accurate than their other machine-learning forebears,” but they also lack transparency in their hidden layers, he said. “You can’t pop the hood on a neural network and see how the decision is being arrived at.”
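To make the trade-off concrete, the sketch below (an illustrative assumption, not Forrester’s analysis) fits an interpretable logistic regression and a small neural network on synthetic, credit-style data with scikit-learn. The regression yields one readable coefficient per feature; the network’s knowledge is spread across weight matrices that don’t map to a per-feature explanation:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
# Hypothetical feature names, purely for readable output.
features = ["income", "debt_ratio", "credit_age", "inquiries", "utilization"]

# Interpretable model: each coefficient shows how a feature pushes the decision.
logit = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(features, logit.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")

# Black-box model: the learned weights live in stacked matrices between layers,
# so there is no single number per feature to point at in an explanation.
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("logit accuracy:", round(logit.score(X, y), 3))
print("nn accuracy:   ", round(nn.score(X, y), 3))
print("nn weight matrices:", [w.shape for w in nn.coefs_])
```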

When embarking on an AI project, Purcell suggested using a two-by-two grid (below) that looks at the risk to the company of making a wrong decision vs. the criticality of accuracy.

A grid that measures risk of making a wrong decision vs. the criticality of accuracy.

“This isn’t a one-and-done exercise,” he added. “You want to ensure that you’re continually meeting the requirements of transparency and ‘explainability’ for each use case where you’re using artificial intelligence.”

Principle #3: Accountability

According to Forrester research, only 20 percent of companies said they are building their own AI systems from scratch. Meanwhile, 51 percent said they are building their systems using third-party components – “like a bunch of Lego blocks,” Purcell said – and 48 percent are buying commercially available packaged solutions with AI embedded in them.

At issue? “Any time you’re either cobbling together your own AI or buying AI from a third party, that introduces third-party risk, and third-party risk can sack your AI,” Purcell said.

In other words, if one piece of a system – “with all of these interlocking, interdependent components that interact in complex ways” – goes awry, it can be quite challenging to determine which specific component or entity is responsible.

That is why strict third-party due diligence is necessary, which means “selecting partners and vendors who share your values and understand your space, who understand the ethical implications of using their AI components or their AI technology to make critical decisions for your business, [which also] impacts your customers,” Purcell said. “Perform due diligence early and often.”

