
The State of AI Bias: 6 Things You Need to Know

Ted Kwartler, Haniyeh Mahmoudian, Colin Priest
January 25, 2022

AI has shifted from a possible future to an increasingly adopted present. But concern about AI bias is rising nearly as quickly as AI adoption itself—and these worries are legitimate in a world where bias can lead to loss of revenue, trust, customers, and employees.

DataRobot surveyed more than 350 companies to gauge how AI bias perceptions, issues, prevention and management tactics have evolved since 2019. We interviewed our Trusted AI team to break down the results. “We wanted a pulse check on the industry at the decision-maker level and the concerns these leaders had about using AI,” explains Ted Kwartler, VP, Trusted AI at DataRobot. And having increasingly seen headline stories about AI, Colin Priest, DataRobot Global Lead – AI Governance, wanted to discover “whether these were isolated cases or indicative of a broader problem.” Additionally, the team was keen to understand how mature AI governance was in enterprises, how aware C-level executives were of the issues, and how companies were responding to AI bias.

Here’s what we discovered.

1. Concerns about AI bias are growing—and they're warranted

54% of IT leaders said they were very or extremely concerned about AI bias—up from 42% in 2019. This was driven by fears relating to loss of consumer trust, compromised brand reputations and other factors. Such worries aren’t hypothetical either—62% of organizations have lost revenue due to bias, and 61% have lost customers.

These negative impacts were partly due to algorithms contributing to discrimination based on age, gender, and race, despite organizations putting guardrails in place. Businesses must therefore reevaluate their safeguards, but Haniyeh Mahmoudian, Ph.D., Global AI Ethicist at DataRobot, believes executives must also respond to greater public awareness.

“You keep seeing AI bias headlines pop up, which wasn’t the case a few years ago. These now really draw people’s attention,” says Haniyeh. Because of this, policymakers are more aware of algorithmic bias—and are increasingly considering legislation to deal with it. With that idea gaining traction, Haniyeh thinks businesses are now “more attentive in preparing themselves to work on ways to measure and mitigate bias.”

2. There’s a disconnect between confidence and reality

Most organizations attempt to mitigate AI bias using a range of measures. These include data quality checks (69% of respondents), employee training (51%), hiring experts (51%) and measuring AI decision-making factors (50%). Furthermore, 84% plan to invest more in AI bias prevention schemes over the coming 12 months.

Confidence is high, with 71% of respondents extremely or very confident in their ability to identify bias—up from 64% in 2019. But is such confidence justified? “More than a third of enterprises have suffered losses due to AI bias, and yet 75% of US respondents were at least very confident in their ability to identify AI bias,” says Colin. “This combination of frequent failures and extremely high confidence levels demonstrates one of the greatest challenges facing AI success is naivety and overconfidence.”

Haniyeh posits this might stem from a “mismatch between what executives think is happening versus the reality of their business.” In other words, executives have confidence in their systems, but might not be thinking about AI bias, might not have relevant tests in place, and might not have encountered use cases that would surface bias.

3. Regulation will help solve challenges in identifying and preventing AI bias

While respondents—unsurprisingly—didn’t allude to overconfidence as a challenge in eliminating AI bias, other factors were cited, including understanding the reasons for specific AI decisions, understanding patterns between input values and decisions, and developing trustworthy algorithms.

Two-thirds of companies erect ‘guardrails’ to automatically detect bias on feature sets, but an overwhelming 97% suggested instances of human bias and error would be reduced by platforms with standardized workflows and automated bias detection features. Surprisingly, this dovetailed with a fondness for regulation to drive standardization. “It’s shocking that 81% of respondents welcome regulation in this space,” says Ted. “You never hear that from senior leaders in industry, who usually argue regulation stifles innovation and destroys jobs.”
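To make that kind of guardrail concrete, here's a minimal sketch of one common automated check: a disparate impact screen based on the "four-fifths rule", which flags any group whose favorable-outcome rate falls below 80% of the best-off group's. The survey doesn't describe any specific implementation, so the loan-decision data and the `disparate_impact_flags` helper below are purely illustrative.

```python
# Minimal sketch of an automated bias "guardrail": flag any group whose
# favorable-outcome rate falls below 80% of the best-off group's rate
# (the "four-fifths rule" commonly used as a disparate-impact screen).
import pandas as pd

def disparate_impact_flags(df: pd.DataFrame, protected: str,
                           outcome: str, threshold: float = 0.8) -> dict:
    """Return each group's favorable-outcome rate relative to the
    best-off group; ratios below `threshold` indicate possible bias."""
    rates = df.groupby(protected)[outcome].mean()
    ratios = rates / rates.max()
    return {group: (ratio, ratio < threshold) for group, ratio in ratios.items()}

# Hypothetical scored loan decisions: 1 = approved, 0 = denied.
decisions = pd.DataFrame({
    "age_band": ["18-30"] * 5 + ["31-50"] * 5,
    "approved": [1, 0, 0, 1, 0,   1, 1, 1, 0, 1],
})
for group, (ratio, flagged) in disparate_impact_flags(
        decisions, "age_band", "approved").items():
    print(f"{group}: ratio={ratio:.2f} flagged={flagged}")
    # 18-30: ratio=0.50 flagged=True -- approval rate is half the best group's
```

A real platform would run this kind of screen across every candidate protected feature and decision threshold; the value of automating it is exactly what respondents identified: it removes reliance on a human remembering to check.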

Haniyeh wonders if this appetite for regulation again points to industry leaders now being more informed about trust and ethics issues in AI and “becoming comfortable with regulation, because they know having a biased system can result in mistrust from customers and lead to reputational damage.” There’s also, thinks Ted, a realization that this technology is “powerful and beneficial” but presents “ambiguity around appropriate use,” which thoughtful regulation could clear up, increasing the velocity of adoption.

Defining a uniform way to approach AI bias would bring clarity, especially in high-risk use cases. This, explains Ted, means “companies will know explicitly when they’d need conformity assessments, without which they’re running blind.” Executives don’t like risk and uncertainty; they’d prefer, thinks Ted, a “common goal of developing a business-as-usual solution to AI bias.”

4. We need to understand regional/cultural differences

The US-specific confidence figure called out earlier wasn’t shared by UK respondents. Broadly, British executives were more worried about losing customer trust, while US executives were more confident in their ability to identify bias. This highlights the importance of understanding and responding to cultural differences when figuring out how to address AI bias.

According to Colin, differences in risk aversion and confidence are ingrained at a societal level: “Americans have a culture of pushing boundaries, accepting failure, and confidence—at times, over-confidence. The UK has a more introspective and apologetic culture that’s more risk-averse and less rash.” Ted agrees, pointing to one stat in particular: “If you’re making broad, sweeping generalizations of American executives, they’re more confident and less reflective. That’s shown in the report where it states 17% of companies have put a model in production that demonstrated bias. That takes a certain level of confidence!”

But it’s also worth being mindful of regulatory impacts when assessing differences. Long-standing EU membership strengthened UK consumer regulation, so the country is more comfortable with restrictive rules. As Colin notes: “It’s therefore no surprise the UK has a more cautious approach to AI, with consumers top of mind.” By contrast, the US is, suggests Haniyeh, “more open” and executives there “might think if issues are mitigated, they’ve done their work and can move on.”

These differences have further important ramifications, though. Haniyeh says the UK “doesn’t have access to the ground truth, because it cannot collect information about gender or race.” This makes it hard to test whether a system is biased and to have confidence in it, and risks “fairness by unawareness”: the flawed assumption that a model that never sees a protected attribute cannot discriminate based on it.
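To illustrate why unawareness isn't a safeguard, here's a minimal sketch, on entirely synthetic data, of a common proxy audit: try to predict the withheld protected attribute from the features the model does see. If that works well above chance, those features still encode the attribute—so dropping it removed the ability to measure bias without removing the bias itself. The feature names and data are hypothetical.

```python
# Sketch of why "fairness by unawareness" is risky: even with the protected
# attribute dropped, remaining features can act as proxies for it. A common
# audit is to check how well the protected attribute can be predicted from
# the features the model actually uses. (Synthetic data, hypothetical features.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2_000
gender = rng.integers(0, 2, n)                        # protected attribute (withheld from model)
postcode_income = gender * 1.5 + rng.normal(0, 1, n)  # feature correlated with gender
hours_browsed = rng.normal(0, 1, n)                   # unrelated feature
X_model = np.column_stack([postcode_income, hours_browsed])  # what the model sees

# Accuracy well above 0.5 means the "unaware" features still encode gender.
proxy_acc = cross_val_score(LogisticRegression(), X_model, gender, cv=5).mean()
print(f"Protected attribute recoverable with accuracy ~{proxy_acc:.2f}")
```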

5. You need the right people involved

There’s increasing interest in working with third parties to solve the challenges of AI bias. 47% of respondents already rely on third-party AI bias experts or consultants, and 64% say they would hire an AI firm to make sure their organization’s algorithms were not biased.

Haniyeh thinks this is a sensible approach—not least when considering the mismatch issues mentioned earlier: “If you’re concerned about bias, you might only test in one definition, but somebody else would come and test in a different way. So in your original way of evaluating, you might be fair; but in a different one, you’d be biased.”
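A minimal sketch makes Haniyeh's point concrete: below, hypothetical decisions for two groups satisfy one common fairness definition (demographic parity, i.e., equal selection rates) while failing another (equal opportunity, i.e., equal true-positive rates). The data and helper functions are illustrative, not drawn from the survey.

```python
# The same predictions can look fair under one definition and biased under
# another: these hypothetical decisions satisfy demographic parity (equal
# selection rates) yet fail equal opportunity (unequal true-positive rates).
import numpy as np

def selection_rate(y_pred):
    """Fraction of the group receiving the favorable decision."""
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    """Fraction of truly qualified members receiving the favorable decision."""
    return y_pred[y_true == 1].mean()

# Hypothetical labels (qualified or not) and model decisions for two groups.
y_true_a = np.array([1, 1, 0, 0]); y_pred_a = np.array([1, 0, 1, 0])
y_true_b = np.array([1, 0, 0, 0]); y_pred_b = np.array([1, 1, 0, 0])

print("Demographic parity:",
      selection_rate(y_pred_a), "vs", selection_rate(y_pred_b))   # 0.5 vs 0.5: "fair"
print("Equal opportunity: ",
      true_positive_rate(y_true_a, y_pred_a), "vs",
      true_positive_rate(y_true_b, y_pred_b))                     # 0.5 vs 1.0: biased
```

An internal team that only ever checks selection rates would sign off on this model; an outside auditor applying a different definition would not—which is exactly the value of a second set of eyes.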

Naturally, context determines when outsourcing and partnerships should be considered. Haniyeh reckons for simple marketing, it’s unlikely you’d need a third party to audit your use cases. But for high-risk use cases like hiring, it’s appropriate. That said, she notes working with others can be beneficial for any organization that thinks it’s too close to see problems, or that wants to be more confident in work that’s being done, and “see what’s found in respect to what the company’s already observed.” Third parties can also help audit a system based on requirements that will soon arise from regulation.

6. The future will be collaborative, greener and better managed

A key aim of the survey was to explore how AI bias—and ways to prevent it—will continue to evolve, from the perspective of industry figures looking at the challenge from a holistic, systemic level. For Haniyeh, a key finding was that the “notion of using a third-party or consultants for measuring AI bias has increased,” a trend she expects to continue. Meanwhile, Ted says the “green impact of all this computer power needs to be understood,” and expects AI to become “wrapped up in the umbrella of ESG,” with stakeholders having to explore the environmental impact. “And that’s interdisciplinary,” he adds, which will force companies to look into relevant governance processes and regulation.

Colin adds that rapid progression and change could soon usher in a range of benefits: “I’m optimistic we will observe the evolution of AI governance, a deeper awareness of potential threats, and a transition from crisis management to reliable business-as-usual processes to manage these risks.” And this needs to happen, because companies have a lot to lose if they don’t address AI bias.

The onus now is on companies to be responsible and ethical when leveraging AI. They must use relevant resources to ensure the success of their efforts, including educating employees, putting guardrails in place, and working with third-party expertise to support bias initiatives. The last of those is perhaps most important of all: humans can create flaws in AI and be overconfident in their own solutions, but other people can examine systems with fresh eyes and advise on how to optimize them, remove biases, and therefore remove risks that would otherwise negatively impact an organization.

Ted Kwartler
VP of Trusted AI, DataRobot

Haniyeh Mahmoudian
Global AI Ethicist, DataRobot

Colin Priest
Global Lead, AI Governance, DataRobot
