Article 8 min read

7 ways to reduce bias in conversational AI

By Hannah Wren

Last updated March 23, 2022

You may remember Tay, Microsoft’s infamous Twitter chatbot that unintentionally picked up on hate speech and became a racist, sexist jerk overnight. Tay became a cautionary tale.

Since then, our ability to monitor bias in artificial intelligence (AI) has improved, and it’s continuously advancing.

“We’re getting better at measuring data sets and de-biasing algorithms. For instance, someone mathematically measured sexism in a data set that was associating ‘nurse’ with female and ‘doctor’ with male,” said Mikey Fischer, who developed a system that translates natural language into code and recently completed his Ph.D. in Computer Science at Stanford University, specializing in AI and Natural Language Processing.
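
One common way to measure this kind of association is to compare word vectors from a trained embedding against a gender direction. The snippet below is a minimal sketch of that idea in Python; the vectors are made-up placeholders, and in practice you would load real pretrained embeddings such as GloVe or word2vec.

```python
import numpy as np

# Toy word vectors (placeholders). A real audit would load pretrained
# embeddings (e.g., GloVe or word2vec) instead of these made-up values.
vectors = {
    "he":     np.array([0.8, 0.1, 0.3]),
    "she":    np.array([-0.7, 0.2, 0.3]),
    "doctor": np.array([0.5, 0.4, 0.2]),
    "nurse":  np.array([-0.5, 0.5, 0.2]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A simple "gender direction": the difference between a gendered word pair.
gender_direction = vectors["he"] - vectors["she"]

# Project occupation words onto that direction. A strongly positive or
# negative score suggests the data has linked the job to one gender.
for word in ("doctor", "nurse"):
    score = cosine(vectors[word], gender_direction)
    print(f"{word}: gender association score = {score:+.2f}")
```

Debiasing methods typically start from a measurement like this, then neutralize or rebalance the offending direction in the embedding space.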

AI can learn both good and bad things; what it learns depends on what it’s fed. This puts the onus on the businesses creating and using it to be more accountable and to operate carefully and with empathy.

Businesses need AI to create the personalized, effortless experiences customers expect. But they’ll also need to take steps to ensure they don’t birth a bot like Tay that amplifies our worst biases and disenfranchises groups of people.

7 ways to reduce bias in conversational AI

1. Opt for transparency and explainability

Whether we’re talking about Amazon’s facial recognition software mistaking minority members of Congress for criminals or Cambridge Analytica harvesting data to influence the 2016 U.S. presidential election, companies need to offer transparency into how they operate with respect to AI, along with explainability, which means being able to explain why an AI system made a certain decision.

“Companies that are transparent with technology or make it open access tend to get into fewer issues,” said Fischer. “Businesses get into trouble with AI when they over-promise or hide things, not necessarily because the technology isn’t perfect—everyone understands there are limitations to technology.”

Just as companies are transparent about pricing or business values, they can be open about AI: what their technology can and can’t do, and the internal processes around it. This also means being honest when something goes wrong.

Transparent businesses also ensure customers know where they are in the system when interacting with AI, according to Fischer. For instance, California’s “B.O.T.” law requires companies to inform customers in the state when they’re talking to a chatbot instead of a human.

[Related read: Regulating AI—a call for transparency and ethical use]

2. Allow for a diverse set of viewpoints

Ethical tech means both being transparent and allowing for a diverse set of viewpoints, according to Fischer.

In 2017, a video went viral of an automatic soap dispenser that responded only to white hands because its sensor had never been tested on darker skin tones. Another study found that self-driving cars had more trouble detecting pedestrians with dark skin, again because the technology was trained primarily on images of lighter-skinned people.

Businesses need to hire diverse teams and test algorithms on diverse groups—or they’ll get unsettling outcomes that magnify hidden prejudices of a monocultural workforce.

3. Build chatbots that are contextual and use-case specific

Businesses can reduce bias by creating chatbots that are contextually relevant and use-case specific, according to Fischer.

“When AI is set for a task that’s too broad, there is no way for it to be unbiased because a lot of what is ethical is contextually relevant,” said Fischer. “When we’re all forced into a monoculture of technology, it’s not really possible for it to hit all the nuances of a given demographic or whoever the AI is trying to serve.”

Examples of domain-specific chatbots—chatbots set for a specific task—include Bank of America’s Erica, which helps clients manage their finances, and the World Health Organization’s WhatsApp bot that provides users with reliable information around COVID-19.

“It’s about being user and use-case specific so the chatbot has enough context to give it the ability to be ethical,” said Fischer. “If it has a specific task, the user and the system have more of the melding of the minds.”

[Related read: More context leads to better chatbots—and better conversation]

4. Rate chatbots against our own biases

A chatbot is based on any number of inputs we give it, learning from the world the way it is or has been, not as it should be. As a result, we pass on our biases—conscious or unconscious.

“There is no such thing as an unbiased system,” Fischer explained. “AI is always based on some definition of fairness that it’s trying to optimize for and there are many definitions of what fair means.”

The example Fischer gave was a chatbot for creditworthiness. Do you define creditworthiness as the likelihood that someone will pay back a loan, or do you optimize it to maximize profit?

And even when we think we’ve programmed AI without bias, it can learn prejudices we may not realize we have.

“AI systems have millions of parameters and sometimes it’s not immediately clear to a human the reason why a decision was made,” said Fischer. “Even the most careful parents can produce a child that is far from what they expected.”

A bank might find that its creditworthiness algorithm is biased with respect to race and remove race as an input. But the algorithm can still statistically deduce race from other factors, like geographic location or where someone went to college.
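
A rough way to audit for this is to test whether the remaining inputs can predict the removed attribute. The sketch below is hypothetical and runs on synthetic data; if a simple model recovers the protected attribute from the other features well above chance, those features are acting as proxies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic applicant data. 'race' is the removed input; zip_code and
# college are correlated with it, which is what makes them proxies.
n = 5000
race = rng.integers(0, 2, n)
zip_code = race * 0.8 + rng.normal(0, 0.5, n)  # correlated feature
college = race * 0.6 + rng.normal(0, 0.7, n)   # correlated feature
income = rng.normal(0, 1, n)                   # uncorrelated feature
X = np.column_stack([zip_code, college, income])

X_train, X_test, y_train, y_test = train_test_split(X, race, random_state=0)

# Try to predict the protected attribute from the "innocent" features.
probe = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])

# AUC near 0.5 means the attribute isn't recoverable; well above 0.5
# means the remaining features leak it, so dropping the column wasn't enough.
print(f"Protected-attribute AUC from proxy features: {auc:.2f}")
```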

“It’s hard to fully remove discriminating factors,” said Fischer. That’s why, as people responsible for building the AI experiences of the future, we need to rate chatbots against our own biases.

[Related read: 5 innovative ways brands are using bots to help customers connect]

5. Collect better data

Chatbots are a byproduct of the data they receive. AI isn’t magic; it’s subject to biased data in, biased data out.

But not all preconceived notions about a customer are bad. They can create easier, more personalized experiences. For instance, Spotify recommends songs to users based on their previous listening behavior.

The important thing is to ensure an algorithm isn’t using what it already knows about someone to cause harm. This requires analysts to interpret data carefully before feeding it to machines and to evaluate the outcomes that data produces.
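
As a hypothetical illustration of evaluating outcomes, the check below compares a bot’s resolution rate across user groups in an interaction log. The column names are invented; the point is that this kind of audit should run routinely, not once.

```python
import pandas as pd

# Hypothetical interaction log: which group a user belongs to and
# whether the bot resolved their request without escalation.
log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "resolved": [1, 1, 0, 0, 0, 1, 0, 1],
})

# Outcome rate per group. A large gap is a signal to dig into the
# training data and the bot's behavior before anyone is harmed.
rates = log.groupby("group")["resolved"].mean()
print(rates)
print(f"Gap between groups: {rates.max() - rates.min():.2f}")
```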

Collecting data has other nuances, too. For example, much of the data online is extreme.

“Say you were collecting Instagram pictures. Instagram pictures are not a reflection of real life. They’re the crème de la crème,” said Fischer. “A lot of data online is similarly polarizing.”

It helps to collect more routine, less inflammatory data that shows how a system will actually be used, and to rate data specifically against the human values you’ll program into it, like fairness or equal opportunity.

“If you’re not careful about these things, you’ll get unexpected outcomes around biases. The worst-case scenario is unknowingly disenfranchising a certain group of people,” said Fischer.

6. Keep a human in the loop

Businesses get the greatest value from AI when using augmented intelligence—human intelligence and machine intelligence combined.

Machines don’t actually know what they’re doing. “They’re just regurgitating what’s already there,” said Fischer.

That’s why it’s important to keep a human in the loop to evaluate for bias even after a system has been tested, to make sure it works for specific users, and to look at individual use cases as opposed to global demographics, according to Fischer.

It also helps to have a fallback system so customers can contact a real person if they encounter something unexpected or inaccurate.
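
One common way to build that fallback is a confidence threshold: when the bot isn’t sure, hand the conversation to a person. The sketch below assumes a hypothetical classify_intent function standing in for whatever intent model a bot actually uses; only the routing logic is the point.

```python
# Minimal human-in-the-loop routing sketch. classify_intent is a
# hypothetical stand-in for a real NLU model's prediction call.
CONFIDENCE_THRESHOLD = 0.75

def classify_intent(message: str) -> tuple[str, float]:
    """Placeholder: a real system would call its intent model here."""
    return ("billing_question", 0.62)

def route_message(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: hand off to a human agent instead of guessing.
        return f"human_agent (bot unsure, confidence={confidence:.2f})"
    return f"bot_flow:{intent}"

print(route_message("Why was I charged twice?"))
```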

[Related read: 5 ways AI is helping solve human problems]

7. Become end-user-centric

AI is most successful when it serves the person directly using it: the end-user.

Becoming end-user-centric requires a business to use empathy when designing AI systems, display humanity, and operate with a conscience.

Businesses can take a more human approach to AI by empowering the end-user through a feedback loop.

If a chatbot answers customers’ frequently asked questions, a customer might have the ability to give feedback via text, a form, or a thumbs up or thumbs down icon. They can also indicate that they’re unsatisfied by asking to reach a real person.
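
A minimal version of that feedback loop just records the signal alongside the exchange so it can feed later retraining and bias reviews. The structure below is a hypothetical sketch, not any particular product’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of end-user feedback on a single bot answer.
@dataclass
class FeedbackEvent:
    conversation_id: str
    bot_answer: str
    rating: str  # "thumbs_up", "thumbs_down", or "requested_human"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

feedback_log: list[FeedbackEvent] = []

def record_feedback(conversation_id: str, bot_answer: str, rating: str) -> None:
    # Store the signal; a review or retraining job can consume it later.
    feedback_log.append(FeedbackEvent(conversation_id, bot_answer, rating))

record_feedback("conv-123", "Your order ships Tuesday.", "thumbs_down")
print(len(feedback_log), "feedback event(s) captured")
```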

Taking it a step further, Fischer believes AI will be less biased if those it serves can amend it themselves—and that empowering end-users to make AI systems more powerful will lead to the next breakthrough in AI.

Fischer’s Ph.D. focused on creating AI that people can adjust, such as by reframing its preconceived notions, without knowing how to program.

“In my Ph.D., you could program by demonstration and instruct the computer as if you were telling another person. Today, Siri says, ‘I can’t do that for you,’ but in my vision of the future, Siri will say, ‘I can’t do that for you, but could you show me how?’” said Fischer. “When there are end-users from a variety of backgrounds contributing to AI, and customizing it for their own needs, it’ll be more helpful and representative of the people.”

[Related read: Bringing empathy to product design]

Ethical tech isn’t a buzzword; it’s a business necessity

Research tells us that AI is key to delivering a frictionless customer experience. But that’s table stakes.

Customers also expect companies to operate with integrity and human values. This requires businesses to bake empathy, transparency, and trust into their DNA, especially with respect to AI—because when they don’t, the results can be unexpected and even terrifying.


Photo credit: Cottonbro
