
Ethical and responsible AI – the future of data-driven technology

March 3, 2022

As the applications for data-driven technology increase in everyday life, questions around ethical and responsible AI are emerging in the public arena.

Every time we play a song on Spotify, watch a video on YouTube, or order a takeaway online, we leave data footprints. Companies collect and use these footprints to feed AI tools that help them make better business decisions. By using customer data, for example, companies can understand people’s behavior and run better-targeted marketing campaigns.

But “with great power there must also come great responsibility”. When AI-based decisions have a major impact on people’s lives, as in the case of receiving a bank loan or extra medical care, companies have a responsibility towards customers in terms of fairness and transparency. In other words, AI-based decisions should not be biased and, as required under the GDPR, they must be explainable – meaning accompanied by clear explanations.

After all, if AI has the potential to help organizations make better decisions, why not also make those decisions fairer?

This is the question we’ll explore in this article, as we discuss cases of bad and good AI technology.

Discriminating data

AI solutions have not always worked for the public good: in many circumstances they have been found to generate biased and unfair decisions. But how can data-driven technology discriminate against people? The reason lies in the principle of machine learning itself. AI systems learn how to make decisions by looking at historical data, so they can perpetuate existing biases. In other words, if the data contain biases, the output will too, unless appropriate precautions are taken.
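
To see the mechanism in practice, here is a minimal, purely illustrative sketch (synthetic data, hypothetical feature names, scikit-learn as the learning library): a model trained on historically biased hiring decisions reproduces that bias when scoring new, equally qualified candidates.

```python
# Illustrative sketch: a model trained on biased historical decisions inherits the bias.
# All data and feature names are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Candidate features: a skill score and a binary attribute marking membership of a
# group that past recruiters discriminated against.
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical hiring decisions: driven by skill, but with a penalty applied to the
# group, so the labels themselves carry the bias.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two new candidates with identical skill, differing only in group membership:
same_skill = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_skill)[:, 1])  # the group-1 candidate scores visibly lower
```

Nothing in the code tells the model to discriminate; the unfairness comes entirely from the historical labels it learns from.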

In 2015, Amazon’s AI recruiting tool turned out to be biased against female applicants. It penalized resumes containing the word “women’s”, as in “women’s chess club captain”. Because the tech industry has historically been male-dominated, the tool, learning from historical data, came to prefer male candidates over female ones.

But that is not the only case. In 2019, an algorithm widely adopted in the U.S. healthcare system was proven to be biased against black people. The algorithm was used to guide health decisions, predicting which patients would benefit from extra medical care. Learning from historical data, the tool perpetuated long-standing racial disparities in medicine; its results favored white patients over black patients.

The designers of these algorithms made no effort to explain the AI-based decisions, and did not grasp the gravity of their mistakes, ultimately compromising brand credibility.

Biases can also occur when the data used to build an AI tool are incomplete. If the data are not complete, they may not be representative, and the resulting tool may therefore be biased. This is exactly what has happened with many facial recognition tools: because they weren’t built on complete data, they struggled to recognize non-white faces. Particularly notorious is the case of the iPhone X, whose facial recognition feature was labeled racist after it repeatedly failed to distinguish between Chinese users.

Making AI responsible

As more cases of biased AI come to light, responsible AI becomes a real necessity. In 2019, the European Union started tackling the problem by publishing its Ethics Guidelines for Trustworthy AI, a set of guidelines for achieving ethical AI. Major tech companies such as Google and Microsoft have already moved in this direction by releasing their own responsible AI principles. The road to responsible AI is still long, but every business can play its part. Companies can adopt different approaches to enforce fairness constraints on AI models:

  1. FIXING THE ROOT PROBLEM – IMPROVING DATA PREPARATION
    Since most cases of bias in AI stem from biased historical data, improving the data preparation phase fixes the problem at its root. In this phase, human operators can identify both overt and hidden discriminatory data, and evaluate whether the data are representative of the group under consideration (the first sketch after this list illustrates such a check). It’s important that these operations are performed by domain experts, as they have a better understanding of the problem. Since business experts might not have a data science background, simple no-code data preparation tools, like Rulex Platform, become a must.
  2. OPENING AI – MAKING OUTPUT EXPLAINABLE
    Adopting eXplainable AI (XAI) over black-box AI makes a huge difference. XAI tools produce explainable, transparent outcomes, which means business experts can understand and evaluate them, and detect and remove possible biases from automated decisions (the second sketch after this list shows what rule-style output can look like).
    “A good decision could improve your business today, but an explained decision could bring you to better understand and improve your processes in the future”, says Enrico Ferrari, Head of R&D Projects at Rulex. He has worked side by side with companies for many years, helping them innovate their decision-making processes.
    “We were working with eXplainable AI when the concept was still unknown to the wider public, creating solutions with a very high level of explainability and transparency like Logic Learning Machine (LLM) – an algorithm that produces outcomes in the form of IF-THEN rules. In 2016, our commitment to explainable technology was recognized by MIT Sloan, which honored us for having one of the most disruptive technologies.”
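
As a rough illustration of the representativeness check described in point 1, the sketch below (hypothetical column names and assumed population shares, using pandas) compares how often each group appears in a training set with its share of the population the tool will serve, and flags groups that are badly under-represented.

```python
# Illustrative sketch of a representativeness check during data preparation.
# Column names, counts, and reference shares are all assumed for the example.
import pandas as pd

training_data = pd.DataFrame({
    "skin_tone": ["light"] * 800 + ["medium"] * 150 + ["dark"] * 50,
})

# Assumed shares of each group in the population the tool will actually serve.
population_share = {"light": 0.55, "medium": 0.25, "dark": 0.20}

sample_share = training_data["skin_tone"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    # Flag any group represented at less than half its expected share.
    if observed < 0.5 * expected:
        print(f"Under-represented group: {group} ({observed:.0%} in data vs {expected:.0%} expected)")
```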
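
As for point 2, the sketch below is not Rulex’s Logic Learning Machine; as a stand-in it trains a shallow scikit-learn decision tree on a public medical dataset and prints its learned rules, simply to show what rule-style, human-readable output looks like and how a domain expert could read and challenge the conditions behind each automated decision.

```python
# Illustrative sketch of explainable, rule-style output using a shallow decision tree
# (a stand-in, not Rulex's Logic Learning Machine).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Print the learned decision paths as readable conditions that an expert can review
# for unwanted or discriminatory criteria.
print(export_text(tree, feature_names=list(data.feature_names)))
```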

The path towards ethical and responsible AI may not be easy, but it is vital for companies that want to grow their customers’ trust and safeguard their rights and privacy, avoiding missteps that could damage their credibility.

