We are seeing examples of bias creeping into decision-making algorithms, which can result in decisions that financially exclude women in particular. Untangling it is tricky: we need to consider the assumptions made, how our own personal experiences and context influence our thinking, and the intended and unintended consequences. Dialogue helps us uncover some of the bias and explore what actions we can take to mitigate it. Last month we hosted our Ethical Moral Issue in Artificial Intelligence (AI) Workshop 2022, and in one of the live discussion calls we featured Sonja Kelly, Director of Research and Advocacy at Women’s World Banking. She helped us learn about the implications of algorithmic bias by exploring her research paper on Algorithmic Bias, Financial Inclusion, and Gender.
In her paper, Sonja writes that “Algorithmic bias is complicated and requires multiple approaches to ensure the automated processes that improve efficiency do not translate into unfair treatment of women and marginalised customers. The good news is that machine learning and artificial intelligence, while part of the problem, can also be part of the solution. Technology, along with effective management and organisational processes, provides new solutions for bias mitigation.” AI technology only really works when the data, and the algorithms that make sense of that data, are themselves fair. Testing for bias in algorithms can be quite complex: we must think about the different kinds of errors that might be in our models and how they get there.
An algorithm reviews historical data to predict likely future outcomes. The amount and type of data held on an individual can put them in a position to be excluded or privileged. For groups that are already financially excluded, such as women and rural and low-income populations, AI has the potential to include them through the use of alternative data, such as phone credit and GPS location, which can serve as alternatives to the traditional credit history commonly used. However, there is also a risk of further exclusion if algorithms do not consider context or equity. As Sonja Kelly highlights, discrimination happens when some prioritised groups receive a systematic advantage (being offered credit, for example) and other groups are placed at a systematic disadvantage (being denied credit, for example). We need to recognise the gaps and bias in data and address these to make AI fairer and more inclusive. If we can do this, AI holds great promise.
When thinking about digital credit, AI and machine learning (ML) hold great promise for women. In emerging markets, around one third of SMEs are run by women, and research shows that loan officers offer women smaller loans and penalise them more for mistakes. Women also have less credit history. AI could help rectify these existing biases. In particular, when assessing digital credit we are trying to avoid false negatives: cases where the algorithm rejected the application but the client would have repaid the loan. A false negative both harms the rejected client and represents a lost business opportunity for the credit provider. Bias can mean that women experience more false negatives than men, and the concept of fairness can help rebalance algorithms. But from research by Women’s World Banking we know that there are many different definitions of and approaches to fairness, and that fairness must be balanced against efficiency.
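One common way to make this concrete is to compare false negative rates across groups, a notion sometimes called "equal opportunity" fairness. The sketch below is illustrative only: the applicant records are synthetic, and `would_repay` stands in for ground truth that a real lender would only observe for approved loans.

```python
# Illustrative sketch: comparing false negative rates (FNR) by gender
# to check one notion of fairness. All data here is synthetic.

def false_negative_rate(records):
    """FNR = rejected applicants who would have repaid / all who would have repaid."""
    would_repay = [r for r in records if r["would_repay"]]
    if not would_repay:
        return 0.0
    false_negs = [r for r in would_repay if not r["approved"]]
    return len(false_negs) / len(would_repay)

# Synthetic loan decisions: "approved" is the model's decision,
# "would_repay" is the (hypothetical) true outcome.
applications = [
    {"gender": "F", "approved": False, "would_repay": True},
    {"gender": "F", "approved": True,  "would_repay": True},
    {"gender": "F", "approved": False, "would_repay": True},
    {"gender": "M", "approved": True,  "would_repay": True},
    {"gender": "M", "approved": False, "would_repay": True},
    {"gender": "M", "approved": True,  "would_repay": True},
]

for group in ("F", "M"):
    subset = [r for r in applications if r["gender"] == group]
    print(group, round(false_negative_rate(subset), 2))
```

In this toy data, creditworthy women are rejected twice as often as creditworthy men; a gap like that is the kind of signal that would prompt a review of the model.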
Sonja’s paper identified three stages where bias can creep in:
- Data – the types of data we choose, sampling and labelling bias, how reliably the data predicts behaviour, and safety/privacy concerns
- Algorithm development – the conscious and unconscious bias of developers, how the algorithms learn, and what happens in crises where past behaviour is no longer a good predictor of future behaviour
- Outcomes – reviewing performance to identify issues, what that means for the algorithm, and rebalancing for fairness
The live discussion call concluded with a few key practical tips for reducing bias in algorithms:
- Define fairness for your institution with key targets
- Know your data, including which variables are explained by gender or other categories of exclusion, and think about who is not represented
- Increase buy-in for fairness with your data scientists or coders
- Form a multi-disciplinary group to regularly review your algorithm and its outputs against fairness criteria
- Increase representation of underrepresented groups in your organisation at all levels
- Open up conversations with the regulator to establish and share best practices
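The "know your data" tip can be approached by checking whether seemingly neutral features act as proxies for gender. A minimal sketch, using synthetic data and hypothetical feature names, is to measure the correlation between each feature and group membership:

```python
# Illustrative sketch: flag features that may act as proxies for gender
# by measuring their correlation with a 0/1 gender encoding.
# The data and feature names below are entirely hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Synthetic applicants: gender encoded 1 = woman, 0 = man.
gender       = [1, 1, 1, 1, 0, 0, 0, 0]
phone_credit = [10, 12, 11, 13, 30, 28, 31, 29]  # tracks gender closely
loan_amount  = [5, 9, 6, 8, 7, 5, 9, 6]          # largely unrelated

for name, values in [("phone_credit", phone_credit), ("loan_amount", loan_amount)]:
    r = correlation(gender, values)
    flag = "possible proxy" if abs(r) > 0.5 else "ok"
    print(f"{name}: r={r:+.2f} ({flag})")
```

A strongly correlated feature is not automatically disallowed, but it deserves the multi-disciplinary review described above, since a model can use it to reconstruct gender even when gender itself is excluded.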
It is clear that credit companies are just the tip of the iceberg when it comes to using data in ways that can create bias. High-profile examples in the news include Twitter users teaching Microsoft’s AI chatbot to be racist in less than a day; MIT student Joy Buolamwini discovering that facial recognition technology was failing to register or recognise Black faces; and Amazon scrapping a machine learning hiring tool that inadvertently discriminated against women because, based on historic data, women simply did not fit the profile of an engineer. Another example: Apple’s Siri and Amazon’s Alexa use AI-powered speech recognition to provide voice or text support, yet voice-to-text is far less accurate for non-native English speakers, and both assistants’ default voices are female.
You can watch the recording of the discussion with Sonja Kelly, Sarah Corley and Keneilwe Tsotsotso here.