Ethical and Moral Issues in Artificial Intelligence Workshop 2022 – Summary

Categories: Blog

Author: Sarah Corley

We are seeing more use of Artificial Intelligence (AI) and Machine Learning (ML) than ever before as we become a more digitised society. Whilst these technologies undoubtedly have the potential to expand both the range of digital finance products/services and the customer base they reach, we also need to consider how they can increase exclusion. We held a workshop in May 2022 which explored some of the moral and ethical considerations of AI.

AI and ML start with data – we feed in data to create algorithms, the algorithms use this data to predict future behaviour, and they then collect more data, learning how to improve their predictions. Data is now a serious commodity; we’ve seen a dramatic increase in its value, and many[1] now cite data as a more valuable resource than oil or gold. It is no surprise that data privacy is now a serious business, with laws such as Europe’s General Data Protection Regulation (GDPR)[2] leading the way in trying to redress the balance between the individual and business.

More countries around the world are now adopting data protection laws. One of them is Rwanda, our case study for the first live discussion call of the workshop, which featured Alex Rizzi (Centre for Financial Inclusion), Alain Ndayishimiye (C4IR Rwanda), Fiacre Mushimire (Cenfri) and Tariro Nyimo (DFI).

Data protection laws are focused on giving individuals back the right to decide what happens to their data and how it is used. These laws alone, however, are not enough. The public needs to understand their rights and be aware of how their data is being used in order to exercise those rights.

A study[3] in Rwanda by the Centre for Financial Inclusion initially showed high levels of trust when 30 mobile money users were interviewed: 80% indicated that they felt AI is fairer than a loan officer, citing human error/bias as a concern. However, when the types of data used in the AI were disclosed, a different picture emerged. Data types linked closely to financial behaviour, such as financial history and mobile money transactions, were largely deemed fair by respondents. Utility payments and airtime top-ups were viewed less favourably, garnering only 33% and 40% positive responses respectively. Respondents were often very surprised by, and not comfortable with, the use of alternative data such as airtime top-ups, text messages sent, and number of contacts to assess creditworthiness.

This highlights the importance of consumer understanding and education; awareness-raising and campaigns are key here. Involving consumers’ perceptions and views in the design of products/services will also make them more ethical and more widely used.

Rwanda was chosen as a case study for this discussion due to its innovative approach to data protection. The law also needs to be enforced, and a body/agency identified to deal with complaints and breaches.

The Ministry of Information Communication Technology and Innovation of Rwanda, in partnership with the World Economic Forum, launched the Centre for the Fourth Industrial Revolution (C4IR) Rwanda in March 2022. The Centre primarily focuses on artificial intelligence and data policy and seeks to develop multi-stakeholder partnerships to drive innovation and adoption at scale for the benefit of society. Other centres in Africa include C4IR South Africa. C4IR Rwanda reiterated the need for African countries to establish the infrastructure and resources that enable a well-functioning data protection system, and to appoint authorities to govern it.

The law alone is not enough; we need bodies/authorities that enforce it and regulate both public and private sector organisations that use citizens’ data. One approach to consider is Privacy by Design, where privacy is the default option and organisations collect only the minimum data they need. This exists within GDPR and other data privacy laws but is generally not being adopted or enforced.
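As a concrete, hypothetical illustration of the data-minimisation element of Privacy by Design, the sketch below keeps only the fields a credit decision actually needs and drops everything else by default. The field names and the required set are assumptions made up for the example, not drawn from the workshop.

```python
# Hypothetical sketch of data minimisation: collect only what is needed,
# drop the rest by default. Field names are illustrative assumptions.
REQUIRED_FIELDS = {"applicant_id", "loan_amount", "repayment_history"}

def minimise(record: dict) -> dict:
    """Keep only the fields required for the decision; discard all others."""
    return {key: value for key, value in record.items() if key in REQUIRED_FIELDS}

submitted = {
    "applicant_id": "A123",
    "loan_amount": 500,
    "repayment_history": "good",
    "contact_list_size": 412,     # not needed for the decision, so dropped
    "text_messages_sent": 1088,   # not needed for the decision, so dropped
}

print(minimise(submitted))
# {'applicant_id': 'A123', 'loan_amount': 500, 'repayment_history': 'good'}
```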

Looking at data sources also raises issues around inclusion and exclusion, particularly for specific groups or sub-groups that are already vulnerable and may be excluded even further on the route to digitisation. We need to recognise the gaps and biases in data and address these to make AI fairer and more inclusive. If we can do this, AI can hold great promise. This conversation was the topic of the second live discussion call, with Sonja Kelly (Women’s World Banking), Keneilwe Tsotsotso (DFI) and Sarah Corley (DFI).

When thinking about digital credit, AI and ML can hold great promise for women. Around one-third of SMEs in emerging markets are run by women, and research shows that loan officers offer women smaller loans and penalise them more for mistakes. Women also have less credit history. These existing biases could be rectified by AI.

In particular, when assessing digital credit we are trying to avoid false negatives: cases where the algorithm rejects an application even though the client would have repaid the loan. This harms the rejected client and is also a lost business opportunity for the credit provider. Bias can mean that women experience more false negatives than men, and the concept of fairness can help rebalance algorithms. But from research[4] by Women’s World Banking we know that there are many different definitions of fairness, and many approaches to balancing it with efficiency.
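To make the false-negative idea concrete, here is a minimal, hypothetical sketch that compares false negative rates (creditworthy applicants who are wrongly rejected) between two groups. The labels, predictions and group codes are invented for illustration and are not from the research cited above.

```python
import numpy as np

# Invented example data: 1 = would have repaid / loan approved, 0 = otherwise
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])   # true repayment behaviour
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 1, 0, 0])   # algorithm's decision
group  = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

def false_negative_rate(truth, decision):
    """Share of truly creditworthy applicants that the algorithm rejected."""
    creditworthy = truth == 1
    if not creditworthy.any():
        return 0.0
    return float(np.mean(decision[creditworthy] == 0))

for g in np.unique(group):
    mask = group == g
    print(g, false_negative_rate(y_true[mask], y_pred[mask]))
```

A persistent gap between the groups’ rates would be one signal that the algorithm needs rebalancing.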

We also need to consider three different stages where bias can creep in:

  1. Data – the types of data we choose, sampling and labelling bias, how reliably the data predicts behaviour, and safety/privacy concerns (a simple representation check is sketched after this list)
  2. Algorithm development – conscious and unconscious bias of developers, how the algorithms learn, and what happens in a crisis where past behaviour is no longer a good predictor of future behaviour
  3. Outcomes – reviewing performance to identify issues, what that means for the algorithm and rebalancing for fairness
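At the data stage, one simple, concrete check is whether the groups you intend to serve are represented in the training sample in roughly the proportions you expect. The sketch below is a hypothetical illustration; the sample and the expected shares are assumptions for the example.

```python
from collections import Counter

# Invented training sample and assumed population shares, for illustration only
training_sample = ["F", "M", "M", "M", "F", "M", "M", "M", "F", "M"]
expected_share = {"F": 0.50, "M": 0.50}

counts = Counter(training_sample)
total = len(training_sample)
for grp, share in expected_share.items():
    observed = counts.get(grp, 0) / total
    print(f"{grp}: observed {observed:.0%}, expected {share:.0%}, gap {observed - share:+.0%}")
```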

We concluded with a few key practical tips for reducing bias in algorithms:

  • Define fairness for your institution with key targets
  • Know your data, including what variables are explained by gender or other categories of exclusion, and think about who is not represented
  • Increase buy-in for fairness with your data scientists or coders
  • Form a multi-disciplinary group to regularly review your algorithm and its outputs against fairness criteria (a simple check of this kind is sketched after this list)
  • Increase representation of underrepresented groups in your organisation at all levels
  • Open up conversations with the regulator to establish and share best practices
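As one hypothetical illustration of reviewing outputs against a fairness target, the sketch below compares approval rates between two groups and flags the algorithm when the gap exceeds a tolerance the institution has defined. The metric (an approval-rate gap) and the 10-point tolerance are assumptions for the example, not a recommendation from the workshop.

```python
import numpy as np

# Invented decisions and group labels, for illustration only
approvals = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # 1 = approved
group     = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])
TOLERANCE = 0.10   # assumed institutional target: at most a 10-point gap

rates = {g: float(approvals[group == g].mean()) for g in np.unique(group)}
gap = max(rates.values()) - min(rates.values())

print(rates)
print(f"gap = {gap:.2f} ->", "within target" if gap <= TOLERANCE else "review needed")
```

In practice a review group would look at several such metrics, since the research cited above shows that different definitions of fairness can pull in different directions.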

Choosing what data to use and how to use it fairly involves complex moral decisions. This was the focus of our third and final live discussion call of the series, led by Xavier Martin (DFI).

What is fair? This was the opening question raised by Xavier. The concept of fairness varies between individuals based on their values, perceptions and experiences, and on their country, culture, religion, gender and status, to name a few variables. There is no single definition that the world can unite around and agree on.

When it comes to fairness in AI, a good resource is the Dynamics of AI Principles toolbox[5] developed by the AI Ethics Lab. It keeps track of the AI principles currently in use, and you can view and sort them by characteristics such as country, region and type of organisation. It helps you compare, understand and evaluate them, which hopefully helps you make informed choices about the fairness of your use of AI. The tool has four core overarching principles – autonomy, no harm, benefit and justice – each of which contains further detailed principles to give a more in-depth view.

Ethical OS has also produced a toolkit[6] which identifies eight categories of risk that we need to be attentive to when building technology; indeed, their fourth risk zone covers machine ethics and algorithmic bias. The toolkit also contains 14 scenarios to generate dialogue and debate, and 7 future-proofing strategies to guide teams towards ethical action.

It is no surprise that both resources take the form of a toolkit: fairness is something to be discussed, debated and revisited, and it will vary between contexts, cultures and applications. There is no one-size-fits-all answer or flow chart to follow. It is complex and challenging.

To help comprehend some of the moral dilemmas we face in AI and ML, try the Moral Machine[7] developed by MIT. It takes you through a series of moral dilemmas in programming a driverless car; you choose which of the two options you believe is ‘fairer’ or ‘the lesser evil’. Whilst this is a different scenario from our field of digital finance, similar moral dilemmas exist in our industry and can have significant consequences for the individual, such as being denied credit. The Moral Machine shows there is not always a universally fair option or a win-win for all; trade-offs and tough choices sometimes need to be made.

The more we discuss and understand the different choices and principles we adopt, the more likely we are to make good choices. Make use of toolkits and sites like the Moral Machine to create space for dialogue and debate, and continue to be reflective, ask questions and keep learning. This is how we can try to ensure AI and ML are developed and used in fair and inclusive ways.


[1] https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data

[2] https://gdpr.eu/what-is-gdpr/

[3] https://www.centerforfinancialinclusion.org/trust-of-data-usage-sources-and-decisioning-perspectives-from-rwandan-mobile-money-users

[4] https://www.womensworldbanking.org/wp-content/uploads/2021/02/2021_Algorithmic_Bias_Report.pdf

[5] https://aiethicslab.com/

[6] https://ethicalos.org/

[7] https://www.moralmachine.net/