Ethical crash test for AI? How to navigate the road to responsible innovation

If you want to buy a car, you pay attention not only to the price but also to safety. Driving a car without an inspection – unthinkable! AI can detect diseases, predict when farmers should start sowing, or assess creditworthiness. However, AI can also cause harm if it fails to reliably detect diseases in crops or in humans, generates false information, or downgrades the creditworthiness of socially disadvantaged groups. So shouldn’t there be a kind of inspection authority for AI as well?

Carefully conducted AI risk assessments are therefore a moral responsibility. Much like a crash test for cars, a risk assessment can increase safety and trust in AI products, and it raises awareness among the developers involved so that AI ethics becomes part of how they build their products.

Ingredients for an AI risk assessment

What could such a technical inspection for AI look like? The BMZ’s FAIR Forward initiative addressed this question with a clear goal in mind: to develop an applicable methodology for assessing AI risks that could serve as a guideline for the practical implementation of AI ethics in international cooperation – a sort of ethics crash test for AI. The following ingredients were available:

  • A globally accepted ethical framework, the Recommendation on the Ethics of AI (UNESCO 2023), to navigate the ethical maze in a focused way.
  • A diverse and dedicated team, consisting of FAIR Forward, Eticas, and a Community of AI Inclusion Experts and project partners from Sub-Saharan Africa and Asia Pacific.
  • Guides for AI analysis, from both a qualitative and a quantitative perspective. The effects of artificial intelligence should be evaluated in a wider context to test them for bias and, if necessary, to take effective countermeasures – similar to clinical trials: only when the side effects of a medicine have been sufficiently researched alongside its benefits can a decision be made on authorisation. (A small illustrative sketch of one such quantitative check follows after this list.)
  • Seven innovative and diverse discriminative AI projects that were open to piloting the methodology, ranging from climate adaptation and agriculture to public service delivery.
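
To make the quantitative perspective mentioned above a little more tangible, here is a minimal sketch of one possible bias check: comparing the rate of positive model decisions across demographic groups. The data, group labels and the 0.8 threshold are assumptions made for this example only and are not taken from the Responsible AI Assessments themselves.

```python
# Minimal, illustrative bias check: compare the rate of positive model
# decisions (e.g. loan approvals) across demographic groups.
# Data, group labels and the 0.8 threshold are assumptions for this example.

from collections import defaultdict

def positive_rate_by_group(records, group_key="group", outcome_key="approved"):
    """Share of positive decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for record in records:
        counts[record[group_key]][0] += int(bool(record[outcome_key]))
        counts[record[group_key]][1] += 1
    return {group: positives / total for group, (positives, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Toy creditworthiness predictions for two groups
predictions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

rates = positive_rate_by_group(predictions)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))

if ratio < 0.8:  # "four-fifths" rule of thumb, used here purely as an example
    print("Large disparity between groups - investigate before deployment.")
```

A single metric like this never tells the whole story; as with clinical trials, the point is to study effects and side effects in their wider context.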

Download the Responsible AI Assessments

Josia Paska Darmawan (from Indonesia), one of the AI inclusion experts involved, remarks:

AI technologies have been used in different contexts to drive positive impact in our communities. The FAIR Forward Responsible AI Assessments guides stakeholders towards the ethical development of AI by offering comprehensive frameworks and tools to evaluate its potential risks. I am privileged to collaborate with a diverse community of AI inclusion experts in shaping these guidelines, particularly in their application to mapping high carbon stock forests in Indonesia. These assessments are instrumental in helping the project to ensure that the entire process — from problem formulation, data collection to deployment — aligns closely with the principles of free, prior, and informed consent (FPIC) and minimizes harm to the local ecosystem.

Josia Paska Darmawan, AI inclusion expert

How to navigate risks through assessments

The steps an AI risk assessment can follow are presented below using a fictional chatbot from Ghana – the ‘GhanaBot’ – which is designed to help citizens with enquiries about public services.

  1. Charting Unknown Seas – Scoping Call: The Scoping Call aims to identify hidden currents, i.e. potential risks beneath the surface of “GhanaBot”. Together with the project team and its developers, we aim to understand: What is the chatbot’s mission? Who are its users? What data fuels it? The Scoping Call ensures that we are not sailing blindfolded.
  2. Exploring the Currents – Deep Dive: The Deep Dive is all about investigating the identified risks thoroughly. Our main method: critical debates with the project team, but also with local NLP experts, public service providers and citizens who might use “GhanaBot”. We get to understand its limitations and brainstorm ways to navigate the currents for users.
  3. Navigational Blueprint – Final Report: Based on the Deep Dive’s insights, we refine the discussed recommendations into specific actions tailored to “GhanaBot’s” implementation context. The final report is our blueprint for a safer user journey – one way of structuring its findings is sketched below.
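
How the findings of these three steps are recorded is up to each project team. Purely as an illustration, here is a minimal, hypothetical sketch of a simple risk register in which risks identified in the Scoping Call are refined in the Deep Dive and matched with mitigations for the Final Report; the field names and the “GhanaBot” entry are made up for this example and are not part of the published methodology.

```python
# Minimal, hypothetical risk register for an assessment like the one above.
# Field names and the example entry are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    description: str      # identified during the Scoping Call
    affected_group: str   # who could be harmed
    severity: str         # e.g. "low" / "medium" / "high", refined in the Deep Dive
    mitigation: str = ""  # concrete action agreed for the Final Report

@dataclass
class AssessmentReport:
    project: str
    risks: List[Risk] = field(default_factory=list)

    def open_items(self) -> List[Risk]:
        """Risks that still lack an agreed mitigation."""
        return [risk for risk in self.risks if not risk.mitigation]

report = AssessmentReport(project="GhanaBot")
report.risks.append(Risk(
    description="Answers are only available in English",
    affected_group="Citizens who primarily speak other Ghanaian languages",
    severity="high",
))
print(f"{report.project}: {len(report.open_items())} risk(s) still without a mitigation")
```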


The Pros and Cons of our approach

What you get from the Responsible AI Assessments:

  1. Guidance: Responsible AI Assessments help AI developers and project managers steer clear of potential pitfalls and make informed decisions – like a navigator who relies on a compass to find their path.
  2. Learning by doing: This is not simply dry paperwork. The assessments are practical learning experiences. By identifying risks, teams improve their understanding of AI and its impact.
  3. Context sensitivity: The Responsible AI Assessments provide a flexible template that you can tailor to your specific context and AI use case.
  4. Active discussion: AI risk assessments encourage critical reflection among diverse stakeholders to reveal blind spots and create tailored mitigation.

Of course, the Responsible AI Assessments also come with their own challenges:

  1. Expertise required: Assessing risks demands auditing and localized domain expertise – this includes the lived expertise of users and those impacted by AI. As a project team, we recommend that you get them around the table.
  2. More than a checkbox: Assessing AI for risks is not a box-ticking exercise. It is an ongoing process that is only as good as the care and expertise you pour into it. Assessing AI risks requires continuous attention. One assessment will not do – the journey is the goal.
  3. No silver bullet: Even a completed assessment is no guarantee of a “harm-free” AI product. Rather, it is a starting point toward that aspiration. Additional checks and balances must be put in place to mitigate risk and harm for the end users and beneficiaries of these AI products.
  4. Limits of the approach: Although they were tested on diverse AI use cases, the Responsible AI Assessments would have to be adapted for new ones, e.g. generative AI or the use of AI in health. More use cases will be covered in the second pilot phase in 2024!

Access the Responsible AI Assessments

If you would like to improve the assessments or provide feedback, please contact [email protected]. They are open-source – and we intend to further develop and co-create them!