How to overcome bias and stereotypes in AI: perspectives on ethical AI

© GIZ
Imagine an AI that claims to identify whether someone is gay or straight just by analysing their photograph. Shocking, isn’t it? This provocative premise was the starting point of a session at MozFest House in Amsterdam, titled “Reclaiming Power: Using Ethical AI for More Equitable Futures”.
Masuma Shahid: AI Bias and LGBTQIA+ Rights

Masuma Shahid, Lecturer and Researcher at the Erasmus School of Law in the Netherlands with a focus on LGBTQIA+ rights, challenged participants by projecting images of people on a screen and asking the audience to guess their sexual orientation. This exercise was based on the controversial “AI Comes Out of the Closet” project, which highlights the biases inherent in AI systems designed to determine sexual orientation.

It’s alarming how these AI systems reinforce harmful stereotypes and invade privacy.

Masuma Shahid, Lecturer and Researcher at the Erasmus School of Law in the Netherlands

She underscored the critical need for intersectional and inclusive approaches to AI, particularly for LGBTQIA+ individuals. Reflecting on regulatory developments, Masuma referenced the recent progress on the EU AI Act, stating, “The EU has taken a step closer to enforcing strong regulation of AI, drafting new safeguards that would prohibit a wide range of dangerous use cases.”

These include prohibitions on mass facial recognition programmes in public places and predictive policing algorithms that try to identify future offenders using personal data. Masuma emphasised that such measures are crucial to protect marginalised communities – including LGBTQIA+ individuals – from the potentially harmful impacts of AI.

© Alex Knight, Unsplash

However, similar regulations are urgently needed in countries of the Global Majority, where such technologies put LGBTQIA+ individuals at risk of criminalisation.


Chenai Chair: Language and Trust in AI

Chenai Chair, lead of Mozilla’s Africa Mradi Program, took participants on a personal journey, exploring the cultural, cross-border and racial dimensions of AI. She reflected on her own experience as an African woman working in the global AI space and on the significance of language in shaping perceptions of AI.

In many Global Majority communities, there’s a mistrust towards AI because it feels foreign and imposed.

Chenai Chair, lead of Mozilla’s Africa Mradi Program

She shared insights from Mozilla’s Africa Mradi strategy, emphasising the need for localised and culturally sensitive AI solutions, such as their Common Voice project, which collects openly licensed voice data for speech recognition, with a strong emphasis on low-resource languages.
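To make the idea of open voice data a little more concrete, here is a minimal sketch that was not part of the session: it assumes a Common Voice language release (Kiswahili, locale code “sw”, chosen purely as an illustrative low-resource language) has already been downloaded from commonvoice.mozilla.org and unpacked, and it simply inspects the validated clip-to-transcription pairs that speech-recognition systems are trained on. The local path is hypothetical.

```python
# Minimal sketch, assuming a Common Voice release for Kiswahili ("sw") has been
# downloaded from https://commonvoice.mozilla.org/ and unpacked locally.
# The directory name below is hypothetical; releases unpack into a versioned
# cv-corpus-* folder with one sub-folder per language.
import csv

import pandas as pd

# validated.tsv maps each audio clip (an mp3 in the clips/ folder) to a
# human-validated transcription: the raw material for speech recognition.
clips = pd.read_csv(
    "cv-corpus/sw/validated.tsv",
    sep="\t",
    quoting=csv.QUOTE_NONE,  # sentences may contain quote characters
)

print(len(clips), "validated clips")
print(clips[["path", "sentence"]].head())  # clip filename and its transcription
```

Pairing a table like this with the corresponding audio files is what allows speech-recognition models to be trained or evaluated in languages that commercial systems often overlook, which is the gap projects like Common Voice aim to close.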


Maria Hattya Karienova: AI and Policy   

Maria Hattya Karienova, inclusive development and digital rights advocate at the Pulitzer Center in Indonesia, rounded off the provocations by discussing broader societal impacts and the importance of considering gender, disability, and other intersecting identities in AI development. She based her remarks on her experience with an AI policy training that FAIR Forward conducted in Indonesia.

She also pointed to the hidden impacts of algorithmic management on the rights of digital workers, who increasingly rely on digital labour platforms as their primary source of income. Maria posed critical questions about data ownership and exploitation within the AI workforce.

© Nahrizul Kadri, Unsplash

“Who owns the data that fuels AI, and how are the rights of AI data workers protected?” Maria asked, prompting attendees to consider the ethical implications of data practices.


Moving Towards Actionable Solutions

The session concluded with a call to action: Ethical AI is not just about technology but about the values and principles that guide its development and use. It requires a collective effort to ensure AI serves all communities equitably and inclusively.

© GIZ

We must decentralise power in the AI space and ensure that these technologies are developed by and for the communities they serve.

Karlina Octaviany, advisor and Country Focal Point at FAIR Forward in Indonesia

Reclaiming power in the AI space is an urgent endeavour, one that requires collective action and shared responsibility in shaping an equitable AI future.

For more information on the FAIR Forward: Artificial Intelligence for All programme, visit the programme website.