5 Questions for… Elisa Lindinger, Co-Founder of SUPERRR Lab

You founded the SUPERRR Lab in 2019 together with Julia Kloiber. Your vision is to shape a “feminist, digital future”. To be specific, how are you implementing this vision?

The most important answer to this is: by no means alone. We want to ensure that the digital transformation serves society and contributes to making it fairer. For us, a feminist approach means reducing injustices and redistributing power. This requires the perspectives and cooperation of many. In global politics, it is becoming clear that digitalization is leading to a greater concentration of power, and that Big Tech and parts of the political landscape are moving closer together. The result is less co-determination, a loss of digital safe spaces, and a smaller share in the benefits that the digital transformation promises.

There was once talk of a global village where we can all be who we are. A village in which we move closer together and develop an understanding of one another in our diversity. This promise has not been realized. This is precisely why we need new visions of what a digital future should bring us. We often speak of “futures”, because many are possible. But which ones do we want to achieve? At SUPERRR, we are constantly engaged in this debate and invite you to join us and others in creating visions of the future that we want and that unite us.

We are currently living in a time in which autocrats are gaining influence worldwide and societies are becoming more and more politically divided. In this environment, disinformation and the impact of AI are becoming steadily more dangerous. What skills do we need to face this danger?

Digital technology is an additional tool in the repertoire of those who want to divide our societies. I don’t know whether we are doing ourselves a favor by giving technology such a special role, as is the case in the current discourse. The fact is that technology is used by people for a specific purpose – it does not act on its own, nor is it a force of nature.

This is precisely a competence that I believe we should strengthen: not humanizing technology, and daring to question developments and claims in the digital sector from time to time.

Disinformation is not an invention of the 21st century, as proven by the history of the Third Reich, in which disinformation and propaganda were central factors. In times of ever faster and denser digital communication, its spread is also accelerating. We may be able to slow down the spread of disinformation with technical measures, but then we are only combating the symptoms, not the causes.

During the Digital Forum on 26 November 2024 at the BMZ in Berlin, you illustrated the power imbalances that algorithms can be subject to. You advocate for stronger regulation of artificial intelligence. As a global community, where should we start?

I wish I had the answer! As a supporter of evidence-based policy, I think an evidence base is essential: Where are artificial intelligence processes being used in ways that potentially cause great harm, either by affecting a large number of people or by affecting people who are already experiencing discrimination? We have the General Equal Treatment Act and the UN Convention on Human Rights. If AI discriminates, it is not an ethical problem but a breach of fundamental rights, and that is what we should call it.

Regulatory projects and other critical debates are currently focusing primarily on the social impact of AI. But artificial intelligence is now an industry with a huge turnover, so it is also worth taking a look at the production process, and here things look pretty bleak in many places: Preparing data and training AI models is precarious or even traumatizing work, often outsourced to countries in the global majority, while the profits are hoarded in industrialized countries.

Whether companies are even allowed to use the data they use to train their AI models is often a gray area – this is where an industry enriches itself with data from entire societies, all under the buzzword “innovation”. But what is an innovation worth if it does not serve society? We need other decision-making aids for when, how, and for what purpose we use technology – that would be a good start.

A meta-analysis with more than 100,000 participants from 26 countries shows that women use AI significantly less often than men. Another study reveals that half of the AI systems examined have gender-specific biases. Who is responsible for this development?

Women using AI less often than men is perhaps simply a sign of rationality, as many of the promises of the AI boom are completely exaggerated. The promised efficiency gains are not materializing; in fact, quite the opposite. But in all seriousness: there are many reasons for the differences in usage, and we need to look at them in a differentiated way. People primarily come into contact with AI tools or digital applications at work, but the employment rates of men and women diverge. Poverty also contributes to people being left behind digitally – here, too, women are more affected, especially in old age.

So, neither women nor technology are to blame for this imbalance; it is an expression of social injustice. We have a responsibility to change the situation. AI applications do exhibit the gender biases that are inherent in their training data – as well as racism and ableism. We as a society are a bad teacher, because the training data is nothing more than an archive of past decisions that the technology is now supposed to abstract from. We cannot expect technology to do better than we do.

In 2025, the “Hamburg Declaration on Responsible AI” will be signed – a statement of principles in which the BMZ and UNDP commit, among other things, to avoiding gender-specific inequalities in AI technologies. What concrete measures are needed to ensure that AI systems are not biased with regard to gender, ethnicity, financial status, education, or sexual orientation?

First of all, we should not humanize AI, but instead find the people who benefit from such systems and start by raising their awareness – or by creating accountability. To be honest, I am pessimistic that there will ever be a technical system that is completely non-discriminatory. We humans are not, either. That’s why we need effective mechanisms to make discrimination as difficult as possible, but also to ensure that people can assert their rights in cases of discrimination.

Disclosing the data used to train or fine-tune AI models, for example for audits, is one conceivable step. An obligation to audit algorithmic systems that are used in contexts where discrimination is common is another. In addition, people must always be allowed to object to an automated decision. They must be able to do this with reasonable effort, or it must be possible to delegate it to advocacy organizations. After all, people who experience discrimination have less time, money, and access to support structures to enforce their rights. The best – and perhaps only effective – way to reduce discrimination through AI is to work for fairer societies. As you can see, in the end we always come back to the same question: How do we want to live together, and what future are we working towards?