
October 6, 2024

The Federal Council should take this into account

Allow everything? Or ban everything? Black-and-white thinking doesn’t help when it comes to regulating artificial intelligence. But there are recipes for healthy technological development.

Shaping technology takes the will to shape it: it is now up to the Federal Council to set the right framework conditions for dealing with AI.

Christoph Ruckstuhl / NZZ

If Google, Microsoft, OpenAI and Co. have their way, AI will not only stop climate change but also eradicate diseases, end hunger and create peace – at some point in the future. Such utopian scenarios are the product of unprecedented hype, but they remain speculative.

It is much more informative to look at the concrete benefits and the actual harms of AI in the here and now. In the Netherlands, thousands of families had to pay back childcare allowances they had received for years because a discriminatory algorithm falsely accused them of fraud. Elsewhere, AI systems screen out women’s job applications or lead to false arrests. Social media algorithms spread hate speech and try to keep young people online for as long as possible. AI chatbots give unreliable answers and are nevertheless integrated into search engines, where people also look up information about elections. Facial recognition systems monitor public spaces, and in war, AI can select its victims.

These are not speculative scenarios; all of this is actually happening. Laissez-faire therefore seems out of place. Should we stop the technology altogether? No. The Federal Council should pursue a smarter plan this winter.

First: Prevent harm

AI can affect fundamental rights and democracy – and to protect these goods, we should not have to rely on the goodwill of tech providers. Here, we as a democratic society should define the requirements that ensure the responsible use of AI.

This means, for example, that authorities must be transparent when they use algorithms to make decisions about people; that employees must be involved when AI is used in the workplace; and that people must be protected from discrimination by AI. The last demand enjoys broad support: initiated by Algorithm Watch Switzerland, an alliance from politics, civil society, science and business recently urged the Federal Council to make protection against discrimination one of its priorities when it considers possible AI regulation this winter. National councillors from six parties support the petition. The message is clear: no one can want bad and unfair AI. And to prevent it, regulation is a central means.

Second: Enable real benefits for everyone

There is a certain political economy behind AI today: it lies in the hands of a few large global corporations whose market value corresponds to the economic output of France or Great Britain. The AI market is neither competitive nor sustainable; it is characterized by concentrations of power that are certainly relevant to democracy (note that the same companies moderate public debate on social media and provide the administration’s IT infrastructure or our children’s school software). Generative AI models like those behind ChatGPT or Gemini consume so much energy and water that Google and Microsoft have de facto thrown their climate goals overboard. In Kenya, people working under precarious conditions for two dollars an hour label data for OpenAI’s models or sort out violent content for Facebook.

We must never ignore this political economy when we talk about AI. And instead of falling for either the hype or the demonization, we should ask ourselves: Which AI do we want? How do we ensure that it does not serve the interests of only a few? And how can we shape AI – instead of letting it shape us?

Shaping technology takes the will to shape it. The Federal Council must set framework conditions – by promoting interdisciplinary research, media literacy and democratic competence, by investing in public and non-profit infrastructure, and through awareness-raising and education. But also through targeted regulation that prevents negative impacts and harmful innovations – and that is, at the same time, designed to promote responsible AI supply chains, sustainable AI development and a healthy AI market producing innovations oriented towards the common good.

The Federal Council must therefore set out ambitiously into this AI winter without being misled by the hype: it must swallow the bitter pill and combat the symptoms that come with AI. And at the same time, it must address the causes and advocate a healthier technological development that delivers real benefits – for all of us, not just Big Tech.

Angela Müller, 39, is the managing director of Algorithm Watch Switzerland. The non-profit organization highlights the effects of algorithms and AI on people and society. Müller studied political philosophy and holds a doctorate in law. She has appeared as an expert in hearings at the Council of Europe, the German Bundestag and the Swiss Parliament, and in 2024 was recognized as one of the world’s “100 Women in AI Ethics”.
