
A feminist view of the EU AI Act

Updated: Aug 10, 2023

The Act is the first-ever legal framework for regulating artificial intelligence (AI) in the European Union. Its purpose is to establish unified rules for the development, market placement, and usage of AI. Members of the European Parliament (MEPs) intend to enhance citizens' ability to lodge complaints about AI systems and to obtain explanations of decisions made by high-risk AI systems that significantly impact their rights. Kim van Sparrentak, a member of the Dutch Greens, stated that this vote is a significant step in AI regulation, emphasising the importance of fundamental rights as the foundation of such regulation. She stressed that AI should serve people, society, and the environment, rather than the other way around.

Following this legislative process from an outside perspective is quite tricky. Generally, the Regulation is developed through the so-called 'ordinary legislative procedure', which means the European Parliament and the Council act as co-legislators, and both have to approve the legislation. Let's take a closer look at the process itself before diving into the substance.

Milestones:

  • April 2021: EU Commission presented their draft (without a definition of GPAI)

  • December 2022: EU Council published their position (including a definition of GPAI)

  • June 2023 (tbd): plenary vote of the EU Parliament (including a definition of GPAI)

The Draft is scheduled for a plenary vote in the European Parliament in June. Afterwards, consensus will be sought in "trilogue" discussions involving representatives of the European Parliament, the Council of the European Union, and the European Commission, before positions and amendments are adopted in a second or, potentially, third reading by the European Parliament and the Council. Interinstitutional negotiations have become standard practice for the adoption of EU legislation. During the "trilogue", the three positions are to be consolidated into a provisional agreement. In practice, a "four-column document" is created: it presents the three positions side by side and includes an empty fourth column that will hold the outcome of the "trilogue" discussions.

The 10th Feminist AI and Digital Policy Roundtable focused its debate mostly on the EU Parliament Committee's Draft Compromise Amendments released on 11 May 2023. Mher Hakobyan from Amnesty International commented on this: "(…) the European Parliament sent a strong signal that human rights must be at the forefront of this landmark legislation." But is this the case?

Next to a feminist lens on the EU AI Act, which is currently missing, the standardisation process will play an important role in protecting marginalised and underrepresented groups. From my personal observation, NGOs and activists do not pay enough attention to this part of the EU AI Act process. For a deep dive into standardisation processes, please read this article.

Based on this, we started to discuss a feminist lens on the EU AI Act. This lens can be considered from two perspectives: (1) the contents of the EU AI Act and (2) the EU AI Act process itself. The following sections demonstrate that the content and process views cannot be separated from each other. Rather, both lenses go hand in hand and promote (or obstruct?) each other.

The EU AI Act and the Global South

The first questions raised during the Roundtable came from an AI ethics expert from India and show the need for a global view on the EU AI Act:


  • What does the EU AI law mean for me?

  • Does the EU AI Act have a ripple effect on markets such as India?

  • Can we build a more inclusive lens globally due to the EU AI Act?

  • How does it impact our jobs?

The perspective of the Global South is an important pillar in feminist digital politics. Not only because of outsourced data labelling under appalling working conditions, the high risk of discrimination against female People of Colour in AI systems, or the fact that the Global South is missing from most data sets: this perspective also comes into focus whenever we talk about justice from a global standpoint.

How does the EU AI Act deal with this?

The only place where stakeholder involvement is explicitly mentioned in the AI Act is in the codes of conduct under Art. 69. However, everything foreseen there is based on voluntary action.

Beyond this, it must be admitted that the idea behind the EU AI Act is to protect European citizens and safeguard European values vis-à-vis different approaches in other parts of the world. It is indeed envisioned that the principles of the EU AI Act must be adhered to by international actors should they wish to take part in the European market (the so-called "Brussels Effect"). Within these international, geostrategic dynamics, divergent power dimensions are strong and tend to foster the exclusion of marginalised and underrepresented groups. This is why it is crucial to tackle the topic of power dimensions within AI (regulation), which is Global North driven.


For more details about the Global South Perspective on AI, please read our article “Why do we need to speak about digital colonialism?”

Stakeholder Participation and Voluntariness (Art. 69)

One could argue that Art. 69 of the EU AI Act opens a possibility for voluntary co-liberation within the EU AI Act. Although this is the case, we would like to question accessibility, transparency, and funding. Checking the current level of (diverse?) stakeholder participation within the processes of the Act, we see a very homogeneous environment. Although programmes for upskilling talent in EU tech policy, such as Training for Good, exist, a broader approach to educating stakeholders is required. This is why funding programmes from EU Member States need to be established to enable targeted participation at the pace we want the EU AI Act to move forward!


From a feminist perspective, a merely voluntary inclusion of marginalised and underrepresented groups is unacceptable. For example, the inclusion of persons with disabilities is sought in the EU AI Act as follows:


"As signatories to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the European Union and all Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the growing importance and use of AI systems, the application of universal design principles to all new technologies and services should ensure full, equal, and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore essential that Providers ensure full compliance with accessibility requirements, including Directive 2019/882 and Directive 2016/2102 of the European Parliament and of the Council. Providers should ensure compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as possible into the design of the high-risk AI system."

Within the power matrix of patriarchy, people with disabilities can be overlooked. Therefore, wordings like "should ensure" and "integrated as much as possible" will not lead to the change we want to see to create an inclusive feminist future for all. This is why marginalised and underrepresented groups must be prioritised in the standardisation processes!


The Importance of Standardisation:

In general, harmonised standards play a crucial role in filling the gaps left by the EU AI Act, and therefore the responsibility for this falls on the shoulders of standard setters. They have the task of defining tests and metrics to establish the required system benchmarks, as well as outlining the tools and processes for system development. Adhering to these harmonised standards provides an "objectively verifiable" means of complying with the essential requirements of EU legislation. Those who choose to follow these standards benefit from a "presumption of conformity", meaning that meeting the standards is equivalent to fulfilling the essential requirements. On the other hand, those who opt for alternative solutions must typically demonstrate that their approach offers at least the same level of consumer protection as the harmonised standards. Despite being voluntary, these standards carry significant weight.

However, the development of harmonised standards for AI faces one final challenge: technical feasibility. Creating standards for a technology as complex as AI has proven to be a formidable task, and the extent of this challenge is often underestimated.

So how is the AI Act addressing stakeholder participation within the standardisation process?


"Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council1 should be a means for providers to demonstrate conformity with the requirements of this Regulation. To ensure the effectiveness of standards as policy tool for the Union and considering the importance of standards for ensuring conformity with the requirements of this Regulation and for the competitiveness of undertakings, it is necessary to ensure a balanced representation of interests by involving all relevant stakeholders in the development of standards. The standardisation process should be transparent in terms of legal and natural persons participating in the standardisation activities". (Articles 40 to 51, Annexes V - VIII and associated recitals No.61)

In fact, however, this process is extremely exclusive. For a deep dive into the standardisation process, please read our article "A feminist lens on EU AI Act Standardisation".


Based on this, we call for funding from EU Member States for civil society organisations to enable participation and promote equity. Reviewing the participation in the EU AI Act drafting process, we need to ask whether NGOs, digital rights organisations, and feminist activists currently have the knowledge to participate effectively in deep-dive discussions around the standardisation of AI systems. Compared to the existing tech lobby power in Brussels, upskilling people to advocate for human rights and feminist viewpoints is an important point to address from a public perspective.


The organisation ARTICLE 19 commented on this: "(…) [We remain] profoundly concerned by the way in which key decisions around technical standards, which will have huge human rights implications, have effectively been outsourced by policymakers to the European Standardisation Organisations (ESOs), which are not inclusive or multistakeholder, and have limited opportunities for human rights expertise to meaningfully participate in their processes. Ceding so much power to them risks undermining the democratic legitimacy of the process."

The Deep-Fake Porn Issue:

Imagine seeing yourself in a sexually explicit video in which you have never participated. This is a distinct possibility today for a female celebrity or a regular woman living in the age of deepfakes. Regulators should be far more concerned about this dangerous development, yet the EU AI Act does not mention the threat of deep-fake porn.


This example shows how concrete tech regulations must be in certain cases to protect human rights. Threats such as deep-fake porn mostly affect marginalised and underrepresented groups. It goes without saying that deep-fake porn can be a risk regardless of gender, sexual orientation, skin colour, or other traits. In fact, it can be assumed that it will affect women the most, based on existing trends and analyses such as the MIT Technology Review piece "The viral AI avatar app Lensa undressed me—without my consent".


Meanwhile, the regulation of General Purpose AI (GPAI) is an extremely challenging task, no doubt!


GPAI and the EU AI Act:


The debate about GPAI has gained strong momentum within the last months. The first draft of the EU Commission did not include a definition of General Purpose AI; GPAI was simply not part of the agenda back in 2021, although the existence and power of this technology were already obvious. Yet, were we lacking real-life examples? The latest draft provided by the EU Council as well as the expected version by the EU Parliament cover this type of AI system. From a feminist lens, GPAI is a major threat with an unclear outcome.

A growing community of AI researchers believes that training large AI models on "all available data" could lead to Artificial General Intelligence (AGI). We are now in an era of trillion-parameter machine learning models trained on billion-sample datasets collected from the internet. However, concerns have been raised about the generation of these datasets. Critiques point to issues with curation practices, the quality of alt-text data, problematic content in commonly used datasets like CommonCrawl, and biases in large-scale visio-linguistic models such as OpenAI's CLIP model trained on opaque datasets like WebImageText.
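How hard is filtering at this scale in practice? Here is a minimal, hypothetical sketch (the blocklist, function name, and samples below are our own illustrative assumptions, not any real pipeline) of the crude keyword filtering commonly applied to web-scraped alt-text. Even a toy example shows the two failure modes critics point to: harmful content slips past the list, while identity-related content risks being wrongly discarded.

```python
# Illustrative sketch only: a crude blocklist filter over alt-text,
# of the kind often used when cleaning web-scale datasets.
# The word list and samples are hypothetical.

BLOCKLIST = {"explicit", "nsfw"}

def keep(alt_text: str) -> bool:
    """Keep a sample only if its alt-text contains no blocked word."""
    words = set(alt_text.lower().split())
    return not (words & BLOCKLIST)

samples = [
    "two dogs playing in a park",    # kept: harmless
    "nsfw content do not open",      # dropped: caught by the list
    "expl1cit video leaked",         # kept: obfuscated spelling evades the filter
    "queer couple at pride parade",  # kept here, but blunt word filters have been
                                     # shown to wrongly remove identity-related
                                     # content, erasing marginalised groups
]

for s in samples:
    label = "KEEP" if keep(s) else "DROP"
    print(f"{label} | {s}")
```

Scaled to billions of samples, both failure modes compound: no blocklist catches every harmful variant, and every over-broad rule silently deletes the voices of the very groups a feminist lens asks us to centre.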


Considering that the costs and benefits of AI systems are distributed unevenly, with creators benefiting the most while marginalised individuals and communities pay the highest price when AI fails, is it worth paying this price for improved predictive text or semantic search? Should the EU AI Act require an extra assessment of whether a GPAI tool is truly necessary before discussing regulation? Can data sets be effectively filtered at such a massive scale? Is ChatGPT a Feminist?

The "trilogue" itself


As we mentioned at the beginning of this article, a content-based feminist critique, such as on deep-fake porn or General Purpose AI, is as important as analysing the process of the EU AI Act. The "trilogue" is one of the final stages within the legislative process of the EU. The nature of the "trilogue" contrasts with the transparency and co-liberation principles we know from inclusive tech development: the number of participants is limited, and the meetings take place behind closed doors.


The anticipated power of Big Tech and its lobbying budget in Brussels raise the question of who the participants in the "trilogue" are. Organisations advocating for a feminist perspective, human rights activists, and other NGOs lack both financial power and access.


Higher liabilities for developers


Developers are mostly off the hook: duties for developers are largely missing from the Act. The basic problem is: what should we test for? From a regulatory perspective, we want companies to be tested as much as possible. Introducing duties for developers can therefore be a useful tool to protect people from discriminatory risks. This is also important because key influencing factors for bias against marginalised and underrepresented groups are the data used, the algorithms, and the mostly non-diverse development teams. Rules and responsibilities should therefore be distributed among all actors involved in the design, development, and implementation of AI systems.

Once again, this points to the importance of strong standardisation!


Once again: Facial recognition in public spaces should be prohibited


The EU Parliament's suggested prohibition of live remote biometric identification systems represents a significant advancement. While the text doesn't explicitly prohibit retrospective mass surveillance, it restricts its application to law enforcement purposes within strict legal parameters. Additionally, the proposed legislation aims to ban various detrimental uses of AI systems that perpetuate discrimination against marginalised communities. This includes technologies claiming to predict crimes, social scoring systems that hinder access to essential public and private services, and emotion recognition technologies used by law enforcement and border officials to identify suspicious individuals. The prohibition of live remote biometric identification systems is highly recommended from a feminist lens due to its high discriminatory potential. For a deep dive into the biases and threats of facial recognition software, check out this paper.

As part of the 10th Feminist Roundtable on AI and Digital Policy, a group of diverse experts developed a feminist perspective on the EU AI Act. This 75-minute call aimed to open a debate on a feminist perspective on the EU AI Act, which is currently largely absent but much needed. Although this analysis cannot be considered conclusive or holistic, we want to provide a starting point for the further development of a feminist critique that goes beyond the discussion of the definition of AI and high-risk AI systems.


With Feminist AI, we aim to shape the AI of tomorrow in a way that empowers diverse people around the globe. We are committed to bringing intersectional and inclusive feminism to the world of AI to create an inclusive future for all. Feminist AI and Feminist Digital Policy describe an inclusive, intersectional approach to unlocking the potential of AI to create equality and a better life for all. It particularly focuses on the role of women, LGBTIQ+, and other marginalized and underrepresented groups regardless of their gender, age, religion and belief, disability, sexual identity, ethnicity, and appearance. If you have any further insights on the EU AI Act to be added, please reach out to us: info@fem-ai.com

We acknowledge the underrepresentation of sustainability aspects in this analysis. We are going to host a separate Roundtable on Sustainability and AI with regard to the EU AI Act.

Sources:

  • Birhane, Abeba & Prabhu, Vinay & Kahembwe, Emmanuel. (2021). Multimodal datasets: misogyny, pornography, and malignant stereotypes.

  • Guevara-Gómez, Ariana, Ortiz de Zárate Alcarazo, Lucía & Criado, J. Ignacio (2021). Feminist perspectives to artificial intelligence: Comparing the policy frames of the European Union and Spain. Information Polity 26, 173–192. doi:10.3233/IP-200299

  • Wudel, A., Schulz, M. Der Artificial Intelligence Act – eine Praxisanalyse am Beispiel von Gesichtserkennungssoftware. HMD 59, 588–604 (2022). https://doi.org/10.1365/s40702-022-00854-z

  • https://www.lawfareblog.com/eus-ai-act-barreling-toward-ai-standards-do-not-exist

  • https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

  • https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/


