
A feminist view on EU AI Act standardization processes

Updated: Jun 15, 2023



Under the EU AI Act, the debate around the standardization of artificial intelligence (AI) is growing louder. Decisions are currently being made that will shape our future society and the way we live together. AI has become an integral part of everyday life, even where it remains invisible: ChatGPT is only the most prominent example; voice assistants such as Siri and Alexa have long since arrived in the children's room. In these times of socio-technological convergence, regulation and standardization serve as necessary instruments to ensure the protection of human rights in the use of AI systems.

The EU AI Act is a proposed regulation that aims to create a single legal framework for the development, deployment, and trade of AI in the European Union. The EU states its goal as follows:


"The way we approach Artificial Intelligence (AI) will define the world we live in the future. To help building a resilient Europe for the Digital Decade, people and businesses should be able to enjoy the benefits of AI while feeling safe and protected."[1]


The EU AI Act assigns AI systems to three risk categories:

(1) prohibited applications: systems that pose an unacceptable risk;

(2) high-risk applications, such as a resume-scanning tool that ranks job applicants, which are subject to special legal requirements;

(3) all remaining applications, which are neither prohibited nor classified as high-risk and stay largely unregulated.
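
To make this tiered logic concrete, here is a minimal sketch in Python. The category names, example use cases, and lookup table are our own illustration, not wording from the regulation; a real classification is a legal assessment, not a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical labels for the three tiers described above."""
    PROHIBITED = "unacceptable risk - banned outright"
    HIGH_RISK = "permitted, but subject to special legal requirements"
    MINIMAL = "largely unregulated"

# Illustrative mapping only: the real classification is a legal
# assessment of the system's intended use, not a lookup table.
EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "resume scanner ranking job applicants": RiskTier.HIGH_RISK,
    "spam filter": RiskTier.MINIMAL,
}

def risk_tier(use_case: str) -> RiskTier:
    # Anything not explicitly prohibited or high-risk falls into the
    # residual, largely unregulated tier.
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)

for case in EXAMPLE_CLASSIFICATION:
    print(f"{case!r} -> {risk_tier(case).name}")
```

The default in the lookup mirrors the structure of the Act itself: the third tier is a residual category, which is exactly why so much depends on where the boundaries of the first two are drawn.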


Although the positions in the trilogue between the Council of the European Union, the EU Parliament, and the EU Commission on the EU AI Act have not yet been finalized, it can be assumed that compromises will be made at the expense of human rights, given the strong influence of tech companies in Brussels and the concern that the EU AI Act will restrict innovation too much. Meanwhile, the bulk of the EU AI Act appears to have been decided.[2]


For example, remote identification through facial recognition software, which the German coalition agreement explicitly commits to banning, was not classified as prohibited AI in the EU AI Act.

An article by AlgorithmWatch analyzes this topic under the title "How the German government decided not to protect people against the risks of AI". Even for AI applications in the context of migration and immigration, respect for human rights seems to rank behind innovation and the idea of progress. These observations invite speculation as to whether labels such as ethical AI, people-centric AI, and trustworthy AI sufficiently focus on protecting the most marginalized and underrepresented groups.


Has the EU AI Act become an instrument for ethics-washing? Could a feminist perspective that analyzes power relations and focuses on the most marginalized and underrepresented groups have changed the outcome of the EU AI Act?

In the context of the EU AI Act process, we have arrived at the question "How do we bring principles into practice?" - a call for standardization approaches. How do we measure AI systems? What scales will be used for evaluation? What are the goals of the standard? Are the goals set high enough? Who is involved in standardization processes, and who is not?


In principle, the standardization of AI systems should be seen as an opportunity, beyond their risk classification. Standards require simplification and clear rules. Standardization can therefore be regarded as a necessary means of protection in the digital space, even if classifying systems into evaluation clusters entails simplifications that reduce diversity criteria. A feminist perspective should reveal power relations. In addition, feminism aims to set new standards that, on the one hand, guarantee the protection of marginalized and underrepresented groups and, on the other hand, develop a positive future narrative as a target image. Neither aspect is pronounced in current standardization concepts.

Observations from recent months show that the involvement of civil society and NGOs - especially from the Global South - is not guaranteed. An inclusive, feminist perspective that raises foreign and development policy to a new level of quality[3] hardly exists so far. Therefore, FemAI - Center for Feminist Artificial Intelligence has engaged in a substantive discussion of AI standardization approaches from a feminist perspective.

Over the last five months, a digital expert panel has been established to ensure the inclusion of the most marginalized[4], underrepresented groups that are largely absent from current legislative and standardization processes. Not least because of the high potential for discrimination in the use of AI systems (facial recognition software, HR processes, medicine, ...), excluding key stakeholders such as NGOs and organizations representing diversity and inclusion is neither timely nor purposeful.



Together, the Fellows of the Feminist AI and Digital Policy Roundtable explored the processes, criteria, and content of the AI standardization world. Among other things, they discussed the relationship between AI standardization and the EU AI Act.

Although the EU AI Act names various evaluation criteria for AI systems in its draft, it is not, in its current form, a concrete evaluation framework. Beyond the text of the regulation, it is therefore necessary to define concrete, measurable, harmonized characteristics for assessing AI systems under ethical aspects. This measurability can be created through standards. Just as with the power consumption of electrical appliances or the traffic-light food label, AI systems should be evaluated with the help of a scale. Which individual measured values enter the assessment, and with what weight, is a key debate in the current standardization process. These decisions are guided by the EU AI Act and thus depend on the people participating in the current standardization debates. In this context, the committees of the EU AI Act process are essential information carriers that help shape the content of standardization. Again, we critically ask: who gets a voice in this process, and who does not?
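
As a thought experiment, the following sketch shows how such a scale could aggregate individual measured values into a traffic-light-style grade. The criteria, weights, and thresholds are invented for illustration and are not drawn from any existing standard:

```python
# Thought experiment: aggregate hypothetical per-criterion scores
# (0.0 = worst, 1.0 = best) into a single traffic-light grade,
# analogous to the energy label on electrical appliances.
# Criteria, weights, and thresholds are invented for illustration.

WEIGHTS = {
    "transparency": 0.3,
    "non_discrimination": 0.4,
    "human_oversight": 0.3,
}

def traffic_light(scores: dict) -> str:
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    if total >= 0.8:
        return "green"
    if total >= 0.5:
        return "yellow"
    return "red"

# Example: a hypothetical resume-scanning tool that documents its model
# well but shows measurable bias against some applicant groups.
example = {"transparency": 0.9, "non_discrimination": 0.3, "human_oversight": 0.6}
print(traffic_light(example))  # 0.27 + 0.12 + 0.18 = 0.57 -> "yellow"
```

The politically contested part is precisely what this code takes as given: which criteria enter the sum, how they are weighted, and where the thresholds lie is decided by whoever sits at the standardization table.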


The standardization of AI systems under the EU AI Act is one aspect of the so-called harmonization approaches. These are intended to help create a uniform legal framework for the use of AI in the EU that ensures the protection of fundamental rights and public safety while promoting innovation and growth. The EU AI Act provides for several such approaches to harmonizing legislation.


This article focuses on the harmonization of testing and certification procedures. The EU AI Act aims to introduce uniform testing and certification procedures for AI systems to ensure that they meet the requirements of the regulation; standards are needed for this. The Act is also intended to introduce uniform transparency and information obligations for those who develop, operate, or provide AI systems, ensuring that data subjects are informed about the use of AI and that their rights are safeguarded.


Representation in committees and decision-making functions

That it makes sense to include people from marginalized and underrepresented groups in decision-making processes is not a new insight. Unfortunately, reliable figures on the composition of the EU AI Act standardization bodies are not known - and are not subject to publication. A 100% male quota in decision-making processes that affect society as a whole is not acceptable in 2023! Both transparency and representation are necessary at this point to make the results of the standardization debate inclusive and purposeful.


Wording could be improved: Justice instead of fairness

Making fairness measurable is a challenge from a feminist perspective. When we ask for fairness, we look at the world from a humanistic perspective. This ignores the fact that power structures prevail in our system. Patriarchal structures must not be reinforced by the standardization of AI systems. Rather, our standards must ensure that power structures and potential for discrimination are reduced. This line of reasoning underscores the need for high standards. After all, nothing is more dangerous to the preservation of our democracy than poor, inadequate, and ultimately ethics-washing[5] guidelines. Instead of the term fairness, we propose the term justice. Justice recognizes that people are not on a level playing field. Justice[6] argues for the need for human intervention in data, processes, and structures along the AI lifecycle and recognizes power structures.
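
To show what "making fairness measurable" often means in practice, here is a minimal sketch of one widely used group metric, the demographic parity difference, computed on invented toy data. The point of this section is precisely that such a single number flattens the power structures behind the data:

```python
# One common way "fairness" is made measurable: the demographic parity
# difference, i.e. the gap in positive-outcome rates between two groups.
# Toy data, invented for illustration; a single number like this is
# exactly the kind of simplification questioned above.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# 1 = shortlisted, 0 = rejected, for two applicant groups
group_a = [1, 1, 0, 1, 0, 1]
group_b = [1, 0, 0, 0, 1, 0]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity difference: {gap:.2f}")  # 0.67 - 0.33 = 0.33
```

A standard built on justice rather than fairness would not stop at reporting this gap; it would ask why the underlying data looks the way it does and mandate human intervention where the data encodes discrimination.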


Reducing commercialization: are we all a target group?

Participation in EU AI Act discussions and standardization bodies can be read as a privilege. National funding of participation opportunities remains largely absent. Funding of civil society representatives in the various bodies by the federal government would therefore be conceivable. The problem is often not that the targeted, non-participating groups are explicitly excluded, but that they are not provided with the financial resources to engage in the process. At the same time, it must be recognized that not all people have the personal resources to volunteer. Marginalized groups often face disadvantages in aspects such as wages, care work, and language. It is a multifactorial, intersectional problem. BIPOC, trans*, inter*, and non-binary folks, as well as people with physical disabilities, who also have to fight for a place in the gender data debate, are included far too little in policies, processes, and measures.


This non-exhaustive analysis shows that the inclusion of diverse perspectives is essential to make standardization approaches inclusive and to expose power relations. With this article, we call on standardization organizations and EU policymakers to strengthen the inclusion of civil society, of experts outside of technical implementation, and of feminist voices. This requires financial resources, transparency, and courage, with which we can create standardization under the EU AI Act that meets the regulation's mandate: the European approach to AI will help build a resilient Europe for the Digital Decade with people at the center!



Sources:



[4] Marginalization is a social process in which population groups are pushed to the "fringes of society" and thus have little opportunity to participate in economic, cultural, and political life. The term applies to groups of people who, due to factors normally beyond their control, do not have the same opportunities as other, more fortunate groups in society. Examples include the unemployed, refugees, and other socially excluded people. Marginalization involves the loss of resources, opportunities for influence, and status, and can affect mental and physical health. If the marginalized group is a minority, one can also speak of minority stress when describing the psychological and physical consequences. But marginalization does not only affect minorities. For example, in a patriarchal society, femininity is marginalized even though women are not a minority.


