This article builds on our previous publication.
In a world where technology is reshaping the way we interact, the EU AI Act has emerged as a significant milestone in ensuring ethical AI development. From a feminist standpoint, it is imperative to scrutinize the contentious facets of this legislation through a lens that prioritizes marginalized, under-represented, and under-privileged groups. This article dives into the most widely debated areas of the EU AI Act, dissecting its implications within the framework of feminist principles, after establishing the distinctions between the governing institutions involved.
The key arguments of this article are based on the 11th Feminist AI and Digital Policy Roundtable Talk. Our speaker, Dr. Marco Wedel, is a political scientist at TU Berlin. For the past three years, Marco has been involved in a research project called KIDD, which focuses on the implementation of AI systems in companies in the service of diversity. As part of this project, Marco has been following the development of the EU AI Act. The discussion following his presentation involved people with diverse backgrounds and advanced knowledge of the EU AI Act.
In this article, we want to shed light on the areas which are most controversial in the legislative process from a feminist perspective:
1. Defining AI: Understanding the Essence
2. Identifying Prohibited Areas of Application
3. Defining High-Risk AI Applications
4. The Threat of General Purpose AI
5. Defining "Good Data": A Feminist Inquiry
6. Empowering the AI Office: Paving the Path from What to How
7. Rejecting Voluntary Aspects: A Feminist Critique
The primary objective of this article is to employ a feminist lens to scrutinize and illuminate the various dimensions of the EU AI Act. By weaving feminist principles into the fabric of AI legislation, the article aims to put marginalized, under-represented and under-privileged groups into focus. The following seven areas will show how dimensions of power influence the drafting of the EU AI Act, and in doing so we want to remind people of the importance of an intersectional feminist perspective.
1. Defining AI: Understanding the Essence
At the heart of the EU AI Act lies the definition of AI itself. From a feminist perspective, this classification matters, as it shapes the future landscape where intersectional dynamics interact with technology. Emphasizing transparency in AI's capabilities and limitations aligns with the feminist commitment to dispelling obscurities, enabling women and marginalized groups to better comprehend and navigate these systems.
The variances in the AI definitions proposed by the different institutions reflect their nuanced responses to an evolving technological landscape. Let's delve into these developments, tracing their evolution from 2021 to 2023.
In 2021, the Commission's portrayal of AI encompassed a broad spectrum, encapsulating machine learning (ML), logic-based approaches, and statistical methods. This deliberately wide definition mirrored the diverse methodologies harnessed in the realm of artificial intelligence.
Fast forward to November 2022, when the Council, in response to the expansive Commission definition, adopted a more refined stance. AI was redefined as a "system" hinging on machine learning or logic- and knowledge-based approaches. This revision introduced the novel concept of General Purpose AI systems, carving out an autonomous dimension within the AI discourse.
In May 2023, the Parliament brought forth an even more expansive definition than the Commission's, framing AI as a machine-based system that generates outputs. Notably, this definition called attention to the imperative of risk delineation and the inclusion of foundation models in AI considerations.
Not stopping there, the Parliament also illuminated the societal, provider, and employer perspectives, emphasizing the significance of AI literacy. This multifaceted approach underscored the need for comprehensive understanding, extending beyond the technological sphere.
As we observe this dynamic progression of AI definitions, we can discern the conscious attempts by each institution to capture the essence of AI within the ever-evolving technological dynamics. This evolution reflects not only the maturation of the technology but also the deepening understanding of its multifarious impacts on society, underscoring the critical role of legislative bodies in shaping the AI landscape.
As these definitions continue to evolve, feminist engagement remains crucial in ensuring that AI development promotes gender equality, inclusivity, and the dismantling of existing biases. The Parliament's emphasis on AI literacy from societal, provider, and employer perspectives aligns with feminist aims. This holistic approach acknowledges the multidimensional nature of AI's impact, including its potential to amplify or mitigate existing inequalities.
2. Identifying Prohibited Areas of Application
The EU AI Act outlines areas where AI application is restricted, a crucial domain for feminist analysis. By acknowledging that certain applications can exacerbate inequalities, the act addresses concerns such as surveillance and discrimination. This resonates with feminism's focus on protecting vulnerable groups from harm and exploitation, especially when technology can amplify existing power imbalances.
Within the intricate landscape of AI, certain applications have raised concerns about the potential for psychological harm, social scoring, and widespread biometric recognition in public spaces (with carve-outs for law enforcement use). These contentious issues are currently subject to debate from various perspectives, sparking essential discussions on potential prohibitions within the realm of artificial intelligence.
In the parliamentary draft, a notable proposition emerges: if an AI system contributes to significant harm, its usage should be prohibited. However, a fundamental challenge persists: how to precisely define and ascertain this harm, a question intimately linked to the process of standardization.
From a feminist vantage point, the precise delineation of "harm" holds profound significance. The question reverberates with the core tenets of gender equality, as it calls for the recognition and mitigation of potential gender-specific harm. Feminists understand that systemic biases can amplify harm for marginalized groups, particularly women. Therefore, the ongoing process of standardizing risk assessment warrants vigilant observation to ensure that gender-specific implications are thoughtfully addressed.
In this evolving discourse, feminist principles spotlight the necessity of a comprehensive, intersectional approach. It is essential to not only scrutinize the immediate physical and psychological ramifications, but also unearth the potential indirect and systemic consequences. A feminist viewpoint underscores the importance of examining harm within broader social, cultural, and historical contexts, thereby striving for a more equitable and just AI landscape.
3. Defining High-Risk AI Applications
In the Commission’s draft, the following areas are defined as high risk:
a) Biometric identification and categorization of natural persons
b) Management and operation of critical infrastructure
c) Education and vocational training
d) Employment, workers management and access to self-employment
e) Access to and enjoyment of essential private services and public services and benefits
f) Law enforcement
g) Migration, asylum, and border control management
h) Administration of justice and democratic processes
The Council's alignment with the Commission's proposed dimensions for defining high-risk AI marks a significant convergence of perspectives. However, a closer inspection reveals nuanced deviations, particularly in alterations to the Annex outlining high-risk application domains. While this convergence is notable, certain Council provisions introduce potential pitfalls that call for feminist analysis.
One such instance pertains to the inclusion of 'accessory' AI systems, a concept that remains ambiguously defined. This introduces a loophole in the high-risk AI definition, potentially permitting systems that might contribute to significant harm to evade scrutiny under this classification. Providers could, in effect, de-risk their own AI systems and thereby face fewer rules, severely compromising the risk-based approach.
Moreover, the Council's stipulation that high-risk regulations need not apply if a provider simply denies that their AI system poses such a risk presents another potential vulnerability. While the Parliament counters this with penalties for misusing this de-risking opportunity, these loopholes create an environment in which AI systems with inherent dangers could evade regulatory measures. The unintended consequence of these provisions is the dilution of risk categorization, possibly undermining the very purpose of safeguarding against hazardous AI for innovation's sake.
In this discourse, feminist perspectives shed light on the broader implications of these loopholes. A gender-sensitive lens emphasizes the importance of comprehensive risk assessment, recognizing that these exceptions might inadvertently exacerbate existing gender inequalities and disproportionately affect marginalized communities. The potential for harm is magnified, particularly if these loopholes enable unchecked AI that perpetuates biases and discrimination.
The definition of a "high-risk" AI should also be regarded from a feminist standpoint. By incorporating considerations of potential biases and discriminatory effects, the act could better safeguard against harmful consequences that disproportionately impact women. A feminist analysis emphasizes the importance of comprehensive risk assessments that account for diverse societal experiences.
As discussions persist, it is paramount to harmonize the definitions and regulations surrounding high-risk AI, considering the multifaceted implications through various lenses, including feminist, ethical, and societal viewpoints. Striking the right balance between innovation and accountability is essential to foster a future where AI empowers without compromising the rights and well-being of individuals, regardless of gender or identity.
4. The Threat of General Purpose AI (GPAI)
The introduction of General Purpose AI (GPAI) into the regulatory landscape has sparked a multifaceted discourse. The Commission's proposal to designate GPAI as a high-risk entity, subject to similar regulations as high-risk systems, adds a layer of complexity. It's important to mention that OpenAI has been notably active in lobbying against this classification.
Meanwhile, the Parliament has outlined a set of overarching principles that GPAI systems should adhere to. These principles include human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination, fairness, and societal and environmental well-being. However, it's crucial to underline that despite these principles, GPAI is not classified as high-risk AI.
The extent to which GPAI will be regulated as high risk emerges as a pivotal question. Through a feminist lens, the necessity for stringent regulation becomes glaringly apparent, particularly when considering the discriminatory potential inherent in GPAI systems. The uncontrollable nature of input data, combined with GPAI's vast scalability, accentuates the risk of marginalizing and harming vulnerable groups.
Feminist analysis emphasizes that strong regulation is compulsory to protect these marginalized groups. It underscores the need to address inherent biases, prevent perpetuation of systemic discrimination, and ensure that GPAI systems prioritize the well-being and rights of all, regardless of gender, identity, or background.
In this pursuit, collaboration among feminist advocates, policymakers, AI experts, and marginalized communities is pivotal. Together, they can pave the way for regulations that safeguard against potential harm, uphold principles of fairness, and contribute to a more equitable and inclusive AI landscape. The decisions made in this context hold the power to shape the future of technology in a way that aligns with feminist ideals of equality, justice, and empowerment.
5. Defining "Good Data": A Feminist Inquiry
When discussing the concept of "good data," a multifaceted examination unfolds. The Commission asserts that data-governance mechanisms should ensure unbiased datasets that are free from errors for tasks like training, validation, and testing. However, applying a feminist lens to this framework reveals its limitations. While these requirements sound laudable, their practical realization is often elusive: perfect datasets do not exist.
Responding to these challenges, the Council introduced a pivotal amendment. They emphasized the necessity to scrutinize biases that could impact health, safety, or result in discrimination, as prohibited by EU law. This sparks a vital question: Does the current EU law adequately address the intricate, multifactorial, and intersectional discriminatory potential embedded in AI systems?
In a bid to refine this discourse, the Parliament provided a more elaborate definition of "good data" compared to its counterparts. While building on the Council's framework, the Parliament adds a crucial dimension—analyzing negative impacts on fundamental rights, particularly where data outputs influence future operations through "feedback loops." Notably, from a feminist standpoint, the introduction of measures to detect, prevent, and mitigate biases is commendable and resonates with the commitment to justice and equity.
Furthermore, the Parliament's stance on training data sets is noteworthy. In addition to validity and relevance, they emphasize the need for data to be "sufficiently representative, appropriately vetted for errors, and as complete as possible in view of intended purposes." This nuanced approach not only acknowledges the practical complexities of data collection but also introduces the potential for positive discrimination—an intriguing prospect from a feminist perspective.
In conclusion, the quest to define "good data" unveils a complex landscape with varying approaches from different institutions. By applying a feminist lens, we illuminate the nuanced intersections of gender, discrimination, and societal implications embedded in these definitions. The evolving discourse offers an opportunity to shape data governance that aligns with feminist ideals of inclusivity, justice, and fairness, ultimately contributing to an AI landscape that empowers and respects all individuals.
6. Empowering the AI Office: Paving the Path from What to How
Within the framework of the EU AI Act, the establishment of an AI Office is a significant development. This entity holds the potential to provide clarity through concrete definitions, translating the Act's overarching principles into actionable guidelines. We find ourselves in a state of excitement and curiosity as we anticipate the implementation details: the "how" of the Act's provisions.
The transition of power from private standardization organizations to a public institution, embodied by the AI Office, is an aspect of this evolution that particularly captures attention. From a feminist perspective, this shift signifies a noteworthy stride towards increased transparency and accountability. This movement aligns with feminist ideals, promoting a more inclusive and participatory approach to shaping AI regulations.
In this context, it is worth delving into the comprehensive debate surrounding this topic. The article titled "Feminist View on EU AI Act Standardization Processes" offers a nuanced exploration of the implications and potential benefits of transferring influence from private entities to a public AI Office. This detailed analysis examines how such a transition may impact gender equality, fairness, and the overall democratization of AI governance.
As we eagerly await the unfolding of the AI Office's role, we anticipate a transformative step toward bridging the gap between the conceptual framework of the EU AI Act and its tangible realization. The journey from "what" the Act envisions to "how" it will be executed embodies a vital phase of progress. Through collective engagement and continued dialogue, we have the opportunity to shape a future AI landscape that embodies the principles of transparency, inclusivity, and equity.
7. Rejecting Voluntary Aspects: A Feminist Critique
From a feminist perspective, the inclusion of certain aspects as voluntary measures within the EU AI Act raises significant concerns. While the Act proposes comprehensive guidelines for responsible AI development, the designation of certain dimensions as voluntary – particularly within Chapter 2, Article 69 – is unacceptable. This approach, while acknowledging key considerations, falls short of ensuring the robust and gender-equitable implementation that feminist principles demand.
Examining the facets that remain voluntary reveals a disconcerting pattern:
Environmental Sustainability: Environmental concerns intersect deeply with feminist ideals, as both movements advocate for a world that prioritizes well-being and justice for all. Treating environmental sustainability as voluntary undermines the urgent need for AI systems to align with eco-conscious values, particularly considering their far-reaching implications on marginalized communities.
Stakeholder Participation: Feminist perspectives emphasize inclusivity and participation. By relegating stakeholder engagement to voluntary status, the Act risks sidelining vital voices and perspectives, including those of women, marginalized groups, and other underrepresented communities. This erodes the principle of democratic decision-making and perpetuates existing power imbalances.
Accessibility for Persons with Disabilities: Central to feminist values is the dismantling of barriers that hinder full participation. Treating accessibility as a voluntary aspect ignores the imperative of accommodating persons with disabilities. This perpetuates exclusionary practices and denies individuals their right to equitable access to AI systems.
Diversity in Development Teams: Feminism champions diversity as a catalyst for innovation and fairness. Allowing diversity in development teams to remain voluntary fails to acknowledge the need for a deliberate and structured approach to fostering gender and intersectional diversity. Without clear objectives, meaningful progress may remain elusive.
The Council’s and the Parliament's reluctance to significantly amend these voluntary aspects is disappointing. While the Act is a notable step forward, the omission of these critical dimensions from mandatory provisions contradicts the spirit of gender equality and social justice that feminism upholds.
A truly feminist lens compels us to demand substantive change, urging for the transformation of these voluntary considerations into binding requirements. Only then can the EU AI Act fully embody the principles of inclusivity, equality, and fairness, ensuring that AI development is indeed equitable and beneficial for all individuals, irrespective of their gender or identity.
Conclusion: A Feminist Vision for the EU AI Act
In the intricate tapestry of the EU AI Act, the application of feminist principles emerges as an indispensable tool for comprehensive evaluation. By weaving these principles into the fabric of the legislative process, we gain a profound understanding of its far-reaching implications. From the definition of AI to the delineation of prohibited applications, and from gender-sensitive risk assessment to data inclusivity, feminists wield a transformative lens to shape a more equitable AI landscape.
As we unfurl the tapestry of the EU AI Act through a feminist lens, we envision a horizon where technology transcends barriers and empowers every individual. This shared commitment fuels our collective determination to mold an AI landscape that honors diversity, fosters equity, and upholds the rights and dignity of all, regardless of gender or identity.