150 civil society organisations are calling on the European Parliament, the European Commission and the Council of the EU to put people and their fundamental rights first in the AI Act as EU institutions proceed to ‘trilogue’ negotiations. These decisive meetings will determine the final legislation and how much it centres human rights and the concerns of people who could be affected by ‘risky’ AI systems.
AI systems are already having a far-reaching impact on our lives. They are increasingly being used to monitor and identify us in public spaces, predict our likelihood of criminality, redirect policing and immigration control towards already over-surveilled areas, facilitate violations of the right to claim asylum, and predict our emotions and categorise us. They are also used to make crucial decisions about us, for example who gets access to welfare schemes.
Without proper regulation, these systems will exacerbate existing societal harms: mass surveillance, structural discrimination, and the centralisation of power in the hands of large technology companies.
The AI Act is a crucial opportunity to regulate this technology and to prioritise people’s rights over profits. Through this legislation, the EU must ensure that AI development and use are accountable and publicly transparent, and that people are empowered to challenge harms.
This includes requiring fundamental rights impact assessments before deploying high-risk AI systems, registering high-risk systems in a public database, establishing horizontal and mainstreamed accessibility requirements for all AI systems, and guaranteeing a right to lodge complaints when people’s rights are violated by an AI system, along with rights to representation and effective remedies.
For the AI Act to be effectively enforced, negotiators need to push back against Big Tech’s lobbying efforts to undermine the regulation. This is especially important for the risk classification of AI systems: the classification must be objective and must not leave room for AI developers to self-determine whether their systems are ‘significant’ enough to count as high-risk and warrant legal scrutiny. Tech companies, with their profit-making incentives, will always want to under-classify their own AI systems.
Drafted by: European Digital Rights, Access Now, Algorithm Watch, Amnesty International, Bits of Freedom, Electronic Frontier Norway (EFN), European Center for Not-for-Profit Law (ECNL), European Disability Forum, Fair Trials, Homo Digitalis, Irish Council for Civil Liberties (ICCL), Panoptykon Foundation, Platform for International Cooperation on the Rights of Undocumented Migrants (PICUM).
Read and download the full statement at https://edri.org/our-work/civil-society-statement-eu-protect-peoples-rights-in-the-ai-act-trilogue-negotiations/