AI systems are already having a far-reaching impact on our lives. They’re increasingly being used to monitor and identify us in public spaces, predict our likelihood of criminality, redirect policing and immigration control to already over-surveilled areas, facilitate violations of the right to claim asylum, and predict our emotions and categorise us. They are also used to make crucial decisions about us, for example who gets access to welfare schemes.
Without proper regulation, these systems will exacerbate existing societal harms: mass surveillance, structural discrimination, and the centralised power of large technology companies.
The AI Act is a crucial opportunity to regulate this technology and to prioritise people’s rights over profits. Through this legislation, the EU must ensure that AI development and use is accountable, publicly transparent, and that people are empowered to challenge harms:
- Empower affected people by upholding a framework of accountability, transparency, accessibility and redress
This includes requiring fundamental rights impact assessments before high-risk AI systems are deployed, registration of high-risk systems in a public database, horizontal and mainstreamed accessibility requirements for all AI systems, a right to lodge complaints when people’s rights are violated by an AI system, and rights to representation and effective remedy.
- Limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities
When AI systems are used for law enforcement, security and migration control, there is an even greater risk of harm and violations of fundamental rights, especially for already marginalised communities. Clear red lines are needed to prevent these harms, including bans on all forms of remote biometric identification, predictive policing systems, and individual risk assessment and predictive analytics systems in migration contexts.
- Push back on Big Tech lobbying and remove loopholes that undermine the regulation
For the AI Act to be effectively enforced, negotiators need to push back against Big Tech’s lobbying efforts to undermine the regulation. This is especially important for the risk classification of AI systems. Classification must be objective and must not leave room for AI developers to self-determine whether their systems are ‘significant’ enough to count as high-risk and require legal scrutiny. Tech companies, with their profit-making incentives, will always want to under-classify their own AI systems.