The UK intelligence agencies are lobbying the government to weaken surveillance laws they argue place a “burdensome” limit on their ability to train artificial intelligence models with large amounts of personal data.
The proposals would make it easier for GCHQ, MI6 and MI5 to use certain types of data, by relaxing safeguards designed to protect people’s privacy and prevent the misuse of sensitive information.
Privacy experts and civil liberties groups have expressed alarm at the move, which would unwind some of the legal protections introduced in 2016 after disclosures by Edward Snowden about intrusive state surveillance.
The UK’s spy agencies are increasingly using AI-based systems to help analyse the vast and growing quantities of data they hold. Privacy campaigners argue rapidly advancing AI capabilities require stronger rather than weaker regulation.
However, a recent but little-noticed review of surveillance powers reveals how the intelligence agencies are arguing for a reduction in the safeguards regulating their use of large volumes of information, known as bulk personal datasets (BPDs).
These datasets often contain information, some of which may be sensitive, about extremely large groups of people, most of whom are unlikely to be of intelligence and security interest.
MI5, MI6 and GCHQ frequently use BPDs that are drawn from a wide range of closed and open sources and can also be acquired through covert means.
The agencies, which argue these datasets help them identify potential terrorists and future informants, want to relax the rules governing their use of BPDs in which they believe people have a “low or no expectation of privacy”.
The proposed changes were presented to David Anderson, a senior barrister and member of the House of Lords, whom the Home Office commissioned earlier this year to independently review changes to the Investigatory Powers Act.
In his findings, Lord Anderson said the agencies’ proposals would replace existing safeguards, which include a requirement for a judge to approve examination and retention of BPDs, with a quicker process of self-authorisation.
Anderson said the agencies had used AI for many years and were already training machine-learning models with BPDs. He said significant increases in the type and volume of the datasets meant machine learning tools “are proving useful” to British intelligence.
But he said the existing regulations relating to BPDs were perceived by the agencies as “disproportionately burdensome” when applied to “publicly available datasets, specifically those containing data in respect of which the subject has little or no reasonable expectation of privacy”.
The intelligence services have argued this information should be placed into a new category of BPDs which, according to Anderson, could include content from video-sharing platforms, podcasts, academic papers, public records, and company information.
The cross-bench peer concluded the law should be amended to create “a less onerous set of safeguards” for the new category of BPDs and said the “deregulatory effect of the proposed changes is relatively minor”.
However, he recommended retaining a degree of ministerial and judicial oversight in the process, rather than allowing intelligence officers alone to decide which BPDs are placed into the new category.
While considering how the intelligence services would use the new category of BPDs, Anderson acknowledged that it seemed the “use of data for training models might be a factor pointing towards a lower level of oversight”.
Last week, during a Lords debate about AI, Anderson said that “in a world where everybody is using open-source datasets to train large language models” the intelligence agencies are “uniquely constrained” by the current legislation.