In 2015, Leicestershire Police scanned the faces of 90,000 individuals at a music festival in the UK and checked these images against a database of people suspected of crimes across Europe. This was the first known deployment of Live Facial Recognition (LFR) at an outdoor public event in the UK. In the years since, the surveillance technology has been frequently used throughout the country with little government oversight and no electoral mandate. 

Face recognition presents an inherent threat to individual privacy, free expression, information security, and social justice. It has an egregious history of misidentifying people of color, leading, for example, to wrongful arrests, as well as failing to correctly identify trans and nonbinary people. And even if the technology somehow achieved 100% accuracy overnight, it would still be an unacceptable tool of invasive surveillance, capable of identifying and tracking people on a massive scale.

EFF has spent the last few years advocating for a ban on government use of face recognition in the U.S., and we've watched, and helped, as many municipalities, including some in our own backyard, enacted bans of their own. Now we've seen enough of the technology's use in the UK as well.

That’s why we are calling for a ban on government use of face recognition in the UK. We are not alone. London-based civil liberties group Big Brother Watch has been driving the fight to end government use of face recognition across the country. Human rights organization Liberty brought the first judicial challenge against police use of live facial recognition, on the grounds that it breached the Human Rights Act 1998. The government’s own privacy regulator raised concerns about the technical bias of LFR technology, the use of watchlist images with uncertain provenance, and ways that the deployment of LFR evades compliance with data protection principles. And the first independent report commissioned by Scotland Yard challenged police use of LFR as lacking an explicit legal basis and found the technology to be inaccurate in 81% of cases. The independent Ryder Review likewise recommended suspending LFR in public places until further regulations are introduced.

What Is the UK’s Current Policy on Face Recognition? 

Make no mistake: police forces across the UK, like police in the U.S., are using live face recognition. That means full-on, Minority Report-style, real-time attempts to match the faces of people walking down the street against databases of photographs, including suspect photos.

Of the five forces that have used the technology in England and Wales, the silent rollout has been driven primarily by London’s Metropolitan Police (better known as the Met) and South Wales Police, which covers the over-one-million-person metro area of Cardiff. The technology, often supplied by Japanese tech company NEC Corporation, scans every face that passes a camera and checks it against a watchlist of people suspected of crimes or otherwise court-involved. Successful matches have resulted in immediate arrests. Six police forces in the UK also use Retrospective Facial Recognition (RFR), which compares images obtained by a camera to a police database, but not in real time. Police Scotland has reported its intention to introduce LFR by 2026. By contrast, the Police Service of Northern Ireland apparently has not obtained or implemented face recognition to date.
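
To make concrete what "scanning every face against a watchlist" involves, here is a minimal, hypothetical sketch of the matching step at the heart of systems like these. Everything in it is an illustrative assumption on our part, not NEC's actual implementation: the embedding representation, the similarity measure, the function names, and the alert threshold are all invented for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> str | None:
    """Return the watchlist ID whose enrolled embedding best matches `face`,
    or None if no comparison clears the (assumed) alert threshold."""
    best_id, best_score = None, threshold
    for person_id, enrolled in watchlist.items():
        score = cosine_similarity(face, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Every passer-by is scanned; any score above the threshold raises an alert.
# A lower threshold catches more genuine matches but also flags more
# innocent look-alikes, which is how mass scanning produces false positives.
```

The threshold is the crux: set it low enough to reliably catch wanted faces in a crowd, and innocent look-alikes start triggering alerts too.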

Unfortunately, the expanding rollout of this dangerous technology has evaded legislative scrutiny in Parliament. Police forces are unilaterally making the decisions, including whether to adopt LFR and, if so, what safeguards to implement. And earlier this year the UK Government rejected a House of Lords report calling for the introduction of regulations and mandatory training to counter the negative impact that the current deployment of surveillance technologies has on human rights and the rule of law. The evidence that the rules around face recognition need to change is there; many are simply unwilling to see it or to do anything about it.

Police use of facial recognition was subject to legal review in an August 2020 court case brought by a private citizen against South Wales Police. The Court of Appeal held that the force’s use of LFR was unlawful insofar as it breached privacy rights, data protection laws, and equality legislation. In particular, the court found that the police had too much discretion in determining the location of video cameras and the composition of watchlists.

In light of the ruling, the College of Policing published new guidance: images placed on databases should meet proportionality and necessity criteria, and police should only use LFR when other “less intrusive” methods are unsuitable. Likewise, the then-UK Information Commissioner, Elizabeth Denham, issued a formal opinion warning against law enforcement using LFR for reasons of efficiency and cost reduction alone. Guidance has also been issued on police use of surveillance cameras, most notably the December 2020 Surveillance Camera Commissioner’s guidance for LFR and the January 2022 Surveillance Camera Code of Practice for technology systems connected to surveillance cameras. But these do not provide coherent protections for the individual right to privacy.

London’s Met Police 

Across London, the Met Police uses LFR by bringing a van with mounted cameras to a public place, scanning faces of people walking past, and instantly matching those faces against the Police National Database (PND). 

Images on the PND are predominantly sourced from people who have been arrested, including many individuals who were never charged or were cleared of any crime. In 2019, the PND reportedly held around 20 million facial images. According to one report, 67 people requested that their images be removed from police databases; only 34 of those requests were accepted, 14 were declined, and the remainder were pending. Yet the High Court ruled as far back as 2012 that the biometric details of innocent people were unlawfully held on the database.

This means that once a person is arrested, even if they are cleared, they remain a “digital suspect,” with their face searched again and again by LFR. This violation of privacy rights is exacerbated by data sharing between police forces. For example, a 2019 police report detailed how the Met and British Transport Police shared images of seven people with the King’s Cross Estate for a secret face recognition scheme that ran between 2016 and 2018.

Between 2016 and 2019, the Met deployed LFR 12 times across London. The first came at Notting Hill Carnival in 2016, the UK’s biggest African-Caribbean celebration, where one person was falsely matched. At Notting Hill Carnival in 2017, two people were falsely matched and a third was correctly matched but no longer wanted. Big Brother Watch reported that at the 2017 Carnival, LFR cameras were mounted on a van behind an iron sheet, making the deployment semi-covert. Face recognition software has been shown to misidentify ethnic minorities, young people, and women at higher rates. Deployments in spaces like Notting Hill Carnival, where the majority of attendees are Black, exacerbate concerns about the inherent bias of face recognition technologies and the ways that government use amplifies police powers and aggravates racial disparities.

After suspending deployments during the COVID-19 pandemic, the force has since resumed its use of LFR across central London. On 28 January 2022, one day after the UK Government relaxed mask-wearing requirements, the Met deployed LFR with a watchlist of 9,756 people. Four people were arrested, including one who was misidentified and another who was flagged based on outdated information. Similarly, a 14 July 2022 deployment outside Oxford Street tube station reportedly scanned around 15,600 people and resulted in four “true alerts” and three arrests. The Met has previously admitted to deploying LFR in busy areas in order to scan as many people as possible, despite face recognition data being prone to error. This can implicate people for crimes they haven’t committed.

The Met also recently purchased significant amounts of face recognition technology for Retrospective Facial Recognition (RFR) to use alongside its existing LFR system. In August 2021, the Mayor of London’s office approved a proposal permitting the Met to expand its RFR technology as part of a four-year deal with NEC Corporation worth £3,084,000. And whilst LFR is not currently deployed through CCTV cameras, RFR compares images from national custody databases with already-captured images from CCTV cameras, mobile phones, and social media. The Met’s expansion into RFR will enable the force to tap into London’s extensive CCTV network to obtain facial images: one 2020 report ranked London the third most-surveilled city in the world, with over 620,000 cameras, and other estimates put the number in the capital at almost one million. Another report claims that the number of CCTV cameras across the London Boroughs more than doubled between 2011 and 2022.

While David Tucker, head of crime at the College of Policing, said RFR will be used “overtly,” he acknowledged that the public will not receive advance notice if an undefined “critical threat” is declared. Cameras are getting more powerful and the technology is rapidly improving. And with images sourced from hundreds of thousands of cameras, face recognition data is easy for law enforcement to collect and hard for members of the public to avoid.

South Wales Police

South Wales Police were among the first forces to deploy LFR in the UK. They have reportedly used the surveillance technology more frequently than the Met, with a June 2020 report revealing more than 70 deployments. Two of these led to the August 2020 court case discussed above. In response to the Court of Appeal’s ruling, South Wales Police published a briefing note stating that it had also used RFR to process 8,501 images between 2017 and 2019, identifying 1,921 individuals suspected of committing a crime in the process.

South Wales Police have primarily deployed their two flagship facial recognition projects, LOCATE and IDENTIFY, at peaceful protests and sporting events. LOCATE was first deployed in June 2017 during UEFA Champions League Final week and led to the first arrest using LFR, alongside 2,297 false positives out of 2,470 ‘potential matches’, a false-positive rate of roughly 93%. IDENTIFY, launched in August 2017, instead utilizes the Custody Images Database and allows officers to retrospectively search CCTV stills or other media to identify suspects.

South Wales Police also deployed LFR during peaceful protests at an arms fair in March 2018. The force compiled a watchlist of 508 individuals from its custody database who were wanted for arrest, plus a further six people who had been “involved in disorder at the previous event.” No arrests were made. Similar trends are evident in the United States, where face recognition has been used to target people engaging in protected speech, such as deployments at protests surrounding the death of Freddie Gray. Free speech and the right to protest are essential civil liberties, and government use of face recognition at these events chills free speech, harms entire communities, and violates individual freedoms.

In 2018 the UN Special Rapporteur on the right to privacy criticized the Welsh police’s use of LFR as unnecessary and disproportionate, and urged the government and police to implement privacy assessments prior to deployment to offset violations of privacy rights. The force maintains that it is “absolutely convinced that Facial Recognition is a force for good in policing in protecting the public and preventing harm.” This is despite the fact that face recognition performs worse as the number of people in the database grows: as the likelihood of similar-looking faces increases, matching accuracy decreases.
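
The relationship between database size and error is simple probability, and can be sketched in a few lines. This is a back-of-the-envelope illustration only: the per-comparison false match rate below is an invented figure, and treating comparisons as independent is a simplification, but the compounding effect is the point.

```python
# Rough illustration of why bigger databases mean more false matches.
# The 1-in-10,000 per-comparison false match rate is an assumed,
# illustrative number, not a measured property of any deployed system.

def p_false_alert(false_match_rate: float, database_size: int) -> float:
    """Probability that an innocent face wrongly matches at least one
    of `database_size` enrolled faces, assuming independent comparisons."""
    return 1 - (1 - false_match_rate) ** database_size

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} enrolled faces -> "
          f"{p_false_alert(1e-4, n):.1%} chance of a false alert")
```

Under these assumed numbers, an innocent passer-by has about a 1% chance of a false alert against a 100-person watchlist, but roughly a 63% chance against a 10,000-person database. Every face added to the database raises the odds that someone innocent gets flagged.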

The Global Perspective

Previous legislative initiatives in the UK have fallen off the policy agenda, and calls from inside Parliament to suspend LFR pending legislative review have been ignored. In contrast, European policymakers have advocated for an end to government use of the technology. The European Parliament recently voted overwhelmingly in favor of a non-binding resolution calling for a ban on police use of facial recognition technology in public places. In April 2021, the European Data Protection Supervisor called for a ban on the use of AI for automated recognition of human features in publicly accessible spaces as part of the European Commission’s legislative proposal for an Artificial Intelligence Act. Likewise, in January 2021 the Council of Europe called for strict regulation of the technology and noted in its new guidelines that face recognition should be banned when used solely to determine a person’s skin color, religious or other belief, sex, racial or ethnic origin, age, health, or social status. Civil liberties groups have also called on the EU to ban biometric surveillance on the grounds that it is inconsistent with EU human rights protections.

The United States Congress continues to debate ways of regulating government use of face surveillance. Meanwhile, U.S. states and municipalities have taken it upon themselves to restrict or outright ban police use of face recognition technology. Cities across the United States, large and small, have stood up to this invasive technology by passing local ordinances banning its use. If the UK passes strong face recognition rules, they would serve as an example for governments around the world, including the United States.

Next Steps

Face recognition is a dangerous technology that harms privacy, racial justice, free expression, and information security. And the UK’s silent rollout has enabled unregulated government surveillance built on this personal biometric data. Please join us in demanding a ban on government use of face recognition in the UK. Together, we can end this threat.