Unmasking Facial Recognition is a report examining the racial bias challenges raised by police use of live facial recognition technology in the United Kingdom.
Through a series of workshops, interviews, roundtables, and desk-based analysis, the project looks beyond the question of accuracy and situates the technology within the broader context of racialised surveillance. It concentrates on the implications of the technology for people of colour and Muslims – two heavily surveilled groups in society.
The project was supported by the Joseph Rowntree Reform Trust, an independent funder focused on political and democratic reform in the UK.
Download and read the report here.
Publication date: 28 August 2020.
What is live facial recognition technology?
Live facial recognition technology is a system that analyses an individual’s face in order to identify them in real time. The technology works by examining facial patterns (e.g. the distance between the eyes, the length of the nose) to create a template of a face, which is then compared against templates held on record. If the comparison yields a match, the system may provide a confidence score, e.g. 90% for a strong match. The threshold separating a strong match from a weak one is set by the entity deploying the system.
There are two types of facial recognition. The first is known as ‘one-to-one’ matching. In this scenario, the system confirms that an image matches a different image of the same person held in a database. This type of facial recognition is used for unlocking smartphones or for checking passports at an airport. The second is known as ‘one-to-many’ matching. These systems are deployed to determine whether the face in an image matches any entry in a database. This is the system used for identifying a person of interest as part of a surveillance strategy, and it is this ‘one-to-many’ system that our report focuses on.
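The matching process described above can be illustrated with a minimal sketch. This is not how any deployed police system is implemented; it assumes, purely for illustration, that a face template is a numeric feature vector and that cosine similarity is the comparison measure, neither of which the report specifies. The watchlist, threshold value, and function names are all hypothetical.

```python
import math

def similarity(template_a, template_b):
    """Cosine similarity between two face templates, treated here
    as plain numeric feature vectors (an illustrative assumption)."""
    dot = sum(x * y for x, y in zip(template_a, template_b))
    norm_a = math.sqrt(sum(x * x for x in template_a))
    norm_b = math.sqrt(sum(x * x for x in template_b))
    return dot / (norm_a * norm_b)

# The threshold for declaring a match is set by the deploying entity.
MATCH_THRESHOLD = 0.90

def one_to_one_match(probe, enrolled):
    """'One-to-one' matching: confirm the probe face matches a single
    enrolled template (e.g. unlocking a phone)."""
    score = similarity(probe, enrolled)
    return score >= MATCH_THRESHOLD, score

def one_to_many_match(probe, watchlist):
    """'One-to-many' matching: search a whole database (watchlist) for
    the best-scoring template, reporting it only above the threshold."""
    best_id, best_score = None, 0.0
    for person_id, template in watchlist.items():
        score = similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= MATCH_THRESHOLD:
        return best_id, best_score  # identified, with a confidence score
    return None, best_score  # no strong match in the database
```

The key difference the sketch makes visible is that one-to-one matching answers "is this the same person?", while one-to-many matching scans everyone in the database and so turns every scanned face into a search query.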