Governments and corporations are tracking how we go about our lives with a unique marker that most of us cannot hide or change: our own faces. Across the country, communities are pushing back with laws that restrain this dangerous technology. In response, some governments and corporations are claiming that these laws should only apply to some forms of face recognition, such as face identification, and not to others, such as face clustering.
We disagree. All forms of face recognition are a menace to privacy, free speech, and racial justice. This post surveys the various kinds of face recognition and explains why laws must address them all.
What Is Face Recognition?
At the most basic level, face recognition technology takes images of human faces and tries to extract information about the people in them.
Here’s how it usually works today:
First, the image is automatically processed to identify what is and is not a face. This is often called “face detection.” This is a prerequisite for all of the more sophisticated forms of face recognition we discuss below. In itself, face detection is not necessarily harmful to user privacy. However, there is significant racial disparity in many face detection technologies.
Next, the system extracts features from each image of a face. The raw image data is processed into a smaller set of numbers that summarize the differentiating features of a face. This is often called a “faceprint.”
Faceprints, rather than raw face images, can be used for all of the troubling tasks described below. A computer can compare the faceprint from two separate images to try and determine whether they’re the same person. It can also try to guess other characteristics (like sex and emotion) about the individual from the faceprint.
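The comparison step above can be sketched in a few lines of code. This is an illustrative sketch only: real systems derive faceprints from learned neural-network embeddings with hundreds of dimensions, and the toy vectors, the cosine similarity measure, and the 0.8 threshold below are all assumptions chosen for demonstration.

```python
def cosine_similarity(a, b):
    """Similarity between two faceprints (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def same_person(a, b, threshold=0.8):
    """Guess whether two faceprints belong to the same person.
    The 0.8 cutoff is an arbitrary illustrative threshold."""
    return cosine_similarity(a, b) >= threshold

# Toy faceprints; a real feature extractor outputs far larger vectors.
photo_1 = [0.9, 0.1, 0.4]     # person A, image 1
photo_2 = [0.85, 0.15, 0.38]  # person A, image 2 (slightly different shot)
photo_3 = [0.1, 0.9, 0.2]     # a different person

print(same_person(photo_1, photo_2))  # True: the prints are close
print(same_person(photo_1, photo_3))  # False: the prints differ
```

The key property, which the toy numbers mimic, is that two images of the same face yield nearby faceprints while different faces yield distant ones; everything a matching system does rests on that.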
The most widely deployed class of face recognition is often called “face matching.” It tries to match two or more faceprints to determine if they are the same person.
Face matching can be used to link photographs of unknown people to their real identities. This is often done by taking a faceprint from a new image (e.g. taken by a security camera) and comparing it against a database of “known” faceprints (e.g. a government database of ID photos). If the unknown faceprint is similar enough to any of the known faceprints, the system returns a potential match. This is often known as “face identification.”
Face matching can also be used to determine whether two faceprints come from the same face, without necessarily knowing whom that face belongs to. For example, a phone may check a user’s face to decide whether it should unlock, often called “face verification.” Likewise, a social media site may scan through a user’s photos to estimate how many unique people appear in them, without identifying those people by name, often called “face clustering.” This technology may be used for one-to-one matches (are two photographs of the same person?), one-to-many matches (does this reference photo match any one of a set of images?), or many-to-many matches (how many unique faces are present in a set of images?). Even without attaching faces to names, face matching can be used to track a person’s movements in real time, for example, around a store or around a city, often called “face tracking.”
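The three matching modes can share a single distance function, which is part of why the distinctions between them are so thin. Here is a hedged sketch: the faceprints, the Euclidean distance metric, the 0.5 threshold, and the greedy clustering method are all simplifying assumptions, not any vendor's actual algorithm.

```python
def distance(a, b):
    """Euclidean distance between two faceprints (smaller = more alike)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

THRESHOLD = 0.5  # assumed cutoff for "probably the same person"

def verify(a, b):
    """One-to-one: are these two photographs of the same person?"""
    return distance(a, b) < THRESHOLD

def identify(unknown, known_db):
    """One-to-many: does this faceprint match anyone in a known database?"""
    best = min(known_db, key=lambda name: distance(unknown, known_db[name]))
    return best if distance(unknown, known_db[best]) < THRESHOLD else None

def cluster(faceprints):
    """Many-to-many: greedily group faceprints that appear to be one person."""
    groups = []
    for fp in faceprints:
        for group in groups:
            if distance(fp, group[0]) < THRESHOLD:
                group.append(fp)
                break
        else:
            groups.append([fp])
    return groups

db = {"Alice": [0.9, 0.1], "Bob": [0.1, 0.9]}
print(verify([0.9, 0.1], [0.88, 0.12]))                      # True
print(identify([0.88, 0.12], db))                            # Alice
print(len(cluster([[0.9, 0.1], [0.88, 0.12], [0.1, 0.9]])))  # 2 groups
```

Note that `verify`, `identify`, and `cluster` differ only in how the comparisons are arranged, not in what is being computed.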
All forms of face matching raise serious digital rights concerns, including face identification, verification, tracking, and clustering. Lawmakers must address them all. Any face recognition system used for “tracking”, “clustering”, or “verification” of an unknown person can easily be used for “identification” as well. The underlying technology is often exactly the same. For example, all it takes is linking a set of “known” faceprints to a cluster of “unknown” faceprints to turn clustering into identification.
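To make that last point concrete, here is a sketch of the linking step, with every name, vector, and threshold invented for illustration: once any one faceprint in an anonymous cluster matches a single labeled faceprint, the entire cluster is identified.

```python
def distance(a, b):
    """Euclidean distance between two faceprints."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

THRESHOLD = 0.5  # assumed cutoff for "same person"

# Output of a face-clustering system: groups of faceprints, each believed
# to be one unnamed person (e.g., from a photo library or camera footage).
clusters = [
    [[0.9, 0.1], [0.88, 0.12], [0.91, 0.09]],  # unnamed person, 3 photos
    [[0.1, 0.9], [0.12, 0.88]],                # unnamed person, 2 photos
]

# A single "known" faceprint, e.g., from an ID-photo database.
known = {"Alice": [0.89, 0.11]}

# The linking step: match any one member of each cluster against the
# known faceprints, and the whole cluster becomes identified.
identified = {}
for i, group in enumerate(clusters):
    for name, known_fp in known.items():
        if any(distance(fp, known_fp) < THRESHOLD for fp in group):
            identified[i] = name

print(identified)  # cluster 0 (all three photos) is now "Alice"
```

One labeled reference photo was enough to attach a name to every photo in the first cluster, which is why a system built "only" for clustering is one database join away from identification.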
Even if face identification technology is never used, face clustering and tracking technologies can threaten privacy, free speech, and equity. For example, police might use face-tracking technology to follow an unidentified protester from a rally to their home or car, and then identify them with an address or license plate database. Or police might use face clustering technology to create a multi-photo array of a particular unidentified protester, and manually identify the protester by comparing that array to a mugshot database, where such manual identification would have been impossible based on a single photo of the protester.
Accuracy, Error, and Bias
In 2019, Nijeer Parks was wrongfully arrested after being misidentified by a facial recognition system. Despite being 30 miles away from the scene of the alleged crime, Parks spent 10 days in jail before police admitted their mistake.
Nijeer Parks is at least the third person to be falsely arrested due to faulty face recognition tech. It’s no coincidence that all three people were Black men. Facial recognition is never perfect, but it is alarmingly more error-prone when applied to anyone who is not a white and cisgender man. In a pioneering 2018 study, Joy Buolamwini and Dr. Timnit Gebru showed that commercial face analysis systems misclassified darker-skinned women at error rates more than 40 times those of lighter-skinned men. More recently, NIST testing of various state-of-the-art face recognition systems confirmed a broad, dramatic trend of disparate “false positive” rates across demographics, with higher error rates for faces that were not white and male.
Furthermore, face identification systems that perform well on laboratory benchmarks, such as identifying well-lit headshots, are usually much less accurate in the real world. Given a more realistic task, like identifying people walking through an airport boarding gate, the same technology performs far worse.
For many reasons, widespread deployment of facial identification, even if it were accurate and unbiased, is incompatible with a free society. But the technology today is far from accurate, and it is deeply biased in ways that magnify the existing systemic racism in our criminal justice system.
We expect that researchers will find the same kinds of unacceptable errors and bias in face tracking and clustering as have already been found in face identification. That is one more reason why privacy laws must address all forms of face recognition.
Another Form of Face Recognition: Face Analysis
Face recognition has many applications beyond matching one faceprint to another. It is also used to try to guess a person’s demographic traits, emotional state, and more, based on their facial features. A burgeoning industry purports to use what is often called “face analysis” or “face inference” to try to extract these kinds of auxiliary information from live or recorded images of faces. Face analysis may be used in combination with other technologies, like eye tracking, to examine your facial reaction to what you are looking at.
Some vendors claim they can use face recognition technologies to assign demographic attributes to their targets, including gender, race, ethnicity, sexual orientation, and age.
It’s doubtful that such demographic face analysis can ever really “work.” It relies on the assumption that differences in facial structure perfectly reflect demographic traits, which in many cases is not true. These demographics are often social constructs, and many people do not fit neatly under societal labels.
When it does “work”, at least according to whoever is deploying it, demographic face inference technology can be extremely dangerous to marginalized groups. For example, these systems allow marketers to discriminate against people on the basis of gender or race. Stores might attempt to use face analysis to steer unidentified patrons towards different goods and discounts based on their gender or emotional state—a misguided attempt whether it succeeds or fails. At the horrific extreme, automatic demographic inference can help automate genocide.
These technologies can also harm people by not working. For example, “gender recognition” will misidentify anyone who does not present traditional gender features, and can harm transgender, nonbinary, gender non-conforming, and intersex people. That’s why some activists are campaigning to ban automated recognition of gender and sexual orientation.
Face analysis also purportedly can identify a person’s emotions or “affect,” both in real-time and on historical images. Several companies sell services they claim can determine how a person is feeling based on their face.
This technology is pseudoscience: at best, it might learn to identify some cultural norms. But people often express emotions differently, based on culture, temperament, and neurodivergence. Trying to uncover a universal mapping of “facial expression” to “emotion” is a snipe hunt. The research institute AI Now cited this technology’s lack of scientific basis and potential for discriminatory abuse in a scathing 2019 report, and called for regulators to ban its use for important decisions about human lives.
Despite the lack of scientific backing, emotion recognition is popular among many advertisers and market researchers. Having reached the limits of consumer surveys, these companies now seek to assess how people react to media and advertisements by video observation, with or without their consent.
Even more alarmingly, these systems can be deployed to police “pre-crime”—using computer-aided guesses about mental state to scrutinize people who have done nothing wrong. For example, the U.S. Department of Homeland Security spent millions on a project called “FAST”, which would use facial inference, among other inputs, to detect “mal-intent” and “deception” in people at airports and borders. Face analysis can also be incorporated into so-called “aggression detectors,” which supposedly can predict when someone is about to become violent. These systems are extremely biased and nowhere near reliable, yet likely will be used to justify excessive force or wrongful detention against whoever the system determines is “angry” or “deceptive.” The use of algorithms to identify people for detention or disciplinary scrutiny is extremely fraught, and will do far more to reinforce existing bias than to make anyone safer.
Some researchers have even gone as far as to suggest that “criminality” can be predicted from one’s face. This is plainly not true. Such technology would unacceptably exacerbate the larger problems with predictive policing.
Mitigating the risks raised by the many forms of face recognition requires each of us to be empowered as the ultimate decision-maker in how our biometric data is collected, used, or shared. To protect yourself and your community from unconsented collection of biometric data by corporations, contact your representatives and tell them to join Senators Jeff Merkley and Bernie Sanders in advocating for a national biometric information privacy act.
Government use of face recognition technology is an even greater menace to our essential freedoms. This is why government agencies must end the practice, full stop. More than a dozen communities from San Francisco to Boston have already taken action by banning their local agencies from using the technology. To find out how you can take steps today to end government use of face recognition technology in your area, visit EFF’s About Face resource page.
For a proposed taxonomy of the various kinds of face recognition discussed in this post, check out this list of commonly used terms.