Decades ago, when science fiction writers imagined the practical uses of artificial intelligence, they pictured autonomous digital minds that could serve humanity. Sure, sometimes a HAL 9000 or WOPR would subvert expectations and go rogue, but that was always unintentional, right?

And for many aspects of life, artificial intelligence is delivering on its promise. AI is, as we speak, looking for evidence of life on Mars. Scientists are using AI to try to develop faster and more accurate ways of predicting the weather.

But when it comes to policing, the reality is far less optimistic. Our HAL 9000 does not impose its own decisions on the world. Instead, programs that claim to use AI for policing simply reaffirm, justify, and legitimize the opinions and actions already being taken by police departments.

AI in policing presents two problems: tech-washing and a classic feedback loop. Tech-washing is the process by which proponents of an outcome can defend it as unbiased because it was derived from “math.” The feedback loop is how that math continues to perpetuate historically rooted, harmful outcomes. As one philosopher of science notes, “The problem of using algorithms based on machine learning is that if these automated systems are fed with examples of biased justice, they will end up perpetuating these same biases.”

Far too often, artificial intelligence in policing is fed data collected by police, and therefore can only predict crime in the neighborhoods police are already policing. But crime data is notoriously inaccurate, so policing AI not only misses the crime that happens in other neighborhoods, it also reinforces the idea that the neighborhoods that are already over-policed are exactly the ones where police should direct patrols and surveillance.
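To make that feedback loop concrete, here is a minimal, purely illustrative simulation in Python. It is not any vendor’s actual algorithm, and every number in it is made up: two neighborhoods have the same underlying crime rate, but patrols are allocated in proportion to past recorded arrests, so the neighborhood that starts out over-policed keeps generating more of the data that “justifies” policing it.

```python
import random

random.seed(0)

# Purely hypothetical numbers: both neighborhoods have the SAME underlying
# crime rate, but neighborhood A starts with more recorded arrests because
# it has historically been patrolled more heavily.
TRUE_CRIME_RATE = 0.3
TOTAL_PATROLS = 100
recorded_arrests = {"A": 50, "B": 10}

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    # "Predictive" allocation: patrols follow past arrest counts.
    patrols = {hood: round(TOTAL_PATROLS * count / total)
               for hood, count in recorded_arrests.items()}
    for hood, n_patrols in patrols.items():
        # More patrols mean more crime is observed and recorded, even though
        # the underlying crime rate is identical in both neighborhoods.
        new_arrests = sum(random.random() < TRUE_CRIME_RATE for _ in range(n_patrols))
        recorded_arrests[hood] += new_arrests
    print(f"year {year}: {recorded_arrests}")
```

Run it for a few iterations and the recorded arrest counts keep mirroring the initial patrol bias rather than the identical underlying crime rates: the data ends up “confirming” the very patrol pattern that produced it.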

How AI tech-washes the unjust data created by an unjust criminal justice system is becoming more and more apparent.

In 2021, we got a better glimpse into what “data-driven policing” really means. An investigation by Gizmodo and The Markup showed that the software that put PredPol, now called Geolitica, on the map disproportionately predicts that crime will be committed in neighborhoods inhabited by working-class people, people of color, and Black people in particular. You can read here about the technical and statistical analysis they did to show how these algorithms perpetuate racial disparities in the criminal justice system.

Gizmodo reports that, “For the 11 departments that provided arrest data, we found that rates of arrest in predicted areas remained the same whether PredPol predicted a crime that day or not. In other words, we did not find a strong correlation between arrests and predictions.” This is precisely why so-called predictive policing, or any data-driven policing scheme, should not be used. Police patrol neighborhoods inhabited primarily by people of color, which means these are the places where they make arrests and write citations. The algorithm factors in those arrests and determines that these areas are likely to see crime in the future, justifying a heavy police presence in Black neighborhoods. And so the cycle continues.
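As a rough sketch of the kind of comparison Gizmodo describes, the snippet below groups arrest counts by whether the software issued a prediction for that area on that day. The data frame, column names, and numbers are all invented for illustration; this is not Gizmodo and The Markup’s actual code, dataset, or full methodology.

```python
import pandas as pd

# Invented data and column names -- NOT Gizmodo/The Markup's dataset or code.
# The question: do arrest counts in an area differ between days the software
# predicted crime there and days it did not?
df = pd.DataFrame({
    "area_id":   [1, 1, 1, 1, 2, 2, 2, 2],
    "predicted": [True, False, True, False, True, True, False, False],
    "arrests":   [2, 2, 1, 1, 0, 1, 1, 0],
})

print(df.groupby("predicted")["arrests"].mean())
# If the mean is roughly the same for True and False, the predictions add no
# information about where arrests occur: arrests track where police already go.
```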

This can also occur with other technologies that rely on artificial intelligence, like acoustic gunshot detection, which can send police false-positive alerts reporting gunfire where none occurred.

This year we also learned that at least one so-called artificial intelligence company that received millions of dollars and untold amounts of government data from the state of Utah could not actually deliver on its promises to help direct law enforcement and public services to problem areas.

This is precisely why a number of cities, including Santa Cruz and New Orleans, have banned government use of predictive policing programs. As Santa Cruz’s mayor said at the time, “If we have racial bias in policing, what that means is that the data that’s going into these algorithms is already inherently biased and will have biased outcomes, so it doesn’t make any sense to try and use technology when the likelihood that it’s going to negatively impact communities of color is apparent.”

The fight against irresponsible police use of artificial intelligence and machine learning will continue into next year, and EFF will keep supporting local and state governments in their efforts against so-called predictive or data-driven policing.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2021.