The arrival of government-operated autonomous police robots does not look like predictions in science fiction movies. An army of robots with gun arms is not kicking down your door to arrest you. Instead, a robot snitch that looks like a rolling trash can is programmed to decide whether a person looks suspicious—and then call the human police on them. Police robots may not be able to hurt people like armed predator drones used in combat—yet—but as history shows, calling the police on someone can prove equally deadly. 

Since long before the 1987 movie RoboCop, even before Karel Čapek introduced the word robot in 1920, police have been trying to find ways to be everywhere at once. Widespread security cameras are one solution—but even a blanket of CCTV cameras couldn’t follow a suspect into every nook of public space. Thus, the vision of a police robot continued as a dream, until now. Whether they look like Boston Dynamics’ robodogs or Knightscope’s rolling pickles, robots are coming to a street, shopping mall, or grocery store near you.

The Orwellian menace of snitch robots might not be immediately apparent. Robots are fun. They dance. You can take selfies with them. This is by design. Both police departments and the companies that sell these robots know that their greatest contribution isn’t just surveillance, but also goodwill. In one brochure Knightscope sent to University of California-Hastings, a law school in the center of San Francisco, the company advertises its robot’s activity in a Los Angeles shopping district called The Bloc. It’s unclear whether the robot stopped any robberies, but it did garner over 100,000 social media impressions and 426 comments. Knightscope claims the robot’s 193 million overall media impressions were worth over $5.8 million. The Bloc held a naming contest for the robot, and said it has a “cool factor” missing from traditional beat cops and security guards.

The Bloc/Knightscope promotional material, released via public records request by UC-Hastings

As of February 2020, Knightscope had around 100 robots deployed 24/7 throughout the United States. In how many of those communities did residents get a say in whether these robots would be deployed at all?

But in this era of long-overdue conversations about the role of policing in our society, in which city after city is reclaiming privacy by restricting police surveillance technologies, these robots are just a more playful way to normalize the panopticon of our lives.

Police Robots Are Surveillance

Knightscope’s robots need cameras to navigate and traverse the terrain, but that’s not all their sensors are doing. According to the proposal that the police department of Huntington Park, California, sent to the mayor and city council, these robots are equipped with many infrared cameras capable of reading license plates. They also have wireless technology “capable of identifying smartphones within its range down to the MAC and IP addresses.” 

The next time you’re at a protest and are relieved to see a robot rather than a baton-wielding officer, know that that robot may be using the IP address of your phone to identify your participation. This makes protesters vulnerable to reprisal from police and thus chills future exercise of constitutional rights. “When a device emitting a Wi-Fi signal passes within a nearly 500 foot radius of a robot,” the company explains on its blog, “actionable intelligence is captured from that device including information such as: where, when, distance between the robot and device, the duration the device was in the area and how many other times it was detected on site recently.”
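
To make the mechanism concrete, the sketch below shows one common way this kind of device detection can work: passively logging the Wi-Fi probe requests that phones broadcast while looking for networks. This is an illustrative assumption written in Python with the scapy library, not Knightscope’s actual software, and the interface name is hypothetical.

```python
# Illustrative sketch only: one assumed way a sensor could log nearby phones
# from their Wi-Fi probe requests. This is NOT Knightscope's actual software.
from collections import defaultdict
from datetime import datetime

from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

# For each device MAC address, keep every time it was seen and the signal
# strength -- enough to estimate presence, duration, and repeat visits.
sightings = defaultdict(list)

def log_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2                              # the phone's MAC address
        rssi = getattr(pkt, "dBm_AntSignal", None)   # rough proxy for distance
        sightings[mac].append((datetime.now(), rssi))
        print(f"{mac}: {len(sightings[mac])} sightings, latest signal {rssi} dBm")

# Requires a wireless card in monitor mode; "wlan0mon" is a hypothetical name.
sniff(iface="wlan0mon", prn=log_probe, store=False)
```

Modern phones randomize the MAC addresses they broadcast in probe requests, which complicates this kind of tracking but does not eliminate it.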

In Spring 2019, the company also announced it was developing face recognition so that robots would be able to “detect, analyze and compare faces.” EFF has long proposed a complete ban on police use of face recognition technology. 

Who Gets Reprimanded When a Police Robot Makes a Bad Decision? 

Knightscope’s marketing materials and media reporting suggest the technology can effectively recognize “suspicious” packages, vehicles, and people. 

But when a robot is scanning a crowd for someone or something suspicious, what is it actually looking for? It’s unclear what the company means. The decision to characterize certain actions and attributes as “suspicious” has to be made by someone. If robots are designed to think people wearing hoods are suspicious, they may target youth of color. If robots are programmed to zero in on people moving quickly, they may harass a jogger, or a pedestrian on a rainy day. If the machine has purportedly been taught to identify criminals by looking at pictures of mugshots, then you have an even bigger problem. Racism in the criminal justice system has all but assured that any machine learning program taught to see “criminals” based on crime data will inevitably see people of color as suspicious.
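
To see how that happens, consider a deliberately simplified illustration, written in Python with scikit-learn on entirely synthetic data (the features and numbers are invented for the example): when the “criminal” labels a model learns from reflect who gets policed rather than who does harm, the model reproduces that skew as a “suspicion” score.

```python
# Toy illustration of label bias on synthetic data -- not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A hypothetical population: a binary group attribute and a harmless behavior
# ("wearing a hood") that is equally common in both groups.
group = rng.integers(0, 2, size=n)
wearing_hood = rng.integers(0, 2, size=n)

# Actual wrongdoing is rare and, by construction, identical across groups...
offended = rng.random(n) < 0.02

# ...but the "crime data" records arrests, and group 1 is policed far more
# heavily, so its members are far more likely to end up labeled.
arrest_probability = np.where(group == 1, 0.9, 0.1)
labeled_criminal = offended & (rng.random(n) < arrest_probability)

# Train a classifier on the biased labels, using group membership (or any
# proxy for it, like neighborhood or clothing) as a feature.
X = np.column_stack([group, wearing_hood])
model = LogisticRegression().fit(X, labeled_criminal)

# The model now rates group 1 as far more "suspicious", even though the
# underlying behavior was identical in both groups by construction.
suspicion = model.predict_proba(X)[:, 1]
print("mean suspicion score, group 0:", suspicion[group == 0].mean())
print("mean suspicion score, group 1:", suspicion[group == 1].mean())
```

The skew here comes entirely from who gets recorded in the training data, not from what anyone did; retraining on the same arrest records would simply reproduce it.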

A robot’s machine learning and so-called suspicious behavior detection will lead to racial profiling and other unfounded harassment. This raises the question: Who gets reprimanded if a robot improperly harasses an innocent person, or calls the police on them? Does the robot? The people who train or maintain it? When state violence is unleashed on a person because a robot falsely flagged them as suspicious, “changing the programming” of the robot and then sending it back onto the street will be little solace for a victim hoping that it won’t happen again. And when programming errors cause harm, who will review the changes to make sure they actually address the problem?

These are all important questions to ask yourselves, and your police and elected officials, before taking a selfie with a rolling surveillance robot. 
