This post has been updated to provide additional context about patents and patent applications, which are indications of an entity’s interest in a particular product but not proof that the product is currently in development or available for use. You can read more about the role of patents in this series in our post, “The Catalog of Carceral Surveillance: Patents Aren't Products (Yet)”
There are too many people in U.S. prisons. Their guards are overworked, underpaid, and prone to human errors. Some have taken this as a sign that we need to rework our criminal justice system. Prison technology companies have another approach prepared: robots.
Human guards, of course, have pesky needs like work breaks and food. They’re entitled to paychecks and sick days. They possess flaws that can lead to outbursts of violence, racism, and sexual harassment.
“The ratio of prisoners to prison guards is too high,” wrote prison telecommunications company Global Tel*Link in a recent patent application, and “a substantial amount of the total funds available to correctional facilities is spent on guards, leaving little money left over to pay for programs to reduce recidivism.”
Global Tel*Link is notorious for overcharging inmates for phone calls, but like its major competitor Securus, it has been diversifying its offerings in the years since federal efforts to rein in prison phone costs.
According to a patent application for “mobile correctional facility robots” filed by GTL, these robots can perform many tasks: delivery of parcels and visitors, monitoring of the environment for suspicious words or actions, and execution of “non-lethal force” (actually lethal in many cases) such as “an electroshock weapon, a rubber projectile gun, gas, or physical contact by the robot with an inmate.”
The company states that the robots can deploy such force “autonomously” or “at the remote direction of a human operator.” In effect, these robots could perform many of the same duties as human prison guards, but without those aforementioned pesky human needs.
GTL also suggests that the cost savings of this approach could go to harm reduction programs — though we suspect they’re likely to instead go toward increasing shareholder profits.
[Image: The flowchart that GTL robots would use to make decisions. “Enforcement action” here is a euphemism for potentially lethal disciplinary methods, including rubber bullets and electric shock.]
A “Central Controller” can direct multiple robots to work together. GTL has said little about it, but this central controller is presumably a computer using some form of artificial intelligence to coordinate the robots as they monitor an area for suspicious words or perform an “enforcement action.” One might be concerned about granting a robot the ability to do something like shock an inmate it has decided is threatening, especially given the well-documented shortcomings of AI systems.
Depending on the data set such an AI is trained on, it might decide that a hug is a threatening gesture, or a fist bump, or a high five. If someone were to trip and fall, that too might be read as a threat. AI also tends to reflect cultural biases: if the people creating the training data view people of color as more intimidating, that bias will be baked into the AI as well.
There are, of course, many activities that, without context, might seem strange, perhaps even threatening: sitting in an odd position, having a non-violent psychological episode, or holding a menacing broom while performing assigned cleaning duties. But thanks to the well-documented infallible nature of AI, we are certain autonomous robot prison guards will never inappropriately deploy these potentially lethal weapons against an inmate.
GTL robots won’t just be versed in the punishments of today; they’ll also be equipped to detect the crimes of tomorrow. With the power to monitor for “events of interest,” the robots may identify a predetermined spoken word or behavior as a cause for reasonable suspicion.
On the surface, the GTL robots look similar to the Knightscope guard robots that have been deployed in parks, garages, and other public areas. Hopefully, the GTL robots won’t have the same problems with drowning, stairs, blindness, or interference with their LIDAR-based navigation systems.
Thanks to the well documented infallible nature of AI and GTL, those who run a prison or Immigration and Customs Enforcement detention camp will soon have an alternative to human guards: robots able to dole out twice the less-lethal force for half the cost.
An earlier version of this article incorrectly identified the owner of the patent as Securus, rather than GTL. EFF apologizes for the error.