The Verge has an insightful article on how the Chicago Police Department is using predictive policing to identify individuals who are likely to be involved in a violent crime; those individuals are placed on the so-called “heat list” and monitored closely.
When the Chicago Police Department sent one of its commanders to Robert McDaniel’s home last summer, the 22-year-old high school dropout was surprised. Though he lived in a neighborhood well-known for bloodshed on its streets, he hadn’t committed a crime or interacted with a police officer recently. And he didn’t have a violent criminal record, nor any gun violations. In August, he incredulously told the Chicago Tribune, “I haven’t done nothing that the next kid growing up hadn’t done.” Yet, there stood the female police commander at his front door with a stern message: if you commit any crimes, there will be major consequences. We’re watching you.
What McDaniel didn’t know was that he had been placed on the city’s “heat list” (link no longer available) — an index of the roughly 400 people in the city of Chicago supposedly most likely to be involved in violent crime. Inspired by a Yale sociologist’s studies and compiled using an algorithm created by an engineer at the Illinois Institute of Technology, the heat list is just one example of the experiments the CPD is conducting as it attempts to push policing into the 21st century.
Predictive analytical systems have been tested by police departments all over the country for years now, but there’s perhaps no urban police force that’s further along — or better funded — than the CPD in its quest to predict crime before it happens. As Commander Jonathan Lewin, who’s in charge of information technology for the CPD, told The Verge: “This [program] will become a national best practice. This will inform police departments around the country and around the world on how best to utilize predictive policing to solve problems. This is about saving lives.”
While much of the article focuses on the Fourth Amendment’s reasonable-expectation-of-privacy doctrine, my curiosity lies in another element of the Fourth Amendment–probable cause.
Can an algorithm augment an officer’s calculation of probable cause, or even identify it on its own? In other words, say an officer, by his own knowledge, lacks probable cause to search someone. Could the officer, by relying on an algorithm, acquire specific intelligence sufficient to obtain a warrant and conduct the search? Or, more likely, can an algorithm help generate the reasonable suspicion needed for a stop-and-frisk? Can these algorithms operate as virtual drug dogs of sorts, triggering further searches?
You can imagine an officer viewing the world through Google Glass, with a status indicator that alerts for probable cause when a match is found, like The Terminator’s heads-up display.
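To make the worry concrete, here is a minimal sketch of what such a suspicion trigger might look like. Everything in it is invented for illustration–the feature names, the weights, and the threshold–and it does not depict any real predictive-policing system, including the CPD’s.

```python
# Hypothetical sketch of an algorithmic "suspicion trigger."
# All feature names, weights, and the cutoff are invented for
# illustration; no real system is depicted here.

RISK_THRESHOLD = 0.7  # hypothetical cutoff for signaling a "match"

# Invented weights over the kinds of proxy features such systems are
# reported to use (social ties to shooting victims, arrest history, etc.)
WEIGHTS = {
    "prior_arrests": 0.15,
    "ties_to_shooting_victims": 0.40,
    "gang_affiliation_flag": 0.30,
    "age_under_25": 0.15,
}

def risk_score(person: dict) -> float:
    """Weighted sum of binary features, clamped to [0, 1]."""
    score = sum(WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS)
    return min(score, 1.0)

def flags_for_stop(person: dict) -> bool:
    """The 'virtual drug dog': True means the system signals suspicion."""
    return risk_score(person) >= RISK_THRESHOLD

# A person with no arrest record can still clear the threshold on
# proxies alone -- the McDaniel problem in miniature.
mcdaniel_like = {
    "ties_to_shooting_victims": 1,
    "gang_affiliation_flag": 1,
    "age_under_25": 1,
}
print(flags_for_stop(mcdaniel_like))  # True (0.40 + 0.30 + 0.15 >= 0.7)
```

The point of the sketch is that the "suspicion" output is just a threshold over weighted proxies: whoever chooses the weights and the cutoff chooses who gets stopped.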
Society should be treading very lightly with these types of programs.
First, I am hesitant to put my faith in algorithms, designed by police departments, that can be used to calculate reasonable suspicion or probable cause. Programmers “engaged in the often competitive enterprise of ferreting out crime” should be viewed skeptically–especially if their algorithms are not open-sourced.
Second, in light of the CSI effect, judges and juries may be more likely to believe an algorithm that can calculate probable cause than a potentially fallible officer–even though these algorithms may cast a very wide net and yield many false positives.
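The wide-net problem is a matter of base rates, and a little invented arithmetic shows how bad it gets. Assume–purely hypothetically–a city of a million scored residents, 500 of whom will actually be involved in violence, and a classifier that is 90% sensitive and 99% specific. Even that impressive-sounding accuracy flags mostly innocent people:

```python
# Hypothetical base-rate arithmetic: a fairly accurate classifier,
# applied city-wide to a rare outcome, flags mostly innocent people.
# All numbers below are invented for illustration.

population = 1_000_000       # people scored by the system
true_offenders = 500         # rare outcome: future involvement in violence
sensitivity = 0.90           # fraction of true offenders the system catches
specificity = 0.99           # fraction of non-offenders it correctly clears

true_positives = sensitivity * true_offenders                        # 450
false_positives = (1 - specificity) * (population - true_offenders)  # 9,995

precision = true_positives / (true_positives + false_positives)
print(f"{precision:.1%} of flagged people are actual risks")  # about 4.3%
```

On these invented numbers, roughly 96% of the people the system flags are false positives–exactly the sort of output a judge or jury primed by the CSI effect might nonetheless treat as near-certain.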
Third, and most troubling, officers equipped with these programs can leverage the algorithms to focus on people already of interest. No doubt, a smart enough computer can gin up a reason to search just about anyone. And for the first two reasons, these searches are more likely to be upheld.
Cross-Posted at JoshBlackman.com.