A crime-prediction algorithm could be used to reveal biases in law enforcement and to audit for them, according to a new study published Thursday.
The study, conducted by researchers at the University of Chicago, was published in the peer-reviewed journal Nature Human Behaviour.
Predictive policing became possible with the emergence of artificial intelligence and machine learning, but it has sparked considerable controversy because predictive models often fail to account for the systemic bias that shapes law enforcement and government action.
The study used an algorithm to predict crimes a week or two in advance with about 90% accuracy, based on patterns discerned from public records of the timing and geography of past crimes committed in Chicago, in two categories: violent crimes such as murder, assault, and battery, and property crimes such as various kinds of theft.
One of the patterns analyzed was the number of individuals arrested in each recorded crime, compared against police reports of the events. The researchers found that arrest rates were higher for crimes committed in wealthier neighborhoods and dropped significantly in lower socio-economic neighborhoods, suggesting a bias in law enforcement.
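The kind of comparison described above can be illustrated with a minimal sketch. This is not the study's methodology; the neighborhood groups and the report and arrest counts below are entirely hypothetical, and serve only to show what an arrest-rate disparity looks like in numbers:

```python
# Hypothetical counts, for illustration only -- not data from the study.
reports = {"wealthier": 200, "lower_income": 800}   # reported crimes per group
arrests = {"wealthier": 90, "lower_income": 160}    # arrests per group

def arrest_rate(group: str) -> float:
    """Fraction of reported crimes in a group that led to an arrest."""
    return arrests[group] / reports[group]

for group in reports:
    print(f"{group}: {arrest_rate(group):.0%}")
```

In this toy data the wealthier group's arrest rate (45%) is more than double the lower-income group's (20%); a persistent gap of that kind across real neighborhoods is the sort of disparity the researchers interpret as a possible signature of enforcement bias.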
Unlike other predictive models, which rely on bias-inducing data such as politics, neighborhood location, and socio-economic conditions, the study's model relied solely on the exact time and location of past crimes. The researchers found that with that bias removed, the model was as accurate as, if not more accurate than, the alternatives.
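A model built only from the time and location of past events can be sketched in miniature. The following is a toy, not the paper's method: it bins hypothetical incidents into spatial tiles, counts events per tile per week, and forecasts the next week with a simple moving average. The event list, tile size, and window length are all assumptions made for illustration:

```python
from collections import defaultdict

# Hypothetical (week, x, y) incidents -- illustration only, not study data.
events = [
    (0, 1.2, 3.4), (0, 1.3, 3.5), (1, 1.1, 3.6),
    (1, 8.0, 2.1), (2, 1.4, 3.3), (2, 8.2, 2.0),
]
TILE = 1.0  # tile edge length, in arbitrary units (assumed)

def weekly_counts(events):
    """Count events per spatial tile, per week."""
    counts = defaultdict(lambda: defaultdict(int))
    for week, x, y in events:
        tile = (int(x // TILE), int(y // TILE))
        counts[tile][week] += 1
    return counts

def forecast(events, next_week, window=3):
    """Moving-average forecast of each tile's event count for next_week."""
    counts = weekly_counts(events)
    weeks = range(max(next_week - window, 0), next_week)
    return {tile: sum(per_week.get(w, 0) for w in weeks) / len(weeks)
            for tile, per_week in counts.items()}

print(forecast(events, next_week=3))
```

The point of the sketch is the input, not the output: nothing about the neighborhoods other than where and when events occurred enters the forecast, which is the property the study's model shares (its actual predictive machinery is far more sophisticated than a moving average).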
"We created a digital twin of urban environments. If you feed it data from what happened in the past, it will tell you what's going to happen in future. It's not magical, there are limitations, but we validated it and it works really well," said the study's senior author, Ishanu Chattopadhyay.
"Now you can use this as a simulation tool to see what happens if crime goes up in one area of the city, or there is increased enforcement in another area. If you apply all these different variables, you can see how the system evolves in response," Chattopadhyay added.