Statistics have long shown that people of color are no more likely than white people to commit crimes. However, according to the federal Bureau of Justice Statistics, Black people, Latinos, and poor people are more likely than wealthier white people to report crimes to the police. In effect, more crimes are reported in poorer neighborhoods and neighborhoods of color, even if crime is not actually worse there.
Moreover, law enforcement has been shown to concentrate its efforts on poor and minority neighborhoods, where there are more crime reports. It becomes a self-fulfilling prophecy: minorities and the poor report crimes more often, the police grow accustomed to seeing crime as higher in those areas, and they patrol those areas more heavily. Heavier patrols, in turn, mean more crimes are found there.
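The feedback loop can be made concrete with a toy simulation. This is an illustrative sketch only, not PredPol's actual algorithm; the neighborhood labels, reporting rates, and allocation rule are invented for the example. Two areas have identical underlying crime, but one reports it more often, and each year patrols are reallocated in proportion to the crimes found the year before:

```python
# Toy simulation of the reporting/patrol feedback loop described above.
# NOT PredPol's algorithm -- all names and numbers here are hypothetical.

TRUE_CRIME_RATE = {"A": 10, "B": 10}  # identical underlying crime in both areas
REPORT_RATE = {"A": 0.9, "B": 0.5}    # area A reports crime more often

patrols = {"A": 1.0, "B": 1.0}        # start with equal patrol effort
for year in range(5):
    # Crimes found scale with both reporting and patrol intensity,
    # capped at the true number of crimes that actually occurred.
    found = {n: min(TRUE_CRIME_RATE[n],
                    TRUE_CRIME_RATE[n] * REPORT_RATE[n] * patrols[n])
             for n in patrols}
    total = sum(found.values())
    # Next year's patrols are allocated in proportion to crimes found this year.
    patrols = {n: 2 * found[n] / total for n in found}

print(patrols)  # patrol effort drifts toward area A despite equal true crime
```

Within a few iterations, area A absorbs most of the patrol effort even though both areas have the same true crime rate; the only difference between them was how often crime got reported in the first place.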
It may be true that some poor and minority areas actually do experience more crime than wealthier, whiter neighborhoods. But it would be a mistake to assume that crime occurs primarily or only in those areas, and a mistake to focus enforcement efforts on those areas alone.
So why does a major brand of crime prediction software do just that?
Two news outlets, Gizmodo and The Markup, looked into how crime prediction software works and whether it could avoid the bias against the poor and minorities that has plagued traditional policing. They note that, between 2018 and 2021, more than 1 in 33 Americans may have been subject to police patrolling decisions directed by a crime prediction software known as PredPol. (PredPol changed its name to Geolitica in March.)
The journalists found it surprisingly easy to obtain data from cities’ use of PredPol, because the data was kept on an unsecured cloud storage platform that remained open to the public until after their story ran.
The journalists looked at data sets from 38 law enforcement agencies to see whether PredPol, which does not use race-based data, was free from bias.
It was not. Because it relied on historical crime reports to predict future crimes, it simply perpetuated the over-policing of the past.
What this meant for residents was stark. Communities of color, and communities whose children would qualify for the federal free lunch program, were targeted far more often than whiter ones. The software recommended daily (or more frequent) patrols in poor and minority areas, targeting them “relentlessly.” In contrast, whiter, wealthier communities received no recommendations for patrols at all.
Do we want to continue the historical pattern of over-policing poor and minority areas? If so, PredPol and its competitors seem willing to perpetuate it.