Facial recognition technology is flawed. It tends to misidentify some people as others, which can lead police to arrest the wrong person, and it has been shown to be racially biased.
In fact, in 2018 the ACLU used Amazon’s Rekognition tool to compare photos of members of Congress with those in a mugshot database. The tool misidentified 28 members of Congress as matches to mugshots. While the false matches included people of all races and ages, they disproportionately fell on people of color. The problems illuminated by this test have been verified by scientific research.
Recently, the first known case came to light of a person being wrongfully arrested because of a false facial recognition match. Robert Julian-Borchak Williams was arrested in Michigan after an algorithm told officers that the man in a surveillance video was him.
The technology simply isn't ready for prime time. False positives are likely, especially for African-American men, and innocent people are being caught up in these errors. Williams was held in jail for hours even though he did not match the photo of the suspect.
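To see why false positives are so likely at scale, consider a rough back-of-the-envelope calculation. Every number below is an illustrative assumption, not a figure from any vendor or study: even a matcher that is wrong on only 1% of comparisons, searched against a gallery of 10,000 mugshots, can be expected to flag roughly 100 innocent people per search, so almost every "match" it returns is a false one.

```python
# Illustrative base-rate arithmetic (all numbers are hypothetical assumptions,
# not measurements of any real system).
gallery_size = 10_000        # assumed size of the mugshot database
false_positive_rate = 0.01   # assumed per-comparison false match rate (1%)
true_positive_rate = 0.99    # assumed chance of matching the suspect if present
suspect_in_gallery = 0.5     # assumed prior that the suspect is in the database

expected_false_matches = gallery_size * false_positive_rate
expected_true_matches = suspect_in_gallery * true_positive_rate

# Rough probability that a given "match" is actually the suspect:
precision = expected_true_matches / (expected_true_matches + expected_false_matches)

print(f"Expected false matches per search: {expected_false_matches:.0f}")
print(f"Chance a flagged match is the real suspect: {precision:.2%}")
```

Under these assumed numbers, a flagged "match" is the real suspect less than 1% of the time, which is why a database hit alone should never be treated as probable cause.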
Earlier this month, IBM, Microsoft and Amazon decided to stop selling facial recognition software to law enforcement until bias and privacy issues have been addressed. But those large companies aren't the main players in the industry, and we can be certain other companies will fill the gap.
Now, as part of our society’s reckoning with police racism, another group is taking a stand. That group may be more influential than the big tech companies because its work is fundamental to the development of the software. That group? Mathematicians.
On June 15, a letter was sent to the Notices of the American Mathematical Society, the professional journal of the field. It calls on mathematicians and their colleagues to stop collaborating with police on technology, citing the documented racial disparities in how police treat the people they encounter.
The letter, which was signed by 1,400 researchers, is especially critical of predictive policing algorithms. These algorithms claim to predict where and when crime is most likely to occur, but they are built on data that can encode structural and implicit biases against people of color. For example, an algorithm may rely on historical data about which neighborhoods are more "prone" to crime, yet policing "high crime" neighborhoods has often been shorthand for policing African-American neighborhoods.
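The feedback loop here is easy to sketch. The toy simulation below is purely hypothetical (the neighborhoods, counts and decision rule are invented for illustration; no real predictive policing system is this simple): two areas have identical true crime rates, but one starts with more recorded incidents because it was historically patrolled more heavily, and a naive "patrol where crime was recorded" rule then keeps sending officers there, generating more records and amplifying the initial disparity.

```python
import random

# Toy feedback-loop simulation. All numbers are invented for illustration;
# this is not a reconstruction of any actual predictive policing algorithm.
random.seed(0)

true_rate = 5                      # same expected observable incidents/day in both areas
recorded = {"A": 60, "B": 40}      # area A starts with more *recorded* crime,
                                   # e.g. because it was historically over-patrolled

for day in range(365):
    # Naive "prediction": send patrols wherever more crime has been recorded.
    patrolled = max(recorded, key=recorded.get)
    # Crime happens at the same rate in both areas, but it is only
    # recorded where officers are present to observe it.
    incidents_today = sum(random.random() < 0.5 for _ in range(true_rate * 2))
    recorded[patrolled] += incidents_today

print(recorded)
# Area A's record grows by hundreds of incidents while area B's never moves,
# even though the underlying behavior in the two areas is identical.
```

Run for a simulated year, the initial disparity compounds on itself: the algorithm's "predictions" look ever more accurate precisely because the patrols they direct create the data that confirms them.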
Like facial recognition technology, predictive policing could simply be reinforcing existing stereotypes under a newly “scientific” guise.
The mathematicians are right. It is time to put the brakes on police technology before it comes into even wider use.