How Facial Recognition Software Influences Law Enforcement

November 6, 2020

In recent months, the city of Minneapolis has been a focal point in the national reckoning over police brutality, an issue that is not new yet continues to fuel demands for change in the law enforcement system. Some cities believe that alternative policing tools, such as algorithms and surveillance software, could help address the modern problem of unlawful policing.

The current era of ‘defunding the police’ raises the question of new standards: could artificial intelligence play a partnering role in efforts to root out law enforcement corruption? Significant examples and research reveal why this is problematic.

Implementing “new” tech

Although it may seem like a modern approach, the use of algorithms is not a new resource for police work. Monitoring algorithms have long been used in the pursuit of suspects, typically by matching images against mugshot and driver's-license photo databases. Another way to apply such software to current problems is to analyze the behavior and actions of officers themselves, in an effort to recognize and prevent problematic conduct in the field.
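
To make that matching step concrete, here is a minimal sketch, in Python, of how a search against a photo database works at its core: each face is reduced to a numeric embedding, and the database entry with the highest similarity above a threshold is returned as a candidate. All names and data here are invented for illustration; this is not any vendor's actual implementation, and real systems use embeddings produced by a trained face-recognition model rather than random vectors.

    from typing import Dict, Optional
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Similarity between two face embeddings, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search_database(probe: np.ndarray,
                        database: Dict[str, np.ndarray],
                        threshold: float = 0.7) -> Optional[str]:
        # Return the ID of the best-scoring database photo, or None if
        # nothing clears the threshold.
        best_id, best_score = None, threshold
        for person_id, embedding in database.items():
            score = cosine_similarity(probe, embedding)
            if score > best_score:
                best_id, best_score = person_id, score
        return best_id

    # Toy stand-in for a mugshot database: random 128-dimensional vectors.
    rng = np.random.default_rng(42)
    database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
    probe = rng.normal(size=128)
    print(search_database(probe, database))  # most likely None with random data

The threshold is the crux of such a system: set it too low and innocent people surface as "matches"; set it too high and the system returns nothing useful. Much of what follows turns on how reliable those similarity scores really are.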

In a press conference streamed on June 10, 2020, Minneapolis Police Chief Medaria Arradondo said, “For the first time in the history of policing, we here at MPD will have an opportunity to use real-time data and automation to intervene with officers who are engaged in problematic behavior.” [cite: inverse and press conference]

However, not everyone is on board with police using this tech. According to an article from the American Civil Liberties Union, several companies, including IBM, Amazon, and Microsoft, are holding off on sales of their facial recognition technology to police. Surveillance technology leaves far too much room for error, and given its messy track record, it can be considered a problematic medium for identifying suspects.

How AI actually builds on systemic racism

The premise of this software remains controversial. The primary identifiers for these programs are facial features and skin color, which can be an unethical approach given the realities of systemic racism. A computerized system will not do a better job of identifying an individual, because the technology is simply not that advanced.

There have been multiple cases of Black individuals being flagged by AI software even though they had nothing to do with a crime or bore little resemblance to the suspect. In one incident, a Black man named Robert Julian-Borchak Williams was falsely arrested after photos from a robbery case were run through an algorithm; Williams said he did not even look like the suspect in question. [cite: inputmag]

Many reports have shown that facial recognition technology identifies white faces far more accurately than others. One study conducted by the National Institute of Standards and Technology found that these algorithms were 10 to 100 times more likely to misidentify a photograph of a Black or East Asian face than a white one. [cite: Scientific American]
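
The metric behind figures like these is typically the false match rate: the share of impostor pairs (photos of two different people) that the algorithm wrongly accepts as the same person, measured separately per demographic group. The sketch below illustrates that calculation with entirely synthetic score distributions; the numbers are invented and do not come from the NIST study.

    import numpy as np

    # Hypothetical impostor similarity scores per demographic group.
    # All distributions here are made up purely to illustrate the metric.
    rng = np.random.default_rng(0)
    impostor_scores = {
        "white":      rng.normal(0.30, 0.10, 10_000),
        "black":      rng.normal(0.45, 0.10, 10_000),
        "east_asian": rng.normal(0.43, 0.10, 10_000),
    }

    THRESHOLD = 0.65  # a single decision threshold applied to every group

    for group, scores in impostor_scores.items():
        # False match rate: fraction of impostor pairs scoring above threshold.
        fmr = float(np.mean(scores >= THRESHOLD))
        print(f"{group:>10}: false match rate = {fmr:.4f}")

Because one fixed threshold is applied to score distributions that sit at different heights, groups with higher impostor scores absorb disproportionately more false matches, which is the kind of disparity the NIST report quantified.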

There is no denying that current facial recognition technology carries an innate racial bias. Correcting these inaccuracies could build a better case for policing algorithms, but as it stands, the technology is deeply unjust. And with mask-wearing now widespread, significant improvements are unlikely any time soon.

Misidentification and the variable of masks

Inaccuracies in facial recognition data were present long before the age of widespread mask-wearing. Detroit Police Chief James Craig has even spoken publicly about how inaccurate his department's technology is. The software the department uses, developed by DataWorks Plus, is said to rarely return a proper face match. Craig estimated, “If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.” [cite: Vice]

Currently, mask-wearing is a large factor that must be considered. Aside from its health benefits, it has created a virtually faceless world: people have become increasingly unidentifiable, especially to a computerized system. In a new study, the National Institute of Standards and Technology examined the inaccuracies that occur when someone is wearing a mask. After a digital mask was added to a subject's photo, the algorithms' error rates ranged from five percent to 50 percent.
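
For a sense of what "adding a digital mask" involves, the sketch below overlays a crude synthetic mask on a face photo. It is only an approximation of the NIST methodology, which placed mask shapes using facial landmarks; here the mask position is hard-coded and the file names are hypothetical.

    from PIL import Image, ImageDraw

    def add_digital_mask(photo_path: str, out_path: str) -> None:
        # Paint a surgical-mask-colored polygon over the nose-to-chin
        # region. Assumes the face roughly fills the frame; a real
        # pipeline would locate facial landmarks first.
        img = Image.open(photo_path).convert("RGB")
        w, h = img.size
        draw = ImageDraw.Draw(img)
        draw.polygon(
            [(w * 0.15, h * 0.55), (w * 0.85, h * 0.55),
             (w * 0.80, h * 0.95), (w * 0.20, h * 0.95)],
            fill=(173, 216, 230),
        )
        img.save(out_path)

    # Hypothetical usage:
    # add_digital_mask("subject.jpg", "subject_masked.jpg")

Running probe photos altered this way back through a matcher like the earlier sketch is, in spirit, how the masked-face error rates were measured.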

Being misidentified in connection with a crime is a frightening thought for anyone. AI software is simply not advanced enough to provide consistently accurate results, and while there is motivation to adapt to mask-wearing, the technology was significantly unreliable well before masks became a factor.

Where are these algorithms sourced?

The most common, and controversial, companies providing this software are Clearview AI and NtechLab. MIT Technology Review has examined both companies' claims that they resolved the “bias problem” and improved their accuracy; so far, neither has offered sufficient evidence to support those claims.

Ethical and privacy issues

Resources such as surveillance footage and social media posts have given police a way to identify people at events such as protests. This can serve as an intimidation tactic and deter the public from taking a physical stand on current issues. While it could help police pinpoint unlawful actions, it also undermines people's right to protest.

The potential for privacy infringement calls for careful analysis before any algorithm is implemented. Although Minneapolis has not yet adopted facial recognition in its surveillance technology, City Council Member Steve Fletcher of the Public Safety Committee predicts it is only a matter of time. Anticipating the future use of this biometric technology, Fletcher hopes to be proactive by developing a draft data-privacy policy. [cite: StarTribune]

Many sources worry that biometric surveillance software will actually “supercharge” racial biases. Because these algorithms are built around mugshot databases, Black subjects are overrepresented as a consequence of arrest-rate disparities. An article from the ACLU reads, “Since Black people are more likely to be arrested than white people for minor crimes like cannabis possession, their faces and personal data are more likely to be in mugshot databases. Therefore, the use of face recognition technology tied into mugshot databases exacerbates racism in a criminal legal system that already disproportionately polices and criminalizes Black people.”
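
A toy simulation makes the ACLU's mechanism concrete: even if an algorithm's per-photo error rate were identical for every face, a group overrepresented in the database would still absorb more of the false matches. Every number below is invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    DATABASE_SIZE = 100_000
    SHARE_BLACK = 0.40        # hypothetical overrepresentation from arrest disparities
    FALSE_MATCH_RATE = 0.001  # the same per-photo error rate for everyone

    # Assign a group label to each database photo, then roll false matches
    # uniformly; the simulated algorithm here is deliberately "unbiased."
    groups = rng.choice(["Black", "white"], size=DATABASE_SIZE,
                        p=[SHARE_BLACK, 1 - SHARE_BLACK])
    false_matches = rng.random(DATABASE_SIZE) < FALSE_MATCH_RATE

    for group in ("Black", "white"):
        count = int(np.sum(false_matches & (groups == group)))
        print(f"{group}: {count} false matches surfaced from the database")

With these made-up inputs, roughly 40 of every 100 false matches land on Black individuals, far above their share of the general population, even though the error rate is identical for everyone; a real system's group-dependent error rates would only compound the effect.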

This highlights just how severe the flaws of this algorithmic technology truly are. If we hope to fight racial inequality within law enforcement, this software will only set us back: it deepens social injustices instead of creating solutions.

Inevitably, these algorithms are creating more problems than they solve. They are programmed in a way that lacks context, human judgment, and accurate identification, all of which are required to charge someone with a crime. So should this be adopted as a form of policing within our cities and communities? Some may weigh the pros and cons, but it is crucial to look at the problems that arise from AI policing.

General awareness

Despite what has generally been conveyed to the public, facial recognition software has a greater presence in law enforcement than we think. According to an article from the Los Angeles Times, the Los Angeles Police Department has used this tech over 30,000 times since 2009, and more than 300 people within the LAPD have access to it.

The use of these algorithms needs to be explained more clearly to the public, and given how this technology sets back the fight against racial inequality, more voices need to be part of the conversation. The red flags are numerous, including the LAPD's past denials that it used facial recognition and Detroit's false arrests based on algorithm output.

This software has the potential not only to prolong racial bias in law enforcement but effectively to condone it. It is clear how this would disproportionately harm communities of color. For the safety and privacy of the public, the use of this technology should not be kept under wraps.