The Aurora Police Department recently announced plans to use facial recognition in investigations, drawing criticism from opponents who argue that the technology can still misidentify suspects.

The proposal, already presented to the City Council’s Public Safety Committee, is expected to cost $16,000 in its first year of use, with the potential to rise to $67,000 by the fourth year. The software would be funded from the department’s current budget, and the final decision on whether to implement it will be made by the Aurora City Council.

If approved, the software will be used for active investigations. The technology will draw on two libraries: mug shots in the department’s database and photos available on social media. Photos of potential suspects, classified as “unknown people,” that are submitted while an investigation is ongoing will not be saved.

“We oftentimes put out photographs of unknown folks, and we’re hoping the community will recognize them,” said Aurora Police Department Commander Chris Poppe, per CBS News. “We’re just going to use software to do the same thing.”

Nonprofits such as the ACLU, however, have already voiced concerns that facial recognition software could violate the privacy rights of those who are misidentified.

“Facial recognition, historically, has a problem identifying certain populations of people, mainly Black folks. It’s not great at accurately identifying women. It’s not great at accurately identifying, generally, people of color,” said Public Policy Director at ACLU, Anaya Robinson, per CBS News. “It has some trouble when it comes to people with disabilities, because of height differences. Misidentification is a huge concern.”

Overall, at least eight people have been wrongfully arrested due to the use of AI facial recognition, according to The Washington Post.

Earlier this year, a 37-year-old Detroit woman announced plans to sue the Detroit police department following an unjust arrest.

In the suit, filed in March, LaDonna Crutchfield said that she was led away from her home in handcuffs in front of her children after police used facial recognition technology that wrongly identified her as the suspect in an alleged robbery and carjacking. According to the lawsuit, a video from the crime scene was run through the technology, which incorrectly matched the image of the person who committed the crime to an image of Crutchfield.

In New York last month, civil rights groups called for an investigation into the wrongful arrest of Trevis Williams, who was arrested two months after the crime based on facial recognition tools. He was jailed for two days despite being eight inches taller and 70 pounds heavier than the suspect. Data from his cell phone also confirmed that he was miles away from where the crime was committed.

“Everyone, including the NYPD, knows that facial recognition technology is unreliable. Yet the NYPD disregards even its own protocols, which are meant to protect New Yorkers from the very real risk of false arrest and imprisonment,” said a staff attorney at Legal Aid, per ABC News. “It’s clear they cannot be trusted with this technology, and elected officials must act now to ban its use by law enforcement.”

Veronika Lleshi is an aspiring journalist. She currently writes for Hunter College's school newspaper, Hunter News Now. In her free time, she enjoys reading, writing and making music. Lleshi is an Athena scholar who enjoys getting involved in her community.
