Amazon's Facial Recognition Software is (Still) Racist and Sexist

Amazon is rapidly emerging as a pioneer of facial recognition technology with its Rekognition software. However, the company remains in the firing line as researchers continue to uncover flaws in the software.

The most significant issue is that the program underperforms when identifying female faces, particularly when the subject has darker skin. According to research published by the MIT Media Lab this week, Rekognition made virtually no errors identifying the gender of lighter-skinned men, but misclassified women as men 19% of the time. Furthermore, the software performed even worse when the individual in question wasn’t white – when the subject was dark-skinned and female, Rekognition misfired in 31% of trials.
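For context on where figures like these come from, here is a minimal sketch (not the study’s actual code) of how per-subgroup misclassification rates are typically computed: run the classifier over a labeled benchmark, group the predictions by demographic subgroup, and report each group’s error rate. The rows below are illustrative placeholders.

```python
from collections import defaultdict

# Illustrative (true_gender, predicted_gender, subgroup) rows; a real
# benchmark would contain hundreds of labeled images per subgroup.
results = [
    ("female", "male",   "darker-skinned female"),
    ("female", "female", "darker-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
]

# subgroup -> [misclassified count, total count]
tally = defaultdict(lambda: [0, 0])
for truth, predicted, subgroup in results:
    tally[subgroup][0] += int(truth != predicted)
    tally[subgroup][1] += 1

for subgroup, (wrong, total) in sorted(tally.items()):
    print(f"{subgroup}: {wrong}/{total} = {wrong / total:.0%} misclassified")
```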

This study emerged after computer scientist Joy Buolamwini, also of the MIT Media Lab, found discriminatory tendencies built into similar software marketed by IBM, Microsoft, and Face++. Since releasing her study in February last year, Buolamwini has founded the Algorithmic Justice League, which aims to call out bias in technology. After she published her findings, IBM and Microsoft announced they would take measures to improve the accuracy of their software; Microsoft went a step further and called for industry regulation.

Amazon, on the other hand, is distancing itself from the discussion. Executives have gone so far as to deny that these studies reveal anything about the reliability of its software. Instead, Matt Wood, Amazon’s general manager of deep learning and AI, highlighted the difference between facial identification and facial analysis. According to Wood, Rekognition’s analysis feature estimates gender by scanning for attributes such as facial hair, whereas facial identification compares a scan against a database of known faces, such as mugshots. In a recent press statement, Wood said: “It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis.”
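To make Wood’s distinction concrete, here is a minimal sketch using AWS’s boto3 SDK for Python. The image file and collection ID are hypothetical placeholders: DetectFaces performs facial analysis (attribute estimation, the operation the MIT study measured), while SearchFacesByImage performs facial identification against a pre-indexed collection of known faces.

```python
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:  # placeholder input image
    image_bytes = f.read()

# Facial *analysis*: estimates attributes (gender, facial hair, etc.)
# from a single image -- the operation the MIT study evaluated.
analysis = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)
for face in analysis["FaceDetails"]:
    print(face["Gender"]["Value"], face["Gender"]["Confidence"])

# Facial *identification*: searches a pre-indexed collection
# (e.g. a mugshot database) for candidate matches to the face.
matches = client.search_faces_by_image(
    CollectionId="mugshot-collection",  # hypothetical collection name
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=90,
    MaxFaces=5,
)
for match in matches["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```

The two calls take the same input image but answer different questions: one estimates attributes of a face, the other asks which already-enrolled face it most resembles.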

However, the potential for bias is alarming civil liberties campaigners, especially as Amazon has been aggressively marketing the product to law enforcement. Despite the seemingly non-threatening catalog of suggested uses listed on the company’s website, the American Civil Liberties Union has reported that Amazon executives met with U.S. Immigration and Customs Enforcement (ICE) last year. Considering that Congress hasn’t approved the use of facial recognition software for immigration enforcement, this is a troubling development – especially as the technology has a tendency to get it wrong. As Buolamwini wrote in a previous paper:

“The potential for weaponization and abuse of facial analysis technologies cannot be ignored nor the threats to privacy or breaches of civil liberties diminished even as accuracy disparities decrease.”
