Amazon’s facial recognition software erroneously flagged 28 members of Congress as having been arrested for crimes during tests run by the American Civil Liberties Union (ACLU).
The retail giant’s Rekognition platform also disproportionately misidentified people of colour in a database of mugshots. People of colour accounted for nearly 40% of the wrongly matched faces (11 of 28), even though they make up only about 20% of US Congress.
The group cross-referenced a database of 25,000 public arrest photos with public photos of every member of the US House and Senate.
Leading privacy campaigners have urged CEO Jeff Bezos to suspend sales to government and police agencies.
“Our test reinforces that face surveillance is not safe for government use,” Jacob Snow, a technology and civil liberties attorney at the ACLU Foundation of Northern California, said in a statement.
“Face surveillance will be used to power discriminatory surveillance and policing that targets communities of colour, immigrants, and activists. Once unleashed, that damage can’t be undone.”
The ACLU said it used the default or ‘out of the box’ match settings set by Amazon. Responding in a statement, Amazon said that “when using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95% or higher.”
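The dispute turns on how a similarity threshold filters candidate matches: every candidate face carries a similarity score, and only scores at or above the configured threshold are reported as matches. The sketch below illustrates the idea in plain Python; it is not Amazon’s implementation, and the names, scores, and the 80% “default” figure are assumptions for illustration only.

```python
# Minimal sketch (not Amazon's code): filtering candidate face matches
# by a similarity threshold. Scores run from 0 to 100.

def filter_matches(candidates, threshold):
    """Keep only candidates whose similarity score meets the threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

# Hypothetical candidates and scores, for illustration only.
candidates = [
    {"name": "A", "similarity": 97.2},
    {"name": "B", "similarity": 85.4},
    {"name": "C", "similarity": 62.1},
]

# A lower "out of the box"-style setting versus the stricter 95%
# threshold Amazon says it recommends for law enforcement.
default = filter_matches(candidates, threshold=80.0)
strict = filter_matches(candidates, threshold=95.0)

print([c["name"] for c in default])  # A and B pass at 80
print([c["name"] for c in strict])   # only A passes at 95
```

Raising the threshold trims borderline candidates like B, which is why the choice of default setting matters so much to the false-match rate.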
Three Congress members wrote to Bezos requesting copies of internal accuracy or bias assessments Amazon had conducted on Rekognition and a list of police or intelligence agencies either using the software or that had enquired about doing so.
Although facial recognition technology has made great strides in recent decades, performance remains uneven. There have been several other high-profile controversies over AI-driven platforms – many related to racial or gender bias.
1. Beauty contest AI judge shows racial bias
The first international beauty contest judged by ‘machines’ disproportionately favoured contestants with white skin.
Launched in 2016, Beauty.ai supposedly measured beauty by factors such as facial symmetry and wrinkles. But nearly all of the 44 winners were white, with a handful being Asian and only one having dark skin.
Although most contestants were white, many people of colour submitted photos, with substantial cohorts from India and Africa. In all, around 6,000 people from more than 100 countries entered.
2. White males recognised more readily than women or non-whites
Facial recognition software correctly identified white males 99% of the time but darker-skinned women only 65% of the time, according to a study by the MIT Media Lab.
The dataset that trained the software was disproportionately white and male.
Another research study found that one widely used facial-recognition data set was more than 75% male and more than 80% white.
3. Metropolitan Police’s facial recognition technology failed 98% of the time
Facial recognition software used by the Met Police produced false positives in more than 98% of the alerts it generated, a freedom of information request showed.
The UK’s biometrics regulator said it was “not yet fit for use” after the system had only two positive matches from 104 alerts.
The Met Police said it did not consider the inaccurate matches “false positives” since alerts were then checked a second time.
Another system, used by South Wales Police, produced correct matches in only 10% of its 234 alerts.
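The 98% figure follows directly from the alert counts above: with 104 alerts and only two correct matches, the remainder are false positives. A quick check of the arithmetic:

```python
# Verify the reported false-positive share from the alert counts above.

def false_positive_rate(alerts, correct):
    """Fraction of alerts that were not correct matches."""
    return (alerts - correct) / alerts

# Met Police figures: 104 alerts, 2 positive matches.
met = false_positive_rate(104, 2)
print(f"Met Police: {met:.1%} false positives")  # ~98.1%, i.e. "more than 98%"
```

The same arithmetic puts South Wales Police, with roughly 10% correct matches, at around a 90% false-positive share.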
These shortcomings explain why the Met Police still places such a high value on its super recogniser unit, which was formed in the wake of the 2011 London riots.
Facial recognition software identified only one culprit in the 2011 riots, compared with 609 identified by the super recognisers, the unit’s founder DCI Neville has pointed out.
Although that was seven years ago, facial recognition still has a huge gulf to bridge, he insisted. Humans have a particular advantage when assessing people from side-on views and can even identify someone from the back of their head alone.