AI Is Still Really Bad At Recognizing Minorities' Faces

A test of facial recognition software conducted by the ACLU found accuracy drops along racial lines.

To us, a face consists of nuances: a smile, dimples, big eyes. To facial recognition software, a face is data: the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones, the length of the jawline.

One facial recognition program, FaceIt, calls these facial landmarks "nodal points." Each human face has approximately 80 of them, and their measurements are combined into a numerical code that represents the face in a database.
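
That numerical code is easiest to see in miniature. Here is a minimal sketch of the idea, not FaceIt's or any vendor's actual algorithm: a handful of made-up nodal-point measurements packed into a vector, with two faces compared by the distance between their vectors (Python with NumPy assumed).

```python
# Minimal sketch: a face's "numerical code" as a vector of nodal-point
# measurements (eye distance, nose width, eye-socket depth, jawline length).
# All numbers are invented for illustration.
import numpy as np

def face_code(measurements: list[float]) -> np.ndarray:
    """One number per measured landmark -> the face's numerical code."""
    return np.asarray(measurements, dtype=np.float64)

stored = face_code([62.0, 38.5, 24.1, 88.0])  # code already in the database
probe  = face_code([61.7, 38.9, 24.0, 87.6])  # newly captured face

# Smaller Euclidean distance between codes -> more likely the same face.
distance = float(np.linalg.norm(stored - probe))
print(f"distance: {distance:.2f}")
```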

Regardless of which features are measured, the technology basically works like this: a database of images is created, and each new image is compared against it to see whether it matches anyone already enrolled. The database could hold a company's employees, so the face of a person entering the building is instantly designated a visitor or a worker.
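
Sketched in the same spirit, the enroll-then-match loop looks like this. The names, measurements and threshold value below are all illustrative, not drawn from any real product.

```python
# Illustrative enroll-then-match workflow: compare a new face code against
# every enrolled employee and label the entrant by the closest match.
import numpy as np

employees = {  # the enrolled database: name -> stored face code
    "a.ng":   np.array([62.0, 38.5, 24.1, 88.0]),
    "b.osei": np.array([58.3, 41.2, 22.7, 84.5]),
}
MATCH_THRESHOLD = 1.0  # raising it risks false matches; lowering it, misses

def classify_entrant(probe: np.ndarray) -> str:
    """Return 'worker: <name>' if the best match is close enough, else 'visitor'."""
    name, dist = min(
        ((n, float(np.linalg.norm(code - probe))) for n, code in employees.items()),
        key=lambda pair: pair[1],
    )
    return f"worker: {name}" if dist <= MATCH_THRESHOLD else "visitor"

print(classify_entrant(np.array([62.1, 38.4, 24.2, 88.1])))  # worker: a.ng
```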

It also could be a database of people with outstanding warrants or people sought for terrorism. Or the parents and guardians of kids in a particular school. And facial recognition software is getting smarter, but some tests show that it is not yet smart enough.

In a test the ACLU conducted of Amazon's Rekognition software, which compared members of Congress against a database of 25,000 publicly available arrest photos, 28 members were misidentified as people who had been arrested for a crime. Nearly 40 percent of the misidentified members (11 of the 28) were minorities, who make up only 20 percent of Congress.

Amazon said the ACLU used the wrong settings, running the test at Rekognition's default 80 percent confidence threshold rather than the higher threshold the company recommends for law enforcement work.
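
The disagreement comes down to a single parameter. As a hedged sketch, this is where that setting lives when calling Rekognition through AWS's boto3 SDK; the image files are placeholders, and 95.0 stands in for the stricter threshold Amazon says such tests should use.

```python
# Comparing two faces with Amazon Rekognition via boto3. If omitted,
# SimilarityThreshold defaults to 80.0, the setting at the center of
# the ACLU dispute; higher values return fewer, stronger matches.
import boto3

rekognition = boto3.client("rekognition")

with open("entrant.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=95.0,
    )

for match in response["FaceMatches"]:
    print(f"similarity: {match['Similarity']:.1f}")
```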

But researchers at MIT's Media Lab showed that facial recognition has a color and gender problem. They assembled 1,270 faces of politicians from three African and three European countries and tested how accurately systems made by Microsoft, IBM and Megvii of China classified their gender.

The MIT results: Gender was misidentified in less than 1 percent of lighter-skinned males; in up to 7 percent of lighter-skinned females; in up to 12 percent of darker-skinned males; and in up to 35 percent of darker-skinned females.

The database makes all the difference. If it contains far more white men than black women, the system will be worse at identifying black women. In other words, the deeper and more diverse the database, the more accurate the software is likely to be.
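
One way to surface that kind of skew, in the spirit of the MIT audit, is to score accuracy per demographic group instead of reporting a single overall number. A minimal sketch, with fabricated placeholder records:

```python
# Disaggregated evaluation: error rates computed per group, not in aggregate.
# The records are placeholders; a real audit would use labeled test results.
from collections import defaultdict

results = [  # (group, was_the_prediction_correct)
    ("lighter_male", True), ("lighter_male", True),
    ("darker_female", False), ("darker_female", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.0%} error rate over {n} samples")
```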

And the engineers who create the algorithms apparently make a difference, too. One possible reason: An engineer might rely on personal experience in distinguishing between faces — a process that would be influenced by the engineer’s own race. Research suggests diversity in the engineering staff could help avoid some unintentional bias.

One thing's for sure: Facial recognition systems are becoming common in the U.S. and around the world. And as police rely on the software to identify possible threats, consistently misidentifying black and brown people becomes more than a tech problem; it could easily become a civil rights debacle.