Members of the House Homeland Security Committee today questioned Federal government officials about the Department of Homeland Security’s (DHS) use of facial recognition technologies and potential racial bias in those technologies.
A December study from the National Institute of Standards and Technology (NIST) appeared to indicate evidence of racial bias in facial recognition algorithms, but based on witness testimony at today’s hearing, the issue appears to be more complex than that.
Charles Romine, Director of NIST’s Information Technology Laboratory, told the committee that “in the highest performing algorithms, we don’t see [racial bias] to a statistical level of significance.” The highest performing algorithms are the “one-to-many” algorithms that determine whether a person in a photo has any match in a database.
Examination of “one-to-one” algorithms – which seek to confirm that one photo matches a different photo of the same person in a database and are often used for unlocking a smartphone or checking a passport – yielded different results.
“For the verification algorithms – the one-to-one algorithms – we do see evidence of demographic effects for African-Americans, for Asians, and for others,” Romine said.
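To make the distinction concrete, the following is a minimal, purely illustrative sketch of how the two matching modes differ, assuming face photos have already been converted into embedding vectors by some encoder. The function names, similarity threshold, and toy data are hypothetical and are not drawn from any DHS or NIST system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe: np.ndarray, reference: np.ndarray,
                      threshold: float = 0.8) -> bool:
    """One-to-one verification: does the probe photo match this specific
    reference photo (e.g., the photo stored in a passport)?"""
    return cosine_similarity(probe, reference) >= threshold

def identify_one_to_many(probe: np.ndarray, gallery: dict,
                         threshold: float = 0.8):
    """One-to-many identification: search an entire gallery for the best
    match and return its label, or None if nothing clears the threshold."""
    best_label, best_score = None, threshold
    for label, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_label, best_score = label, score
    return best_label

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 128-dimensional embeddings standing in for face-encoder output.
    alice = rng.normal(size=128)
    bob = rng.normal(size=128)
    probe = alice + rng.normal(scale=0.05, size=128)  # a new photo of "alice"

    print(verify_one_to_one(probe, alice))                            # True
    print(identify_one_to_many(probe, {"alice": alice, "bob": bob}))  # "alice"
```

In this sketch, verification compares one pair of photos against a threshold, while identification scans a whole gallery for the closest match; it is in the one-to-one setting that NIST reported the demographic effects described below.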
John Wagner, Deputy Executive Assistant Commissioner for the Office of Field Operations within DHS’ Customs and Border Protection (CBP) organization, said CBP concurs with the findings in the NIST report. He added that apparent racial biases in facial recognition algorithms aren’t the only factors affecting match quality. Movement in the image, bad lighting, and aging are all “operational issues” that can contribute to poor matching, Wagner said.
The CBP official also told committee members the government has thus far run images of 43.7 million people through its facial recognition software and found only 252 apparent imposters. Of those, he said, 75 had used fake U.S. travel documents.