A top leader at the U.S. Equal Employment Opportunity Commission (EEOC) said last week that AI technology has the potential to eliminate bias in the hiring process.
According to EEOC Commissioner Keith Sonderling, “if properly designed and carefully used,” AI has the “promise” to improve employment decision making “by removing some of the normal issues with human hiring.”
“Why this technology is being developed and deployed at this rate is that the technology promises to do it better than humans. And what has been the biggest problem within HR?” Sonderling said during a Center for Strategic and International Studies (CSIS) event on May 30. “Biased humans. And this is where AI comes into play saying, well, we can design, develop, and deploy AI in a fashion without bias, you’ll have better employment decision making.”
Sonderling said that as employers move away from traditional resumes and toward more skills-based hiring, “AI has the promise to [help] by removing some of the normal issues with human hiring that the human would see.”
The commissioner said the problem of human bias is longstanding: over the last two years, the EEOC has collected $1.2 billion from employers that violated anti-discrimination laws, and the agency received 80,000 new cases this past fiscal year.
“The problem is out there and some of the very basic problems can be solved by AI. Like, a male and a female resume apply for the same job – a male is more likely to get selected,” he said. “African Americans and Asian Americans who whiten their resume by deleting any references to their race or where they’re from are more likely to get selected without those on there.”
Sonderling emphasized that there are many “longstanding issues with entering the workforce,” noting that when humans walk into an interview they see “everything about that person that the EEOC has said you are not allowed to make an employment decision on – their sex, their national origin, potentially their religion, if they’re disabled, if they’re pregnant.”
“That’s where AI – and I qualify this every time and this is really important for this discussion – if it’s properly designed and carefully used, it can actually help eliminate that bias,” Sonderling said. “By how? By designing AI to not look at any of those indications in a resume that may show a protected class.”
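Sonderling did not describe any specific system, but the mechanism he points to (withholding protected-class indicators from the model that makes the decision) can be sketched in a few lines of Python. The field names and the toy score_resume function below are hypothetical assumptions for illustration only, not a real screening tool.

```python
# Hypothetical sketch: strip fields that could reveal a protected class
# before a resume ever reaches the scoring step. Field names and the
# score_resume() function are illustrative assumptions, not a real system.

PROTECTED_FIELDS = {"name", "sex", "age", "national_origin",
                    "religion", "disability", "pregnancy_status", "photo"}

def redact_protected(candidate: dict) -> dict:
    """Return a copy of the candidate record without protected-class fields."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

def score_resume(candidate: dict) -> float:
    """Toy scoring function: fraction of target skills the candidate lists."""
    target_skills = {"python", "sql", "project management"}
    skills = {s.lower() for s in candidate.get("skills", [])}
    return len(skills & target_skills) / len(target_skills)

candidate = {
    "name": "Jane Doe",
    "sex": "F",
    "skills": ["Python", "SQL", "Data Analysis"],
    "years_experience": 6,
}

# The scoring step never sees the candidate's name or sex.
print(score_resume(redact_protected(candidate)))
```

Even in this sketch, stripping explicit fields is only part of “properly designed”: proxy signals such as names or addresses can still correlate with protected classes, which is part of why Sonderling qualifies the claim every time he makes it.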
But Sonderling stressed that AI must be properly designed and carefully used.
“If it’s not properly used, if it’s not carefully designed, these tools can have those factors play an awful role in a hiring decision and scale discrimination far greater than any human being,” he said. “So that’s why we really need to break down what it means to have these tools actually not have bias and actually have that approach where it’s going to allow individuals to get in the workforce and not be discriminated against like they were potentially by humans.”