For businesses in New York City, a new law states they must prove that any AI tools used in their hiring process are free from racism and sexism.
Concerns have steadily grown around the data this type of AI software – known as an Automated Employment Decision Tool, or AEDT – is trained on. So now, it must pass a third-party audit.
The ruling comes into effect on Wednesday 12th July, and is believed to be the first of its kind. AI may be creeping into every aspect of work life, but now we’re seeing a stricter call for accountability.
Companies Using The Software Need To Be Transparent
AEDTs have fast become a popular tool in the recruitment process owing to the vast number of applicants businesses receive per job listing.
“In the age of the internet, it’s a lot easier to apply for a job and there are tools for candidates to streamline that process. Like ‘give us your resume and we will apply to 400 jobs.’ [Businesses] get just too many applications. They have to cull the list somehow, so these algorithms do that for them.” – Cathy O’Neil, CEO of consulting firm Orcaa
However, studies have shown that AI programs in general can display racism, sexism and other biases. This has understandably led to calls for an investigation into whether potential job candidates are being ruled out unfairly.
The new law states that companies that use these recruitment tools must have them audited, or they’ll be using them illegally. More than this, they’ll have to publish the results of the audits in an effort to provide total transparency.
New York’s Department of Consumer and Worker Protection will enforce the law and investigate complaints of company violations. However, it’s not yet clear how, or to what extent, it will do so.
Regulation of AI Tools Is Growing
Businesses the world over are scrambling to figure out which AI tools are beneficial to them. However, their use won’t remain without regulation. We’ve already seen the impact of what happens in the legal sector when ChatGPT is asked to contribute to research. Spoiler alert: fines and red faces.
As well as this, there are plagiarism and cybersecurity issues. All of these unknowns have led to tech company executives and federal lawmakers calling for regulation.
While we don’t yet know what this could look like, it’s worth noting that Italy temporarily blocked ChatGPT for potentially violating European Union data protection laws.
This Move Is Progress, But Not Foolproof
While the new law has generally been well received by workers and regulators, it’s not without flaws. Gaps remain in the auditing requirements: they don’t cover disability- or age-based discrimination, for example.
On top of this, there’s no concrete evidence that the AI tools are particularly good at selecting potential employees. Their appeal is more about reducing the volume of applications.
So while the audits will be beneficial in reducing biases, a wider question remains over whether the tools need further refining before widespread use.