Artificial intelligence is increasingly playing a role in companies’ hiring decisions. Algorithms help target ads about new positions, sort through resumes, and even analyze applicants’ facial expressions during video job interviews. But these systems are opaque, and applicants often have no idea how they are sorting, scoring, and ranking applications.

It’s not just that we don’t know how these systems work. Artificial intelligence can also introduce bias and inaccuracy to the job application process, and because these algorithms largely operate in a black box, it’s not really possible to hold a company that uses a problematic or unfair tool accountable.

A new Illinois law — one of the first of its kind in the US — is supposed to provide job candidates a bit more insight into how these unregulated tools actually operate. But it’s unlikely the legislation will change much for applicants. That’s because it only applies to a limited type of AI, and it doesn’t ask much of the companies deploying it.

Set to take effect January 1, 2020, the state’s Artificial Intelligence Video Interview Act has three primary requirements. First, companies must notify applicants that artificial intelligence will be used to consider their “fitness” for a position. Those companies must also explain how their AI works and what “general types of characteristics” it considers when evaluating candidates. In addition to requiring applicants’ consent to use AI, the law also includes two provisions meant to protect their privacy: It limits who can view an applicant’s recorded video interview to those “whose expertise or technology is necessary,” and it requires that companies delete an applicant’s video within a month of their request.

As Aaron Rieke, the managing director of the technology rights nonprofit Upturn, told Recode about the law, “This is a pretty light touch on a small part of the hiring process.” For one thing, the law only covers artificial intelligence used in videos, which constitutes a small share of the AI tools that can be used to assess job applicants. And the law doesn’t guarantee that you can opt out of an AI-based review of your application and still be considered for a role (all the law says is that a company has to gain your consent before using AI; it doesn’t require that hiring managers give you an alternative method).

“It’s hard to feel that that consent is going to be super meaningful if the alternative is that you get no shot at the job at all,” said Rieke. He added that there’s no guarantee that the consent and explanation the law requires will be useful; for instance, the explanation could be so broad and high-level that it’s not helpful.

“If I were a lawyer for one of these vendors, I would say something like, ‘Look, we use the video, including the audio language and visual content, to predict your performance for this position using tens of thousands of factors,’” said Rieke. “If I was feeling really conservative, I might name a couple general categories of competency.” (He also points out that the law doesn’t define artificial intelligence, which means it’s difficult to tell what companies and what types of systems the law actually applies to.)

Because the law is limited to AI that’s used in video interviews, the company it most clearly applies to is Utah-based HireVue, a popular job interview platform that offers employers an algorithm-based analysis of recorded video interviews. Here’s how it works: You answer pre-selected questions over your computer or phone camera. Then, an algorithm developed by HireVue analyzes how you’ve answered the questions, and sometimes even your facial expressions, to make predictions about your fit for a particular position.

HireVue says it already has about 100 clients using this artificial intelligence-based feature, including major companies like Unilever and Hilton.

Some candidates who have used HireVue’s system complain that the process is awkward and impersonal. But that’s not the only problem. Algorithms are not inherently objective: they reflect the data used to train them and the people who design them. That means they can inherit, and even amplify, societal biases, including racism and sexism. And even if an algorithm is explicitly instructed not to consider factors like a person’s name, it can still learn proxies for protected identities (for instance, an algorithm could learn to discriminate against people who have gone to a women’s college).

Facial recognition tech, in particular, has faced criticism for struggling to identify and characterize the faces of people with darker skin, women, and trans and non-binary people, among other minority groups. Critics also say that emotion (or affect) recognition technology, which purports to make judgments about a person’s emotions based on their facial expressions, is scientifically flawed. That’s why one research nonprofit, the AI Now Institute, has called for the prohibition of such technology in high-stakes decision-making — including job applicant vetting.

“[W]hile you’re being interviewed, there’s a camera that’s recording you, and it’s recording all of your micro facial expressions and all of the gestures you’re using, the intonation of your voice, and then pattern matching those things that they can detect with their highest performers,” AI Now Institute co-founder Kate Crawford told Recode’s Kara Swisher earlier this year. “[It] might sound like a good idea, but think about how you’re basically just hiring people who look like the people you already have.”

Even members of Congress are worried about that technology. In 2018, US Sens. Kamala Harris, Elizabeth Warren, and Patty Murray wrote to the Equal Employment Opportunity Commission, the federal agency charged with investigating employment discrimination, asking whether such facial analysis technology could violate anti-discrimination laws.

Despite being one of the first laws to regulate these tools, the Illinois law doesn’t address concerns about bias. No federal legislation explicitly regulates these AI-based hiring systems. Instead, employment lawyers say such AI tools are generally subject to the Uniform Guidelines, employment discrimination standards created by several federal agencies back in 1978.

The EEOC did not respond to Recode’s multiple requests for comment.

Meanwhile, it’s not clear how, under Illinois’ new law, companies like HireVue will go about explaining the characteristics in applicants that its AI considers, given that the company claims that its algorithms can weigh up to tens of thousands of factors (it says it removes factors that are not predictive of job success).

The law also doesn’t explain what an applicant might be entitled to if a company violates one of its provisions. Law firms advising clients on compliance have also noted that it’s not clear whether the law applies only to businesses filling a position in Illinois, or to any interview that takes place in the state. Neither Illinois State Sen. Iris Martinez nor Illinois Rep. Jaime M. Andrade, legislators who worked on the law, responded to a request for comment by the time of publication.

HireVue’s CEO Kevin Parker said in a blog post that the law “entails very little, if any, change” because its platform already complies with GDPR’s principles of transparency, privacy, and the right to be forgotten. “[W]e believe every job interview should be fair and objective, and that candidates should understand how they’re being evaluated. This is fair game, and it’s good for both candidates and companies,” he wrote in August.

A spokesperson for HireVue said the decision to provide an alternative to an AI-based analysis is up to the company that’s hiring, but argued that those alternatives can be more time-consuming for candidates. If a candidate believes that a system is biased, the spokesperson said, recourse options “are the same as when a candidate believes that any part of the hiring process, or any individual interviewer, was unfairly biased against them.”

Under the new law in Illinois, if you participate in a video interview that uses AI tech, you can ask for your footage to be deleted after the fact. But it’s worth noting that the law appears to still give the company enough time to train its model on the results of your job interview — even if you think the final decision was problematic.

“This gives these AI hiring companies room to continue to learn,” says Rieke. “They’re going to delete the underlying video, but any learning or improvement to their systems they get to keep.”

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.