At a recent MIT event on the future of work, held in New York City for the university's high-achieving alumni network, Andrew McAfee, co-director of MIT's Initiative on the Digital Economy and a principal research scientist at its Sloan School of Management, said leaders are realizing that many of their human resources and human capital practices are simply outdated. McAfee's view: "If you want the bias out, get the algorithms in." Silicon Valley is backing many start-ups selling the idea that artificial intelligence can solve the problem of human bias in hiring decisions. But a new class of independent algorithm-auditing firms and public policy experts — with experience at some of the largest tech companies in the world and educations from elite institutions — say "algorithmic bias" has already been proved to exist in other areas. As a result, they say, the rapid uptake of AI for hiring has moved too fast, and with too little scrutiny.


Algorithms can help HR professionals make smart hiring decisions, but those algorithms can often be biased against minorities, speakers on a panel at the MIT event said. The biases creep in because human bias influenced the algorithm, and it's up to humans to notice the bias and fix it. Traditional résumé review puts women and minorities at a 50 percent to 67 percent disadvantage, according to start-up pymetrics, which attempts to go well beyond the résumé by assessing job applicants with neuroscience games and AI. Companies using AI can reduce those figures dramatically, pymetrics said, as long as the input data is accurate and remains unbiased. That's a big "if."

AI can work, 'as long as' the input data is accurate

Cathy O'Neil, who also spoke at the MIT future-of-work event, said the hiring algorithms now coming into the human resources field are a perfect test case for her skepticism about the tech-utopian movement, and she often uses them in presentations. O'Neil, an academically trained mathematician who studied and worked at UC Berkeley, Harvard and MIT — and who left a job on Wall Street to join the Occupy Wall Street movement and write a book on the dangers of algorithms — often poses a thought experiment in her talks: Imagine what a machine-learning hiring algorithm trained on Fox News data would produce, even if the data science team made reasonable choices. Then she points out that it doesn't have to be an outrageous example like Fox News, because there is no perfect workplace with perfect hiring policies, perfect raise and promotion methods, and a culture that welcomes all people equally.



"When we blithely train algorithms on historical data, to a large extent we are setting ourselves up to merely repeat the past. ... We'll need to do more, which means examining the bias embedded in the data."

— Cathy O'Neil, author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy"

"It's going to take some real thought," Dr. Lori Kletzer, an economics professor at Colby College, said in an interview with CNBC. "It's not just going to happen. And it's important to raise the questions now. ... The implications are societal, so we can't just leave it to the market, because the market only cares about the bottom line."

To start, the makeup of the tech industry creating the hiring algorithms isn't perfect. While Silicon Valley has a long history of encouraging immigrant entrepreneurs and bringing in foreign workers from around the world on skilled-worker visas, it has been criticized for a lack of diversity in hiring from within the national population. A report from the federal Government Accountability Office released in November 2017 found that the technology industry lags other sectors in the diversity of its workforce: "The estimated percentage of minority technology workers increased from 2005 to 2015, but GAO found that no growth occurred for female and black workers, whereas Asian and Hispanic workers made statistically significant increases. Further, female, black and Hispanic workers remain a smaller proportion of the technology workforce — mathematics, computing and engineering occupations — compared to their representation in the general workforce."

"When we blithely train algorithms on historical data, to a large extent we are setting ourselves up to merely repeat the past. If we want to get beyond that, beyond automating the status quo, we'll need to do more, which means examining the bias embedded in the data. The data is, after all, simply a reflection of our imperfect culture," O'Neil, who now runs her own algorithm-auditing firm, said via email.

The traditional job application process isn't working

Dr. Frida Polli, pymetrics CEO and co-founder, also has an extensive academic résumé, including an MBA from Harvard and a postdoctoral fellowship in neuroscience from MIT. Yet despite those impressive accomplishments, she feels that simply listing them on her résumé told employers little about her potential. Pymetrics, which works with companies such as Unilever, Accenture, LinkedIn and Tesla, uses behavioral neuroscience and artificial intelligence to help identify candidates in a more predictive and unbiased way, bypassing the résumé and matching applicants with roles based on data generated from brain games.

HireVue is another start-up working with corporate industrial psychologists to make sure employer assessment tools are up to industry standards and, by adding AI to the mix, to eliminate bias. It has been around for more than a decade, starting with technology for video interviews and moving more recently into AI-based job assessments. "We can measure it, unlike the human mind, where we can't see what they're thinking or if they're systematically biased," Lindsey Zuloaga, director of data science at HireVue, recently told CNBC. By the time candidates reach a human recruiter, companies using HireVue have reported a much more diverse pool: Unilever has improved the diversity of its talent pool by 16 percent since partnering with HireVue. "If the team does notice a skew in results, it can evaluate the algorithm to see what went wrong and remove the bad data," Zuloaga said.

"AI is not impartial or neutral," said Meredith Whittaker, co-founder of the AI Now Institute at New York University and founder of Google's Open Research group. AI Now — which Whittaker co-founded with Kate Crawford, an NYU professor and principal researcher at Microsoft Research — aims to move beyond what it describes as "minimal oversight" of AI, and algorithmic bias is one of its core research areas. "In the case of systems meant to automate candidate search and hiring, we need to ask ourselves: What assumptions about worth, ability and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded?" Whittaker asked.

Whittaker said HireVue, for instance, creates models based on "top performers" at a firm, then uses emotion-detection systems that pick up cues from the human face to evaluate job applicants against those models. "This is alarming, because firms that are using such software may not have diverse workforces to begin with, and often have decreasing diversity at the top. And given that systems like HireVue are proprietary and not open to review, how do we validate their claims to fairness and ensure that they aren't simply tech-washing and amplifying longstanding patterns of discrimination?"

In a statement to CNBC, Loren Larsen, CTO of HireVue, said, "It is extremely important to audit the algorithms used in hiring to detect and correct for any bias. ... No company doing this kind of work should depend only on a third-party firm to ensure that they are doing this work in a responsible way. Third parties can be very helpful, and we have sought out third-party data-science experts to review our algorithms and methods to ensure they are state-of-the-art. However, it's the responsibility of the company itself to audit the algorithms as an ongoing, day-to-day process."

The potential 'drastic and harmful' downside of AI

Pymetrics said the biggest hurdle with HR teams inside corporations is legal concern about bias. That's why pymetrics developed a process to de-bias its algorithms and has open-sourced that methodology on GitHub. It "wants all companies, regardless of industry, to have the tools to detect and remove bias from their algorithms," Polli said. But the company does not let third-party algorithm-auditing firms, like O'Neil's, review its actual job-hiring code for undetected bias.

"The algorithms themselves are not the solution, because they could actually make it worse. Audited algorithms that are shown to be free of gender bias ... are the answer to removing bias," Polli said. She added, "If you want to have a third-party auditor, fantastic. But the most critical thing is that it is being done however it is getting done." Polli said the pymetrics process has now been tested on 50,000 pieces of data, which she said confirms the absence of gender or ethnic bias.

That kind of confidence in an internal review process doesn't sit well with Dipayan Ghosh, a Harvard fellow and former Facebook privacy and public policy official who is now with the New America think tank. He said the use of advanced algorithms and AI in recruiting can create tremendous value for an industry where discrimination by hiring managers has been rampant, but implemented irresponsibly, it can have drastic and harmful effects on job candidates. "Algorithms discriminate. There have been countless episodes in different contexts that have illustrated this in high resolution in recent years, from social media advertising to creditworthiness decision-making to subsidy dispensations." He also said companies reviewing their own code is not enough, especially in the corporate sector, where returns are optimized for near-term revenue, forward investment and stock return above all else.
"We know of too many past cases where all a company needed to do is to self-certify, and it was shown to be perpetuating harms to society and, specifically, certain people. ... The public will have little knowledge as to whether or not the firm really is making biased decisions if it's only the firm itself that has access to its decision-making algorithms to test them for discriminatory outcomes."
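The kind of audit Larsen and Ghosh describe can start with something as simple as comparing selection rates across demographic groups. As an illustration — not pymetrics' or HireVue's actual methodology — here is a minimal sketch of the "four-fifths rule," the EEOC's widely used screen for adverse impact, applied to hypothetical screening outcomes:

```python
# A minimal sketch of one common bias-audit check: the "four-fifths rule"
# used by the U.S. EEOC as a rough screen for adverse impact in hiring.
# This illustrates the kind of audit discussed above; the group names and
# outcome data below are hypothetical.

def selection_rates(outcomes):
    """outcomes maps each group name to a list of 1 (selected) / 0 (rejected)."""
    return {group: sum(results) / len(results)
            for group, results in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    Under the four-fifths rule, a ratio below 0.8 flags potential
    adverse impact worth investigating.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two applicant groups
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5 of 8 selected -> rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 selected -> rate 0.25
}

ratio = adverse_impact_ratio(outcomes)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.25 / 0.625 = 0.40, below 0.8
```

A real audit would go much further — statistical significance tests, intersectional group comparisons, and checks on the training data itself — but even this simple ratio makes the point the experts raise: the check is only meaningful if someone outside the model's owner can run it.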

"There could be a serious risk, and it has the potential to open up the floodgates to something very bad."

— Davida Perry, co-founder and managing partner of Schwartz, Perry & Heller LLP, a firm that specializes in employment law