Businesses around the world have clued in on the significance of creating a diverse workplace – not only because this is increasingly important to staff, but also because having workers who represent a wide range of backgrounds, ethnicities and genders is good for the bottom line.
To that end, many companies are turning to technological solutions to eliminate bias from hiring and increase workplace diversity. Here is a look at how artificial intelligence is impacting the recruitment process.
The business case for diversity
Research published by McKinsey & Company has found a clear correlation between more diverse workforces and improved financial performance.
The study's authors caution that correlation does not equal causation: greater gender and ethnic diversity in corporate leadership does not automatically generate more profit. What the correlation does indicate is that organizations with diverse leadership teams tend to be more successful.
Market research also shows that more diverse companies are better able to attract and retain top talent while improving the customer experience, employee satisfaction and overall decision-making. These factors all contribute to stronger business returns.
McKinsey found that companies in the top 25 percent for racial and ethnic diversity are 35 percent more likely to have financial returns above their respective national industry medians, and that the top quarter of companies for diversity tends to outperform the average of the other three quarters.
Plus, along with the financial rewards and generating positive corporate and employer brand messaging in the eyes of the public, maintaining a welcoming, inclusive, and diverse workplace is generally considered to simply be the right thing to do.
AI in recruitment
For many years now, companies have been turning to artificial intelligence solutions to augment their recruitment processes. AI can automate tasks, reduce response times for candidates, filter and rank applications, and much more. These efficiencies allow recruiters to focus their time and energy on selecting from a shortlist of prescreened applicants.
As advances in technology make artificial intelligence even more capable, many experts have theorized that it could take an even larger role in recruiting, up to and including making hiring decisions. Along with time and cost savings, one key advantage of having software determine who gets hired is that it removes the human element from the decision-making process entirely.
This is seen as a way to rule out any conscious or unconscious biases a recruiter may have that could influence their choice and impact organizational diversity. There is a growing industry of AI-powered recruitment tools used to process high volumes of job applicants, with the goal of removing human biases and eliminating discrimination from hiring.
It's a growing trend. A 2020 study of 500 companies across a range of industries in five countries found that nearly a quarter (24 percent) of businesses had implemented AI for recruitment purposes, and more than half (56 percent) of hiring managers planned to adopt the technology within the following year.
The technology aims to cancel out human biases against gender and ethnicity during recruitment by using algorithms that read vocabulary, speech patterns and even facial micro-expressions to assess a candidate's personality type and cultural fit with the organization.
How it works
Artificial intelligence can analyze data and perform complex calculations without emotion or bias. However, like all technology, its output is only as good as its input. AI recruitment systems that are trained to look for patterns in successful past hires when evaluating future candidates will therefore tend to select applicants who resemble those past hires.
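To see how that can happen, here is a deliberately simplified, hypothetical sketch in Python. The protected attribute is never given to the model, but a correlated proxy feature carries the historical bias forward; all data, feature names and numbers are invented for illustration.

```python
# Hypothetical illustration: a screening model trained on biased historical
# hires learns to favor candidates who resemble past hires, even though the
# protected attribute itself is never used as a model input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "past applicants": group membership (0 or 1) is never shown to
# the model, but a proxy feature (say, membership in a particular
# professional network) happens to be far more common in group 0.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)  # genuinely job-relevant signal
proxy = (rng.random(n) < np.where(group == 0, 0.8, 0.2)).astype(float)

# Historical hiring decisions were biased in favor of group 0.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 1.0

# Train a screening model on those past outcomes, using only skill and proxy.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Two new candidates with identical skill but different proxy values receive
# different scores: the historical bias survives through the proxy feature.
candidates = np.array([[0.5, 1.0],   # has the proxy trait
                       [0.5, 0.0]])  # identical skill, lacks the proxy trait
print(model.predict_proba(candidates)[:, 1])
```

The point of the sketch is simply that removing sensitive attributes from the inputs does not remove bias from the outcomes when the training labels themselves encode past discrimination.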
In fact, new research from Cambridge's Centre for Gender Studies has found that AI may actually be increasing bias in recruitment. Their report says that the use of AI to narrow candidate pools could ultimately increase uniformity rather than diversity in the workforce, as the technology is calibrated to search for the employer's predetermined perception of an ideal candidate.
"These companies reduce race and gender down to insignificant data points," concludes study co-author Dr. Eleanor Drage. This can end up harming efforts to intentionally recruit a more diverse workforce.
The Cambridge study found that when not used properly, AI recruitment tools may not actually boost diversity: when they are built on a company's past hiring data, they tend to favor candidates who most resemble the current workforce.
The way forward
Creating a diverse workforce requires intentionality. More and more companies in the traditionally male-dominated tech sector are using data and technology to increase representation in their ranks. For example, in 2022, 193 tech firms participated in the Diversity in Tech Dashboard survey, providing industry-wide data on Diversity, Equity and Inclusion (DEI) policies and practices and employee representation. The study found that 40 percent have already set company-wide DEI goals and another 39 percent are considering or currently working on this.
This may be the key to the future of AI as an inclusive, bias-free recruitment tool. Diversity among the people working in data science and machine learning is critical. Unlike people, machines have no inherent biases, but they inherit the biases embedded in the data and design choices of the people who build them.
Despite some early pitfalls, as artificial intelligence advances, this technology is showing increased potential for supporting bias-free, diverse hiring practices. Learning from past stumbles, AI programmers are now more aware of the potential for bias being coded into their recruiting and hiring software, and they are taking steps to safeguard against this.
"There has been a lot of news about the impact of bias in the algorithms looking to identify talent," says Bret Greenstein, PwC partner for data analytics and AI. "I think using AI [for DEI] will move from experiment to production for recruiting and hiring as people get better at understanding and identifying bias and understanding how to assess future performance better."
According to a report by IBM, "85 percent of AI professionals believe the industry has become more diverse over the past few years; of those, 91 percent think that shift is having a positive impact."
The business case for having a diverse workforce is clear. Bias in algorithms is usually caused by human error or oversight. Having a more diverse group of people developing the technology mitigates this risk. Artificial intelligence can also be programmed to recognize bias and flag gendered or exclusionary language in job ads and recruitment communications to make them more inclusive from the outset.
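As a rough illustration of that last point, a job-ad checker can be as simple as scanning text against lists of words commonly identified as masculine- or feminine-coded and flagging them for review. The sketch below is a hypothetical Python example; the word lists are abbreviated and illustrative, not a vetted lexicon.

```python
# Minimal sketch of a gendered-language flagger for job ads.
# The word lists here are abbreviated examples, not a vetted lexicon.
import re

MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "empathetic"}

def flag_gendered_terms(ad_text: str) -> dict:
    """Return coded words found in the ad, grouped by category."""
    words = set(re.findall(r"[a-z']+", ad_text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We need an aggressive, competitive rockstar to dominate the market."
print(flag_gendered_terms(ad))
# {'masculine_coded': ['aggressive', 'competitive', 'rockstar'], 'feminine_coded': []}
```

In practice, tools of this kind pair the flagged terms with suggested neutral alternatives so that ads read as welcoming to the widest possible pool of applicants.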
It is a positive cycle forward. Advancements in technology are creating artificial intelligence solutions with even greater potential to eliminate biases from the hiring process and, therefore, create more diverse and inclusive workplaces. In turn, this diversity of input leads to further advancements in bias-free technology.