AI revolutionaries

By Andy Charlwood and Nigel Guenole

AI has been widely tipped as a powerful potential asset in the field of human resources – particularly recruitment, promotion, and reward. But what of the frequently expressed reservations concerning the potential downside? Won’t AI entrench already-existing bias into the new digital systems? Not if we’re careful, say Andy Charlwood and Nigel Guenole.

A recent industry study found more than 300 human resources (HR) technology start-ups developing AI tools and products for HR or people management, with around 60 of these companies “gaining traction” in terms of customers and venture capital funding. Eightfold, an AI-based talent intelligence platform that helps attract, develop, and retain top talent, has raised $220m in funding and is now valued at over $2bn. Phenom, an HR technology firm that automates job tasks and improves the job search experience, was reported by Forbes as having “quietly become a billion dollar unicorn”. Accenture has made a strategic investment in the London-based start-up Beamery, which offers its own operating system for recruitment and has a reported valuation of $800m.


Large global technology firms have also started to make AI central to their people management systems and processes; IBM recently reported that in a single year its cost savings due to implementing artificial intelligence in human resources exceeded $100m. Although use of AI tools and apps in HR is currently limited, investors are betting big that a take-off in use is just around the corner.


A fairer, more efficient AI-driven future?

Proponents of AI for HR argue that it can dramatically improve the efficiency and fairness of how people are managed. Human decision making over hiring, pay, and promotion is already horrendously biased against people who are different from the decision-makers, to the detriment of women, ethnic minorities, and the working class. AI systems, proponents claim, can cut through this bias by making decisions and recommendations based on objective evidence of performance and potential.

The promise of AI is that new systems will be able to advertise posts and seek out job seekers to create new, more diverse applicant pools, doing the job currently done by recruitment consultants more cheaply and at greater scale. They will then automate the early stages of selection through a combination of more sophisticated algorithmic analysis of CVs and applications, gamified applicant tests, and robot interviews, selecting candidates with job-relevant traits and skills that mirror those of the business’s existing top employees. The use of AI will mean that organisations can recruit and select highly efficiently from larger applicant pools, without selection decisions being distorted by recruiter heuristics (e.g., a preference for candidates who attended a prestigious university) or biases (e.g., around gender and ethnicity) that have no relevance to job performance.

The same principles can be applied within organisations. AI can draw on the range of digital data created by employee software use, communications, manufacturing and service-delivery sensors, and audio and video feeds to judge employee performance more accurately and fairly than human managers can, and then use this information to make or recommend decisions on talent management, promotion, and reward.

Bias concerns are real, but solutions exist

Revolutions often provoke fear among those whose lives are at risk of being upended. The nascent AI revolution in HR is no exception. When the technology journalist James O’Malley responded to an article on the use of AI for assessing job candidates by tweeting, “there is no way this will not turn out to be racist”, his post received 115,000 likes from other Twitter users. Such fears are not without foundation. It has been widely reported that Amazon was forced to abandon an AI hiring tool when it was found to reproduce and exacerbate existing tendencies not to hire or promote female applicants.

In thinking about how to respond to these concerns, it is important to differentiate between bias, adverse impact, and fairness. “Adverse impact” is a technical term referring to differences between majority and minority groups in the employment-related opportunities (e.g., hiring, promotion, and termination) that are distributed as a result of using an AI system. These differences may reflect real differences between groups, or may be due to bias in the strict sense: the AI model systematically mismeasuring particular groups. Fairness, by contrast, is a social judgement about the appropriateness of these outcomes. In the computer science literature, however, and in the media and popular culture more broadly, the terms “adverse impact”, “bias”, and “fairness” tend to be used interchangeably, with all three typically describing adverse impact.
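To make the distinction concrete, here is a minimal sketch of how adverse impact is conventionally quantified, using the “four-fifths rule” from US employment-testing guidance: a minority group’s selection rate below 80 per cent of the majority group’s rate is flagged for review. The numbers and helper functions below are illustrative, not drawn from any real system.

```python
# A minimal sketch of how "adverse impact" is typically quantified:
# the four-fifths (80%) rule compares selection rates across groups.
# All figures here are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(minority_rate: float, majority_rate: float) -> float:
    """Ratio of minority to majority selection rates.

    Values below 0.8 are conventionally flagged as evidence of
    adverse impact under the four-fifths rule.
    """
    return minority_rate / majority_rate

# Hypothetical outcomes from an AI-assisted screening stage.
majority = selection_rate(selected=120, applicants=400)  # 0.30
minority = selection_rate(selected=30, applicants=150)   # 0.20

ratio = adverse_impact_ratio(minority, majority)
print(f"Impact ratio: {ratio:.2f}")  # 0.67 -> below 0.8, flag for review
```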

Biased and unfair AIs are not inevitable. A number of tools and methods can be used to develop AIs that are largely free of bias, and existing industrial psychology principles can be used to manage adverse impact. AI systems can also be designed in ways that minimise opportunities for bias: job analysis can be carried out to identify the knowledge, skills, abilities, and attributes required for successful performance, and assessments can then be designed to measure those attributes. It is differences in scores on these job-related attributes that determine candidate rankings in competitive hiring situations. If adverse impact remains a concern after systems have been de-biased, there are also a number of statistical techniques for reducing or removing adverse impact without compromising the effectiveness of the system.
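As an illustration of the kind of statistical technique referred to above, the sketch below weights two predictors to trade off predictive validity against group score differences, an idea explored in the industrial psychology literature on Pareto-optimal predictor weighting. Everything here is synthetic and assumed for demonstration: the predictors, the performance measure, and the 0.30 validity floor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)  # 0 = majority, 1 = minority (synthetic)

# Two synthetic predictors: p1 is more valid but shows a group gap;
# p2 is less valid but nearly gap-free.
p1 = rng.normal(0, 1, n) - 0.5 * group
p2 = rng.normal(0, 1, n) - 0.05 * group
perf = 0.5 * p1 + 0.3 * p2 + rng.normal(0, 1, n)  # "true" performance

# Search composite weights: minimise the group score gap subject to
# keeping validity (correlation with performance) above a floor.
best = None
for w in np.linspace(0, 1, 101):
    score = w * p1 + (1 - w) * p2
    validity = np.corrcoef(score, perf)[0, 1]
    gap = score[group == 0].mean() - score[group == 1].mean()
    if validity >= 0.30 and (best is None or gap < best[2]):
        best = (w, validity, gap)

w, validity, gap = best
print(f"weight on p1={w:.2f}, validity={validity:.2f}, group gap={gap:.2f}")
```

In practice, such trade-offs are assessed on properly validated assessment data, and legal constraints on score adjustment differ by jurisdiction.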


The key point here is that using domain knowledge in the development of AI systems can ensure that such systems do not reproduce the discriminatory behaviours produced by human biases. The outputs of AI systems can also be interrogated using constantly improving tools for explaining how “black-box” AIs make decisions. The challenge is to ensure that these methods are widely adopted in our field.
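As a simple example of what such interrogation can look like, the sketch below uses scikit-learn’s permutation_importance to check how much a trained model’s predictions depend on each input. The data and feature names (including the deliberately irrelevant postcode_proxy) are our own assumptions for illustration; a large importance score on a feature that proxies for a protected characteristic would be a red flag.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Synthetic applicant data; column 2 is irrelevant to the outcome.
X = rng.normal(size=(n, 3))  # skills_test, tenure, postcode_proxy
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature and measure how much
# model accuracy drops (ideally computed on a held-out set).
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

names = ["skills_test", "tenure", "postcode_proxy"]
for name, imp in zip(names, result.importances_mean):
    print(f"{name}: {imp:.3f}")  # near-zero for postcode_proxy is reassuring
```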


The dark side of AI for HR

Can we rise to this challenge? Close observers of the developing AI industry have significant doubts. AI developers, they note, often express an ideological objection to incorporating domain knowledge into AI development, on the grounds that it is better to let AIs learn from all available data without human interference (because human domain knowledge is assumed to carry its own biases). Developers also tend to take a narrow view of ethics, equating it with legal compliance, and to think in terms of a trade-off between fairness and efficiency, preferring to ignore fairness in order to prioritise efficiency.

The Princeton computer scientist Arvind Narayanan is writing a book on “AI snake oil” – the purported AI use cases that no AI expert seriously believes an AI can credibly undertake. He singles out the claim that an AI can recommend the best candidate to hire based on short videos as a particularly egregious example. The problem here is that, through a combination of hubris and ignorance, AI developers overclaim what AI is capable of. The risk is that firms looking to prioritise efficiency in HR processes fall for this snake oil, with consequences that are bad for workers and for society more broadly.


The tendency for AI developers to prioritise efficiency over ethics is also resulting in the development of AI tools for monitoring and controlling workers that enforce high levels of effort and reduce worker autonomy. Amazon has been at the forefront of these developments, which are reported to have resulted in high rates of worker burnout and injury. Critics have argued that such systems represent an ideological project to control human behaviour that increases dependency and reduces autonomy in ways that are inimical to human flourishing.

How to avoid buying AI snake oil

Given these problems and concerns with the AI for HR revolution, HR professionals might be tempted to take a counterrevolutionary position. We think this approach would be misguided. Companies are going to be tempted by the promised efficiencies of AI, and it is better that HR is in the room and part of the process when decisions are made. HR’s domain knowledge is essential if organisations are to avoid buying AI snake oil, but HR professionals also need to develop “big-data literacy” – an appreciation and understanding of the principles of statistics, machine learning, and critical thinking – so that they understand what AI can realistically achieve.

The ethical values and principles of the HR profession are also needed to help avoid AI use cases that cause harm. Central to the ethical development of AI is consultation and engagement with those who will be affected by it. If AI in HR is to be developed ethically, it is important that AI use is agreed with workers and not imposed on them.


What HR can do

We think this is an exciting time for HR. New AI applications hold out the possibility of a revolution in fairness and efficiency in the way people are managed. However, these developments are not without significant risks. To avoid these risks, HR professionals should develop the skills and know-how to contribute to the development of AI, to ensure that appropriate domain knowledge is incorporated into systems design and that systems are developed and deployed in an ethical way.

About the Authors

Andy Charlwood is a professor of human resource management at the University of Leeds. A founder member of the HR Analytics ThinkTank, he is an educator, researcher, speaker, and consultant on the use of analytics in human resource management.

Nigel Guenole is a senior lecturer and director of research at the Institute of Management, Goldsmiths, University of London. A chartered psychologist who specialises in talent management and applied statistics, he recently co-authored an IBM white paper on the business case for AI in HR.
