[Image: Fred Oswald, PhD, professor at Rice University, speaking as part of Welliba's EXcelerate Series: "Charting a Path for Responsible AI for HR Professionals."]

Charting a Path for Responsible AI for HR Professionals

Fred Oswald

AI technologies and tools in the workforce continue to evolve in their sophistication – as well as in their potential and limitations. HR professionals in particular recognise the promise of AI in their work – with some promises already realised by HR early adopters. Whether it is composing emails, summarising meetings, drafting presentations, or assisting in interpreting performance appraisals, AI tools can make an HR professional's common tasks easier, better, more efficient, and more scalable.

Thus, organisations and HR leaders alike clearly appreciate how AI tools might make them more effective and competitive in today's organisational environment. Yet important and deep concerns linger around AI and HR with respect to data privacy, the proprietary nature of data, which AI tools and technologies can manage and mitigate risk, and emerging regulations that safeguard the use of AI tools. The balancing act is very real here: between excitement for AI innovation and the desire to advance HR's capabilities, and the caution and need for guidance and assurance around the use and outcomes of AI tools in the workplace.

Organisations benefit from evaluating AI tools and systems not by comparing them against an unachievable standard of perfection, but by asking whether implementing AI makes the current state of affairs better or worse. This is an opportunity to re-examine what matters in HR: for example, to carefully define and measure the inputs, processes and outcomes in HR and people practices that might be affected by incorporating AI. Doing so leads to more specific, data-informed discussions in which potential sources of bias and inequity can be better understood and addressed. Put another way: subjectivity, skewed assessments, personal bias and inequitable practices have long been challenges in organisations. Will the use of AI tools eliminate, reduce or exacerbate these? How and when will we know?

Navigating the Data: Validity and Fairness

Whether data is actively collected or passively obtained from existing records or the internet, AI algorithms can analyse it. However, just because data can be collected does not mean that it should be, and many types of data relevant to organisations can be messy, contradictory, and otherwise difficult to use and make sense of. Sophisticated as AI algorithms might be, the related substantive and ethical HR questions can be even more challenging. In general, HR leaders using AI within their organisation need to consider two crucial aspects:

  • Validity: What evidence supports the job-relevance of the data being input into an AI tool? For example, is the data reliable (e.g., does it accurately reflect intended employee characteristics)? Does the data predict appropriate organisational outcomes (e.g., employee performance on the job)?
  • Fairness: How is fairness defined in your organisation? Are groups of individuals treated equitably, such as groups defined by demographics or disability? What types of data can be collected in a manner that is both ethically and legally responsible?

These two aspects are not specific to AI tools in HR; they reflect fundamental considerations and requirements for all forms of assessment used in organisational settings. In the AI context, however, stronger conceptual and empirical support for the job-relevance of the data collected lends itself to greater interpretability and trust in AI algorithms and tools.
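To make the validity questions above concrete, here is a minimal sketch in Python – with simulated data and hypothetical variable names, not any particular vendor's method – of two statistics an HR team might examine: a test-retest reliability estimate for an assessment score, and a criterion-related validity coefficient relating that score to later job performance.

```python
# Illustrative sketch only: simulated data, hypothetical variable names.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 200  # hypothetical number of employees with complete records

# An underlying employee characteristic the assessment is meant to capture
true_trait = rng.normal(size=n)

# Assessment scores collected twice (a simple test-retest design)
score_t1 = true_trait + rng.normal(scale=0.5, size=n)
score_t2 = true_trait + rng.normal(scale=0.5, size=n)

# Later job-performance ratings, partly driven by the same characteristic
performance = 0.4 * true_trait + rng.normal(scale=1.0, size=n)

# Reliability: do repeated measurements of the same people agree?
reliability = np.corrcoef(score_t1, score_t2)[0, 1]

# Criterion-related validity: do scores predict the outcome that matters?
validity = np.corrcoef(score_t1, performance)[0, 1]

print(f"test-retest reliability estimate: {reliability:.2f}")
print(f"criterion-related validity estimate: {validity:.2f}")
```

With real organisational data in place of the simulation, estimates like these supply the kind of evidence of job-relevance that the validity question asks for.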

 

High-stakes and Low-stakes: The Role of AI in HR

 

Employee hiring is often considered a high-stakes setting, because it can significantly affect both the lives of job applicants and the success of businesses. In this context, candidates should be informed and aware of the data that is being collected and used as input into hiring decisions.


Currently, legislation around the use of AI in hiring contexts has been limited, but it will ramp up in the coming years, for example with the implementation of the EU AI Act, which will prohibit emotion recognition and include obligations of transparency, instructions for use, and human oversight of AI tools. Some US states have also instituted laws relevant to informing job applicants about AI tools used in hiring. New York City, for example, currently requires organisations deploying AI tools in selection processes to inform candidates of the use of AI and, in the service of transparency, to report hiring rates within particular subgroups protected by Title VII, as well as intersectional groups. Through such laws, greater responsibility may be placed on the developers of AI tools as much as on the organisations that use them. When it comes to the responsible use of AI in HR, the reputation and ethics of AI developers and organisations will play key roles as much as the law.
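To make the subgroup-reporting idea concrete, here is a minimal sketch in Python with hypothetical counts – not the official method of New York City's law or any other statute. It computes a selection rate per subgroup and an impact ratio relative to the highest-rate group, flagging ratios below 0.8 per the US EEOC's four-fifths guideline, used here purely for illustration.

```python
# Illustrative sketch: hypothetical records, not a jurisdiction's official method.
from collections import Counter

# (subgroup, hired?) pairs for a hypothetical selection process;
# real data would come from the applicant-tracking system.
records = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False), ("group_a", True), ("group_a", False),
]

applicants = Counter(group for group, _ in records)
hires = Counter(group for group, hired in records if hired)

# Selection rate per subgroup: hires divided by applicants
rates = {g: hires[g] / applicants[g] for g in applicants}

# Impact ratio: each group's rate relative to the highest-rate group
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "  <- review (below four-fifths guideline)" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

The same bookkeeping extends to intersectional groups by using a combined key (e.g., a tuple of attributes) as the subgroup label.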


Low-stakes scenarios involving AI in HR include activities such as employee training and development – for example, providing real-time coaching and feedback on technical performance or interpersonal interactions. Although low-stakes settings like these also require users to submit their data to an AI algorithm, users themselves tend to have more control over how they use the output, taking what they see as the best and leaving the rest. This iterative process of user engagement and review allows for the customised pacing of training and development. With the assistance of AI tools, individuals get to develop themselves; they are in control.

 

The Equity Challenge in AI Access

 

Employees might have access to AI tools through their organisation, but if training is lacking and/or some groups of employees tend to use the tools more than others, this can weigh against ensuring an equitable workplace. Moreover, some employees might also have greater access to AI tools that are not endorsed by organisations but are readily and publicly available, such as tools built on large language models (LLMs – too many to name here). In this context, equity in AI access and use is also an important concern. Some applicants and employees will have more time and resources to access and use AI tools outside of the organisation, understand the legal context, and reach positive outcomes, potentially gaining an advantage over others who lack that time and those resources. Organisations can usefully investigate how their top-down AI policies, combined with bottom-up use of AI tools by employees, affect not only their bottom line but also equity in the workplace.

 

Charting a Path for Responsible AI in HR 

 

As a society, we are still in the relative infancy of implementing AI within HR, yet across the globe we already see HR professionals vigorously experimenting with AI tools in their work environments. In those experiments, the media has already reported notable examples of how bias can creep into AI – just as it does with human HR decisions (e.g., when interviewing applicants and screening résumés).


Therefore, HR leaders must proactively pursue knowledge in the AI space. Over many decades, I-O psychologists have grappled with employment testing, job performance, and other organisational issues now relevant to AI tools; they are eager to share their insights, learn from organisations, and help connect the dots around vetting, evaluating, and using AI tools within a local organisational context. To learn more about I-O psychology, see https://www.siop.org.

 

It takes a team, if not a village, working together to ensure that organisational practices – in both high-stakes and low-stakes settings – use AI tools in an effective and responsible manner.