The ultimate goal of most I/O psychologists is to predict human behaviour at work.
Traditionally, they approach people data from an inferential standpoint; if A happens, then B follows.
But these conventional methods struggle to process the many data points needed to create a more comprehensive or holistic picture of an individual, their fit for a role, their job satisfaction, or their engagement.
We believe far more insight can be extracted from the increasingly vast array of data now available than currently is.
And AI can help in this – with some caveats in place.
HR leaders know that assessment tools generate valuable data points and have transformed the recruitment and selection space. Companies define roles, determine the skills needed, evaluate candidates against these, and then recruit the best-fit individuals.
But post-hire, the challenge shifts.
Leaders then need to know what drives their people: to understand their motivations and current state. Everyone has their optimal state, and this is influenced by both their mindset and the context.
Consider the world of athletics: Usain Bolt possesses the skills to run 100 metres within a specific range of times. However, his performance – whether he approaches his personal best or the world record – depends on context and mindset. Factors such as sleep quality, coaching, crowd support, track conditions, wind direction, and footwear all impact his results.
When mindset and context align, extraordinary performance becomes possible.
The same principle applies to employees.
Skills are the foundation, but performance hinges on optimal context and mindset factors that either boost or block performance. And the two are interconnected.
By measuring these accurately, we learn of the performance influencers – and what needs to change. After all, success stems from sustainable, long-term productivity, delivered time and again.
These are the key components identified in the Welliba Context and Mindset Model.
We know that context and mindset are fundamental to the employee experience and performance – and we know that when working with data, a strong theoretical model is needed to provide structure to the analysis.
To develop this, our measurement team drew together a comprehensive collection of mindset and context factors faced by employees across various settings, before bringing them together into a single model.
Its comprehensiveness allows us to overlay it onto any organisation, mapping both the data from traditional, direct data collection and also leveraging existing indirect or passively acquired data.
When working with a client we are able to analyse all their available data, categorising it as either a positive or negative indicator for each factor. The result is a score or detailed description for specific dimensions within a particular company. Of course, it is the interplay of these dimensions that shapes the success profile for an organisation, a department, location, or role.
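To make the categorisation step concrete, here is a minimal sketch in Python. All factor names, values, and the scoring rule are invented for illustration – they are not Welliba's actual model or logic – but they show the general idea: each available data point is tagged as a positive or negative indicator for a factor, and the signed indicators are aggregated into a per-factor score.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    factor: str     # which model factor this data point informs
    value: float    # normalised signal strength, 0..1
    positive: bool  # does it support or undermine the factor?

def score_factors(indicators: list[Indicator]) -> dict[str, float]:
    """Aggregate signed indicators into a per-factor score.

    Each factor's score is the mean signed value of its indicators,
    falling between -1 (all negative) and +1 (all positive).
    """
    grouped: dict[str, list[float]] = {}
    for ind in indicators:
        signed = ind.value if ind.positive else -ind.value
        grouped.setdefault(ind.factor, []).append(signed)
    return {factor: sum(vals) / len(vals) for factor, vals in grouped.items()}

# Hypothetical indicators for two mindset/context factors
data = [
    Indicator("autonomy", 0.8, True),
    Indicator("autonomy", 0.4, False),
    Indicator("support", 0.6, True),
]
scores = score_factors(data)
```

A real pipeline would of course normalise heterogeneous sources and weight indicators by reliability before aggregating, but the shape of the problem – classify each data point, then roll up per dimension – is the same.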
The challenge is twofold: accessing and choosing the data to work with, and using AI transparently.
Let us address the second issue first.
Before advanced large language models such as GPT-4, complex data processing often required integrating multiple distinct AI services. For example, an analysis might involve separately deploying a natural language classifier, a text-mining system, and a sentiment-analysis tool. The outputs from these discrete tools would then be combined to generate meaningful insights.
The landscape changed with the introduction of large language models (LLMs) that support agentic workflows. These offer the ability to simply input what might be a complex query and receive an output almost instantly.
But with such ‘convenience’ comes a significant caveat: an increased risk of the ‘black box’ phenomenon. That is, users are unable to trace the reasoning or foundational source behind a given output.
This opacity presents a substantial risk.
Relying on a tool that generates plausible results without a clear understanding of its internal processes can lead to misguided decisions and unforeseen consequences.
Transparency is critical. And this is why our team has developed and adopted our ‘Expert in the Box’ approach.
‘Expert in the Box’ aims to replicate human expert behaviour and take data understanding beyond traditional AI.
It emulates the decision-making of human experts, integrating psychometric principles and prioritising ethical considerations.
It means that we define the rules governing its functionality and get a crystal-clear explanation of how the outputs are determined – without compromising on the power of AI, and while still harnessing LLMs or other neural networks.
As such, it produces more transparent, explainable, and trustworthy AI solutions. It reduces bias, improves the output reliability, and builds user trust.
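As a rough illustration of what a rule-driven, traceable approach can look like – the rules, weights, and field names below are hypothetical, not Welliba's implementation – consider a scorer that records exactly which expert-defined rules fired for a given input, so every output can be audited rather than accepted on faith:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str                        # human-readable rule label
    condition: Callable[[dict], bool]  # predicate over the input record
    weight: float                    # contribution when the rule fires

@dataclass
class Explanation:
    score: float
    fired: list[str] = field(default_factory=list)  # audit trail

def evaluate(record: dict, rules: list[Rule]) -> Explanation:
    """Apply expert-defined rules and log exactly which ones fired,
    so the score is fully traceable to explicit, reviewable rules."""
    exp = Explanation(score=0.0)
    for rule in rules:
        if rule.condition(record):
            exp.score += rule.weight
            exp.fired.append(f"{rule.name} ({rule.weight:+})")
    return exp

# Invented example rules over an invented record format
rules = [
    Rule("recent positive feedback", lambda r: r.get("feedback", 0) > 0, 0.5),
    Rule("long commute", lambda r: r.get("commute_min", 0) > 60, -0.3),
]
result = evaluate({"feedback": 1, "commute_min": 90}, rules)
```

Because the trace lists every rule that contributed, a reviewer can challenge or adjust individual rules – the property an opaque end-to-end model cannot offer.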
This paradigm shift helps to address critical concerns surrounding the ethical and practical implementation of AI.
From a quality perspective, the best data is active data, collected by simply asking someone to do something.
But collecting it can be tedious, can incur downtime, and is not always accurate.
Already existing, surrounding, passive data can indicate certain things without a specific data point ever being collected.
Back to Usain Bolt. If we know his training schedule, his diet, and his training partner, we have a good understanding of how he might perform, even without the actual data point.
And that's what we get with passive data.
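A toy sketch of the idea, with entirely invented proxy names and weights: when the direct measurement is missing, an estimate can be formed as a weighted combination of the surrounding passive signals that are available.

```python
def estimate_from_passive_signals(signals: dict[str, float],
                                  weights: dict[str, float]) -> float:
    """Combine indirect (passive) signals into a single estimate
    when the direct measurement is unavailable.

    Signals without a known weight are simply ignored.
    """
    return sum(weights[name] * value
               for name, value in signals.items()
               if name in weights)

# Entirely hypothetical proxies for a sprint-time estimate (seconds):
# a baseline time, adjusted down for training volume and sleep quality.
weights = {"training_hours": -0.02, "sleep_quality": -0.10, "baseline_time": 1.0}
signals = {"training_hours": 20, "sleep_quality": 0.8, "baseline_time": 10.0}
estimate = estimate_from_passive_signals(signals, weights)
```

In practice the weights would be learned from historical cases where both the passive signals and the direct measurement were available, then applied where only the passive signals exist.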
One hurdle often faced is that most HR teams are aware of only a fraction of the data available to them, and of what they can leverage. And yet, typically, companies possess vast data pools.
The existence of data rarely poses a challenge when working with us; instead, the task is to work out how best to extract the data and navigate the legal considerations. Relevance and completeness are also important, as is a theoretical underpinning for the way the data might be used.
For instance, consider the tracking of interview timing and outcomes. While seemingly irrelevant on its own, studies have shown that interviews conducted at certain times of the day or without adequate breaks may yield varying results. Similarly, clients often want manager ratings to be included within success predictors, yet these are often very unreliable as a data source.
And often, very good data is overlooked simply because of the complexity of dealing with it. Data that is tracked and measured may be intriguing but not necessarily theoretically sound. The key lies in identifying data that provides the specific insights you seek, not simply relying on readily available information.
Welliba’s measurement team is driven by our curiosity about human behaviour and the reasons people do what they do, as well as our desire to understand this from a purely data-driven perspective. The developments in AI, combined with our knowledge of data, let us push the boundaries and access information from previously untapped passive data – and that really excites us.
Data-driven decisions are typically good decisions, and organisations have a lot of data. We know we can extract many significant insights that will really transform the way people experience work.
It might prevent burnout, lift performance, and bolster belonging.
We invest in our own knowledge development, with dedicated time to try things out and talk them through with both internal and external leaders in the field. We are exploring and pushing the boundaries of working with data, often without a blueprint.
For us, our guiding star is our ‘Expert in the Box’ approach. Whatever we do, we focus on building a defensible and transparent decision-making framework.