Why business leaders should force employees to use AI
Employees may be willing to use AI for everyday tasks, but many are resistant to adopting it for important work when there is more to lose. New research shows a short period of forced use can help overcome this reticence
Article at a glance:
- Forcing staff to use AI helps them overcome their instinctive mistrust of the technology, finds Dr Jiankun Sun in new research.
- When staff are forced to use AI for a short trial period, they tend to continue using it even after the trial ends.
- Learning is key to sustained use of AI among employees, as it allows them to develop a comprehensive, unbiased understanding of its performance.
Artificial intelligence (AI) is sweeping through the commercial world, with many companies now deploying it in at least one business function. This wave of enthusiasm, though, is largely driven by business leaders. Among employees there is often scepticism and a lack of enthusiasm about adopting AI.
However, this is something of a chicken-and-egg problem. AI’s impact on productivity can only increase if the technology develops and improves. But this improvement relies on user-generated data through broad adoption and sustained usage, which employees are reluctant to embrace. So how can businesses and leaders encourage employees to make more use of AI and overcome what is known as ‘algorithm aversion’?
Negative bias
Our new research explores this question, in particular the potential benefits of mandating the use of AI – in other words, forcing employees to use AI tools in order to help them understand their benefits, learn how to use them productively, and embed the continued use that helps AI improve. To test this, we worked with an online education company that uses an AI algorithm to help salespeople select candidate leads (i.e. matching teachers with students), aiming to maximise post-trial conversion.
Prior to our experiment, the company had found that the algorithm, despite being designed to maximise sales conversion rates, was underutilised by staff. Salespeople tended to use it for low-stakes leads that were less likely to convert, trusting their tried-and-tested manual processes for higher-importance leads.
We found that a key driver for this was employees’ inherent negative bias against AI, especially at the early stage when the technology is newly introduced to them and they lack knowledge about its capabilities. This creates a vicious cycle, which often goes unnoticed in practice, where negativity means AI is used selectively for low-importance work, which makes its performance appear poor, which reinforces the negative beliefs. This is at the heart of why algorithm aversion persists and deepens over time.
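This selection effect can be made concrete with a toy simulation (a hypothetical sketch, not the study's actual model or data): assume AI and manual selection convert any given lead at exactly the same rate, but staff route only low-quality leads to the AI. The AI's observed conversion rate then looks markedly worse, even though lead for lead it performs identically.

```python
import random

random.seed(42)

# Toy model: conversion probability rises with lead quality (0..1),
# and is IDENTICAL whether the AI or a human handles the lead.
def conversion_rate(lead_quality):
    return 0.1 + 0.6 * lead_quality

def simulate(n=10_000):
    ai_outcomes, manual_outcomes = [], []
    for _ in range(n):
        quality = random.random()
        converted = random.random() < conversion_rate(quality)
        if quality < 0.3:
            # Selective use: the AI is only ever tried on low-stakes leads...
            ai_outcomes.append(converted)
        else:
            # ...while manual processes keep all the high-stakes leads.
            manual_outcomes.append(converted)
    ai_rate = sum(ai_outcomes) / len(ai_outcomes)
    manual_rate = sum(manual_outcomes) / len(manual_outcomes)
    return ai_rate, manual_rate

ai_rate, manual_rate = simulate()
print(f"observed AI conversion:     {ai_rate:.1%}")
print(f"observed manual conversion: {manual_rate:.1%}")
# The AI appears to underperform purely because of which leads it was given.
```

The gap in observed rates is entirely an artefact of how leads were allocated, which is exactly the feedback loop described above: selective use makes the AI look worse, which justifies further selective use.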
Forcing and learning
We found that forcing workers to use the AI tool increased their post-experiment use of it by almost 16% compared to the control group – a significant rise. What’s more, this held true across staff members with different pre-experiment usage levels, and did not drop over time.
We also observed that, among those who were forced to use the AI tool, the ones who experienced a greater uptick in sales conversions during the experiment were more likely to use it going forward – something not seen in the control group. After the experiment, those in the forced group were also more likely than before to use AI for high-quality leads.
Why did this happen? We found that learning is a key mechanism to break the vicious cycle. Forcing workers to use AI across all types of leads allows them to develop a more comprehensive and less biased understanding of its performance. They then realise that rather than performing worse than manual selection, AI performance is actually comparable or possibly even superior. This builds trust and encourages employees to continue using AI voluntarily even after the forced trial period is over.
Lessons for leaders
Our findings suggest that staff tend to have a negative bias against using AI tools from a cold start, as they lack faith in their capabilities, and are more likely to use them for low-stakes work. Forcing staff to make use of AI tools for more important, high-stakes tasks can help to overcome this bias.
In the long term, this can help to move from a vicious cycle of negativity to a positive loop through data network effects. Broader adoption generates more diverse usage data, which enables AI to improve, which encourages further adoption. This is important because AI requires usage data to evolve, and – as AI is capable of improving at a pace faster than humans – this creates the possibility of genuinely AI-integrated organisations.
In turn, this is important for leaders concerned about the return on investment in AI technology. Sinking funds into functionality that nobody takes advantage of is a nightmare scenario for businesses. Understanding that a forced-use trial period can set an organisation on the road to high-level AI integration removes this problem and opens up a future of ever-increasing efficiencies through improved AI tools.