One Contact Centre's Experience of ChatGPT and The Results
The arrival of generative AI has become the perfect catalyst for worried minds in a world still high on anxiety, pumped up by back-to-back crises and sustained by geopolitical clashes yet to come. Collectively we see a ghost in the machine ready to spook us.
Dr Geoffrey Hinton, an AI legend, has suddenly got cold feet. He’s quit Google to warn the rest of us about impending existential risks. I wonder why China’s existing use of AI to scale its authoritarian hold did not already count?
With his Loki-like sense of malevolent mischief, Elon Musk has accused Larry Page, via Fox News of course, of wanting “digital superintelligence, basically a digital god, if you will, as soon as possible”. Reflect on what flashed into your brain with that deliberately chosen imagery as a clue to possible motive. He's another signatory to the 'stop everything till we figure things out' movement.
And in an attempt to avoid being the tortoise in this current round of ‘where’s the legislation we need if the industry can’t provide the guardrails’, the UK government has just thrown its hat into the ring to review the dangers of AI and protect us all from harmful content. But the sheer scope of the enquiry makes it doubtful that the assigned committee will even complete the research phase before the next general election.
For sure, the tide rises when a new phase of technology arrives. Bad actors will be able to pump out higher quality fake reviews at greater scale. Their synthetic voices will be able to con more vulnerable people more of the time. Governments, news channels and extremist public figures will be able to invent fictitious realities in 4k definition to attract even more into their echo chambers.
You can also be sure that countermeasures in the form of regulation, education, and technology will turn up and help us adapt, get smarter and move on. And isn’t it often the case that we forfeit a certain innocence in the forward momentum towards greater knowledge and ability?
But we need to be vigilant. It’s during the ‘goldrush’ phase that our judgement is weakest since we are still establishing our own knowledge and opinions on the potential benefits versus the risks.
We often need a point of reference to do this, so we can moderate our decisions with a reality check of what’s really under the bonnet and understand how it does what it does.
Here’s one story to lay down a marker. It comes from a recently released piece of research on how generative AI is being used in a US contact centre. It’s a familiar scene offering a credible set of findings that will surprise few. Especially those already familiar with the value of ‘agent assist’ solutions.
Of course many will already be weighing up the best way to use the clear uptick in capability that generative AI offers relative to the default perception of how bots have impressed customers thus far.
The fly in the ointment, of course, is the often-quoted tendency of these systems to ‘hallucinate’. We can’t risk that, and information on causes and cures is still thin on the ground. Is this terminally deviant behaviour? A case of ‘bots gone bad’? Is it recoverable? Better still, can the issue be removed before even reaching a live production environment?
The likely answer lies in some common sense explanation on how these large language models are typically produced and where their QA workflow can fall short. A topic for another day.
Back to the backstory.
The combined research team from the Stanford Digital Economy Lab and the Massachusetts Institute of Technology examined the staggered deployment of a chat assistant at a Fortune 500 firm that provides business process software.
The tool, trained on data from over 5,000 advisors at the company, monitors customer chats and offers advisors real-time suggestions for how to respond to customers. Advisors could use those suggestions but were also free to ignore them. In other words, responsibility for the quality and flow of conversation remained in human hands.
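The mechanism described above is easy to picture in code. Here is a minimal sketch of that human-in-the-loop pattern; the `suggest` function and its canned reply are entirely hypothetical stand-ins for the real model, not the firm's actual implementation.

```python
def suggest(chat_history):
    """Hypothetical LLM call: drafts a reply from the chat so far.
    In the real system this would be a request to a GPT-family model."""
    return "Thanks for your patience - let me look into that for you."

def handle_turn(chat_history, advisor_reply=None):
    """The model only proposes; the human advisor decides what is sent.

    If the advisor supplies their own text, the draft is discarded.
    """
    draft = suggest(chat_history)
    # Advisor may accept the draft, edit it, or ignore it entirely.
    final = advisor_reply if advisor_reply is not None else draft
    chat_history.append(final)
    return final

history = ["Customer: My invoice export keeps failing."]
sent = handle_turn(history)  # advisor accepts the suggestion as-is
```

The key design point is that the suggestion is advisory: nothing reaches the customer without passing through the advisor's hands.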
The agent-assist style solution was built on a recent version of the Generative Pre-trained Transformer (GPT) family of large language models developed by OpenAI. Probably still the best-known brand in the public mind in what is a rapidly expanding market of large language models.
The team reported four findings. I reproduce them verbatim so as not to add another filter to the analysis.
AI assistance increases worker productivity, resulting in a 13.8 percent increase in the number of chats that an advisor is able to successfully resolve per hour. This increase reflects shifts in three components of productivity: a decline in the time it takes an advisor to handle an individual chat, an increase in the number of chats that an advisor is able to handle per hour (multiple sessions allowed), and a small increase in the share of chats that are successfully resolved.
OK but how does this compare? Cut ‘n paste answer templates are nothing new. Can you recall a baseline against which you can compare these figures? How does the productivity uplift compare if you are already using a pre-generative AI version of ‘smart assistance’?
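It helps to see how those three components combine arithmetically. A back-of-envelope sketch, using made-up illustrative numbers rather than figures from the study:

```python
def resolutions_per_hour(avg_handle_minutes, concurrent_chats, resolution_rate):
    """Combine the three productivity components the researchers describe:
    handle time, concurrency, and resolution share."""
    chats_per_hour = (60.0 / avg_handle_minutes) * concurrent_chats
    return chats_per_hour * resolution_rate

# Illustrative numbers only - not taken from the research.
baseline = resolutions_per_hour(30, 2.0, 0.80)  # 3.2 resolutions per hour
assisted = resolutions_per_hour(28, 2.1, 0.82)  # slightly better on all three
uplift = assisted / baseline - 1                # combined percentage gain
```

The point of the sketch is that modest movement on each component compounds into a double-digit uplift, which is exactly the shape of the study's 13.8 percent headline figure.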
AI assistance disproportionately increases the performance of less skilled and less experienced workers across all productivity measures we consider. In addition, we find that the AI tool helps newer agents move more quickly down the experience curve: Assisted advisors with two months of tenure perform just as well as unassisted advisors with over six months of tenure.
In the current context of recruitment and retention challenges, I find this especially interesting. A win-win for individual motivation and overall productivity levels. If this turns out to be a benefit that replicates across sectors and territories, then there is an opportunity to significantly optimise current onboarding and ‘time to value’ for everyone using this type of solution. Assuming of course that existing in-house solutions are not already outperforming it!
Our third set of results investigates the mechanism underlying our findings so far. We posit that high-skill workers may have less to gain from AI assistance precisely because AI recommendations capture the potentially tacit knowledge embodied in their own behaviors. Rather, low-skill workers are more likely to improve by incorporating these behaviors by adhering to AI suggestions. Consistent with this, we find few positive effects of AI access for the highest-skilled or most-experienced workers. Instead, using textual analysis, we find suggestive evidence that AI assistance leads lower skill agents to communicate more like high-skill agents.
This is not the first time such conclusions have been drawn. Traditional knowledge management solutions often plateau during the early phases of deployment for the same reason, until a new tranche of content is produced that meets more advanced user needs.
However, what is new here is the wonderfully fluent language that GPT-4-level solutions are capable of. Something that had the greatest impact on me when I first experienced a generative response.
On top of this is the suggestion that we can also level up the sophistication of language being used. Normally this is the hallmark of only the most experienced advisors.
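The researchers' claim that lower-skill agents start to "communicate more like high-skill agents" is a measurable one. One hedged illustration of the underlying idea, using simple word-count cosine similarity (the study's actual textual analysis is more sophisticated, and all the example phrases below are invented):

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts.
    Returns a value in [0, 1]; higher means more similar wording."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Invented phrases for illustration only.
high_skill = "happy to help let me check that for you right away"
novice_before = "ok wait"
novice_assisted = "happy to help let me check that right away"
```

With this measure, a novice leaning on the AI's suggestions scores much closer to the experienced advisor's phrasing than they did unassisted, which is the kind of drift the researchers report.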
If this translates into more positive customer experience, then we have another winning benefit. One of the tenets of emotive CX is that every interaction has impact. Either strengthening or weakening the customer-organisation relationship bond. Having the means to optimise this loyalty benefit at scale is well worth trialling.
Finally, we show that AI assistance markedly improves how customers treat agents, as measured by the sentiments of their chat messages. This change may be associated with other organizational changes: turnover decreases, particularly for newer workers, and customers are less likely to escalate a call by asking to speak to an agent’s supervisor.
So yes, the research seems to confirm the point and makes another one in the process. If customers like the tone of conversation, they reciprocate. Empathy begets empathy. Successful dialogue is built on trust, positive emotion and collaboration. Both customer and advisor enjoy the benefits of successful outcomes. Again, something that has been under threat as the mood of customers has often turned more negative in recent times.
So, in summary, there are some interesting benefits to be gained. Especially when we keep a human in the loop as an extra safety measure. Whether these tools outperform existing agent-assist solutions depends on how you measure. But it’s certainly a tick from me in terms of improved quality of conversation.
On the broader matters I started out with, I'll defer to one of my favourite experts on the nature of consciousness. Anil Seth has enough clarity of thought and research pedigree to see that greater intelligence is not an inevitable gateway to consciousness. Something I suggested is currently spooking public debate. And, more to the point, possibly putting us off getting going with generative AI.
‘Conscious machines are not coming in 2023. Indeed, they might not be possible at all. However, what the future may hold in store are machines that give the convincing impression of being conscious, even if we have no good reason to believe they actually are conscious. They will be like the Müller-Lyer optical illusion: Even when we know two lines are the same length, we cannot help seeing them as different.’
No ghost then. At least not yet.
Instead it’s time to self-educate, trial and map the use cases.