thomas.wieberneit@aheadcrm.co.nz

What’s gonna happen with generative AI and CX in 2024?

It is that time of the year. Everybody (and their dog) has some predictions for 2024. As you can guess from reading this, I am participating in this game.

Last year, I published three humble wishes to better the industry – and I am sad to say that my three wishes remain wishes in 2024 as well. I’d say that this is partly because 2023 became the year of generative AI. We all know why.

Pretty much every vendor got caught flat-footed by the meteoric rise of OpenAI.

Correspondingly, over the course of 2023, enterprise software vendors made a huge number of pre-announcements of one generative AI scenario or another being integrated into their software and offered to customers.

Mostly, these announcements were about low-hanging fruit. That does not mean they are useless or without value, quite the contrary: once available, these solutions have the potential to increase employee productivity and improve the customer experience.

But they are still just that: announcements or early adoptions.

So, based on this, what will we see in 2024? And let’s limit ourselves to the realms of CRM, CX and customer engagement.

Success stories

As more announcements of something becoming available soon turn into actual usage, we will see actual success stories. Customers will increasingly move from trial mode to addressing real business challenges, measuring the success of an implementation by the change in KPIs that can be attributed to it. In some instances, this is already starting. Diginomica’s Jon Reed recently interviewed a representative of Loop Insurance who gave some highly interesting insights into Loop’s implementation of a Large Language Model based chatbot for customer service. With competition getting fiercer on all levels (among vendors as well as their buyers), we will see a lot more of this.

More sophisticated use cases

Naturally, vendors started with the implementation of fairly simple use cases: summarizing a text or conversation, writing an email, answering questions, even writing code. In 2024, we will see the adoption of more sophisticated use cases, e.g., the improvement and creation of customer service documents based on service interactions, or the more rigorous pre-testing of campaigns using (generated) personas. We can even think of the creation of whole ABM campaigns – and why not think of truly individualized content based on an individual’s interactions with one or more companies?
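To make the simplest of these use cases concrete, a conversation summary via an LLM API can be sketched in a few lines of Python. This is a minimal illustration, not a vendor recommendation: the model name and the use of the `openai` client library are assumptions, and the prompt wording is purely illustrative.

```python
# Sketch: summarizing a customer service conversation with an LLM.
# Assumes the `openai` Python package is installed and an API key is
# configured; the model name "gpt-4o-mini" is an illustrative assumption.

def build_summary_messages(transcript: str) -> list:
    """Build the chat messages asking the model for a short summary."""
    return [
        {"role": "system",
         "content": ("You are a customer service assistant. Summarize the "
                     "conversation in three bullet points, covering the "
                     "customer's issue and the resolution.")},
        {"role": "user", "content": transcript},
    ]

def summarize(transcript: str, model: str = "gpt-4o-mini") -> str:
    """Send the transcript to the model and return its summary text."""
    from openai import OpenAI  # imported here so the sketch stays optional
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=build_summary_messages(transcript),
    )
    return response.choices[0].message.content
```

The point of even such a trivial integration is that the summary lands directly in the agent's workflow, e.g. as a wrap-up note in the service console, instead of being written by hand.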

The emergence of specific foundation models

This one is related to the rise of more sophisticated use cases. One of the fundamental flaws of LLMs is that they do not know anything about the industry my company is active in, about my company itself, or about my customers. General LLMs are not likely to learn this either, as the necessary data is highly proprietary. Not many vendors have access to this data – and the capability to build business foundation models on top of it. Still, to increase the accuracy of models, it is extremely important to have this knowledge and to be able to apply it. Vendors with access to this data will soon publish industry- and business-specific models that help generate the right content and facilitate data-driven decision making. Keep an eye out for vendors like SAP, Microsoft, Oracle, Salesforce and Zoho.

On top of this, my bias is your truth, and my ethics are different from yours. Groups of people with different cultural backgrounds have different value and belief systems. These require different sets of guardrails that govern different views on what is toxic or acceptable and what is not. This means that there cannot be a one-foundation-model-fits-all approach. Instead, we will see the emergence of models with regionally different guardrails, although this can partly be covered by fine-tuning existing models.

Return on investment

The training and running of large models is expensive – incredibly expensive. They use a lot of compute power and therefore require power-hungry chips – lots of them – and the corresponding cooling. Creating yet another generation of more performant chips helps only so far, especially since the savings these more efficient chips promise are likely to morph into higher profits for the chip manufacturers. Remember, there isn’t that much competition in this area. This will, at least for some time, break the trend of ever-growing models. We will see smaller, specialized models that can be run more efficiently without significantly losing accuracy.

Platform play

AI – and generative AI in particular – is a platform play. Platform plays favor the bigger players. The current 800-pound gorilla is OpenAI. According to CB Insights, OpenAI has an estimated revenue of more than six times that of its closest competitor, Anthropic. This, by the way, is probably the one and only reason why OpenAI might not end up as a Microsoft subsidiary. As a consequence, we will see a lot more specialization among LLM vendors and likely some tuck-in acquisitions by enterprise vendors in (perceived) need of a technology of their own.

Exciting times ahead.

What do you think?