The Great GenAI Divide: Debunking the Myth of 95% Failure

These days, we are drowning in conflicting information about the value of generative and agentic AI. I myself have been searching for solid studies that dive into the ROI generated by this technology, with limited success. Most information is anecdotal or comes from success stories, which should not be taken too literally. Two major 2025 reports, from MIT and Wharton respectively, paint starkly different pictures of AI adoption and adoption success. The widely cited MIT NANDA “report” on the state of AI in business is usually quoted as showing that 95 percent of all businesses get no ROI from their gen AI initiatives, while a recent study by the Wharton Business School shows a very different result, with 74 percent of enterprises reporting a positive ROI. Why is one so pessimistic and the other so optimistic? As I have written before, a closer look at the data reveals that the 95% “failure” narrative is a myth, or even a scare, and that the real story is probably a different and far more nuanced one, which Wharton calls Accountable Acceleration. Is GenAI really a 1-in-20 lottery ticket, or is it rather a core business function? So, let’s have a look.

Methodology matters – debunking the 95% failure rate

In contrast to the NANDA “report”, which relies on a fairly small sample of about 150 survey responses and 52 structured interviews, the Wharton report is based on a large-scale, quantitative and longitudinal study. It surveyed around 800 senior decision-makers at businesses of different sizes and has been tracking trends for the third consecutive year. Its data is therefore built for statistically valid conclusions. In...
Beyond the hype – How to use chatGPT to create value

Now that we are in the middle of, or hopefully closer to the end of, the general hype caused by OpenAI’s ChatGPT, it is time to re-emphasize what is possible and what is not, and what should be done and what should not. It is time to look at business use cases that are beyond the hype and that can be tied to actual business outcomes and business value. This is especially true in light of what was probably the most expensive demo ever, after Google Bard gave a factually wrong answer in its release demo. A factual error wiped more than US$100bn off Google’s valuation. I say this without any gloating. Still, this incident shows how high the stakes are when it comes to large language models (LLMs). It also shows that businesses need to take a good, hard look at which problems they can meaningfully solve with their help. This includes quick wins as well as strategic solutions.

From a business perspective, there are at least two dimensions to look at when assessing the usefulness of solutions that involve LLMs. One dimension, of course, is the degree of language fluency the system is capable of. Conversational user interfaces, exposed by chatbots, voice bots, digital assistants, smart speakers, etc., have been around for a while now. These systems are able to interpret the written or spoken word and to respond accordingly. The response is either written/spoken or consists of initiating the action that was asked for. One of the main limitations of these more traditional conversational AI systems is that they are...