Salesforce lets the Genie out of the bottle!

The news

During the Salesforce AI Day on June 12 as well as the Salesforce AI Industry Analyst Forum on June 20, Salesforce provided a lot of interesting information on how the company addresses the challenge – or should I say problem – of trust in artificial intelligence. Salesforce sees this trust gap as being caused by hallucinations, lack of context, data security concerns, as well as toxicity and bias. According to Salesforce, this gets compounded by the need to integrate external models into business software. To address this problem, Salesforce has announced its AI Cloud, which combines an “Einstein GPT Trust Layer”, Customer 360 and its CRM to offer AI-powered business processes that are built right into the system, based on an AI that can be trusted. The main vehicle is the Einstein GPT Trust Layer. It takes care of secure data retrieval from business applications; dynamic grounding, which reduces the risk of hallucinations and increases response accuracy by automatically enriching prompts with relevant business-owned data; data masking, i.e. the anonymization of sensitive data to avoid its unintentional exposure to external tools; toxicity detection, to make sure that generated content adheres to corporate policy, is free of unwanted words or images, and is unbiased; creating and maintaining an audit trail; and ensuring that the external (or internal) AI does not retain or store any corporate information that gets sent to it with a request. This trust layer sits between the AI models that are used and the apps and their respective development environments. All requests to the models, along with their data, get routed through this layer, ensuring authorization-protected retrieval of data, the grounding...
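To make this flow a little more tangible, below is a minimal, purely illustrative sketch of such a trust-layer pipeline. The function names (mask_pii, ground_prompt, is_toxic, trusted_generate) and the overall structure are my own assumptions for illustration only, not Salesforce’s implementation or API; the external model is a stub.

```python
# Conceptual sketch of a "trust layer" sitting between business apps and an external LLM.
# All names (mask_pii, ground_prompt, is_toxic, trusted_generate) are hypothetical
# illustrations, not Salesforce APIs. The external model is represented by a stub.

import re
from datetime import datetime, timezone

AUDIT_TRAIL = []  # in a real system this would be a tamper-evident store


def mask_pii(text: str) -> str:
    """Anonymize obvious sensitive data (here: e-mail addresses) before it leaves the system."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", text)


def ground_prompt(user_request: str, crm_records: list[str]) -> str:
    """Dynamic grounding: enrich the prompt with relevant, business-owned context."""
    context = "\n".join(f"- {record}" for record in crm_records)
    return f"Context from CRM:\n{context}\n\nTask: {user_request}"


def is_toxic(text: str, blocked_terms: set[str]) -> bool:
    """Very naive toxicity/policy check against a corporate block list."""
    return any(term in text.lower() for term in blocked_terms)


def call_external_llm(prompt: str) -> str:
    """Stub for the external model; the trust layer would send the prompt under a
    zero-retention agreement so the provider does not store it."""
    return f"Draft reply based on: {prompt[:60]}..."


def trusted_generate(user_request: str, crm_records: list[str]) -> str:
    safe_request = mask_pii(user_request)
    prompt = ground_prompt(safe_request, [mask_pii(r) for r in crm_records])
    response = call_external_llm(prompt)
    if is_toxic(response, blocked_terms={"offensive", "slur"}):
        response = "Response withheld: violates corporate content policy."
    AUDIT_TRAIL.append({"at": datetime.now(timezone.utc).isoformat(),
                        "prompt": prompt, "response": response})
    return response


if __name__ == "__main__":
    print(trusted_generate(
        "Write a follow-up to jane.doe@example.com about her open service case.",
        crm_records=["Case 00123: delivery delayed, customer unhappy"]))
```

The point of the sketch is the ordering: masking and grounding happen before anything leaves the business systems, and the policy check plus the audit entry happen before anything reaches the user.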
How to make efficient use of generative AI

Generative AI is here to stay. It is not just a hype, although the hype will probably get worse before it gets better. And we are clearly still in a hype, as the following chart from Google Trends, showing the search interest for ChatGPT between October 1, 2022 and April 12, 2023, demonstrates. Similarly, the Gartner Group sees generative AI technology approaching the peak of inflated expectations in its 2022 hype cycle for artificial intelligence. To be sure, we see only the tip of the iceberg when looking at the voice-, text- or image-based services that we all know and use. The Gartner Group also foresees many industrial use cases, ranging from drug and chip design to the design of parts and overall solutions. You think that these scenarios lie far in the future? Read this Nature article from 2021 and think again. And in contrast to some of the other hypes that we have seen in the past few years, there are actual use cases that support the technology’s survival of the trough of disillusionment. Because there are viable use cases – unlike what the “Metaverse”, blockchain or NFTs have shown – generative AI is not a solution in search of a problem. Apart from OpenAI’s GPT and DALL-E models, which surely caught everybody’s attention in the past weeks and months, there are a good number of large language models that are just less known. A brief piece of research that I recently conducted unearthed more than 50 models that have been published over the past few years. For their paper A Survey of Large Language Models, which focuses on “review[ing] the recent advances of LLMs by introducing...
How vendors help generate value with generative AI

The hype around generative AI, and in particular ChatGPT, is still at fever pitch. It has created thousands of start-ups and is currently attracting lots of venture capital. Basically, everyone – and their dog – is jumping on the bandwagon, with the Gartner Group predicting that it will get worse before it gets better. According to them, generative AI is yet to cross the peak of inflated expectations.

Gartner Hype Cycle for Artificial Intelligence, 2022; source: Gartner

There are a few notable exceptions, though. So far, I haven’t heard major announcements from players like SAP, Oracle, SugarCRM, Zoho, or Freshworks. Before being accused of vendor bashing … I take this as a good sign. Why? Because it shows that vendors like these have understood that it is worthwhile thinking about valuable scenarios before jumping the gun and coming out with announcements just to stay top of mind with potential customers. I dare say that these vendors (as well as some unmentioned others) are doing exactly that, as all of them are highly innovative. Don’t get me wrong, though. It is important to announce new capabilities. It is just not good style to do so too far in advance, merely to freeze a market. This only leads to disappointment on the customer side and ultimately does not serve a vendor’s reputation. For business vendors, it is important to understand and articulate the value that they generate by implementing any technology. Sometimes, it is better to use existing technology instead of shifting to the shiny new toy. The potential benefits in these cases simply do not outweigh the disadvantages, starting...
Beyond the hype – How to use ChatGPT to create value

Now that we are in the middle of – or hopefully closer to the end of – the general hype caused by OpenAI’s ChatGPT, it is time to re-emphasize what is possible and what is not, and what should be done and what should not. It is time to look at business use cases that go beyond the hype and that can be tied to actual business outcomes and business value. This especially in light of what was probably the most expensive demo ever, after Google Bard gave a factually wrong answer in its release demo. A single factual error wiped more than US$100bn off Google’s valuation. I say this without any gloating. Still, this incident shows how high the stakes are when it comes to large language models (LLMs). It also shows that businesses need to take a good, hard look at what problems they can meaningfully solve with their help. This includes quick wins as well as strategic solutions. From a business perspective, there are at least two dimensions to look at when assessing the usefulness of solutions that involve LLMs. One dimension, of course, is the degree of language fluency the system is capable of. Conversational user interfaces, exposed by chatbots, voice bots, digital assistants, smart speakers, etc., have been around for a while now. These systems are able to interpret the written or spoken word and to respond accordingly. The response is either written/spoken or consists of initiating the action that was asked for. One of the main limitations of these more traditional conversational AI systems is that they are...
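As a small, hypothetical illustration of what such a more traditional conversational AI system does – detect an intent and then either answer in writing or initiate the requested action – here is a sketch. The intent names, keyword rules and handlers are my own simplifications; real systems use trained NLU models rather than keyword matching.

```python
# Minimal sketch of a "traditional" conversational AI flow: classify the intent of an
# utterance, then either answer in text or trigger the requested action. Intent names
# and handlers are hypothetical; real systems use trained NLU models, not keyword rules.

def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    if "order" in text and "status" in text:
        return "check_order_status"
    if "ticket" in text:
        return "create_ticket"
    return "fallback"


def answer_order_status(utterance: str) -> str:
    # Respond in written form, e.g. after a (stubbed) lookup in the order system.
    return "Your order left the warehouse yesterday."


def create_ticket(utterance: str) -> str:
    # Initiate the action that was asked for (here: pretend to create a support ticket).
    return "I have opened support ticket #1234 for you."


HANDLERS = {"check_order_status": answer_order_status, "create_ticket": create_ticket}


def respond(utterance: str) -> str:
    intent = classify_intent(utterance)
    handler = HANDLERS.get(intent)
    if handler is None:
        # The classic limitation of scripted bots: anything outside the defined intents fails.
        return "Sorry, I did not understand that."
    return handler(utterance)


if __name__ == "__main__":
    print(respond("What is the status of my order?"))
    print(respond("Please open a ticket for my broken screen."))
```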