thomas.wieberneit@aheadcrm.co.nz

Generative Engine Optimization: The New Tech Hustle or a CX Reality?

The digital marketing landscape is undergoing a tectonic shift. Since the ascent of Google, search engine optimization has been the undisputed king of visibility. The industry grew into an eighty-billion-dollar behemoth built entirely on gaming Google search results. Marketers optimized keywords, built backlinks, and structured their websites to appease a single, dominant algorithm. Times are changing rapidly. We are now entering the era of Generative Engine Optimization, and the rules of engagement have shifted completely.

In our latest CRMKonvos episode, Ralf and I sat down with Noriko Yokoi. Noriko holds a PhD from the London School of Economics and is co-founder of 3cubed.ai. We discussed this exact transition. The shift from traditional SEO to Generative Engine Optimization is leaving many enterprise buyers completely confused. Marketers are flooding LinkedIn and other platforms with colorful PDFs trying to explain the difference. The reality is far less glamorous and much more chaotic. Large language models like ChatGPT, Claude, and Gemini are changing how consumers find information. Consequently, brands are scrambling to ensure their products show up in AI-generated answers.

TL;DR

If you want to watch the full CRMKonvo, please go ahead here (optimized for smartphones) or here (optimized for tablets/computers).

Else, be my guest and continue to read.

Or do both …

At the core of the problem lies an absolute lack of transparency. Traditional SEO relies on concrete metrics. Google publishes keyword volumes. Marketers know exactly how many people search for specific terms. Generative AI platforms provide nothing comparable, nothing at all, in fact. The big AI companies keep their vaults tightly sealed. They do not share search volumes, prompt frequencies, or exact ranking methodologies. This creates a massive void in the market, and new startups are rushing in to fill it. They offer dashboards and visibility scores that attempt to quantify the unquantifiable.

The Black Box of Generative Visibility

Noriko pointed out the fundamental difference between traditional search and generative search. SEO was always about ranking links on a page. You wanted to be the top blue link. Or at least be on the first page. Generative AI does not give you a list of links. It gives you a single, synthesized answer. It might cite three or four sources behind that answer. If you are not one of those cited sources, you practically do not exist.

This creates a fierce competition for authority. The LLMs scour the internet for sources they deem highly authoritative and citable. They look for deep expertise and comprehensive knowledge. Ironically, this is where the modern marketing machine fails spectacularly. For years, marketers have produced short, punchy, keyword-stuffed articles. They created marketing fluff designed to catch a quick click. LLMs absolutely hate fluff. They prefer deep, substantive content written by recognized experts.

This brings us to the cynic's view I raised during the episode. Ironically, LLMs are great at creating fluff. LLM-generated content is akin to instant mediocrity. We are seeing a flood of AI-generated articles hitting the web; the internet is drowning in average, uninspired, LLM-generated text. If a brand wants to stand out to an LLM, it therefore cannot rely on generative AI to write its core thought leadership. You cannot feed instant mediocrity to an algorithm that is actively hunting for unique expertise. The brands that win will be the ones that invest in genuine, human-led research and deep-dive analysis.

The Measurement Dilemma and the Red Face Test

How do you measure success in a non-deterministic system? If you ask a search engine the same query five times, you get the same result (well, almost, as there is AI behind search engines as well, but you get the picture). Search engines are largely deterministic. If you ask an LLM the same prompt five times, you might get different answers with different citations. This makes building a reliable tracking tool incredibly difficult.
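This non-determinism can be made concrete. A minimal sketch, with entirely invented citation lists standing in for the domains an LLM might cite across repeated runs of the same prompt: compute the average pairwise Jaccard overlap of the citation sets as a rough consistency score.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two citation sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def consistency(runs):
    """Mean pairwise Jaccard overlap across repeated runs of one prompt."""
    pairs = list(combinations(runs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical citations returned by five runs of the very same prompt.
runs = [
    ["vendor-a.com", "vendor-b.com", "analyst.org"],
    ["vendor-a.com", "analyst.org", "wiki.org"],
    ["vendor-a.com", "vendor-b.com", "news.io"],
    ["vendor-a.com", "analyst.org", "vendor-b.com"],
    ["vendor-a.com", "wiki.org", "news.io"],
]
print(f"citation consistency: {consistency(runs):.2f}")  # 1.0 = identical every time
```

A score well below 1.0 on a deterministic search engine would be a red flag; on an LLM it is simply the baseline noise any tracking tool has to live with.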

Noriko explained that the current crop of Generative Engine Optimization tools uses a mix of methodologies to make sense of this unpredictability. Some of them rely on opt-in user panels to gather real-world prompt data. Others use synthetic prompts. They program bots to ask the LLMs thousands of questions and scrape the citations that come back. They compile this data to give brands a share of voice metric. It is still an educated guess, a pin dropped into a massive ocean of data.
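The synthetic-prompt approach can be sketched in a few lines. This is an illustration only, with invented brand names and citation data: fire a batch of prompts, scrape the citations from each answer, and express each brand's citation count as a fraction of all citations observed.

```python
from collections import Counter

def share_of_voice(citation_runs):
    """Each brand's citations as a share of all citations scraped from the answers."""
    counts = Counter(brand for run in citation_runs for brand in run)
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.most_common()}

# Hypothetical citations scraped from answers to four synthetic prompts.
citation_runs = [
    ["BrandA", "BrandB"],
    ["BrandA", "BrandC"],
    ["BrandB", "BrandA"],
    ["BrandC"],
]
for brand, share in share_of_voice(citation_runs).items():
    print(f"{brand}: {share:.0%}")
```

Real tools presumably run thousands of prompts across several models and weight them by assumed relevance, but the resulting number is still exactly what Noriko called it: an educated guess, a pin dropped into a massive ocean of data.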

These services are not cheap. Companies are charging anywhere from a few hundred to several thousand dollars a month for these insights. Enterprise buyers are essentially paying to find out if another machine likes them. Before you write that check, you must apply what Noriko called the red face test. You need to look at the methodology behind these tools. Ask the vendor exactly how they calculate their visibility scores. If they cannot give you a straight answer without turning red in the face, you should walk away. You must be comfortable with the level of guesswork involved before committing your scarce CX budget.

Writing for the Machine

We have reached a bizarre inflection point in content creation. For years, we wrote for human readers. Then we started writing for search engine crawlers. Now, we must write specifically to educate LLMs. And don't labor under the illusion that they all work the same. They don't. Nor is there a dominant player yet, the way Google dominates search. Asked the very same question, "Which are the most used LLMs? Rank by number of monthly users," Gemini, Claude, and ChatGPT agree that the leading three players are Meta AI, ChatGPT, and Gemini, not necessarily in that order, though, but with fairly similar usage numbers. So, there are at least three models to consider.

During our conversation, I asked Noriko if authors now need to change their writing style. Should we stop addressing our human audience and start addressing Claude and Gemini directly? Her answer was a partial yes. To become a cited authority, your content must be structured in a way that an LLM can easily digest. You have to break down complex topics into clear, distinct sub-segments. You must avoid superficial marketing jargon. Instead, provide deep, factual knowledge. Part of this, by the way, is exactly what would make a human editor, or an LLM, for that matter, flag your content as potentially AI-generated.

Still, this requires a massive shift in corporate marketing strategy. The days of publishing five shallow blog posts a week are over. Brands need to publish comprehensive, authoritative pillars of content. They need to earn their citations by being the most credible source on a specific topic. The machine will ignore you if you offer nothing but noise.

Tools like Profound, Peak AI, and others are emerging as the early leaders in this measurement space. They are trying to standardize a chaotic environment. It is advisable to utilize tools like these to get a baseline understanding of your AI visibility. However, you must treat their metrics as directional indicators rather than absolute facts. The ecosystem is simply too volatile for absolute certainty.

The Reality Check for Buyers: Stop Feeding the Hype Machine

Let’s make some sense out of the chaos. Enterprise AI buyers are constantly bombarded with pitches promising magical visibility. The shift to Generative Engine Optimization can’t be ignored, but it is deeply misunderstood. You cannot buy your way to the top of an LLM response (yet?) with a simple subscription. You must fundamentally change your organization’s approach to data and information. Here are the three main takeaways you need to implement immediately.

Integration Realities Must Guide Strategy. Do not purchase a standalone visibility tool expecting it to magically fix your customer experience, or to bring you into an LLM's response, let alone into all of them. These tools provide directional data at best. You must integrate these insights into your broader CRM and CX architecture. A visibility score is useless if it does not translate into better customer interactions and, ultimately, improved conversions. The technology must serve your overarching strategy; it cannot become a siloed, self-serving vanity metric.

Data Quality Over Generative Hype. The market is obsessed with using AI to generate content. This is a fatal error. LLMs prioritize depth, expertise, and authoritativeness. If you feed your channels with recycled marketing fluff, you will achieve nothing but instant mediocrity. Algorithms will bypass you in favor of sources providing genuine knowledge. Invest your budget in subject matter experts who produce high-quality content. The machines look for truth; make sure you provide it.

The Human-in-the-Loop Necessity. Generative AI is inherently unpredictable. It hallucinates and changes answers based on invisible updates. But it is here to stay. You cannot put your brand reputation entirely in the hands of automated dashboards. Just as your vendors must pass the red face test, so must you. Therefore, demand transparency from vendors regarding their methodologies. Keep skilled human analysts in the loop to interpret the data. Trust the tools, but strictly verify outputs before making strategic CX decisions.