
Flipping the math: How AI changes Build vs. Buy

For the longest time, companies have been trapped by enterprise software vendors.

First by shrink-wrapped software packages.

Then by SaaS offerings.

Both situations led to what one can, even in a SaaS world, call shelfware – although these days the shelf is a virtual one instead of a physical one. Buyers still get enticed to purchase more capabilities than they need, which leads to them paying more than necessary while often running software packages with overlapping capabilities.

One of the promises that SaaS started with was to end this. Sadly, it looks like this promise was not kept. And this is no wonder; after all, vendors want to be sticky. And they need increasing revenues. This means that they need to offer an ever-increasing number of capabilities, aka features, to warrant their pricing and, eventually, regular price increases. Combined with the frequently used strategy of offering related capabilities, i.e., seats for adjacent software that a customer does not yet need, this led to two things: bloat and shelfware. Both come at the expense of the enterprise buyer.

Since the dawn of packaged software, the argument to buy, i.e., to voluntarily step into this trap, has been the same: buying is cheaper than building.

Which probably was correct. Buying from a specialist was the logical choice. Engineering talent was, and still is, scarce. Building software includes a lengthy process of requirements engineering, years of development and ultimately never-ending maintenance.

Just that most of this holds true for implementations of purchased enterprise software, too.

And the buying process is arguably broken. Need identification is often done without the right stakeholders, the software selection becomes a procurement-heavy process based more on checking boxes than on fulfilling user needs, and the implementation turns out to be a death march. Who has not read – or at least heard of – the statistics that show implementation failure rates of more than 60 percent?

But then, who ever got fired for buying IBM, or Salesforce, or SAP, for that matter? Or Oracle? Take your pick.

The result?

We see processes that are not improved or that do not necessarily differentiate the company, as they are implemented either to follow the “same old” or along “best practices”, which often translates to “average”, i.e., mediocrity. Users are forced to adapt to the tool, and not the other way round. Their pain is not solved. This, combined with shelfware, contributes to low adoption and shadow IT, both of which ultimately harm all efforts of a digital transformation.

And it is costly.

Enter low-code, no-code and GenAI

Low-code and no-code environments have basically been around since, well … forever. At least as measured in Internet time. I had my first experiences with one back in 1995 (yeah, I am that old).

Depending on who got their fingers on these environments, the results have been good or not so good. Anyone remember the infamous Lotus Notes app graveyards? It needs guardrails.

However!

The combination of low-code, no-code and generative AI has the potential to reduce the marginal cost of software development to almost zero. Instead of engaging in a multi-person-year software implementation project, it is now theoretically possible to “vibe-code” a bespoke application in a short time and at low cost. The scarcity of IT personnel is mitigated, and the procurement process is no longer a hurdle.

At least theoretically. Again, it needs guardrails and the right tools for the right people.

Still, and this is important, it is now possible to create what one could call a throwaway MVP, or a working prototype, that covers a requirement’s happy path at almost zero cost. To be clear, this is a capability that we did not have, or only barely had, with traditional low-code and no-code environments. And this is a big deal.

With this prototype, it is possible to quickly identify whether a real problem is solved or at least mitigated; and this before big money is spent on customizing and deploying a new SaaS solution. This flips the procurement process to something for which one could use the term prompt-to-product.
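
To make this tangible, here is a minimal sketch of what such a throwaway, happy-path prototype could look like – in this case a small expense-approval helper. Everything in it (the field names, the approval limit, the sample data) is an assumption made purely for illustration; a real prototype would be generated from the stakeholders’ own description of their process.

```python
# Hypothetical happy-path prototype: flag expense claims that need manager approval.
# No error handling, no security, no edge cases - it only has to make the process tangible.
import csv
from io import StringIO

APPROVAL_LIMIT = 500.00  # assumed business rule, to be confirmed during discovery

SAMPLE_CLAIMS = """employee,category,amount
Alice,travel,320.50
Bob,conference,890.00
Carol,supplies,45.99
"""

def triage(claims_csv: str) -> list[dict]:
    """Return each claim with a simple auto-approve / needs-review decision."""
    rows = list(csv.DictReader(StringIO(claims_csv)))
    for row in rows:
        amount = float(row["amount"])
        row["decision"] = "auto-approve" if amount <= APPROVAL_LIMIT else "needs manager review"
    return rows

if __name__ == "__main__":
    for claim in triage(SAMPLE_CLAIMS):
        print(f'{claim["employee"]:<8} {claim["category"]:<12} {claim["amount"]:>8} -> {claim["decision"]}')
```

The point is not the code itself but the reaction it provokes: a stakeholder can look at the output within minutes and say “the limit is per month, not per claim” – exactly the kind of implicit requirement the flipped process surfaces early.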

A new paradigm?

As said, the traditional software procurement lifecycle – requirements identification → software selection → implementation – is flawed. It relies on abstract and static written requirements. Text is ambiguous, whereas software is explicit. The gap between the fuzzy written requirements we see all too often (e.g., “The system must support flexible workflows”) and the delivered reality is where millions of dollars in enterprise value get tanked.

With the help of generative AI, it is possible to establish a methodology that moves the build phase to the very beginning. Building serves not as the delivery mechanism, but as an agile discovery tool in a three-phase process.

Phase 1: Dynamic Discovery

Instead of collecting stakeholder needs in a static document, this process begins with live prototyping. Business stakeholders work with an AI engineer or directly with an LLM-enabled no-code environment to describe their problem in natural language, rapidly developing a working prototype that supports the happy path to the desired outcome. This is agile development on steroids. The prototype does not need to be secure or scalable; it only needs to fulfill the job and be interactive. As a result, it becomes very clear what the users actually want. Plus, some implicit requirements get surfaced early in the process instead of after the purchasing decision and project budget assignment.

Questions like “Does the user actually want a dashboard, or just a daily email summary?” or “Does the data structure actually fit the way the team works?”, and more, are answered before they require costly change requests.

Phase 2: Stress Test

After the prototype solves the business users’ pains, IT leadership is in a better position to decide whether to buy, build, or opt for composing a low-code solution. Assuming that existing software packages do not already cover the requirements, this decision can be made by answering three main questions about the generated code (a minimal triage sketch follows the list).

  • Does this tool need to read/write to business-critical systems like the ERP, or does it live in isolation?
  • Does the logic involve high-liability calculations (tax, payroll, health data)?
  • Is the logic static, or will it require constant updates based on external factors (e.g., changing shipping tariffs)?
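
The sketch below is merely my own simplification of that triage – the mapping from answers to a recommendation is illustrative, not a formal framework – but it shows how little is needed to make the decision discussable.

```python
# Hypothetical triage of a prototype along the three stress-test questions.
# The mapping from answers to a recommendation is a deliberate simplification for illustration.
from dataclasses import dataclass

@dataclass
class StressTest:
    touches_core_systems: bool   # reads/writes business-critical systems such as the ERP?
    high_liability_logic: bool   # tax, payroll, health data, or similar?
    volatile_logic: bool         # needs constant updates driven by external factors?

def recommend(test: StressTest) -> str:
    if test.high_liability_logic:
        return "buy"       # regulated, high-liability logic favors a specialist vendor
    if test.touches_core_systems:
        return "compose"   # custom logic, but governed by an existing platform
    if test.volatile_logic:
        return "compose"   # keep cheap regeneration inside a governed environment
    return "build"         # self-contained, low-liability, company-specific

# A self-contained, low-liability, stable prototype points towards building it yourself.
print(recommend(StressTest(touches_core_systems=False,
                           high_liability_logic=False,
                           volatile_logic=False)))   # -> build
```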

Phase 3: Strategic Fork

Based on the answers, the organization moves down one of three paths. Crucially, the outcome of phase 1 is valuable on all three paths.

Build

If the prototype is self-contained, low-liability, and specific to the company’s internal operations, the decision is to build.

The prototype code gets refined to cater for edge scenarios and for compliance and security, if the development environment of the prototype didn’t already take care of these. After that, it can get deployed.

Because the cost of generation stays at near zero, the resulting software is disposable. If the process changes over time, the application is not patched but simply discarded and regenerated.

As a result, the company has a solution with perfect process fit, low implementation cost and zero licensing fees.

Buy

If the prototype reveals that the requirements are more complex than anticipated, for example, if there are more regulations to consider, the decision is to buy. In contrast to the traditional process, this is now an informed decision.

The organization stops building but uses the functional prototype as a key part of the RFP that demonstrates the desired process. The conversation shifts from “Can you meet our requirements?” to “Here is exactly how our process works; demonstrate that your software can replicate this specific behavior.”

The result is risk mitigation for both the company and the winning vendor. The prototype demonstrates where building in-house would create potentially unmanageable technical debt. It also prevents buying vaporware by forcing vendors to prove capability against a live model. For the vendors, it takes away considerable uncertainty in assessing the project size.

Compose

If the prototype requires the flexibility of custom logic but the governance of a standard platform (Microsoft, SAP, Oracle, Zoho, Salesforce, etc.), the decision is to compose it using a low-code environment.

The AI-generated logic gets transferred to an existing low-code/no-code platform. This platform handles identity management, UI standardization, and hosting, while the generated code still handles the unique business rules.

This enables speed of deployment with the safety net of IT governance.
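
To illustrate the separation the compose path relies on: the unique business rule can live in a small, platform-agnostic piece of code, while identity, UI standardization, and hosting remain with the governed platform. The pricing rule, its thresholds, and the flow_step hook below are hypothetical, chosen only to show the shape of that split.

```python
# Hypothetical business rule kept as a pure, platform-agnostic function.
# The surrounding low-code platform (identity, UI, hosting) calls into it;
# the rule itself knows nothing about that platform.
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    customer_tier: str   # e.g. "standard" or "strategic"
    volume: int          # units ordered

def discount_rate(order: Order) -> float:
    """Company-specific pricing rule - the part the AI generated and the business validated."""
    if order.customer_tier == "strategic" and order.volume >= 1000:
        return 0.12
    if order.volume >= 500:
        return 0.07
    return 0.0

# A low-code platform would typically expose this as a step in a governed flow,
# e.g. something like flow_step(lambda payload: discount_rate(Order(**payload))).
```

The design choice is that the generated code stays a pure function of its inputs, so it can be regenerated or swapped without touching the platform around it.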

What does this mean?

Executives should flip the purchasing process using three key actions.

  • Provide an infrastructure that allows for rapid, AI-supported prototyping, aka vibe-code environments. Ideally, this environment already embraces security and compliance rules.
  • Train users, business analysts or IT personnel to use this environment to bridge the gap between business and AI.
  • Instead of asking for written requirements only, make the creation of prototypes in this environment mandatory as a core element of the demand.

With this flipped process, the build vs. buy question is no longer binary. It creates a build-to-define process that ensures the decision on how to deliver required functionality is taken in a better-informed, de-risked way, with a higher chance of success at lower cost. It is not about killing SaaS, but about no longer buying hope. Low-code/no-code in combination with GenAI helps you know far more exactly what gets delivered, regardless of whether you build or buy.