The most consequential enterprise AI governance document published this year arrived in late April with surprisingly little fanfare. SAP’s updated API Policy, version 4/2026, is a short document in plain English, and its most interesting clause is Section 2.2.2, which restricts how autonomous and generative AI systems are permitted to interact with SAP APIs. Read literally, it has the potential to change the architecture of agentic AI projects across every SAP customer landscape.
Read carefully, it is also more interesting than the lock-in headlines suggest. The policy targets a specific category of AI behavior, not AI as such. It connects to commercial mechanics that go well beyond API stability. And the literal text, in its current form, will probably not survive the next two policy revisions intact. There is a lot to unpack.
I will walk through what the policy actually says, how the SAP-watching community is reading it, what the rest of the major enterprise vendors are doing in comparison, what counts as an “endorsed architecture”, and what customers and partners should be doing about it now. I’ll close with a view on whether the policy can stand the test of time.
What Section 2.2.2 actually says
The operative sentence is direct. “Except through and within the limits of SAP-endorsed architectures, data services, or service-specific pathways expressly identified and intended for such purposes, SAP prohibits API use for interaction or integration with semi-autonomous or generative AI systems that plan, select, or execute sequences of API calls”. The same paragraph also prohibits scraping, harvesting, or systematic large-scale data extraction.
Three things flow from that. First, only Published APIs, those listed on the SAP Business Accelerator Hub or in product-specific documentation, are usable at all. Internal, private, and reserved-namespace APIs are out. Second, published APIs must be used for their documented purpose. Third, any agentic use of those APIs has to flow through SAP-endorsed pathways. The policy explicitly reserves enforcement rights including throttling, suspension, and termination of access. It also explicitly prohibits circumvention through proxies, intermediary services, custom code, or impersonation.
That is the legal fence. The interesting question is what it means.
The five readings circulating in the community
The professional discourse on this policy has organized into roughly five interpretations, and most of them are simultaneously true.
The first reading is that SAP is closing the back door on undocumented APIs. For years, real projects depended on internal SAP endpoints that worked in practice but were never officially supported. Marian Zeis, who maintains the curated registry of SAP MCP servers and runs one of the more careful technical blogs in the community, told The Register that “the changes are more restrictive than the community expected” and that SAP is too slow to publish or improve templates, leaving real projects dependent on undocumented APIs to keep pace with what their use cases require.
The second reading is more commercial. As SAP CX architect Jorge Ocampos puts it directly, SAP is not objecting to Claude, GPT, or Gemini. It is controlling the path through which agents touch SAP data and SAP transactions (Spanish). That path is BTP, Joule, AI Core, the Generative AI Hub, SAP Build, Integration Suite, and Business Data Cloud. The same agent running outside this stack may be non-compliant; running through it consumes AI Units under SAP’s new consumption-pricing model. Snap Analytics reaches the same conclusion from the data side: all roads now lead to BDC. That’s cynical, but probably accurate.
The third reading is the lock-in concern. The Register captured this most directly, and DSAG, the German-speaking SAP user group, made it formal. DSAG’s board went on record demanding contractual clarity (German), transition timelines, transparent fair-use thresholds, and protection for existing integrations. Their basic position is that SAP cannot announce that the SAP Business Accelerator Hub and product documentation govern customer architecture without first making those documents formal contract components.
The fourth reading is more sympathetic. The policy does not kill AI on SAP. It targets a specific category that practitioners have started calling “attached AI”, agents that plan, select, and execute API calls against productive systems, as opposed to “detached AI”, which helps humans understand SAP, generate code, search documentation, or design data models without touching live transactions. Distinguishing the two is the most useful conceptual move available right now. Most coverage skips it.
The fifth reading is procedural. SAP has created compliance fog by not publishing an enumerated whitelist. The phrase “SAP-endorsed architectures, data services, or service-specific pathways expressly identified and intended for such purposes” is doing enormous work, and right now nobody knows exactly what is on the list. That ambiguity is uncomfortable when enforcement can include throttling and termination.
All five readings hold. The policy is technically defensible, commercially self-serving, contractually ambiguous, conceptually sound for its stated target, and procedurally underdeveloped. Customers and partners need to internalize all five at once.
The attached versus detached distinction
This is the single most important conceptual handle on the policy, and it is worth slowing down for.
Detached AI is what most people are using today. ChatGPT helps a developer read an SAP help page. Claude drafts an ABAP method based on documentation. GitHub Copilot in agent mode edits a UI5 application. A community MCP server lets a coding assistant pull SAP documentation into context. None of this touches a productive SAP system. None of it is targeted by Section 2.2.2.
Attached AI is different. A LangGraph agent reads open purchase orders from S/4HANA OData, decides which to escalate, drafts a follow-up email, and posts updates back. A Bedrock-based finance agent calls invoice APIs, validates against vendor data, and triggers a payment release. A custom MCP server exposes SAP business objects to a general-purpose Claude or GPT agent, which then plans and sequences calls to mutate records. This is what Section 2.2.2 is talking about, and this is what now requires an SAP-endorsed pathway.
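To make the line concrete, here is a minimal triage sketch in Python. The dataclass fields and the `classification` helper are my own illustration of the Section 2.2.2 wording, not SAP terminology or any SAP API:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Describes how an AI system touches the SAP landscape.
    Field names are illustrative, not SAP terminology."""
    reads_live_sap_data: bool    # pulls data from a productive system
    writes_to_sap: bool          # creates or mutates business records
    plans_api_sequences: bool    # the model decides which calls to make, in what order

def classification(use_case: AIUseCase) -> str:
    """Rough triage against the Section 2.2.2 wording: an AI system that
    plans, selects, or executes sequences of API calls against a
    productive system is 'attached' and needs an endorsed pathway."""
    touches_live_system = use_case.reads_live_sap_data or use_case.writes_to_sap
    if use_case.plans_api_sequences and touches_live_system:
        return "attached: route through an SAP-endorsed pathway"
    if touches_live_system:
        return "touches SAP, but deterministic: check documented-purpose compliance"
    return "detached: outside the scope of Section 2.2.2"

# A Copilot session editing UI5 code never touches a live system:
copilot = AIUseCase(reads_live_sap_data=False, writes_to_sap=False, plans_api_sequences=True)
# A LangGraph agent escalating purchase orders plans calls against S/4HANA:
po_agent = AIUseCase(reads_live_sap_data=True, writes_to_sap=True, plans_api_sequences=True)
```

The point of the sketch is that the trigger is the conjunction: planning behavior alone (Copilot) is fine, touching a live system alone is a documented-purpose question, and only the combination lands in Section 2.2.2 territory.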
The distinction matters because most of the panic is misdirected. The customer who is running Copilot for ABAP development is fine. The customer who has a non-SAP agent platform reaching into S/4HANA over OData to execute business workflows is not, unless that path is routed through Joule, the MCP Gateway, BTP, or BDC.
How this compares to what other vendors are doing
Every major enterprise software vendor is doing something to govern agentic API access. The interesting observation is how differently they are choosing to do it.
SAP regulates the pathway. Section 2.2.2 demands that agent traffic enter through approved architectures. Salesforce, with one important exception, regulates the result. Agentforce sits behind the Einstein Trust Layer with per-conversation pricing and an Acceptable Use Policy that limits automated decision-making with legal effect. The exception is Salesforce’s tightening of Slack data terms last year, which restricted external AI tools like Glean from indexing Slack messages. That move is narrower than SAP’s, but it points in the same direction.
Microsoft regulates the gateway. The Azure AI Gateway, Agent 365, and the Microsoft Agents SDK are explicitly framework-agnostic. Microsoft’s documentation advertises support for OpenAI, Anthropic, LangChain, Copilot Studio, and AWS or Google-hosted agents. The control mechanism is identity, observability through Entra and Purview, and token-rate limiting. ServiceNow is similar in spirit. The December ’25 ServiceNow release added A2A v0.3 with tested interop against Vertex AI, AWS Bedrock, and Azure AI Foundry, plus recursive-loop protection for agents that might trigger themselves. Oracle has gone resource-bound, with default tenancy limits of two agents and capped tool counts per agent. HubSpot has gone outcome-based, charging roughly fifty cents per resolved conversation. Zoho’s MCP server is explicitly model-agnostic.
In other words, every vendor is choosing a control point. SAP is alone in choosing architectural restriction at this scope. That is not in itself wrong. It is, however, a competitive contrast that Microsoft, ServiceNow, and the hyperscalers will exploit aggressively in CIO conversations over the next two quarters.
What counts as an SAP-endorsed pathway
The policy does not list the endorsed pathways. The SAP Architecture Center, the AI Golden Path, and product documentation do, and the working inventory is reasonably stable.
For published API access, anything on the SAP Business Accelerator Hub, plus product-specific documented APIs across S/4HANA Cloud, SuccessFactors, Ariba, CX, Concur, Fieldglass, and BTP services, used as documented. For the agent runtime stack, AI Core with Kubernetes-namespace-based resource isolation, the Generative AI Hub for foundation-model access with prompt registry and content filtering, AI Launchpad, the SAP Cloud SDK for AI, the SAP Cloud Application Programming Model, Joule Studio in SAP Build, and the BTP Cloud Foundry and Kyma runtimes.
For action and process, Joule itself as the orchestrator, Joule Skills for deterministic operations, SAP Build Process Automation and Build Actions, SAP Document AI, and the Document Grounding Service. For execution boundaries, the MCP Gateway running within Integration Suite, which is what enforces tool allow-lists, per-tool authorization, and human-in-the-loop approval before any system change. Also the Intelligent Scenario Lifecycle Management framework for embedded AI inside S/4HANA, where the data never crosses the system boundary.
For integration and eventing, Integration Suite with API Management, Event Mesh and Advanced Event Mesh, Cloud Identity Services with App2App tokens, and the BTP Audit Log. For data, SAP Business Data Cloud as the strategic foundation, BDC Connect for zero-copy sharing into Databricks, Snowflake, Microsoft Fabric, and Google Cloud Platform, Databricks-in-BDC, Datasphere, the HANA Cloud Vector Engine with authorization-aware row-level security, and the Knowledge Graph. For interoperability, A2A as SAP’s preferred external protocol, MCP used internally, with community and official MCP servers emerging (including a planned official ABAP MCP server in Q2 2026), and the Joule Agent Gateway for inbound agent consumption from Vertex AI, Copilot Studio, and Bedrock. The Agent Gateway is not yet generally available as of this writing, which matters for anyone being told to use it today.
The architectural pattern shift this implies is straightforward. The old pattern was Agent → APIs → SAP. The new pattern is Agent → governed SAP pathway → published APIs, events, and data products → SAP. More mediation, more logging, more SAP architecture in the stack, almost certainly more SAP spend.
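A sketch of what the mediation means in practice. The `GovernedPathway` class, its method names, and the stand-in transport are all hypothetical, my own shorthand for the combination of API Management allow-lists and the BTP Audit Log, not an SAP interface:

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("governed-pathway")

class GovernedPathway:
    """Hypothetical sketch of the new pattern: the agent never calls SAP
    directly; every call is checked against an allow-list of published
    APIs and logged before it is executed."""

    def __init__(self, allowed_apis: set[str]):
        self.allowed_apis = allowed_apis
        self.audit_trail: list[tuple[str, dict]] = []

    def call(self, api: str, payload: dict, execute: Callable[[str, dict], dict]) -> dict:
        if api not in self.allowed_apis:
            raise PermissionError(f"{api} is not a published, endorsed API")
        self.audit_trail.append((api, payload))   # BTP Audit Log analogue
        log.info("agent -> %s", api)
        return execute(api, payload)              # the only route into SAP

pathway = GovernedPathway(allowed_apis={"API_PURCHASEORDER_PROCESS_SRV"})
result = pathway.call(
    "API_PURCHASEORDER_PROCESS_SRV",
    {"PurchaseOrder": "4500000001"},
    execute=lambda api, p: {"status": "ok"},      # stand-in for the real transport
)
```

An undocumented endpoint simply never gets through the `call` method, which is the whole argument of the pattern: the enforcement point moves out of the agent and into the pathway.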
The three situations and what to do about each
Customers and partners fall into three buckets, and the compliance work differs for each.
If your AI agents are built by SAP and run on SAP technology, this is the lowest risk category. Joule, the Sourcing Agent, the Dispute Resolution Agent, embedded agents in SuccessFactors and Ariba, all of these are inside the intended architecture by construction. The work to do is operational rather than architectural. Track AI Unit consumption. Document write-action approvals for finance, HR, procurement, and master-data. Press SAP for transparent, predictable pricing of AI Core capacity, foundation-model token consumption, BDC data egress, and the fair-use thresholds DSAG has been asking about (German). SAP-built does not mean risk-free. It means policy-aligned.
If you are a partner or ISV building on SAP technology, the work is to prove your architecture against the policy. Build a compliance pack for every solution. Inventory every SAP API, endpoint, connector, event, and integration artifact. Show, for each one, the link to the SAP Business Accelerator Hub or product documentation. Map every API to its documented purpose. Classify the solution explicitly: does it include an AI system that plans, selects, or executes sequences of API calls? If yes, identify the endorsed pathway used. Define write-action approval thresholds for anything financial, HR-related, master-data-mutating, or supply-chain-critical. Capture audit traces for every agent action. Get written confirmation from SAP for any gray-zone design choice. Verbal assurance from your account team is not contractual.
If you are running AI agents built on non-SAP technology, you face the highest-risk situation, and it is the one where the policy bites hardest. The safer architectural pattern is to separate reasoning from execution. Let the external Bedrock, Vertex, Copilot Studio, or LangGraph agent reason on data grounded through BDC, Datasphere, or HANA Cloud Vector Engine. Let SAP-controlled services execute. Use A2A into Joule for actions, not direct API orchestration. Use IAS App2App tokens, not shared service accounts. Implement human-in-the-loop gates for finance, HR, procurement, supplier master, pricing, payments, inventory, and production. Stop using undocumented APIs. The policy explicitly prohibits using proxies, gateways, custom code, or intermediary services to circumvent these controls, so the obvious technical workarounds are not gray areas: the policy prohibits them by name.
A consequence worth pointing out is that the policy interacts with SAP Digital Access licensing. An autonomous agent that creates 10,000 invoice documents through SAP, regardless of where the agent itself runs, owes Digital Access fees on those documents. Section 2.2.2 controls the path; Digital Access meters the documents. The two are coupled. Customers who treat them separately will be surprised on their next true-up.
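A back-of-envelope sketch of the coupling. The per-document rate below is a pure placeholder; actual Digital Access pricing is contract-specific and weights the nine document types differently, so treat this only as a shape of the calculation:

```python
def digital_access_exposure(documents_created: int, price_per_document: float) -> float:
    """Back-of-envelope only: real Digital Access licensing weights
    document types differently and prices are negotiated per contract.
    The point is that exposure scales with documents, not with where
    the agent runs."""
    return documents_created * price_per_document

# The 10,000 invoices from the example, at an assumed illustrative rate:
exposure = digital_access_exposure(10_000, price_per_document=0.50)
```

An agent that is fully compliant with Section 2.2.2 can still generate this line item, which is why the two mechanisms need to be budgeted together.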
Will the policy stand the test of time?
My thinking is that the spirit of the policy is durable but the literal text is not, and the gap between the two will close through clarification rather than enforcement.
The legitimate parts of Section 2.2.2 are uncontroversial. Anti-scraping language, throttling rights, anti-circumvention clauses, and the principle that published APIs shall be used for their documented purpose are consistent with how every major SaaS vendor protects shared infrastructure. As autonomous agents proliferate, vendors that do not tighten these controls will face genuine availability and security crises. The risk that an unsupervised agent creates for an ERP system is real. SAP is not wrong to insist on execution boundaries and identity enforcement.
The restrictive parts run into headwinds. Enterprise architecture is moving in the opposite direction, toward open multi-agent meshes built on standards like MCP and A2A that are explicitly designed to make API-gated walls obsolete. The autonomous-agent restriction is unenforceable in its strongest reading because SAP cannot reliably distinguish agent traffic from human traffic on the wire. Enforcement will collapse to volumetric throttling, which the policy already authorizes directly, and contractual audits triggered by complaints, both of which exist already. And the policy contradicts SAP’s own open-platform messaging; within days of publication, CEO Christian Klein walked the message back on the investor call (starting minute 53), stating that the policy mainly refers to SAP’s domain know-how and not customers’ data, and DSAG has formally surfaced the contradiction.
My prediction is that within the next few months, SAP issues clarifying material, starting with an updated FAQ and then a v5 policy, that does three specific things. It explicitly grandfathers existing partner solutions and customer integrations that pre-date the new policy. It defines “SAP-endorsed architectures” as a maintained, versioned list with deprecation timelines. And it softens the autonomous-AI restriction to a fair-use throttling regime plus an explicit anti-circumvention clause, dropping the architecture-bounded prohibition.
The longer the literal text stands without that clarification, the more competitive damage Microsoft, ServiceNow, Salesforce, and the hyperscalers will inflict by framing themselves as the open alternative for any enterprise that does not want to route every agent action through Walldorf’s runway. The risk of an Indirect Access redux, where SAP burns customer goodwill in audit disputes over agent traffic that customers thought was compliant, is certainly there. SAP burned years of trust in 2017 and 2018 over that issue. Customers still keenly remember and are wary.
Closing read
The policy is technically defensible, commercially self-serving, and strategically fragile in its current form. The fragility is curable, and SAP should cure it, fast. Doing so requires three things SAP can actually control: publishing a clear, maintained whitelist of endorsed agentic architectures and pathways; certifying non-SAP-runtime agent patterns through A2A, BDC Connect, and the MCP Gateway so that customers do not have to put every agent inside BTP to be compliant; and making the endorsed pathways genuinely valuable rather than merely mandatory. The BDC Connect zero-copy sharing into Databricks, Snowflake, Fabric, and GCP is the working blueprint for what good looks like on the read side. The harder challenge is delivering the same quality on the write side, where Joule and the MCP Gateway need to become the best way to execute SAP transactions from anywhere, not just the only compliant way.
SAP made the perimeter grab. Now it has to earn it. The next two policy revisions will tell us whether the company understood that or not.