Multi-business-unit SaaS companies face a structural problem with AI that most enterprises don't. Serial acquirers with dozens or hundreds of autonomous business units, each with its own product, customer base, and domain expertise, feel it most acutely. That autonomy is the source of their competitive advantage. It is also what makes scaling AI hard.

I call this the Decentralization Paradox: how do you capture enterprise-scale AI efficiencies without breaking the autonomy that makes the organization work?

Three models, one winner

There are three ways to organize AI across a multi-BU company. Each has a structural logic and a structural failure mode.

Centralized: A single team controls all AI infrastructure, models, and deployment. Business units consume AI through internal APIs but don't build independently. JPMorgan Chase runs this model — 2,000+ AI specialists, 400+ production use cases, 150,000 employees using AI tools weekly. It works because universal banking has deep cross-BU data synergies and a regulatory environment that demands uniform governance. The failure mode is the ivory tower: the central team becomes a bottleneck that lacks domain context, and BUs build shadow AI to route around it.

Fully decentralized: Every BU makes its own AI decisions. Large serial acquirers with highly diverse portfolios operate this way by default — autonomous BUs develop their own AI products and applications independently. The failure mode is fragmentation: independent teams build identical chatbots and RAG pipelines, duplicating spend and creating inconsistent security postures.

Hub-and-spoke: A central platform team manages shared infrastructure, guardrails, and governance. BU teams build domain-specific AI applications on top. Schneider Electric runs this model — their CDO describes the hub as providing expertise, process guidance, and a common technology platform, while BU teams apply it to their specific industrial domains. Research from Dataiku shows organizations using this structure are three times more likely to scale AI successfully.

Hub-and-spoke wins because it captures roughly 80% of centralization's governance and cost benefits while preserving the BU autonomy that drives competitive advantage. It's the only model that doesn't have a critical weakness across the five dimensions that matter: governance, cost efficiency, autonomy, innovation speed, and acquisition onboarding speed.

Maturity determines the model

The model you choose should be driven by where you are, not where you want to be. Gartner, AWS, and Dataiku all converge on this: AI maturity is the primary variable.

In the experimentation stage, work is siloed and decentralized by default. As you move to fast adoption, you centralize into an AI Center of Excellence to establish governance and shared infrastructure. At broad adoption, you federate into hub-and-spoke — the hub provides the platform, the spokes build domain applications. At full scale, ownership shifts progressively to BUs, and eventually AI is embedded in every function.

Multi-BU organizations face a complication: dual-track maturity. Enterprise-wide maturity governs the platform and governance decisions. Per-BU maturity governs use case selection and adoption pace. A BU running mature SaaS products with rich customer data might be ready for fine-tuned domain models, while a recently acquired BU still needs pre-built AI services. Hub-and-spoke accommodates both realities simultaneously.

Domain data is the real moat

Foundation models are commoditizing. The sustainable differentiator is domain-specific data — and this is where multi-BU SaaS companies have a structural advantage that pure-play AI companies cannot replicate.

The highest-performing serial acquirers recognize this: differentiation comes from domain knowledge, customer relationships, and embedded data, not building features faster. They evaluate acquisition targets explicitly on the AI potential of their data and customer relationships, because each acquisition brings proprietary domain knowledge and distribution that compounds AI investment.

This reframes the entire AI strategy conversation. The question is not which model to use. It is how to build the data flywheel — where high-quality domain data feeds models, models generate predictions, predictions drive outcomes, and outcomes produce more valuable data. Every business unit in a serial acquirer is an independent data flywheel. The hub's job is to make those flywheels spin faster without homogenizing them.

The AWS implementation

On AWS, hub-and-spoke maps to a multi-account architecture within AWS Organizations. The hub account hosts a central AI gateway: Amazon Bedrock with organization-level Guardrails, a shared model registry, centralized authentication, and cross-account role assumption for tenant isolation. Spoke accounts contain BU-specific Bedrock configuration — agents, knowledge bases, fine-tuned models — connected via Transit Gateway or PrivateLink.
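The tenant-isolation piece can be sketched in a few lines. The account ID, role naming scheme, and BU slug below are hypothetical placeholders, not a prescribed convention; the point is that each spoke assumes a dedicated, narrowly scoped hub-side role, and session tags give the hub per-BU attribution:

```python
# Sketch of the tenant-isolation pattern: each spoke (BU) account assumes a
# dedicated role in the hub account that scopes which Bedrock resources it
# may touch. Account ID and role names are hypothetical placeholders.

HUB_ACCOUNT_ID = "111111111111"

def spoke_gateway_role_arn(bu_slug: str) -> str:
    """ARN of the hub-side role a given BU assumes to reach the AI gateway."""
    return f"arn:aws:iam::{HUB_ACCOUNT_ID}:role/ai-gateway-{bu_slug}"

def assume_role_request(bu_slug: str) -> dict:
    """Parameters a spoke would pass to STS AssumeRole (e.g. via boto3)."""
    return {
        "RoleArn": spoke_gateway_role_arn(bu_slug),
        "RoleSessionName": f"bedrock-{bu_slug}",
        # Session tags let the hub attribute usage and enforce per-BU policy.
        "Tags": [{"Key": "business-unit", "Value": bu_slug}],
    }
```

Because the role, not the caller's account, defines what the spoke can reach, onboarding an acquisition is a matter of stamping out one more role rather than re-architecting the gateway.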

The governance layer is what makes this practical. Service Control Policies enforce an approved-model allowlist, region restrictions for data residency, mandatory KMS encryption, and protections that prevent BUs from modifying organization-level guardrail policies. Because Bedrock Guardrails can be enforced across accounts, governance effort doesn't scale linearly with the number of business units. Combined with Control Tower Account Factory for automated spoke provisioning, you get governance at acquisition speed, which is the only kind that matters for a serial acquirer.
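The SCP layer is just a policy document. Here is a minimal sketch of the deny-by-default pattern, built as a Python dict for readability; the model ARN and region list are hypothetical and would be your organization's own allowlists:

```python
import json

# Hypothetical allowlists; substitute your organization's approved values.
APPROVED_MODEL_ARNS = [
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
]
APPROVED_REGIONS = ["us-east-1", "eu-west-1"]

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny invoking any foundation model outside the allowlist.
            "Sid": "ApprovedModelsOnly",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "NotResource": APPROVED_MODEL_ARNS,
        },
        {
            # Keep all Bedrock traffic inside approved regions (data residency).
            "Sid": "ApprovedRegionsOnly",
            "Effect": "Deny",
            "Action": "bedrock:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": APPROVED_REGIONS}
            },
        },
    ],
}

print(json.dumps(scp, indent=2))
```

Attached at the organization or OU level, a policy like this applies automatically to every newly provisioned spoke account, which is what makes governance a fixed cost rather than a per-acquisition one.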

Cost management uses Application Inference Profiles for per-BU tracking, reserved capacity for predictable pricing, and Intelligent Prompt Routing to reduce inference costs by routing simpler queries to smaller models.
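Bedrock's Intelligent Prompt Routing does this routing at the service level. Purely to illustrate the idea, a client-side version might use a crude heuristic like the one below; the model IDs and thresholds are hypothetical, and prompt length plus keyword spotting is a rough stand-in for the learned complexity scoring a real router uses:

```python
# Hypothetical model IDs; substitute your organization's approved models.
SMALL_MODEL = "anthropic.claude-3-haiku-20240307-v1:0"
LARGE_MODEL = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def choose_model(prompt: str, max_small_words: int = 200) -> str:
    """Route simple-looking prompts to the cheaper model.

    Crude proxy for complexity: short prompts with no reasoning keywords
    go to the small model; everything else goes to the large one.
    """
    reasoning_markers = ("explain", "analyze", "compare", "step by step")
    looks_complex = any(m in prompt.lower() for m in reasoning_markers)
    rough_length = len(prompt.split())
    if looks_complex or rough_length > max_small_words:
        return LARGE_MODEL
    return SMALL_MODEL
```

The economics matter at multi-BU scale: if most support-style queries are simple lookups, routing them to a smaller model cuts the marginal inference cost of every spoke without any BU-level work.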

Three pitfalls

First: over-centralizing. Central teams that lack domain context will build generic solutions that don't move BU-level metrics. BUs will work around them. You'll end up with an expensive platform nobody uses and shadow AI everywhere.

Second: over-decentralizing. Without shared infrastructure, BU teams independently build identical RAG pipelines and chatbots. The duplicate spend is visible; the inconsistent security posture is not, until it becomes an incident.

Third, and most dangerous: scaling before governance. McKinsey found that only companies where the CEO personally oversees AI governance see measurable returns. Only 6% of organizations qualify as AI high performers. The gap between leaders and laggards has widened by 60% over three years. Governance is not a tax on innovation. It is the foundation that lets innovation compound instead of creating liability.


This post draws on research from McKinsey's 2025 State of AI survey, the Gartner AI Maturity Model, the AWS Cloud Adoption Framework for AI/ML, and Dataiku's operating model research.