Briefing · 13/05/2026

AI hasn’t killed best-of-breed. It has killed unintegrated best-of-breed.

Best-of-breed software still has a place, but agents expose the integration tax humans used to absorb. The new stack test is whether systems are reachable, governed and clear enough for AI-assisted work.

Best-of-breed software is not dead.

Unmanaged best-of-breed is.

For the last twenty years, organisations could bolt together CRM, HR, finance, ticketing, docs, chat, task management and knowledge systems because humans absorbed the mess. People switched tabs, copied context, reconciled conflicting records, remembered which spreadsheet was the real one, and knew when the CRM was out of date.

Humans were the middleware.

AI changes that bargain. Agents do not handle ambiguity the way staff do. They need reachable systems, clear permissions, clean data models, authoritative records, audit logs, and safe ways to act.

The integration tax that humans quietly paid is about to appear on the architecture bill.

Diagram: old best-of-breed relied on humans as middleware; agent-ready stacks need governed integration

Diagram note: the issue is not specialist tools. The issue is making people or agents reconcile disconnected systems without a governed integration layer.

The shift: app-centric to agent-centric

The old operating model was app-centric:

  • humans opened apps;
  • humans navigated interfaces;
  • humans created files;
  • humans moved work between tools;
  • humans remembered the context the software did not carry.

The emerging model is agent-centric:

  • humans express intent, judgement, approval and correction;
  • agents gather context, call systems, draft artefacts, update records and route work;
  • apps become infrastructure underneath the interaction layer;
  • files become outputs, archives, evidence, approvals or compatibility formats;
  • systems of record become more important because agents need somewhere authoritative to read and write.

That does not mean everyone will sit in a chatbot all day. “Conversation layer” is too narrow. The top of the stack is an interaction layer:

  • chat;
  • embedded copilots;
  • buttons like “summarise this” or “draft reply”;
  • meeting agents;
  • voice and ambient context;
  • scheduled agents;
  • triggered workflows;
  • agent-to-agent delegation.

Chat is one interface. The strategic layer is broader: how humans and AI systems interact around work.

The new stack

A useful AI-ready operating stack looks roughly like this:

  1. Interaction layer
    Chat, copilots, voice, buttons, ambient agents, scheduled triggers and approval surfaces.

  2. Agent/orchestration layer
    The runtime that plans work, calls tools, checks state, requests approval, delegates and handles exceptions.

  3. Integration layer
    MCP, APIs, connectors, webhooks, event streams and data contracts. This is the sleeper battleground.

  4. Systems of record
    CRM, HRIS, finance, ticketing, support, policy, asset, client and operational systems. The places where business truth lives.

  5. File/artefact layer
    Docs, PDFs, spreadsheets, slides and exports. Still essential, but increasingly as artefacts of work rather than the primary work environment.

Cutting across all of it:

  • identity;
  • permissions;
  • audit;
  • observability;
  • data governance;
  • versioning and rollback;
  • human approval boundaries.

These are not compliance decorations. They are the spine of agentic operations.
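What that spine means in practice can be sketched in miniature. The following is a hypothetical illustration, not any vendor's API: the names `AgentContext`, `perform` and `audit_log` are invented here. The point is that every machine-initiated action carries identity (user, agent, workflow), is checked against least-privilege scopes, and leaves an audit record whether it succeeds or is refused.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentContext:
    user: str                # the human the agent acts on behalf of
    agent: str               # which agent runtime initiated the action
    workflow: str            # the workflow the action belongs to
    scopes: set = field(default_factory=set)  # least-privilege grants

audit_log: list[dict] = []

def perform(ctx: AgentContext, action: str, required_scope: str) -> bool:
    """Run an action only if the scope is granted; always leave a trail."""
    allowed = required_scope in ctx.scopes
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "user": ctx.user,
        "agent": ctx.agent,
        "workflow": ctx.workflow,
        "action": action,
        "allowed": allowed,
    })
    return allowed

ctx = AgentContext("ana@example.com", "crm-assistant", "renewals",
                   scopes={"crm:read"})
perform(ctx, "read_account", "crm:read")     # permitted, and audited
perform(ctx, "update_account", "crm:write")  # refused, and still audited
```

Note the design choice: the denied action is logged too. An audit trail that only records successes cannot explain what an agent tried to do.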

Diagram: the AI-ready operating stack has interaction, orchestration, integration, systems of record and file layers, with identity and audit running through all of them

Diagram note: identity, permissions, audit and rollback are not a separate afterthought. They cut through every layer.

Why best-of-breed gets harder

Best-of-breed worked when a tool only had to be good for the human using it.

The new test is harsher.

A system now has to be good for the human, the organisation and the agent operating on behalf of the human.

That means asking:

  • can an agent reach it without screen-scraping?
  • does the API cover what the UI can do?
  • is there an MCP server or credible agent-access roadmap?
  • are permissions scoped and auditable?
  • can actions be attributed to the user, agent and workflow?
  • does the system expose a clean data model?
  • does it support events, webhooks, versioning and rollback?
  • can it handle machine-speed reads and writes without breaking rate limits?
  • is it clear whether this system owns the customer, case, task, policy, invoice or approval?

A beautiful UI with a weak API is no longer best-of-breed. It is a human-friendly silo.
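The questions above behave like a checklist, and a checklist can be made concrete. This is an illustrative sketch only, assuming the criteria from this briefing; the field names paraphrase the bullets above and `AgentReadiness` is not a standard assessment framework.

```python
from dataclasses import dataclass, fields

@dataclass
class AgentReadiness:
    api_covers_ui: bool          # API parity with the interface
    mcp_or_roadmap: bool         # MCP server or credible agent-access plan
    scoped_permissions: bool     # scoped, auditable access
    attributable_actions: bool   # user + agent + workflow attribution
    clean_data_model: bool       # typed fields, sane schema
    events_and_rollback: bool    # webhooks, versioning, rollback
    machine_speed_limits: bool   # rate limits fit for bulk agent work
    clear_record_ownership: bool # owns the customer/case/task it claims to

    def gaps(self) -> list[str]:
        """Name every criterion the system fails."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

tool = AgentReadiness(
    api_covers_ui=True, mcp_or_roadmap=False, scoped_permissions=True,
    attributable_actions=True, clean_data_model=True,
    events_and_rollback=False, machine_speed_limits=True,
    clear_record_ownership=True,
)
print(tool.gaps())  # → ['mcp_or_roadmap', 'events_and_rollback']
```

A system with a non-empty gap list is not disqualified, but each gap is an integration cost somebody has to fund.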

The systems-of-record question

Systems of record become more strategic, not less.

AI does not remove the need to decide where truth lives. It makes the decision urgent.

If customer data lives partly in Dynamics, partly in spreadsheets, partly in support tickets and partly in a project tool, the agent cannot safely know what is true. If policies live in Employment Hero, Confluence, Loop, Google Docs and PDFs in a drive, the agent will find the wrong one eventually. If Asana and Jira both hold tasks, neither is the source of truth unless the boundary is bright and enforced.

For humans, that is annoying.

For agents, it is operational risk.
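The "bright and enforced" boundary can be as simple as a declared ownership map that fails loudly on ambiguity. This is a hypothetical sketch: the registry and `source_of_truth` function are illustrative, and the system names are just the examples from the paragraphs above.

```python
# One declared owner per entity type. If an entity is not in this map,
# no agent should read or write it anywhere.
SYSTEM_OF_RECORD = {
    "customer": "Dynamics",
    "task": "Jira",
    "policy": "Confluence",
}

def source_of_truth(entity: str) -> str:
    """Return the single system an agent may treat as authoritative."""
    try:
        return SYSTEM_OF_RECORD[entity]
    except KeyError:
        raise LookupError(
            f"No declared system of record for '{entity}'; "
            "an agent cannot safely act on it"
        ) from None

source_of_truth("task")  # returns 'Jira'

try:
    source_of_truth("invoice")  # undeclared: refuse rather than guess
except LookupError as err:
    print(err)
```

The useful property is the failure mode: an undeclared entity produces a refusal a human can fix, not a silent write to whichever system the agent found first.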

The procurement question changes from “does this tool have the features our team wants?” to “can this tool safely participate in AI-run work?”

Procurement criteria for AI-ready systems

For new systems, agent-readiness should be written into the RFP.

Useful criteria:

  1. Agent reachability
    Comprehensive documented API, OpenAPI/schema support, MCP support or roadmap, webhooks/events, and no critical UI-only functions.

  2. Scoped identity
    OAuth, service accounts, delegated access, workload identity, least-privilege scopes, and no “admin or nothing” automation model.

  3. Permission inheritance
    Agents should act on behalf of humans or teams without bypassing existing access controls.

  4. Audit and observability
    Every machine-initiated action should be attributable, timestamped, explainable and exportable.

  5. Data model integrity
    Clean schemas, referential integrity, metadata, typed fields, controlled vocabularies, soft deletes, versioning and rollback.

  6. Concurrency and state handling
    Safe behaviour when multiple humans or agents act on the same record, document or workflow.

  7. Rate limits fit for agents
    Human-scale rate limits are not enough if agents are expected to perform bulk review, reconciliation or reporting.

  8. Interoperability over captive AI
    A vendor’s built-in chatbot is less important than whether your agent layer can work with the system.

That last point matters. A vendor saying “we have AI” is not the same as being AI-ready.
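Criterion 6, concurrency and state handling, has a well-known shape worth making concrete. The sketch below is an illustrative optimistic-concurrency check, not any particular system's API: each record carries a version, and a write only lands if the writer saw the latest version.

```python
class ConflictError(Exception):
    """Raised when a write is based on a stale read."""

# A toy in-memory store standing in for a real system of record.
store = {"ticket-42": {"version": 1, "status": "open"}}

def read(record_id: str):
    rec = store[record_id]
    return rec["version"], dict(rec)

def write(record_id: str, expected_version: int, **changes) -> None:
    rec = store[record_id]
    if rec["version"] != expected_version:
        # Another human or agent got there first: refuse, don't overwrite.
        raise ConflictError(
            f"{record_id} changed since version {expected_version}")
    rec.update(changes)
    rec["version"] += 1

v, _ = read("ticket-42")
write("ticket-42", v, status="resolved")    # succeeds; version becomes 2

try:
    write("ticket-42", v, status="closed")  # stale version: rejected
except ConflictError:
    pass
```

Without something like this, two agents acting at machine speed on the same record will silently clobber each other, which is exactly the failure a human operator would have caught by glancing at the screen.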

Teams, gravity and the realpolitik of stacks

Most organisations do not get to design this from a blank page.

They already have gravity.

If Teams is where the conversations, meetings, calendar invites, files, approvals and organisational habits live, then Microsoft is already part of the operating substrate. The organisation may still use Google Docs, Confluence, Asana, Jira, Xero, Dynamics and Employment Hero, but Teams creates collaboration gravity.

That does not automatically mean “move everything to Microsoft”.

It does mean the decision is no longer neutral.

The strategic question becomes:

Given our existing gravity, how native do we go, and what do we deliberately keep outside the fence because it is worth the integration cost?

That is a better question than the old religious war between suites and best-of-breed.

A practical stack principle

The rule of thumb:

Consolidate by default. Diversify deliberately.

Keep a specialist tool when one of these is true:

  • it is genuinely better in a way that matters;
  • it owns a clearly bounded system of record;
  • it is agent-reachable through strong APIs, MCP or connectors;
  • it has strong permission and audit controls;
  • replacing it would create more operational risk than integrating it;
  • users are materially more effective in it and the integration cost is funded.

Otherwise, sprawl is not flexibility.

It is agent-hostile architecture.

Diagram: consolidate by default, diversify deliberately when a specialist tool is materially better, agent-ready and clearly bounded

Diagram note: a specialist app can stay, but it has to earn the integration cost. Preference alone is not enough.

The counterargument

There is a real risk in over-consolidating.

If every organisation rushes into one vendor’s suite because that vendor has the cleanest AI story today, they may trade SaaS sprawl for platform lock-in. Microsoft, Google, Salesforce and Atlassian will all try to make their ecosystems feel like the safest place to run agents.

Sometimes they will be right.

Sometimes the better answer will be a vendor-neutral agent layer over a heterogeneous stack. MCP and similar protocols matter because they may reduce the pressure to consolidate by making cross-vendor work more reliable.

But that future is not evenly distributed yet. Today, most organisations are not choosing between a perfectly integrated suite and a perfectly integrated neutral agent layer. They are choosing between deliberate architecture and accidental sprawl.

Best-of-breed remains legitimate.

Best-of-breed without funded integration, clear ownership and agent-ready access does not.

The new stack test

AI has not made stack strategy simpler. It has made the hidden architecture visible.

The question is no longer just which apps people like using.

The question is whether the organisation has an operating layer where AI can safely reach the right systems, act with the right permissions, use the right source of truth, produce the right artefacts, and leave the right audit trail.

That is the new stack test.

Best-of-breed can pass it.

The SaaS junk drawer cannot.

Was this useful?

Quick signal helps Rob sharpen future briefings.
