Briefing · 01/05/2026

Reverse shadow IT is here

Shadow IT was the business doing technology work without IT. AI creates the inverse: IT deploying systems that start doing the organisation’s work.

For years, shadow IT meant the business doing IT’s job without permission.

A team bought its own SaaS tool. Someone built the real workflow in Airtable. A spreadsheet became the operating system. Staff used WhatsApp, Dropbox, or personal AI accounts because the official stack was too slow, too clumsy, or too detached from the work.

The usual complaint was governance: the business went around IT.

The newer phrase shadow AI mostly describes the same direction of travel: employees using unsanctioned AI tools around the official technology function.

AI now creates the more interesting inverse.

Call it reverse shadow IT: IT doing the business’s work without the business.

Not just deploying systems for staff to use. Deploying agents, workflows, and automation layers that start performing pieces of operational work directly: triage, routing, drafting, checking, escalating, summarising, reconciling, and eventually executing.

That is not email drafting. That is not document summarisation. That is a boundary shift.

The live question is simple:

Who gets to redesign the work once software can do part of it?

The old boundary was clear enough

Traditional enterprise technology had a workable fiction:

  • Infrastructure: IT
  • Applications: IT, procurement, vendors
  • Process design: business units
  • Judgement: workers, managers, subject-matter experts
  • Exceptions: humans

It was never this neat in practice, but the separation mostly held.

IT provided the tools. The organisation did the work.

Shadow IT broke that from one side. Business units started building or buying their own technology because waiting for the sanctioned route meant losing momentum. Sometimes that was reckless. Often it was rational. A business process had a need, and the official system did not meet it.

AI breaks the boundary from the other side.

Tools like Microsoft Copilot Studio and the wider Power Platform are not just form builders or reporting layers. Microsoft describes custom agents that can use tools to interact with external systems and perform actions, from sending emails to reading and writing business data.

Similar pressure is visible in enterprise agent platforms such as Salesforce Agentforce and ServiceNow AI Agents. The important signal is not any single vendor claim. It is the direction of travel: enterprise software is moving from systems people use toward systems that participate in the work.

That shift is broader than product marketing. McKinsey’s ongoing State of AI work has been tracking AI’s movement from isolated experimentation toward organisational and workflow transformation. The pattern to watch is not whether a chatbot can answer questions. It is whether AI becomes part of how work is routed, checked, prepared, and completed.

Much of the field is still operating at the demo layer

A lot of visible AI adoption still sounds like this:

  • use AI to draft emails
  • summarise long documents
  • turn meeting notes into actions
  • rewrite a policy in plain English
  • make a slide deck faster

That work is useful. It is also the shallow end.

The real capability is not prettier text. It is operational capacity:

  • holding context across systems
  • identifying exceptions
  • checking whether required information is missing
  • routing work to the right person
  • maintaining state between handoffs
  • preparing decisions for review
  • comparing a case against policy
  • reconciling intake against evidence
  • triggering the next step when conditions are met
  • escalating when the workflow breaks

A basic customer-service example makes the difference obvious.

A demo-layer AI summarises a complaint and drafts a polite reply.

An operational AI checks the customer’s history, identifies the complaint type, compares it with policy, drafts the response, routes edge cases to the right team, watches for missing evidence, updates the case record, and prompts a human only where judgement or authority is required.
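The difference can be sketched in code. This is a minimal, hypothetical triage loop, not any vendor's API: the complaint types, evidence rules, and keyword classifier are illustrative stand-ins for real policy data and an LLM-based classifier. What matters is the shape: the system resolves routine cases, asks for missing evidence, and escalates edge cases to a person.

```python
from dataclasses import dataclass, field

# Hypothetical case record; field names are illustrative, not a real schema.
@dataclass
class Complaint:
    customer_id: str
    text: str
    evidence: list = field(default_factory=list)       # attached proof, if any
    history_flags: list = field(default_factory=list)  # e.g. prior disputes

REQUIRES_EVIDENCE = {"billing_dispute", "damaged_goods"}  # policy stand-in
AUTO_RESOLVABLE = {"late_delivery"}                       # policy stand-in

def classify(complaint: Complaint) -> str:
    """Keyword stand-in for an LLM or rules-based classifier."""
    text = complaint.text.lower()
    if "refund" in text or "charged" in text:
        return "billing_dispute"
    if "late" in text:
        return "late_delivery"
    return "other"

def handle(complaint: Complaint) -> dict:
    """Route a complaint: auto-resolve, request evidence, or escalate."""
    kind = classify(complaint)
    if kind in REQUIRES_EVIDENCE and not complaint.evidence:
        # Watch for missing evidence before anything else proceeds.
        return {"action": "request_evidence", "type": kind}
    if kind in AUTO_RESOLVABLE and not complaint.history_flags:
        # Simple, clean cases: draft the reply and close the record.
        return {"action": "draft_and_close", "type": kind}
    # Edge cases, flagged histories, and anything needing authority
    # go to a human with judgement.
    return {"action": "escalate_to_team", "type": kind}
```

A demo-layer tool stops after drafting the reply. The operational version is the whole `handle` function: it holds state, applies policy, and decides where the work goes next.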

That is where the politics change.

The question is no longer, “Which tool should staff use?”

It becomes, “Who is allowed to change the workflow?”

The revenge version

There is a slightly ugly emotional truth here.

Shadow IT often frustrated IT teams because it looked like the business saying: “You are too slow, so we will do your job ourselves.”

Reverse shadow IT is the counter-move:

Fine. If you are going to build your own tools because IT is too slow, IT will deploy agents that do your work because the organisation is too messy.

That is not a documented motive. It is the revenge version of the pattern — the thing people will not put in the business case but may still feel in the room.

The official language will be cleaner:

  • “standardising intake”
  • “improving workflow efficiency”
  • “reducing manual handling”
  • “embedding AI into operations”
  • “automating low-value work”
  • “creating a single front door”

Some of that will be good. A lot of organisational process is genuinely bloated. Many workflows are a pile of status meetings, manual copying, avoidable approvals, duplicated records, and human beings acting as routers between systems that should already talk to each other.

People who can see that clearly will not stop at email drafting. They will build systems that remove chunks of work.

The danger is automating the visible skeleton

The risk is not simply “AI takes jobs.” That framing is too blunt.

The sharper risk is that IT can see the visible process but miss the tacit, invisible work that keeps the organisation functioning.

A workflow diagram might show:

  1. customer submits request
  2. coordinator reviews request
  3. manager approves
  4. analyst prepares response
  5. case is closed

An automation team might see obvious inefficiency. Why not classify the request, check policy, draft the response, route exceptions, and close the simple cases?

Sometimes that is exactly right.

But the real work may include things not captured in the system:

  • knowing which manager is overloaded this week
  • spotting when a customer is anxious rather than merely confused
  • recognising when a policy answer will create a reputational problem
  • bending a process because the official path is wrong for this case
  • catching data that looks valid but is socially or operationally suspicious
  • knowing which exception is routine and which one matters

Reverse shadow IT becomes dangerous when it treats the process map as the work.

AI makes this temptation stronger because the output looks competent. It can draft the email, fill the form, summarise the case, and recommend the next step. The surface quality can hide the fact that the system does not understand the organisation’s lived judgement.

The opportunity is real anyway

This is not an argument for moving slowly.

In many organisations, moving slowly is now the risk.

If leadership is still debating whether AI should be allowed to summarise documents, someone else will start using it to redesign intake, case handling, compliance review, procurement triage, customer support, and internal reporting.

In many enterprise workflows, the bottleneck has shifted from model capability to integration, governance, and process literacy.

That is why AI governance cannot just be a permission wall. Frameworks like the NIST AI Risk Management Framework and the OWASP Top 10 for LLM Applications matter because operational AI is not just producing text. It can touch data, tools, permissions, and decisions.
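One concrete shape this takes, in the spirit of OWASP's warnings about excessive agency: every agent tool call passes through an allow-list gate that records the attempt before anything executes. The roles and action names below are hypothetical, a sketch of the pattern rather than any framework's API.

```python
# A minimal "safe lanes" sketch: agent actions are gated by an allow-list
# and every attempt is logged, permitted or not. All names are illustrative.

ALLOWED_ACTIONS = {
    "read_case": {"roles": {"triage_agent", "reporting_agent"}},
    "draft_reply": {"roles": {"triage_agent"}},
    "close_case": {"roles": set()},  # no agent may close a case; humans only
}

audit_log: list = []

def gate(agent_role: str, action: str) -> bool:
    """Return True if the action may proceed; always record the attempt."""
    allowed = agent_role in ALLOWED_ACTIONS.get(action, {"roles": set()})["roles"]
    audit_log.append({"role": agent_role, "action": action, "allowed": allowed})
    return allowed
```

The point is not this particular gate. It is that governance becomes something executable: a defined lane the agent can act inside, with an audit trail, rather than a blanket yes or no to "using AI".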

The useful version of reverse shadow IT looks like this:

Bad version → better version:

  • IT automates around the business → IT and operators map the real work together
  • Process diagrams are treated as truth → process evidence is tested against lived exceptions
  • AI executes hidden decisions → AI prepares work for accountable human review
  • Efficiency is the only measure → quality, judgement, risk, and trust are designed in from the start
  • Governance blocks experimentation → governance defines safe lanes for real operational trials

The right move is not to keep AI trapped in office productivity demos.

The right move is to make operational redesign explicit.

The signal

Shadow IT was a symptom of business needs outrunning sanctioned technology.

Reverse shadow IT will be a symptom of technology capability outrunning organisational process design.

That is the shift to watch.

Not whether employees can use AI to write better emails. Not whether a chatbot can summarise a PDF. Those are the visible demos.

The real signal is when internal technology teams, automation leads, and AI-capable operators begin deploying systems that do not just support the work, but quietly absorb pieces of it.

That can remove waste. It can also erase judgement.

The organisations that handle this well will not ask, “How do we stop people using AI?” or “How do we automate everything?”

They will ask:

Which parts of the work are process, which parts are judgement, and who is allowed to move the boundary?

Was this useful?

Quick signal helps Rob sharpen future briefings.
