10 workflows you should not automate yet
Automation is powerful when the process is ready. When it is not, AI and workflow tools just make the mess faster, louder, and harder to unwind.
The fastest way to waste money on automation is to automate a process that has not earned the right to be automated.
Automation does not fix a broken workflow. It preserves it in software.
This is where a lot of AI workflow projects go sideways. The demo works. The tool looks capable. The model can read, draft, classify, summarise, route, and trigger. Everyone can see the possible efficiency gain.
Then the build hits reality.
The inputs are inconsistent. The process owner is unclear. The exception path lives in one person’s head. The approval exists because of politics, not risk. The spreadsheet is not the source of truth; it is just the least-broken artefact everyone recognises.
Worse, AI can make the output look polished enough that people trust a broken workflow for longer.
So before you buy a tool, brief an automation consultant, or ask a model to run part of the process, look for these ten readiness smells.
1. Nobody agrees what the process actually is
If three people describe the same workflow three different ways, you do not have a process. You have a habit cluster.
Automating one person’s version will annoy the other two. They will route around it, maintain side spreadsheets, or keep doing the real work in email while the official system becomes theatre.
Fix first: get the current-state process into one shared map. Not the ideal process. The real one.
2. The trigger is vague
A good workflow has a clear start signal.
Bad triggers sound like:
- “when someone asks us”
- “when sales says it is ready”
- “when the customer seems serious”
- “when finance has everything”
- “when Jess forwards the email”
That may be workable for humans. It is terrible for automation. A vague trigger creates false starts, missed cases, duplicate work, and arguments about whether the process has begun.
Fix first: define the exact event that starts the process.
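One way to make the start signal concrete is to define the trigger as a structured event rather than a feeling. A minimal sketch in Python, where the event fields and the list of accepted sources are illustrative assumptions, not from any specific tool:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative: sources that count as a real start signal.
ACCEPTED_SOURCES = {"web_form", "api", "csv_import"}

@dataclass(frozen=True)
class ProcessStarted:
    """The process begins when this event exists. Not before."""
    request_id: str        # unique, so duplicates and false starts are detectable
    source: str            # e.g. "web_form", not "Jess forwarded the email"
    received_at: datetime

def is_valid_trigger(event: ProcessStarted) -> bool:
    # The trigger either meets the definition or the process has not started.
    return bool(event.request_id) and event.source in ACCEPTED_SOURCES

evt = ProcessStarted("REQ-1042", "web_form", datetime.now(timezone.utc))
assert is_valid_trigger(evt)
```

Once "when sales says it is ready" becomes an event with an ID, a source, and a timestamp, arguments about whether the process has begun mostly disappear.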
3. “Done” means different things to different people
If one team thinks the process is done when the customer is emailed, another thinks it is done when the CRM is updated, and finance thinks it is done when the invoice is reconciled, automation will close the loop in the wrong place.
This is common in handoff-heavy work. Each team optimises for its own local finish line.
Fix first: define the end state in operational terms. What record exists? Who has been notified? What is no longer pending?
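An operational end state can be written down as a checklist the system can evaluate. A sketch, assuming hypothetical record fields for the three finish lines mentioned above:

```python
def is_done(record: dict) -> bool:
    """'Done' defined operationally: every team's finish line is met
    and nothing is still pending. Field names are illustrative."""
    return (
        record.get("customer_notified") is True
        and record.get("crm_updated") is True
        and record.get("invoice_reconciled") is True
        and not record.get("pending_tasks")
    )
```

The value is not the code; it is that writing the function forces the teams to agree on one definition instead of three.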
4. The work depends on one person’s memory
Some workflows only function because one person knows the trick.
They know which customer always needs a call. They know which spreadsheet column is wrong. They know that the official approval path is ignored after 3pm on Fridays. They know which supplier names are duplicates. They know when a request looks normal but is actually suspicious.
That knowledge is valuable. It is also a single point of failure.
Automating around it usually produces a brittle system that works until the first real exception arrives.
Fix first: interview the person who knows the trick. Turn tacit judgement into explicit rules, examples, and exception notes where possible.
5. The data is scattered across email, spreadsheets, chat, and memory
This is the classic automation trap.
A team says, “We want this automated.” Then the build discovers the required data lives in five places:
- one field in the CRM
- three columns in a spreadsheet
- a note in an email thread
- a PDF attachment
- the team lead’s memory
AI can sometimes read across messy inputs, but that does not remove the need for a source of truth. If the system cannot reliably know what data to trust, it will either ask humans constantly or make confident mistakes.
Fix first: create structured intake or a defined source-of-truth record.
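A source-of-truth record can be as simple as an explicit merge with stated precedence, so no downstream step ever guesses which copy to trust. A sketch under assumed field names, with the CRM-wins rule purely illustrative:

```python
def build_source_of_truth(crm: dict, sheet: dict) -> dict:
    """Merge scattered inputs into one record with explicit precedence.
    Illustrative rule: the CRM wins; the spreadsheet fills gaps."""
    record = {}
    for field in ("customer_id", "amount", "status"):
        value = crm.get(field) if crm.get(field) is not None else sheet.get(field)
        if value is None:
            # Better to stop here than to make a confident mistake downstream.
            raise ValueError(f"No trusted source for '{field}'")
        record[field] = value
    return record
```

The email thread, the PDF, and the team lead's memory are deliberately absent: anything that matters has to be written into one of the trusted inputs first.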
6. Exceptions are more common than the standard case
If exceptions outnumber the standard case, do not pretend you are automating the process. You are automating the easy slice and leaving the expensive mess untouched.
That can still be worth doing, but only if everyone understands the scope.
The danger is selling automation as if it handles the whole workflow when it actually handles the clean minority of cases.
Fix first: separate standard cases from exceptions. Automate the standard path only after the exception rules are visible.
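Making the exception rules visible can be as direct as a routing function that sends anything non-standard to a human queue. A sketch where the thresholds and flags are illustrative assumptions:

```python
def route(case: dict) -> str:
    """Automate only the standard path; everything else goes to a person.
    The rules below are examples, not a real policy."""
    exception_reasons = []
    if case.get("amount", 0) > 10_000:
        exception_reasons.append("high value")
    if case.get("customer_flagged"):
        exception_reasons.append("flagged customer")
    if case.get("currency", "GBP") != "GBP":
        exception_reasons.append("non-standard currency")
    return "human_review" if exception_reasons else "automated_path"
```

Counting how many cases land in each branch over a week also answers the scoping question honestly: if most work routes to `human_review`, you are automating the easy slice, and everyone should know it.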
7. The approval step exists but rarely changes the outcome
Phantom approvals are everywhere.
Someone signs off because they always have. A manager clicks approve because the system requires it. A senior person is technically accountable but does not have enough context to make a real decision.
Automating around a phantom approval usually preserves a useless bottleneck. Automating through it can create political risk.
Fix first: decide whether the approval protects against a real risk. If it does, define the criteria. If it does not, remove or redesign it.
8. Errors are discovered too late
A process is not ready for automation if bad inputs travel too far before anyone notices.
Late-discovered errors are expensive because downstream work has already happened. AI can make this worse by laundering bad inputs into confident drafts, summaries, or recommendations.
The surface quality improves while the underlying error survives.
Fix first: move validation earlier. Check required fields, formats, permissions, and obvious contradictions before the work proceeds.
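Early validation can collect every problem at intake, before any downstream work happens, rather than failing one step at a time. A sketch with assumed field names and an assumed reference format:

```python
import re

def validate_at_intake(record: dict) -> list[str]:
    """Return every problem up front. Field names, the reference
    format, and the contradiction check are all illustrative."""
    errors = []
    if not record.get("customer_id"):
        errors.append("customer_id is required")
    if not re.fullmatch(r"[A-Z]{2}\d{6}", record.get("order_ref", "")):
        errors.append("order_ref must look like 'AB123456'")
    # An obvious contradiction: a refund cannot exceed the original charge.
    if record.get("refund", 0) > record.get("charged", 0):
        errors.append("refund exceeds amount charged")
    return errors
```

An empty list means the work may proceed; a non-empty list goes back to the requester immediately, while the error is still cheap to fix.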
9. Nobody owns the whole workflow
Automation needs an owner. Not a committee. Not “operations” in the abstract. A person or role that can answer:
- what counts as success?
- what exceptions matter?
- who approves changes?
- what happens when the automation fails?
- who reviews the output?
Without ownership, every improvement becomes a negotiation and every incident becomes archaeology.
Fix first: name the process owner before the build starts.
10. The team that runs the process has not been consulted
This is the fastest way to create official automation and unofficial shadow work at the same time.
The people closest to the process know where the diagram lies. They know which fields are ignored, which customers break the rules, which workarounds are load-bearing, and which steps only exist because another team once complained.
Ignore them and you will build something tidy, logical, and wrong.
Fix first: talk to the people who run the process. Not as user-acceptance theatre at the end. At the start.
The ten readiness smells
| Smell | Fix first |
|---|---|
| Nobody agrees what the process is | Map the real current-state workflow |
| The trigger is vague | Define the exact start event |
| “Done” means different things | Define the operational end state |
| The work depends on one person’s memory | Capture tacit rules, examples, and exceptions |
| Data is scattered | Create structured intake or a source-of-truth record |
| Exceptions dominate | Separate standard cases from exception paths |
| Approval rarely changes the outcome | Redesign or remove phantom approvals |
| Errors are discovered late | Move validation earlier |
| Nobody owns the workflow | Name the process owner |
| The team was not consulted | Talk to the people who run the process |
The decision rule
Do not ask, “Can this be automated?”
Almost anything can be automated badly.
Ask:
Will automation reduce the mess, or preserve it?
That is the real threshold.
If the answer is yes, build carefully.
If the answer is no, the first move is not AI. It is process repair.
Quick self-audit
Before you automate, run the Process Digitisation Readiness Scorecard. It is a 26-point audit covering process stability, failure points, data readiness, AI/automation fit, and risk controls.
If the score is low, that is not bad news. It is useful news. You just saved yourself from automating a mess.
If the score is high and the workflow is commercially important, Process Digitiser can take the evidence pack further: current-state map, failure-point diagnosis, future-state workflow, AI/automation candidates, and a practical fix plan.
The boring work before automation is where most of the value hides.