Essay · 30/04/2026

The office AI question is mostly a governance question

For employees, the safest way to use AI at work is not to sneak tools into workflows. It is to understand the policy, classify the data, keep an audit trail and use AI where it improves judgment without becoming an unapproved system of record.

Most workplace AI conversations start in the wrong place.

People ask: which tool is best? ChatGPT, Claude, Copilot, Gemini, some internal chatbot with a cheery name and a procurement smell?

That matters, but it is not the first question. The first question is: what are you allowed to put into it, and what can it do with that information?

For office workers, AI use is becoming less a productivity hack and more a governance problem. The risky version is not “I used AI to improve a sentence”. The risky version is feeding confidential files, client data, personal information, internal strategy, meeting transcripts or credentials into an external system because the paste box was convenient.

That is how a useful assistant becomes an unapproved shadow system.

The sane order of operations

Before connecting AI to work email, Teams, Google Drive, SharePoint or document repositories, read the AI policy. Not the inspirational launch blog. The actual policy.

Look for five things (a toy encoding follows the list):

  1. Approved tools — which AI systems are allowed?
  2. Data classes — what counts as public, internal, confidential, restricted, personal or client data?
  3. External services — can information leave the organisation’s tenant?
  4. Human review — what outputs require checking before use?
  5. Record keeping — do AI-assisted decisions, drafts or summaries need disclosure or audit trails?
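
None of this needs software, but a toy encoding makes the logic concrete. Everything below is invented for illustration (the tool names, the allowed classes); the real values live in your organisation's actual policy.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"
    PERSONAL = "personal"
    CLIENT = "client"

# Hypothetical mapping: which data classes each approved tool may receive.
APPROVED_TOOLS = {
    "internal-chatbot": {DataClass.PUBLIC, DataClass.INTERNAL},
    "vendor-copilot": {DataClass.PUBLIC},
}

def tool_allowed(tool: str, data_class: DataClass) -> bool:
    """True only if the tool is approved for this class; unknown tools fail closed."""
    return data_class in APPROVED_TOOLS.get(tool, set())

assert tool_allowed("internal-chatbot", DataClass.INTERNAL)
assert not tool_allowed("vendor-copilot", DataClass.CLIENT)
```

The useful property is the fail-closed default: a tool that is not on the list gets nothing, which is exactly how a vague policy should be read.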

If the policy is vague, treat that as a risk signal, not a loophole. “It did not explicitly forbid it” is a poor defence when the file had client names in it.

Helpful versus dangerous use

There is a clean line between using AI as a thinking tool and turning it into an unsanctioned processor of workplace data.

Usually safer:

  • rewriting a non-sensitive paragraph;
  • drafting a structure for a report;
  • creating checklists;
  • explaining formulas;
  • role-playing a meeting using anonymised facts;
  • summarising public policy or vendor docs;
  • turning your own rough notes into clearer prose after redaction.

Riskier:

  • uploading raw client files;
  • pasting private employee information;
  • connecting an external bot to Teams or email;
  • asking AI to make decisions about people;
  • feeding it commercial strategy;
  • using it as the only record of a meeting;
  • sending AI-generated advice without human review.

The difference is not whether the model is smart. The difference is whether the organisation can explain where the data went, who had access, what was generated, and who approved the result.

Redaction is not a vibe

“Remove sensitive stuff” sounds easy until the document contains names, account numbers, project codes, locations, email addresses, transaction details, screenshots, internal URLs, filenames and metadata.

A useful redaction routine is explicit; a scripted sketch follows the list:

  • replace names with roles;
  • replace companies with generic labels;
  • remove addresses, phone numbers and emails;
  • remove account IDs, ticket IDs and policy numbers;
  • remove internal URLs and file paths;
  • strip screenshots unless they are necessary;
  • summarise numbers into ranges where exact figures are not needed;
  • keep a private mapping only if the work genuinely requires it.
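
As a sketch, here is what that routine looks like when scripted, assuming plain-text input. The patterns and the name-to-role mapping are illustrative placeholders, not production-grade redaction, and a human still reviews the output.

```python
import re

# Hypothetical mapping of real names to generic labels.
ROLE_MAP = {"Alice Jones": "[CLIENT]", "Acme Ltd": "[COMPANY]"}

def redact(text: str) -> str:
    for name, label in ROLE_MAP.items():                # names -> roles/labels
        text = text.replace(name, label)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[PHONE]", text)
    text = re.sub(r"\b(?:ACC|TKT|POL)-\d+\b", "[REF]", text)  # account/ticket/policy IDs
    text = re.sub(r"https?://\S+", "[URL]", text)             # internal URLs
    return text

print(redact("Alice Jones (alice@acme.com, ACC-99321) raised TKT-1024."))
# -> "[CLIENT] ([EMAIL], [REF]) raised [REF]."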

Then ask the AI to help with the pattern, not the secrets.

Bad prompt: “Summarise this client dispute.”

Better prompt: “Using this anonymised scenario, help me structure a neutral briefing note. Do not invent facts. Mark assumptions.”

The audit trail is your friend

People sometimes dislike audit trails because they feel bureaucratic. In AI work, they are protective.

A simple record can say (one possible shape is sketched after the list):

  • what material was shared;
  • whether it was redacted;
  • which tool was used;
  • what output was generated;
  • who reviewed it;
  • what final version was sent or saved.
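
One possible shape for that record, with invented field names and values; anything that writes a line like this to an approved store is enough.

```python
import datetime
import json

record = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "material": "client dispute notes, anonymised",
    "redacted": True,
    "tool": "internal-chatbot",
    "output": "briefing-note-draft-v2.docx",
    "reviewed_by": "team lead",
    "final_version": "briefing-note-final.docx, saved in the document system",
}

print(json.dumps(record, indent=2))
```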

That is enough to turn “I used some AI thing” into a defensible workflow.

It also catches model errors. AI systems are fluent enough to make weak reasoning look finished. A review step is not decoration. It is part of the control system.

The office assistant should not be a secret employee

The tempting future is an AI agent that reads your inbox, watches Teams, opens Drive, drafts replies, schedules meetings and nudges colleagues. Technically possible. Organisationally loaded.

An agent with direct access to office systems is not just a productivity tool. It is a delegated actor inside the organisation’s information environment. That means permissions, logging, retention, incident response, acceptable use, and probably management approval.

Until that exists, the safer pattern is narrower:

  • the human selects the material;
  • sensitive details are removed or kept inside approved tools;
  • the AI helps draft, analyse or structure;
  • the human reviews and decides;
  • final work stays in official systems.

Less magical, more defensible. Annoying. Also how grown-up infrastructure works.

The practical test

Before using AI on office work, ask (a toy gate follows the list):

  1. Would I be comfortable telling my manager exactly what I pasted?
  2. Does the policy allow this tool for this data class?
  3. Could this expose personal, client or confidential information?
  4. Is the output advice, a draft, or a decision?
  5. Is there a human review and a record?
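
As a toy version, the five questions reduce to a single gate: any "no", or any uncertainty, means stop. Question 4 is booleanised here as "the output is a draft, not a decision"; the field names are invented.

```python
def preflight(answers: dict[str, bool]) -> str:
    required = [
        "comfortable_telling_manager",
        "policy_allows_tool_for_data_class",
        "no_personal_client_or_confidential_exposure",
        "output_is_draft_not_decision",
        "human_review_and_record_exist",
    ]
    # Unanswered questions count as "no": the gate fails closed.
    if all(answers.get(q, False) for q in required):
        return "proceed"
    return "slow down"

print(preflight({"comfortable_telling_manager": True}))  # -> "slow down"
```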

If the answer is muddy, slow down.

AI can be genuinely useful in office work. It can make writing clearer, meetings less chaotic, processes easier to understand, and analysis faster. But the secure version starts with governance, not clever prompts.

The productivity win is real only if it does not create a compliance hangover later.

