Essay · 30/04/2026

Productivity for whom? The political split forming around AI

AI is becoming a political-economy fight disguised as a technology story. The real question is who captures the productivity surplus, who carries the risk, and who gets a say in deployment.

AI has escaped the product pages and entered politics.

The same technology is now being described as a productivity miracle, a labour threat, a sovereign capability race, a surveillance machine, a slop engine, and a public-service opportunity. Which version you hear often depends less on the model release itself and more on who is doing the talking.

That is what makes the current AI debate interesting. It is not simply that right-leaning or business media likes AI and left-leaning or public-interest media dislikes it. That is too neat, and mostly wrong.

The deeper story is that AI lets each political and economic camp make its existing worldview look newly urgent.

Business audiences see weak productivity, expensive labour, sluggish institutions, and global competition. So AI appears as efficiency, margin repair, innovation, and national competitiveness. Worker advocates see surveillance, precarity, deskilling, wage pressure, and power moving upward. So AI appears as a workplace control system. Governments want the upside without the backlash, so they speak in the uneasy dialect of “pro-innovation guardrails”.

The disagreement is not really whether AI is powerful. The disagreement is who captures the surplus, who carries the risk, and who gets a say when the demo becomes workplace policy.

The same technology, four political stories

AI coverage is increasingly a Rorschach test.

One person sees productivity. Another sees replacement. Another sees deepfakes. Another sees a sovereignty race. Another sees surveillance. Another sees an accessibility tool. They may all be seeing something real.

The useful split is not “pro-AI” versus “anti-AI”. It is the political story being attached to the technology:

Story | What AI represents | The political question
Capital productivity | Efficiency, automation, margin repair, scaled expertise | Who captures the gains?
Labour power | Job loss, deskilling, surveillance, automated discipline | Who has consent and recourse?
State capacity | Sovereign capability, public-service reform, safety regulation | Who sets and enforces the rules?
Culture and trust | Slop, deepfakes, copyright, education, authenticity | What happens when synthetic output floods public life?

This is why two outlets can cover the same AI development and sound like they live in different countries. They are not only reporting different facts. They are selecting different stakes.

Government wants innovation without panic

Governments are trying to hold two claims at once.

First: AI can lift productivity, improve services, and strengthen national competitiveness. Second: AI can create serious risks around discrimination, misinformation, labour disruption, privacy, safety, and accountability.

You can see the balancing act in the policy language. Australia’s AI Ethics Principles frame responsible AI around human, social and environmental wellbeing, fairness, privacy, reliability, transparency, contestability, and accountability. The Australian Government also consulted on mandatory guardrails for AI in high-risk settings, then later said it would not proceed with those proposals “at this time”, folding the feedback into a National AI Plan instead.

The same tension appears elsewhere. The EU’s AI Act is explicitly risk-based: some uses are prohibited, high-risk systems face obligations, and lower-risk systems are treated differently. The UK chose the opposite rhetorical emphasis in its pro-innovation AI regulation white paper, preferring regulator-led principles over a single sweeping AI law.

Different machinery, same political bind: move fast enough to capture the upside, but not so fast that voters conclude the state has handed the workplace, classroom, and public square to opaque systems.

Business media sees productivity because its audience has a productivity problem

Business media is not subtle about what it is listening for. Productivity, efficiency, competitiveness, labour leverage, and investment are its native language.

That does not make the framing fake. Australia really does have a productivity anxiety. Boards really are asking whether AI can reduce cost, accelerate professional work, and defend margins. Executives really are being told that waiting too long is its own risk.

The Australian Financial Review’s AI topic stream makes the pattern visible: headlines about whether AI can fix Australia’s productivity problem, whether the productivity payoff is arriving, whether companies are closing the AI productivity gap, and whether AI can produce efficiency dividends for Australian business. Even sceptical business coverage often asks a business question: if AI is so powerful, why is the productivity payoff not showing up yet?

That is a different starting point from “what happens to workers if this is deployed badly?” Business coverage can absolutely include transition pain and regulation — AFR has also covered unions, workplace AI rules, and worker concerns — but the gravity of the frame is economic performance.

The business question is: how do we turn capability into output?

Public-interest media sees harm because its audience has a power problem

Public broadcasters and left-liberal outlets tend to start from a different anxiety: what happens when powerful systems are deployed on people rather than with them?

ABC’s AI topic page recently put the contrast on the surface: AI deepfakes in schools, workplace AI regulation, AI-enabled traffic fines, withdrawn camera fines, cybersecurity risk, and wildlife management all sit under the same broad technology label. The recurring theme is not only job loss. It is institutional power, error at scale, surveillance, and accountability.

The Guardian’s AI coverage has a similar centre of gravity. Its AI topic page mixes model failures, deepfakes, copyright, labour disruption, state dependence, and surveillance. A recent Guardian column on Britain’s AI future warned against ending up “at the mercy of US tech giants”; another reported on Disneyland deploying facial recognition. Again, the anxiety is not simply “technology bad”. It is power without enough democratic control.

That framing can underplay genuine productivity upside. But it notices something business coverage can miss: the person being “made more efficient” is often not the person who decides how the system is introduced.

The public-interest question is: who is this being done to, and what happens if it fails?

Labour is not simply anti-AI

The lazy version of this debate says unions want to stop technology. The better reading is that labour wants bargaining power over how technology enters the workplace.

The ACTU has called for enforceable agreements on the use of AI. The UK Trades Union Congress has published an Artificial Intelligence (Regulation and Employment Rights) Bill. In the US, the AFL-CIO and Microsoft announced a tech-labour partnership on AI and the future of the workforce built around worker education, feedback from labour leaders and workers, and joint policy work.

That is not an anti-AI position. It is a demand for voice, notice, consent, accountability, and a share of the gains.

The labour version of responsible AI tends to ask for notice before systems are deployed, consultation or bargaining over how they are used, human review of automated decisions, limits on surveillance and monitoring, and a negotiated share of any gains.

That is the class politics of AI in plain clothes. Not “robots are coming”. More like: management has a new lever, and workers want rules before it is pulled.

Productivity gains are not neutral

The strongest pro-AI argument is real: if AI lets people do higher-quality work faster, it can raise living standards, improve services, reduce drudge work, and unlock things small teams could not previously attempt.

The strongest anti-AI argument is also real: productivity gains do not distribute themselves.

The ILO’s analysis of generative AI and jobs argued that the greater near-term effect is likely to be augmentation rather than full automation, while still warning that job quality, intensity, and inequality matter. The IMF put the tension more bluntly: AI could boost productivity and incomes, but could also replace jobs and deepen inequality, with almost 40 percent of global employment exposed to AI and about 60 percent of jobs in advanced economies potentially affected.

That is the whole fight.

If AI gains become shorter hours, better services, safer work, lower prices, higher wages, or more capable small businesses, the politics looks one way. If they become headcount reduction, tighter monitoring, lower bargaining power, synthetic content floods, and executive margin expansion, the politics looks very different.

The machine is not politically neutral once it enters an organisation. It changes leverage.

Media bias is real, but audience selection is stronger

Yes, media outlets have political leanings. But the AI divide is not just editorial ideology. It is audience incentive.

A financial newspaper serves readers who need to know whether AI changes investment, productivity, competition, margins, and regulation. A public broadcaster serves citizens who need to know whether AI changes schools, fines, scams, public services, privacy, and work. A labour outlet serves members who need to know whether AI changes discipline, surveillance, staffing, and bargaining. A tech vendor serves buyers who need to believe transformation is both inevitable and manageable.

Each frame is partial. Each can be useful. Each can become propaganda when it pretends to be the whole story.

The mistake is reading only the frame that flatters your existing politics.

The practical test

Whenever someone praises or attacks AI, ask three questions:

  1. Productivity for whom? Who gets the surplus if this works?
  2. Risk for whom? Who absorbs the errors, displacement, surveillance, or degraded service if it fails?
  3. Decision by whom? Who had a real say before the system was deployed?

Those questions cut through most of the noise.

AI is not just another productivity tool. It is a redistribution machine: of tasks, authority, attention, risk, and money. That is why the politics around it is getting sharper.

The fight is not between people who understand AI and people who fear it. The fight is over what kind of bargain society makes once AI becomes ordinary infrastructure.

That bargain is only just starting to be written.
