AI is moving from labs into everyday life and services. The risks are real and uneven, but stopping research is neither likely nor practical. Instead, focus on real-world rules: procurement guidelines for governments, support for displaced workers, and policies to keep markets fair. Across OECD economies, about 27–28% of jobs are in occupations at the highest risk of automation, and many more are set to change materially, so the policy choice is how to cushion losses and spread gains, not whether to halt inquiry (OECD 2025).

The real worry

Controversial innovation can be weaponised, and adoption is outpacing governance. For technology firms, the worry is that political and epistemic asymmetries mean "negative" results get over-interpreted in policy circles while "positive" results fade (Kitcher 1997). But the real worry is that AI will widen gaps for those already at risk: recent studies suggest it could exacerbate income inequality by displacing entry-level jobs while augmenting high-skill ones, with AI-exposed sectors seeing roughly fourfold productivity growth alongside a 56% wage premium for skilled workers (PwC 2025).

“AI could also affect income and wealth inequality within countries. We may see polarization within income brackets, with workers who can harness AI seeing an increase in their productivity and wages—and those who cannot falling behind.” — Kristalina Georgieva, Managing Director, International Monetary Fund (IMF)

The more realistic framing in 2025 is not whether research proceeds—it will—but how societies handle the knowledge. The distributional outcome depends on the "middle step", or translation layer: procurement rules for AI, workplace standards, competition law, and fair enforcement of rights.

Democratic mediation works when we invest in it

Evidence from other domains shows that contentious research does not automatically translate into harm. Over decades, research into learning differences helped to normalise workplace accommodations and expand legal protections for people with disabilities; the aggregate welfare of the affected group improved even as prejudice persisted, with recent analyses confirming stronger enforcement and inclusive hiring practices (Cordero 2005; Römhild et al. 2023).

The lesson is institutional, not moralistic. When rights are clear, due process is credible, and anti-discrimination law is enforced, how knowledge is used shifts. The same research can justify exclusion or trigger support.

We should design for the latter.

Distributional design beats blanket prohibitions

AI substitutes for some tasks and complements others, though we have yet to see the upper limit of the disruption amid talk of AGI (a topic for another day). Higher-skill roles are often augmented; routine roles face higher substitution risk (WEF 2025). A purely utilitarian calculus might tolerate deep losses for a minority in exchange for aggregate gains. Contemporary democracies, however, tend to prefer a Rawlsian stance: raise the floor for the worst-off while allowing innovation to proceed (Rawls 2009).

Policy, not the lab result, sets the distribution. Where severance, retraining, placement services, wage insurance, and competition policy are active, innovation’s gains diffuse and losses are cushioned. Where they are absent, concentrated firms capture most of the upside and the transition is harsher.

Governing the "middle step" (what to do)

  • Design for inclusion upstream. Bake accessibility, bias testing, and clear performance thresholds into public and large-buyer procurement. Require algorithmic impact assessments before sensitive deployments.

  • Condition public funding. Where research plausibly displaces workers or affects access to essential services, require a downstream support plan as part of the grant or contract terms.

  • Targeted cushions, time-bound. Use wage insurance, portable benefits, and retraining vouchers for at-risk occupations, with sunset clauses and published uptake data to keep programmes focused.

  • Antitrust and openness. Curb excessive concentration and switching costs so innovation does not become winner-takes-all. Support open standards and interoperability to accelerate diffusion.

  • Measurement and transparency. Track impacts by income, region, and demographic group; publish results to trigger corrective action when outcomes skew. Watch Gini coefficients alongside deployment metrics (a minimal computation is sketched after this list).
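To make the last point concrete, here is a minimal sketch of what "watching Gini coefficients alongside deployment metrics" could look like. It is illustrative only: the formula is the standard mean-absolute-difference definition, but the income figures and the 0.02 trigger threshold are hypothetical assumptions, not values from any cited study.

```python
# Minimal sketch: computing a Gini coefficient over a list of incomes,
# to be tracked alongside AI deployment metrics. All data hypothetical.

def gini(incomes: list[float]) -> float:
    """Gini coefficient via the mean absolute difference formula:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(incomes)
    if n == 0:
        raise ValueError("need at least one income")
    mean = sum(incomes) / n
    if mean == 0:
        return 0.0
    total_abs_diff = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return total_abs_diff / (2 * n * n * mean)

# Hypothetical regional incomes before and after an AI rollout.
before = [28_000, 31_000, 35_000, 42_000, 58_000, 90_000]
after  = [27_000, 30_000, 36_000, 45_000, 70_000, 130_000]

change = gini(after) - gini(before)
print(f"Gini before: {gini(before):.3f}, after: {gini(after):.3f}")
if change > 0.02:  # hypothetical trigger threshold, set by policy
    print("Outcomes are skewing; corrective action warranted.")
```

The design choice worth noting is the explicit, published threshold: corrective action triggers on data, not on discretion.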

A fair counter-view—and the reply

Sceptics argue that government institutions can't keep up with technology; that capture and weak enforcement turn protections on paper into empty promises; that redistribution cannot keep pace with displacement; and that misaligned systems pose tail risks (Bostrom 2017).

"The challenge presented by the prospect of superintelligence, and how we might best respond is quite possibly the most important and most daunting challenge humanity has ever faced." — Nick Bostrom, "Superintelligence: Paths, Dangers, Strategies"

These points deserve weight.

Two replies follow. First, capacity is a choice. Investment in regulator capacity, audit infrastructure, and faster enforcement yields measurable improvements; these are budgetary and governance decisions, not laws of nature. Second, trajectories are not flat. "Over the long run, rights and safety have thickened across many democracies, albeit unevenly" (Pinker 2011). The prudent path is to push capacity faster than adoption in high-stakes domains, while maintaining open inquiry to keep learning where risks actually lie.

Bottom line

The right debate today is how to steer deployment so that people are protected and gains are shared. Keep the innovation open. Govern the translation layer with clear standards, credible enforcement, and smart redistribution.

What to watch

  • Labour market signals: placement rates and wage growth in routine roles vs AI deployment milestones.

  • Regulatory capacity: audit backlogs, enforcement timelines, and transparency of algorithmic assessments.

  • Market structure: changes in concentration and switching costs in AI-enabled value chains (one standard concentration proxy is sketched below).
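As a concrete illustration of tracking concentration, here is a minimal sketch using the Herfindahl–Hirschman Index (HHI), the sum of squared market shares. The market shares below are hypothetical, and the 1,800 cut-off follows thresholds US antitrust agencies have used for "highly concentrated" markets; both are assumptions for illustration, not findings of this piece.

```python
# Minimal sketch: Herfindahl-Hirschman Index (HHI) as a concentration
# proxy for an AI-enabled market. Shares below are hypothetical.

def hhi(shares_pct: list[float]) -> float:
    """HHI = sum of squared market shares (shares in percent, 0-100).
    Ranges from near 0 (atomistic market) to 10,000 (monopoly)."""
    total = sum(shares_pct)
    if not 99.0 <= total <= 101.0:
        raise ValueError(f"shares should sum to ~100, got {total}")
    return sum(s * s for s in shares_pct)

# Hypothetical market shares for an AI model-hosting market.
shares = [38.0, 27.0, 15.0, 10.0, 6.0, 4.0]
index = hhi(shares)
print(f"HHI: {index:.0f}")
# US agency guidelines have treated HHI above ~1,800 as highly concentrated.
print("highly concentrated" if index > 1800 else "moderately/unconcentrated")
```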
