Research Briefing  ·  AI & The Future of Work

The AI Layoff Trap: Why Every Firm Races Toward the Cliff

A landmark new study shows that companies can see AI-driven layoffs will destroy their own customer base, yet they can’t stop anyway. Here’s what that means for your career.

Imagine every CEO in your industry sitting in the same room. They can all see the same data. They all understand that if they lay off enough workers, consumer spending collapses and everyone’s revenue tanks, including their own. They know the cliff is ahead. And they drive toward it anyway.

That’s not a thought experiment. That’s the central finding of a peer-reviewed paper published in March 2026 by economists at the University of Pennsylvania and Boston University. The study, “The AI Layoff Trap,” builds a formal economic model to answer one of the most pressing questions of our time: if companies can see that mass AI-driven displacement is collectively self-destructive, why aren’t they stopping?

The answer is both intellectually elegant and deeply unsettling. And if you’re a working professional, it changes how you need to think about your career right now.


The Trap, Explained

Here’s the core mechanic. When a firm replaces workers with AI, it captures the full cost savings from that automation. But the displaced workers stop spending. They were customers too. That demand destruction doesn’t fall entirely on the firm that automated. Under competitive pricing, it spreads across every firm in the market.

So each company bears only a fraction of the damage it causes, roughly 1/N, where N is the number of competitors. The rest falls on rivals. This creates what economists call a demand externality: a cost that is real and visible to everyone, but never fully accounted for by any individual decision-maker.

“Each firm reaps the full savings of replacing its own workers yet bears only a sliver of the demand it destroys; the rest lands on rivals. No firm can afford to be the one that holds back.”

The result is a dominant-strategy equilibrium: game-theory shorthand for a situation where the “rational” move is the same regardless of what anyone else does. Every firm automates. Every firm loses. And no amount of foresight, goodwill, or industry communication changes the outcome. This isn’t a coordination failure. It’s a true structural trap.
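The incentive structure above can be sketched as a tiny two-firm game. The numbers here (savings S, demand destruction D) are illustrative assumptions, not values from the paper; what matters is that automating pays privately whenever S exceeds D/N, even though everyone loses when S is less than D.

```python
# Illustrative 2-firm automation game (toy numbers, not the paper's model).
# Each firm captures savings S by automating, but destroys D of market
# demand, split evenly across both firms (the 1/N externality with N = 2).

S = 10.0   # cost savings captured entirely by the automating firm
D = 16.0   # demand destroyed by one firm's layoffs, shared across the market
N = 2      # number of firms in the market

def payoff(i_automates: bool, j_automates: bool) -> float:
    """Profit change for firm i, given both firms' choices."""
    gain = S if i_automates else 0.0
    demand_hit = (D / N) * (int(i_automates) + int(j_automates))
    return gain - demand_hit

# Automating is a dominant strategy: it pays regardless of the rival's move...
assert payoff(True, False) > payoff(False, False)   #  2 >  0
assert payoff(True, True) > payoff(False, True)     # -6 > -8
# ...yet the all-automate equilibrium leaves both firms worse off.
assert payoff(True, True) < payoff(False, False)    # -6 <  0
```

The same prisoner’s-dilemma shape holds for any S between D/N and D, which is why no amount of foresight changes the equilibrium.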

80% · of U.S. workers hold jobs with tasks susceptible to LLM automation
100K+ · tech workers laid off in 2025, AI cited in over half the cases
2x · competitive markets automate at twice the cooperatively efficient rate


Three Things That Make It Worse

1 · More competition accelerates the problem

Counterintuitively, industries with more competitors suffer the widest gap between what firms actually do and what would be collectively optimal. A monopolist has an incentive to restrain automation; it fully internalizes the demand it destroys. In fragmented markets, each firm’s share of the damage is so small it barely registers. The researchers identify customer support, back-office operations, and software services as the highest-risk sectors: all highly competitive, all deploying capable AI fast.
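The fragmentation effect reduces to one inequality. Using the same illustrative savings S and demand destruction D as before (toy assumptions, not the paper’s calibration), a firm privately bears only D/N of the damage, so the condition for automating gets easier to satisfy as N grows:

```python
# Toy illustration: a firm internalizes only D/N of the demand it destroys,
# so the private "automate" condition S > D/N weakens as markets fragment.
S = 10.0   # savings captured by the automating firm (illustrative)
D = 16.0   # total demand its layoffs destroy, spread across N firms

def automates(n_firms: int) -> bool:
    """Does automating pay privately, with n_firms sharing the damage?"""
    return S > D / n_firms

assert not automates(1)                        # a monopolist internalizes all of D and restrains
assert all(automates(n) for n in (2, 4, 16))   # fragmented markets: everyone automates
```

With these numbers a monopolist holds back (10 < 16), while any market with two or more firms tips into over-automation.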

2 · Better AI makes the gap wider, not smaller

The paper identifies what it calls a “Red Queen effect.” When AI becomes more productive, each firm perceives a competitive advantage from automating faster than rivals. But at the market equilibrium, all firms expand equally, and the gains cancel out. What doesn’t cancel is the additional demand destruction. More capable AI amplifies the distortion rather than resolving it.

3 · The damage falls on everyone, including owners

Over-automation is not simply a redistribution of value from workers to shareholders. The paper proves it is a deadweight loss: genuine economic destruction that leaves both workers and firm owners worse off. CEOs laying off thousands are not profiting at workers’ expense. They are all jumping off the same cliff together, just at different speeds.


The Policy Scorecard: What Doesn’t Work

The researchers systematically evaluated every major policy response currently in the public debate. The results are sobering.

Policy · What it does · Fixes it?
Universal Basic Income · Raises the living-standards floor but adds a constant to demand; doesn’t change any firm’s per-task automation math · No
Capital income taxes · Scale profits uniformly; cancel out of the optimization entirely · No
Coasian bargaining · Voluntary agreements fail because automation is a dominant strategy; firms defect regardless of what others promise · No
Worker equity / profit-sharing · Narrows the gap but can’t close it; would require sharing more than 100% of profits when consumer spending is below 1 · Partially
Upskilling & retraining · Helps only if displaced workers land in higher-paying roles; replacing lost income isn’t enough, new earnings must exceed it · Partially
Pigouvian automation tax · A per-task levy equal to the demand destruction imposed on rivals; the only instrument that directly corrects the externality · Yes

The bottom line

The only mechanism that actually works is a Pigouvian automation tax, set equal to the demand loss each firm inflicts on its competitors. If that revenue funds workforce retraining, it can become self-limiting over time: better reabsorption of workers shrinks the externality, which shrinks the required tax, and the correction phases out. Don’t hold your breath waiting for Congress to implement it.
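The tax’s logic is simple arithmetic. Under the same toy assumptions as before (illustrative S, D, and N, not the paper’s model), a levy equal to the damage a firm pushes onto rivals, D·(N−1)/N, makes the firm face the full social cost D of automating:

```python
# Sketch of the Pigouvian correction (toy numbers, not the paper's calibration).
S = 10.0   # private savings from automating
D = 16.0   # total demand the layoffs destroy
N = 4      # number of firms in the market

private_cost = D / N            # the slice of damage the firm bears itself
tax = D * (N - 1) / N           # the damage it pushes onto rivals

assert private_cost + tax == D  # taxed, the firm internalizes the full loss

# Untaxed, automation pays; taxed, it does not (since S < D here):
assert S - private_cost > 0                # 10 - 4  =  6
assert S - (private_cost + tax) < 0        # 10 - 16 = -6
```

When automating is genuinely efficient (S greater than D), the taxed calculation still comes out positive, which is why this instrument corrects the distortion rather than banning automation outright.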


What This Means for You, Personally

Here is the gap that matters most: between what policy should do and what it will do, there is a window measured in years. Possibly a decade. During that window, the structural trap described in this paper operates at full force. Firms in fragmented industries with capable AI will automate at twice the collectively rational rate, and there is no systemic brake.

The research tells us precisely where the risk is highest. You are most exposed if you work in:

Customer support, operations, or middle management in competitive industries
Back-office roles in financial services, healthcare administration, or logistics
Software development teams where agentic AI enables one engineer to do the work of five
Any role in a fragmented sector deploying capable AI rapidly; the wedge is widest there

There is also a nuance buried in the paper that most coverage will miss. Ordinary retraining, the kind that lands you in a comparable job at a similar wage, barely moves the economic needle. What actually shrinks the externality is reabsorption into higher-paying roles. The income replacement rate has to exceed 100%, meaning you come out earning more than before. That distinction is the entire difference between a career strategy that works and one that merely delays the problem.

“The goal isn’t to survive displacement. It’s to land on the other side of it in a stronger position than you started.”


The Careeroria Approach: Building What Policy Won’t

The “AI Layoff Trap” paper frames the problem with precision: this is a market failure that no individual firm can prevent and no voluntary agreement can fix. The authors call for a Pigouvian tax. That’s the right answer for the economy. It is not the answer for you, today, in your career.

The answer for you is to stop being the kind of worker the automation trap targets, and become the kind who benefits from it.

Careeroria’s PivotUp™ program is built around exactly the transition the research describes as effective. Not “keep your job.” Not “learn some AI tools.” The objective is to position you in roles where AI works under your direction: roles requiring human judgment, contextual authority, accountability, and trust, where your income doesn’t just recover but exceeds where you started.

Three things drive that outcome:

AI fluency at the strategic level: understanding what AI can and can’t do, where its judgment fails, and how to make decisions it can’t make for you
Career capital in automation-adjacent roles: the people who design, deploy, audit, and govern AI systems are not in the automation trap; they are the ones setting it
A personal transition roadmap: not generic advice, but a structured path from where you are to where the research says you need to be

Find out where you stand before the trap closes

The AI Threat Score quiz takes 4 minutes and gives you a precise read on your displacement risk across industry fragmentation, role exposure, and income-replacement likelihood. It’s the starting point for every PivotUp™ participant.

Take the AI Threat Score Quiz →
Learn about PivotUp™


The Larger Stakes

The “AI Layoff Trap” paper is careful to note that its model is conservative. A single sector, a single period, symmetric firms. In the real economy, layoffs in one sector reduce spending across every sector. Platform ecosystems amplify the cascade. AI investments are largely irreversible. And there’s no force in the model that pulls firms toward labor-augmenting AI rather than labor-replacing AI; if anything, the competitive pressure accelerates the latter.

The researchers close with a sentence worth sitting with: “The results suggest that policy should address not only the aftermath of AI labor displacement but also the competitive incentives that drive it.”

They’re right. And while the policy apparatus works through that question, slowly, imperfectly, with significant lag, millions of professionals are navigating this transition in real time. The research is clear that waiting is not a strategy. The trap is structural. The timeline is now.

The question isn’t whether your industry will be affected. It’s whether you’ll be on the right side of the line when it is.

Source: Brett Hemenway Falk & Gerry Tsoukalas, “The AI Layoff Trap,” March 21, 2026. University of Pennsylvania / Boston University. This post summarizes and interprets the paper’s findings for a general professional audience. All model results and quotations are drawn directly from the published paper.