How AI Agents Could Destroy The Economy

AI Agents Could Trigger Economic Collapse, Report Warns

Could AI agents really destroy the economy? A new scenario from Citrini Research suggests agentic AI could trigger mass unemployment and a steep market decline within two years. The report outlines a chilling feedback loop: as AI capabilities improve, companies cut workers, spending drops, and firms double down on automation. While not a prediction, the analysis raises urgent questions about how quickly businesses might adopt autonomous AI systems—and what happens when they do.

Credit: Alexander Spatari / Getty Images

What the Citrini Research Scenario Actually Predicts

The Citrini scenario imagines a report dated two years from now, painting a stark economic picture. Unemployment has doubled. Stock market value has fallen by more than a third. White-collar layoffs have accelerated as AI agents take over tasks once handled by contractors and employees. The core idea isn't about rogue AI or sci-fi disasters. Instead, it focuses on the gradual, rational adoption of agentic AI by companies seeking efficiency. As outside contractors get replaced by cheaper in-house AI systems, entire business models built on B2B transactions face disruption. The scenario doesn't assume malicious intent—just relentless optimization.
This isn't a fringe theory. The analysis walks through a plausible chain of corporate decisions, each logical in isolation but potentially catastrophic in aggregate. Companies adopt AI agents to reduce costs and boost output. Early movers gain market share. Competitors follow to avoid falling behind. The result? A rapid, economy-wide shift in how work gets done—and who gets paid to do it. The report stresses this is a stress test, not a forecast. But its internal consistency makes it hard to dismiss outright.

How AI Agents Could Disrupt White-Collar Work

AI agents differ from traditional chatbots or tools. They can plan, execute, and iterate on complex tasks with minimal human oversight. Think of an agent that handles procurement, negotiates with suppliers, or manages customer onboarding—all autonomously. In the Citrini scenario, these capabilities scale rapidly across industries. Companies realize they can reduce headcount by deploying agents for roles in sales, support, finance, and operations. The initial savings look attractive. But when many firms act simultaneously, the broader economy feels the strain. Displaced workers spend less. Consumer demand softens. Revenue pressures mount.
The disruption targets knowledge work first—roles involving analysis, coordination, and routine decision-making. These are precisely the jobs that have anchored middle-class stability in developed economies. If AI agents can perform these functions faster and cheaper, the incentive to automate is powerful. The scenario doesn't claim every job vanishes overnight. Instead, it models a steady erosion of demand for human labor in specific sectors, creating ripple effects across housing, retail, and services.

The Self-Reinforcing Feedback Loop Explained

The report describes a self-reinforcing cycle with no natural brake. AI capabilities improve, so companies need fewer workers. White-collar layoffs increase, so displaced workers spend less. Lower spending creates margin pressure, pushing firms to invest more in AI to cut costs. That investment further improves AI capabilities, restarting the loop. The system becomes a "long daisy chain of correlated bets on white-collar productivity growth." If productivity gains outpace new job creation or consumer demand, the economy could contract rather than expand. This isn't about technology failing—it's about success creating unintended consequences.
Economists call this a paradox of automation: efficiency at the micro level can create instability at the macro level. The Citrini scenario applies this principle to the age of agentic AI. Unlike previous automation waves that targeted manual tasks, today's AI agents excel at cognitive work. That shifts the risk profile. The feedback loop isn't theoretical—it mirrors patterns seen in past technological transitions, but accelerated by software's near-zero marginal cost of replication.
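The loop described above can be sketched as a toy simulation. Everything here is an illustrative assumption rather than a figure from the Citrini report: the 2% baseline layoff rate, the 10% quarterly capability gain, and the `simulate` function itself are made up to show how the cycle compounds, not to model the real economy.

```python
# Toy model of the self-reinforcing loop: better AI -> layoffs ->
# weaker demand -> more AI investment -> better AI.
# All parameter values are illustrative assumptions.

def simulate(quarters=8, employment=100.0, ai_capability=1.0):
    """Return (employment index, AI capability) per simulated quarter."""
    history = []
    for _ in range(quarters):
        # Better AI lets firms cut more workers each quarter.
        layoff_rate = 0.02 * ai_capability
        employment *= (1 - layoff_rate)
        # Margin pressure from weaker demand pushes further AI
        # investment, improving capability and restarting the loop.
        ai_capability *= 1.10
        history.append((round(employment, 1), round(ai_capability, 2)))
    return history

for quarter, (jobs, cap) in enumerate(simulate(), start=1):
    print(f"Q{quarter}: employment index {jobs}, AI capability {cap}")
```

With these toy numbers, employment falls every quarter while capability compounds, and nothing in the loop ever pushes back in the other direction. That absence of a counter-force, not any single parameter, is the point the scenario makes.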

Why This Bear Case Differs From Typical AI Fears

Most AI doomsday scenarios focus on misalignment: superintelligent systems acting against human interests. The Citrini bear case is subtler and, some argue, more plausible. It doesn't require AI to become sentient or hostile. Instead, it examines how rational business decisions, made at scale, could destabilize markets. The scenario also extends beyond the "Death of SaaS" idea. It implicates any model relying on optimizing transactions between companies. If AI agents handle procurement, logistics, or marketing more cheaply than external vendors, entire sectors could shrink rapidly. The threat isn't rebellion—it's replacement.
This perspective shifts the conversation from existential risk to economic resilience. It asks: What if AI works exactly as intended? What if companies adopt it widely because it delivers real value? The answer may not be dystopian—but it could demand significant adaptation. Policymakers, business leaders, and workers all need to consider how to distribute the gains from AI-driven productivity without triggering a demand crisis.

Are Businesses Ready to Hand Decisions to AI Agents?

A key assumption in the scenario is that companies will trust AI agents with significant operational decisions. Today, many leaders remain cautious. Handing off purchasing, hiring, or strategy to autonomous systems feels risky. Yet the Citrini analysis notes that many of these decisions are already outsourced to third-party contractors. If an AI agent can perform the same function at lower cost and higher speed, the incentive to switch grows. Early adopters may gain a competitive edge, pressuring rivals to follow. The question isn't just technical readiness—it's about organizational courage and regulatory guardrails.
Trust is built incrementally. Companies may start by using AI agents for low-stakes tasks, then expand their scope as reliability improves. The scenario assumes this progression happens faster than anticipated. It also assumes limited coordination among firms to manage systemic risk. In reality, industry groups, governments, and standards bodies could slow or shape adoption. But in a hyper-competitive market, the pressure to automate may outweigh caution.

What Experts and Skeptics Are Saying

The report has sparked intense debate online. Some economists argue that past technological shifts created more jobs than they destroyed, and AI could follow the same path. Others worry the pace of change this time is unprecedented. Even Citrini frames the analysis as a scenario, not a forecast, meant to stress-test assumptions. Skeptics point out that human oversight, consumer preferences, and policy interventions could slow or redirect the trajectory. Still, the scenario's strength lies in its internal logic. It's hard to pinpoint exactly where the chain of reasoning breaks—which is why it resonates with founders, investors, and policymakers.
The conversation isn't just academic. Venture capitalists are already funding agentic AI startups. Enterprises are piloting autonomous workflows. The window to shape norms, safeguards, and transition strategies is narrow. Ignoring the scenario doesn't make it less relevant. Engaging with it—critically and constructively—helps organizations prepare for multiple futures.

How Companies Can Prepare for an Agentic AI Future

Whether or not the Citrini scenario unfolds, agentic AI is advancing quickly. Businesses can take proactive steps to navigate uncertainty. First, audit workflows to identify where AI agents could augment—not just replace—human work. Second, invest in reskilling programs to help teams adapt to new tools. Third, monitor macroeconomic indicators that could signal broader shifts in demand or labor markets. Finally, engage with policymakers on frameworks that balance innovation with worker protection. Preparation isn't about fearing AI agents. It's about ensuring that productivity gains translate into shared prosperity, not systemic fragility.
Leaders should also diversify their talent strategy. As routine cognitive tasks become automated, uniquely human skills—creativity, empathy, complex judgment—gain value. Companies that cultivate these capabilities will be better positioned to thrive alongside AI agents. Transparency matters too. Being open about how AI systems make decisions builds trust with employees, customers, and regulators.
The Citrini Research scenario serves as a cautionary tale, not a crystal ball. AI agents hold immense promise for efficiency, creativity, and problem-solving. But their rapid integration into the economy demands careful consideration of second-order effects. By focusing on human-centered design, equitable transition strategies, and adaptive governance, businesses and leaders can help steer this powerful technology toward a more resilient future. The goal isn't to stop progress—it's to shape it wisely.
