Every week, someone tells me my job is going to be done by an AI by 2027. The number of people willing to make this prediction is, in my experience, in inverse proportion to their actual familiarity with my job. The people closest to the work are the ones most uncertain. The people furthest from it are the ones writing 1500-word LinkedIn posts about the death of the knowledge worker.
I don't have a strong opinion on the timeline of AGI or whether GPT-9 is going to write better code than I can. What I do have an opinion on — having watched this same drumbeat play out for cloud computing, mobile, devops, and "everything is software now" — is the shape of the prediction itself. The argument goes: AI automates X, therefore people who do X lose their jobs, therefore millions are unemployed, therefore the economy collapses, therefore neo-feudalism, therefore subscribe to my Substack.
It's not a wrong prediction so much as a prediction that's missing a step. The step it's missing has a name: Jevons' Paradox.
Quick history. William Stanley Jevons was a Victorian economist. In 1865, he wrote a book called The Coal Question, in which he made a then-counterintuitive argument: improving the efficiency of steam engines wasn't going to reduce British coal consumption. It was going to increase it.
The reasoning is straightforward once you see it. When you make a resource cheaper to use, you don't just do the same things with less resource. You do more things, because more things become economically viable. The cost per unit drops, the total volume of activity rises, and the net effect is that demand for the input goes up, not down. In Jevons' specific case, more efficient steam engines made it economical to put steam engines in places they hadn't existed before — factories, mines, ships, locomotives — and total British coal consumption went up sharply.
The pattern holds embarrassingly often. Fuel-efficient cars led to more total miles driven. Cheaper international flights led to more total air travel. LED lights are vastly more efficient than incandescents and yet the total electricity we use for lighting hasn't dropped, because we now light up things we never bothered to light up before. The cost of digital storage has dropped by something like four orders of magnitude over my career, and we're using more of it than ever — by orders of magnitude. The phrase "should we keep this log?" has been quietly retired in most shops. We keep all the logs. Why wouldn't we. It's basically free.
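If it helps to see the mechanism with toy numbers, here's a small Python sketch. Everything in it is invented (the task values, the costs, the shape of the backlog); it's just the elasticity argument with a for loop.

```python
# Toy model of the rebound effect. Every number here is invented; the point
# is the shape of the relationship, not the figures.
#
# Assume a heavy-tailed backlog: a few very valuable tasks and an enormous
# pile of marginal ones. A task gets done only when its value covers its
# cost, and each task done burns resource in proportion to that cost.

N_TASKS = 1_000_000

def value(rank):
    # value of the rank-th most valuable task; falls off slowly (heavy tail)
    return 1000 / (rank ** 0.5)

def resource_used(cost_per_task):
    tasks_done = sum(1 for r in range(1, N_TASKS + 1)
                     if value(r) >= cost_per_task)
    return tasks_done, tasks_done * cost_per_task

for cost in (100, 10, 1):
    done, burned = resource_used(cost)
    print(f"cost {cost:>3}: {done:,} tasks done, {burned:,.0f} resource burned")

# Output (roughly):
#   cost 100: 100 tasks done, 10,000 resource burned
#   cost  10: 10,000 tasks done, 100,000 resource burned
#   cost   1: 1,000,000 tasks done, 1,000,000 resource burned
#
# Making each task 10x cheaper doesn't cut resource use to a tenth. It makes
# vastly more tasks worth doing, and total resource use goes up -- as long as
# the backlog of "not quite worth it yet" work is deep enough.
```

The only assumption doing real work there is the deep pile of tasks that are almost, but not quite, worth doing at today's prices. Thin that pile out and the effect disappears, which is exactly the caveat I'll get to near the end.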
I want you to hold that pattern in your head while we talk about AI.
The argument that AI is going to obliterate white-collar work assumes that the demand for white-collar work is fixed. The math goes: there's a pile of analysis to be done, a pile of code to be written, a pile of memos to be drafted, and AI is going to do all of it for nothing, leaving no work for the humans who used to do it.
This is, in my opinion, very confidently wrong about a thing that's actually pretty knowable: whether those piles are fixed in size.
The amount of analysis a typical organization wants to have done is not a fixed number. It is, almost without exception, far larger than the amount of analysis the organization can currently afford to do. Every CFO has a list of questions they'd like to investigate that don't make the cut because there isn't an analyst available. Every product manager has a backlog of customer behavior they'd like to study. Every marketing team has segments they'd like to personalize for and don't, because the cost of personalizing copy at that level of granularity outweighs the return.
Same for code. I have spent twenty-something years in IT and I have never — not once — worked at an organization that was getting through its actual backlog. The backlog is always larger than the team. The wish list is always larger than the backlog. The list of things that would obviously be valuable to build but aren't even on the wish list, because nobody bothered to write them down, is larger still. If AI makes writing software cheaper by 10x, the rational organizational response is not to fire 90% of the developers. It is to actually start building the stuff that's been sitting on the "we'd love to but we can't afford it" pile for the last decade. And then the next pile under that. And then the next.
This is, by the way, more or less what happened the last time the cost of producing software dropped dramatically. In 1970, writing software meant punch cards and assembly language and a tiny priestly class of people who could do it. The number of professional programmers in the United States has gone up by something like two orders of magnitude since then, despite — because of — every successive wave of "this will let one programmer do the work of ten." Each wave made software cheaper. Each wave made software more pervasive. Each wave made the demand for software people enormously larger. There are more programmers now than there have ever been. And almost every organization on earth is still understaffed for software.
I see no obvious reason this story doesn't replay one more time.
OK, but back up. So far this is all theory. What does the actual evidence say?
It says, at least so far, that the companies most loudly betting on AI to replace human workers have spent the last two years quietly reversing the decision.
Klarna is the loudest example. In 2022–2024, the Swedish fintech replaced around 700 customer service staff with an AI assistant built in partnership with OpenAI. CEO Sebastian Siemiatkowski did the full thought-leader victory lap — productivity up, cost down, headcount cut by roughly a quarter. Then, in 2025, he very quietly admitted the company had "gone too far," that quality had declined, that customers wanted humans, and that Klarna was now rehiring. The CEO of a company that publicly bet on AI replacing humans publicly conceded that AI did not, in fact, replace humans. Not because the bots were broken — they handled the simple stuff fine — but because the demand for human-quality service did not vanish along with the cost of producing the simple stuff.
IBM did a similar dance. In 2023, the company eliminated around 8,000 jobs, most of them in HR, and replaced them with an internal AI tool called AskHR. Two years later, IBM is rehiring. And here's the line worth quoting carefully — IBM CEO Arvind Krishna, talking to the Wall Street Journal in 2025, said: "while we have done a huge amount of work inside IBM on leveraging AI and automation on certain enterprise workflows, our total employment has actually gone up, because what it does is it gives you more investment to put into other areas."
That's Jevons' Paradox stated almost verbatim by the CEO of a 114-year-old IT company. Total employment is up. Not despite automation. Because of it. The savings from automating one set of tasks freed capital to invest in other areas, and those other areas created more jobs than the automation eliminated.
That's two anecdotes, not a settled empirical fact. But the macro projections are pointing in the same direction. The World Economic Forum's Future of Jobs Report 2025 — based on survey responses from over a thousand companies employing roughly 14 million people across 22 industries — projects 170 million new roles created globally by 2030 against 92 million displaced. Net positive: 78 million jobs. Forecasts are forecasts, and the WEF is hardly an oracle of macroeconomic prediction. But "AI nets us 78 million new jobs over five years" is a plausible forecast supported by actual survey data, made by an institution with no particular reason to talk down the AI revolution. It is meaningfully different from "AI is going to eat all the jobs," which is the prediction usually made by people whose business model is selling you a newsletter about the apocalypse.
Two honest caveats, because I want to be useful, not just contrarian.
First: "the aggregate story is fine" is cold comfort if you are a specific human being whose specific job is being restructured around you. Jevons' Paradox is a story about total economic activity. It is not a guarantee about your particular role. Plenty of 19th-century artisans were genuinely displaced by the factories the cheap steam engines enabled, even though those factories created more total jobs than they destroyed. Aggregate gains and individual costs can coexist, and pretending otherwise is the sort of thing that gets you booed off stage at the union hall.
What I think is more likely than mass unemployment is mass rearrangement. The skill mix changes faster than people can re-skill, the job titles shift, and the people who are flexible about what their job actually does on a Tuesday afternoon will be okay. The people who define their value by the specific tasks they do today, rather than by the problems they solve, are going to have a harder time. This has been true of every technology transition I've watched, and I've watched a few.
Second: Jevons' Paradox is a tendency, not a law. It assumes elastic demand. Some categories of white-collar work are genuinely inelastic. Tax filings have a ceiling. Some kinds of compliance work have a ceiling. If your job is mostly "produce the legally required volume of paperwork," and AI can produce that paperwork cheaper than you can, the org pockets the savings and there are fewer paperwork jobs. Don't be that.
But the categories of knowledge work where demand is genuinely elastic — analysis, writing, code, design, research, customer engagement, training material, internal documentation — are where most of us actually work. And in those categories, the historical pattern is that cheaper means more, not less.
So when someone tells me with great confidence that AI is going to eat all the white-collar jobs, I think two things. The first is that they are probably correct that AI is going to eat the specific tasks a lot of white-collar workers are doing today. The second is that they are wildly overconfident about what happens next, because they are skipping the part where the cheaper-task economy unlocks a much larger pile of currently-unaffordable work.
The pile of work an organization wants to have done is functionally infinite. It always has been. AI doesn't change that. AI just makes more of it economical.
We're going to be fine. Probably. The specific humans who are not fine are still going to need help, and that's a real problem worth taking seriously rather than waving off. But the species-level "white-collar work is over" prediction is the kind of thing you say when you've never tried to actually staff a backlog.
— Chris