You have to hand it to the folks at Citrini Research. Their story of an AI-induced global economic meltdown had a big impact. The stock market almost crashed. It was a dystopian tale of how, by 2028, AI will take over a growing proportion of white-collar jobs. Unemployment will rise, people will spend less money, and they will no longer be able to afford their mortgages. The financial system will collapse.
As brilliant as the story is, I think it is wrong. But it is wrong in a good, useful way. What it got spot-on is the observation that AI will change our lives like nothing before. The Luddites of this world, including members of my own profession, have persistently underestimated the impact of AI. The financial media, especially here in Europe, has been associating the term “AI” with the word “bubble”. Jason Furman, Barack Obama’s chief economic advisor, and now a Harvard economics professor, tweeted recently: “A few months ago the discourse was about whether or not AI was a bubble. Now it’s shifted to whether or not we’re about to enter a dramatically new era.”
This may have been the discourse in his profession, and the echo-chamber media like the Financial Times, The Economist, and Bluesky where all the economists hang out. But anybody who has seriously engaged with AI and its underlying technology has known for a long time that AI is not a bubble. Some market valuations may have been too optimistic, but the thing itself is rock-solid.
Take a look at the technology behind modern AI: the neural network. What it does is pattern recognition on an unimaginable scale. In 2012, a team led by the British computer scientist Geoffrey Hinton tested a neural network called AlexNet by tasking it with sorting a collection of over one million pictures into 1,000 categories of everyday objects. Each picture came with a label, like “cup” or “spoon”. When the machine was fed the first picture, it made a random guess. Every time it got something wrong, it adjusted the strength of the connections inside the network. Do this millions of times, and you will eventually get there.
This is not how humans learn. But pattern recognition is also an important component of human intelligence. We think of chess grandmasters as very intelligent. A lot of their skill is memorization of a large number of moves combined with superior pattern recognition. One of the things AI has taught us is to be much more careful in the way we think about human intelligence.
AI can even do things we would have previously classified as logical reasoning. But logical reasoning, too, includes, at least to some extent, memorization and pattern recognition. The mathematician Terence Tao recently reported that AI managed to find a mistake in one of his proofs. It also managed to come up with the proof of a conjecture in discrete mathematics. But AI is still a pattern recognition technology at its heart.
There is stuff that AI struggles with. It cannot do “context” in the way humans do. Tao made the point that if a human asks for tea, someone puts a cup in front of them and pours the tea into the cup. An AI robot, on the other hand, might take your demand literally and just pour the tea over your head. It is also not creative. It cannot be a Leonardo da Vinci, Vincent van Gogh or Pablo Picasso. But it can do cliché art, just as humans can.
The question now becomes: will AI replace the bottom 10% of artists or 90% of them? This is where our story intersects with economics. If the answer is at the higher end of that range, most of us should probably start growing vegetables. I think the answer is more likely to be in the lower or the middle range.
In some sectors, AI has already started to replace humans. If you are a freelance advertising copywriter, chances are that you are struggling to find commissions. When the AI company Anthropic introduced legal plugins for its Claude Cowork platform, it took over the work of junior lawyers and legal assistants. It made the senior lawyers more productive, but it did not get rid of lawyers.
A fateful prediction came from Hinton himself, the godfather of the neural network. In 2016, he predicted that within five years, AI would replace all radiologists. Back then, AI had become very good at detecting tumors on X-rays or other medical images. Hinton must have thought that looking at pictures is what radiologists do all day.
His prediction was completely wrong. One of the best datasets we have is from the Royal College of Radiologists in the UK. In 2015, the UK had 3,318 consultant radiologists. By 2024, that number had jumped to 4,923.
In a way, Hinton fell for the classic “lump of labor” fallacy — the idea that the supply of jobs in an economy is fixed. In our case, the fallacy is the idea that if AI takes half of our jobs, the rest of us have to share the remaining half. The Citrini report fell for the same fixed-pie fallacy.
When radio was introduced in the UK in the Twenties, people predicted the end of newspapers. When TV came in the Thirties, they predicted the end of radio and of newspapers. That did not happen because the new media expanded the overall market.
But today, a hundred years later, the newspaper industry finds itself in an existential crisis. This is not because the pie has stopped growing, but because newer business models replaced old ones. A lot more people work in the wider media industry today than they ever did before.
So, instead of asking how AI will affect the economy, we should ask three separate questions. What effect will it have on existing white-collar employment? Over what period? And will it eventually create more jobs than it destroys?
Citrini is probably right about the impact on existing white-collar jobs, but wrong about the timing and the creation of new ones. Machine learning is a long process. “I’m so old I remember when fully autonomous cars were going to be ready for mass deployment by late 2017,” tweeted the AI software developer and computer scientist François Chollet a few years ago.
The robotaxi industry has made progress, but autonomous driving is stuck at level four, one step short of the top of the scale. By September last year, Waymo robotaxis had clocked 127 million miles collecting data without a driver in the car. When they reach level five, or full automation, they will be ready for a bigger rollout. They will still lack human intuition. They will not be perfect, but they will be better, on average, than humans because they don’t get angry, they don’t speed, they don’t send text messages while driving or fall asleep. When we get to that final level, Waymo will have been collecting data for some 20 years.
The American futurologist Roy Amara said in the Seventies that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”. That applies to AI too. Just as electric light killed off the jobs of the lamplighters who walked the streets each evening to ignite the streetlights, AI will destroy entire sectors. But its main economic effect is that it will make us more productive over time.
An economic meltdown is much less worrying than a political one. The de-industrialization of the early part of this century drove workers to support politicians of the Right. Now imagine if the same were to happen to sections of the white-collar middle classes.
Countries with flexible labor markets, such as the US, China, and to a lesser extent the UK, will be more affected by this, both positively and negatively, than countries with high degrees of employment protection such as France and Germany. The latter will probably avoid some of the disruptions in the short run, but the trade-off is that they will not benefit from the productivity gains that come from AI later.
All over the Western world we have seen a trend away from centrist politics to the extremes of the Right and Left. The middle class are the median voters. If they lose their jobs, so will their constituency MP or congressperson.
But the economy as a whole is going to be fine if only because it takes time for AI to replicate human skills. My more optimistic scenario rests on the fact that even the Luddites have finally caught on to this. We all have time to prepare.