Tech accelerationists insist that now that artificial intelligence has been invented, the genie can’t be put back in the bottle. In doing so, they betray a naive faith in progress. There have been plenty of times in history when a technological genie was released, only to be securely corked again.
Consider the electric taxi. In 1897, a fleet of battery-powered “Hummingbirds” — named for their distinctive hum and yellow-and-black design — roamed the streets of London. These horseless carriages had swappable batteries, which could be replaced in minutes via hydraulic lifts at central stations. Their inventor, Walter Bersey, sounded a little like Elon Musk when he declared: “There is no apparent limit to the hopes and expectations of the electric artisan.” Yet within two years, the Hummingbirds were scrapped due to their high costs, frequent breakdowns and accidents, and the fact that they were slower than horse-drawn alternatives. Electric cabs didn’t return to London until 2019 — after a 120-year pause.
Many other vaunted technologies have met similar fates. Consider reproductive cloning. In 1996, the birth of Dolly the sheep proved cloning possible, and subsequent experiments edged toward human replication. Yet in 2005, after illnesses in cloned animals came to light and fears were raised about the unethical use of human embryos, the United Nations adopted a declaration against all forms of human cloning. Though non-binding, the declaration passed with 84 member states voting in favour of prohibiting reproductive cloning, embryo research for non-therapeutic purposes, and the exploitative trade in human reproductive materials.
Eugenics, too, promised to shape the future. From the 1880s to the Second World War, eugenics societies flourished across industrialised nations, championed by elite progressives including the English writer H.G. Wells and the Irish playwright George Bernard Shaw. Wells envisioned a world where “fine and efficient” humans thrived while “base and servile types” were eradicated via “mercy killings”. The Nazi atrocities of the Second World War discredited the pseudoscience, and the movement collapsed.
Other banished technologies include dangerous drugs such as thalidomide, marketed as a “wonder drug” for morning sickness until it caused severe birth defects. Asbestos, once a “magic mineral”, was banned after links to lung cancer emerged. Leaded gasoline, despite boosting engine performance, was phased out globally by 2021 due to its neurotoxicity. Even GM crops, touted as farming’s future a decade ago, now face restrictions over biodiversity and health and safety concerns. Currently, 26 countries, including France, Germany, Russia, China, and India, have partially or fully banned genetically modified organisms, while another 60 have placed significant restrictions on them.
Supersonic passenger travel also faltered. Concorde, a marvel of Sixties engineering, halved flight times from Europe to the USA, but was retired in 2003, three years after a fatal crash. Similarly, Apollo-era dreams of lunar bases dissolved when funding dried up; today, even the National Space Foundation admits that NASA no longer has the specific technologies, tooling and manufacturing capabilities behind the Sixties Apollo programme, making it impossible to rebuild such hardware as the Saturn V’s F-1 engines.
Although we are encouraged to imagine technology developing along a smooth upward curve, the graph of tech adoption in reality shows many collapses and broken promises, as technologies have proved either too expensive or too dangerous to keep producing.
When it comes to AI, companies have embraced the accelerationist myth of unstoppable, exponential growth that can’t possibly be contained. The story goes like this: large language models (LLMs) will lead to human-level artificial general intelligence (AGI), creating a direct pathway through exponential self-learning to “the singularity” and AI superintelligence — the all-powerful digital deity. This epic story has inspired hundreds of billions in investment since the early 2000s.
But as recent tests and studies of LLMs have shown, this technology is not the pathway to AGI that we were promised. The hopes that “scaling”, “emergent properties” and “reasoning” would lead to AGI have all failed. The path to AGI lies elsewhere, if anywhere. As Yann LeCun, chief AI scientist at Meta, has said: “There’s absolutely no way that autoregressive LLMs… will reach human intelligence. It’s just not going to happen.”
He has been joined by an increasing number of technologists and public figures including Gary Marcus, Sandeep Reddy, and Thomas Dietterich, as well as François Chollet, who claimed that “LLMs are a dead end to AGI”. Last year’s calls of “AGI is near” — a line championed by Anthropic’s Dario Amodei, Google’s Eric Schmidt, and Elon Musk — now echo as an embarrassing cliché, trotted out to lessening effect each time the goalposts for “near” are moved into the future. The recent excitement over Google DeepMind winning a gold medal in the International Mathematical Olympiad (IMO), and OpenAI claiming its model had done the same, changes nothing: these were specialised systems trained for this specific task. Their success does not imply broad reasoning ability, let alone stand as an example of “general intelligence”.
“Last year’s calls of ‘AGI is near’ now echo as an embarrassing cliché.”
Large language model technology has also failed to generate a return on investment for many companies. OpenAI CEO Sam Altman may boast of 20 million paying ChatGPT subscribers, but their revenue is minuscule compared to the venture capital that has been sunk into AI companies: an estimated $600 billion. A recent study from Boston Consulting Group found that 75% of companies that have invested in AI have yet to see any return. Like the makers of other technologies that failed in the open marketplace, AI companies are now turning to the military for financing.
So much for AI’s unstoppable march forward. If we dig into the history of AI development, we also find that the technology has not followed an unbroken exponential growth curve. Since the Seventies, there have in fact been two “AI winters”: periods during which funding and research froze.
The first winter lasted roughly from 1974 to 1980, when inflated expectations of narrow AI gave way to profound disillusionment as promised breakthroughs failed to materialise. The US Defense Advanced Research Projects Agency (DARPA) pulled its funding for five years, after discovering that “many researchers were caught up in a web of increasing exaggeration” while “what they delivered stopped considerably short”. Meanwhile, in Britain, the Lighthill Report found that government-funded experiments in AI and robotics “fail to reach their more grandiose aims”, and called for a halt to all government subsidy. By 1974, investment in AI projects in the UK and US had dwindled.
The second AI winter lasted roughly from the late Eighties to the mid-Nineties, after “expert systems”, early neural networks and machine-learning models underperformed for want of sufficient data and computing power. Once again there was a failure to deliver on big promises. Once again DARPA and private investors pulled their funding in a rush of capital flight. In all, some 300 AI companies shut down.
Oddly enough, you don’t hear much about these two devastating AI winters from tech companies these days, probably because they fear that any whisper of AI’s true history of failure could burst the large language model bubble and usher in a third AI winter. But all the same elements are visible today: grandiose claims, hype and market frenzy, followed by a failure to deliver.
In 1970, AI pioneer Marvin Minsky told Life Magazine: “In three to eight years we will have a machine with the general intelligence of an average human being.” Fifty-five years later, his promise has yet to come true — though the likes of Altman and Musk are hawking the same dream. Who is to say this third attempt at AI isn’t another dead end?
This leads us to Silicon Valley’s fatal flaw. It fuses “fake it till you make it” tech optimism with the Californian belief in manifestation: the idea that enough belief and investment can will anything into existence. This fusion reveals a naive faith in historical fatedness, as if progress were preordained and capital could rewrite reality.
Now that the “LLMs are the pathway to AGI” narrative is collapsing, the only factors stopping the AI bubble from bursting are the Silicon Valley mechanisms of hype and denial, the sunk-cost fallacy of investors, and the fact that chatbots and generative AI have become widely diffused across the Internet. Today, there may be 115 million daily users of AI.
Far from accelerating us into a perfect future, this flawed technology is instead slowing us down. We are inundated with the sludge and slop of AI-generated material — failings that seem incurable within LLMs. Google and Bing now push AI-generated answers to the top of search results, with hallucination rates that only increase as models grow more powerful; social media is flooded with AI sludge and bot-farm-boosted ads; app stores and websites are swamped with fake reviews, while image banks are choked with generative slop. AI is even contaminating the news. As a result, early-adopter companies and employees are turning against AI, realising it is bad for productivity and that using it to replace human labour has been counterproductive. A study from MIT even suggests that LLM use may be eroding cognitive skills.
It’s clear the AI genie isn’t going to become the digital God we were promised, but can we put it back into the bottle? The clean-up will be enormous, given that AI output now lurks in millions of places, but it has already begun: companies that replaced humans with AI are re-hiring them, and others are employing “slop cleaners” to tidy up the mess AI has made.
Yet given that governments have naively bought into the AI ethos, the clean-up is going to fall to us citizens. We may not have political power, but we have history on our side. Against the myth of fated progress, the record shows that even the most hyped technologies can be stopped when they fail humanity and society demands they be contained, corked and put back on the shelf. If this were not true, we would all currently be inhabiting the Metaverse while wearing 3D headsets.