The internet is making us stupider. Or so my smartphone tells me, and who am I to argue? IQ scores are falling across the world; people are getting worse at basic maths and logic; university students lack sufficient attention spans to read whole books. I would have included more evidence but my editor cut it to keep you interested.
Correlation is not causation, but the timing of the descent is suggestive. It apparently started just as polite society decided it was a good idea to put a small interactive screen into each and every fist, from toddlers up. “Interactive” here is a bit of a misnomer. In fact, we abandoned freewheeling conversational back-and-forths for intermittent jabs at glassy surfaces, only vaguely in control of the deluge of content hitting our retinas. We also swapped staring into the middle distance and thinking hard for posting links that made it look like we were thinking hard. Still, it all felt gratifyingly cool and sci-fi at the time.
It’s not like some people didn’t see it coming. In 2011, Google’s then CEO, the frantically boosterish Eric Schmidt, liked to say, with some relish, that children had only two modes: “asleep or online”. Yet even he worried that “nobody’s getting what comes from the deep reading of a book”. Neurophysiologist Susan Greenfield predicted that constant clicking would leave the mind “more child-like, reactive and dependent on the behaviour and thoughts of others”. She also wrote a book on the subject, though few could concentrate for long enough to read it properly.
A more succinct and memeable take came from the late playwright Richard Foreman, describing people with “cathedral-like” personality structures who are now being eclipsed by the “pancake people”. The endangered type is a “highly educated and articulate personality — a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West”. The pancake people, in contrast, are “spread wide and thin”. Thanks to “the pressure of information overload and the technology of the ‘instantly available’”, we are facing “a world that seems to have lost the thick and multi-textured density of deeply evolved personality”.
I think Foreman was correct to worry; and now we are out of the frying pan and into the fire. But another train of thought, emerging around the same time, said there was nothing to fret about. What became known in philosophy as the “extended mind thesis” told us to enjoy the collapse into flatness. It urged us to think of the new smartphones and tablets and their data as literal extensions of our minds: integrated cognitive mechanisms on a par with frontal lobes, neurons, and grey matter.
“My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me,” wrote one of the thesis’s chief architects, David Chalmers, in the foreword to a book by its other chief architect, Andy Clark. The phone’s memory is part of your memory; its capacity to gather and sort information can legitimately be claimed as your mind’s, too. Or, as Clark argued in the same book, “the activity of brain-meat” is not the whole of human thought. “Cognition leaks out into body and world.” Big thinkers used to have their heads in the clouds; now the cloud is in their heads instead.
That’s all very well, you might think: but what if the cognition leaking out of your eardrums leaves you with a mostly empty skull? What if abilities that used to flourish in brain-meatspace are negatively affected by excessive dependence upon search engines? Writing his foreword in 2008, Chalmers didn’t seem concerned — “I am not worried by… the threat that the core role of the brain will be lost” — though that was before there was good evidence for our deterioration. But 17 years later, in an article published this month, Clark considers the possibility that, thanks to the internet and AI, “[w]isdom suffers, collective self-plagiarism looms, and human creativity becomes all but obsolete”. And he doubles down.
We can remain relaxed about this prospect, he says, as long as we remember we are “hybrid thinking systems defined (and constantly re-defined) across a rich mosaic of resources only some of which are housed in the biological brain”. We should therefore reframe the narrative. Emerging evidence of “brain-bound” deficits “need not be shrinkage and loss so much as the careful husbanding of our own on-board cognitive capital”. In other words: since our minds are already extended into surrounding technology, deficits between the ears don’t really matter much, as long as the technology keeps coming up with the goods. And in fact, judged by what Clark takes to be the point of thinking, outsourcing in this way is often quite efficient.
There are many sceptical questions you could ask about this, including about what happens when the national grid fails or a cyberattack hits. Presumably we continue to gaze slack-jawed at our former oracles, dully poking their inert screens from time to time, while our homes get plundered by digital detoxers and passing members of the Bruderhof. But the bigger issue is about what exactly Clark thinks minds do. The danger is that, in expanding the concept of a mind to fit in phones and tablets as part of its native furniture, academics with tech fetishes are consigning valuable aspects of conscious mental life to the bin.
Clark holds that the main point of thinking is to be good at prediction. The mind creates a mental model of the world as efficiently as possible, in order to move about, solve problems, and complete goals. The model updates only when information comes in from the environment suggesting it has got something wrong. Entities around you that reliably help achieve this core function can be counted as part of your mind too.
As you’d probably expect from a philosopher of cognitive science, this is all quite abstract, impersonal stuff. It makes little reference to personal-level, social aspects of human thought and personality: cleverness, funniness, creativity, playfulness, quirkiness, or having an astonishing memory, for instance. Yet these are qualities which we appreciate in others and often feel proud of in ourselves. They are central to social life: how we appraise people, what jobs and roles we assign to them, whether we warm to them or not, what we feel we can talk to them about. To use Foreman’s image, these capacities are part of the cathedral. When our dependence on technology reduces their presence, we lose ornate columns, flying buttresses, and quite a bit of height.
As the tide of internet dross gradually erodes our brain-bound stock of information and the habit of making new connections within it, we will be less able to make intellectual leaps between disparate topics; invent quick-witted jokes on the fly; lie in bed running through the lines of a memorised poem; stare at a crossword clue until light suddenly dawns; or create a new sonnet or song that ingeniously bends the rules of the game, and have others recognise what it is we have accomplished. The fact that AI could have searched for the poem, invented the song, or given us a cheat on the crossword clue is completely beside the point.
And if we carry on staring at screens all day, these qualitative losses to a rich life of the mind won’t be the only casualties. Perhaps it is true that, with the help of developing technology, humans are becoming better at solving problems and completing goals. But put like that, it is a truism, and an incredibly vague one. If we lose our critical faculties along the way, how are we supposed to tell which goals and problems are the right ones to pursue in future, given what else we also care about? It’s a comforting fantasy that humans might outsource their ultimate aims and objectives to the machines too.
In his fabulously entertaining, book-length ramble You Are Not A Gadget, early virtual reality guru Jaron Lanier argued: “Whenever a computer is imagined to be intelligent, what is really happening is that humans have abandoned aspects of the subject at hand in order to remove from consideration whatever the computer is blind to.” For Lanier, “every instance of intelligence in a machine is ambiguous” — did the machine really get cleverer, or did we make ourselves more stupid to fit with it? It seems to me that is exactly what tech lovers do when they rebrand great losses from the personal-level life of the mind as efficiency-seeking victories. We must try to put the machines back in their boxes, before we get squashed quite flat.