A chatbot and a caveman walk into a bar ...
- Christian Filli
- Jul 11
- 13 min read
Updated: Sep 13
Ask not whether the machine is making us dumber or smarter, but whether it will allow us to co-exist with it.

During his guest appearance on The Tonight Show last year, Jerry Seinfeld shared his take on the emerging era of AI:
“At the beginning of mankind, all we had was real intelligence, right? It didn't work. We were dumb … [and are] still dumb. Intelligence didn't work. So we kept thinking until we could create a fake version of it, so dumb people would seem smart. And then we thought, well, maybe that wasn't the smartest thing, 'cause what if the fake brain gets smarter than the real brain? We would look even dumber. So, if I got this right … we're smart enough to invent AI, dumb enough to need it, and so stupid we can't figure out if we did the right thing.”
So what should we make of the study conducted by MIT researchers and posted on June 10th to arXiv (Cornell University's preprint server), titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task? In it, the authors attempted to diagnose the effects of Large Language Model (LLM) use during essay composition on brain circuitry engagement, i.e. thinking, memory, learning and synthesis.
A Brainy Recap
Ever since my 23andMe report showed that I have some Neanderthal DNA in me, I have been fascinated, amused and intrigued by the fact that our stocky, broad-shouldered genetic cousins had bigger brains than modern humans. How come we outsmarted them in the race for supremacy among hominids?
Well, as the saying goes, size isn’t everything. Though Neanderthals had larger average brain volumes - about 1,400–1,750 cm³, compared to our 1,200–1,400 cm³ - what mostly set us apart from them is that we are wired differently. Neanderthal brains were optimized for navigating the extremely rugged environment of Europe and parts of Asia during the Pleistocene epoch - glacial landscapes, woolly mammoths, and all that good stuff. So yes, on average, their brains were bigger and more elongated in shape, with a proportionally larger occipital region, which is believed to have equipped them with better visual and overall sensorimotor processing. But Homo sapiens developed a more elaborate prefrontal cortex, which is associated with problem solving, strategic planning, social cognition, language and symbolic thought. In other words, with a different structure came different software, making us more cognitively advanced.
These developments are intrinsically linked to modern humans’ capacity to build larger and more interconnected social networks, allowing for division of labor, long-distance trade, and cultural transmission (ideas, stories, tools, etc.). Such versatility enabled our species to adapt to a wider variety of environments - from deserts to tundras - through innovation and cooperation, and later build a global civilization.
By contrast, Neanderthals lived in smaller, more isolated groups, thus limiting the spread of innovation. Although they did show some symbolic behavior, it appears to have been far less frequent and less complex than that of modern humans, as evidenced by the wealth of cave art and musical instruments left behind by our ancestors, along with more sophisticated tools, burial rituals and spiritual symbols.
In sum, Neanderthals had bigger brains, but Homo sapiens developed smarter brains. The edge came not from brute brain mass, but from better networking, both inside the brain and among individuals and groups. And as far as my own DNA goes … well, interbreeding between species may have given some of us a dose of useful Neanderthal genes (e.g. a stronger immune system), but not necessarily better judgement.
Cognitive Debt?
Neanderthals were generally more robust and muscular than modern humans, thus requiring more energy for physical maintenance and movement. And their larger brain, despite lagging somewhat in imagination, was energetically expensive to maintain. Sapiens, on the other hand, may have struck a better deal, since our smaller yet more complex brains are metabolically more efficient. Still, that is a relative matter. At only about 2% of body weight, the human brain consumes approximately 20-25% of the body's total energy budget. And no, despite the die-hard myth, we are not utilizing only 10% of it.
With this in mind, the term “cognitive debt” that MIT researchers have alluded to seems quite poignant, if not outright provocative. But what do they mean by “debt”? The study, which had a sample of 54 participants between the ages of 18 and 39, draws attention to “cognitive activity scaled down in relation to external tool use” and “cognitive atrophy through excessive reliance on AI-driven solutions”. They go on to suggest that “there is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores”.
This has gotten some people rattled on LinkedIn and elsewhere. Even Joe Rogan brought it up as a concern in his podcast interview with computer scientist Roman Yampolskiy. But before anyone panics, we should keep a few things in perspective.
First of all, how did the researchers arrive at those conclusions? The methodology included the use of electroencephalography (EEG) to record participants' brain activity and assess neural activation and connectivity patterns during essay writing. For the output analysis, they used Natural Language Processing (NLP), collected questionnaires, and held interviews with the participants, who were split into three groups - an LLM group, a Search Engine group, and a Brain-only group - with each participant using the designated tool (or no tool, in the latter case) to write an essay. LLM group participants were then asked to write without any tools (referred to as LLM-to-Brain), and Brain-only group participants were asked to use an LLM (Brain-to-LLM).
That’s it? Yes. For all its headline-grabbing statements about “AI tools”, the study focused its analysis entirely on LLMs. This leaves out many other generative AI tools, as well as entire categories and applications of AI, ranging from voice/face recognition to CRM platforms to virtual assistants to self-driving cars and, perhaps most importantly, AI agents. So let’s just say the scope of the study was quite narrow in relation to the vast array of AI tools already in use across business, education, science, industry, warfare, public services, and civilian day-to-day life.
Second, the study is currently an arXiv preprint, not yet peer-reviewed or replicated. That means we should read the findings as suggestive, not conclusive (arxiv.org). As mentioned earlier, the sample was not just extremely small; the participants were all recruited from just a handful of universities. Plus, the study captures short-term effects only, and therefore lacks longitudinal data. And lastly, it measures the effects of LLM usage within the specific task of essay writing only, not across a variety of cognitive domains or activities (e.g. research, problem-solving, etc.).
Bottom line: The findings are not remotely solid enough to declare irreversible declines in human intelligence. We need replication across tasks, populations, and time frames.
Moreover, not to rain on anyone’s parade, it seems rather obvious to me that the brain would make less of an effort when outsourcing a big chunk of the work to an LLM than it would by running a Google search or by using no external assistance at all. Of course the act of flipping a switch demands less of our neurons than fiddling with candles and oil lamps to produce the same amount of light! I mean, just imagine trying to browse through Amazon’s infinite catalog without the help of its algorithm. So I’m unclear as to why we needed a 200+ page report to explain this.
What might unfold?
Speculation abounds with regard to what the future will look like as AI continues to infiltrate every aspect of our lives. While we may find it irresistible to be entertained by utopian and dystopian views alike, I don’t find either one to be very helpful. One thing on which experts across the spectrum seem to agree is that “there is no putting this genie back in the bottle”, hence we might all benefit from gaining a foundational understanding of the technology itself, and equipping ourselves as best as we can for the journey ahead.
Last year, Slack’s Workforce Lab ran a survey of 5,000 full-time desk workers, which helped uncover five distinct AI personas that employers need to understand as they implement AI and bring staff up to speed on the technology. Unsurprisingly, they found that the majority of people still have mixed emotions and perspectives regarding the adoption of AI.
What is becoming increasingly evident, however, is that no one who wishes to stay in the game can afford to watch these developments from the sidelines. Given the inevitability of AI, it behooves us all to think as clearly and as deeply as we can about what it all means for our culture, our education, our well-being, our labor market, our economic model, our moral framework, our system of governance, and our personal sense of fulfillment.
As far as I can tell, the conjectures around LLMs lowering our cognitive performance are poorly conceived. The widespread use of calculators has not kept us from figuring out how to land a rover on Mars. As awesome as reading ocean currents and constellations may be (shoutout to the Polynesian Voyaging Society!), navigating with a GPS has not made us any less interested in reaching the farthest corners of our globe. And I am confident that not many graphic designers today feel particularly nostalgic about cave painting.
This is not to say that ride-sharing won’t atrophy our driving abilities, or perhaps even diminish our appetite for getting behind the wheel. But skill erosion does not necessarily equate to a drop in cognitive capacity. If it did, we would not have made the progress we’ve seen over the past several thousand years. When used properly and wisely, tools have a solid track record of helping redirect - and enhance - our faculties.
Have modern humans become irreversibly - even dangerously - dependent on the power grid and the internet? One could certainly argue that. Most of us would feel quite helpless in the aftermath of a major natural disaster or cyber attack. But even if the Bushmen of the Kalahari were to gain a competitive edge in a post-apocalyptic world, that doesn’t mean a hunter-gatherer lifestyle produces a happier, sexier, or more meaningful human experience overall.
A better question, I believe, is how we want to make use of our mental capacity, and to what end. TikTok is probably a good analogy - it can be a fantastic learning tool or, as Jesse Singal would put it, a place “where nuance goes to die”. While it is true that some technologies can be more problematic - and more counterproductive - than others, part of the responsibility inevitably falls upon the user. The fact that we no longer need to memorize a bunch of phone numbers because we have all of our contacts on speed dial hardly produces any negative outcomes (except in some very extreme circumstances). On the other hand, staring down at our devices for many hours a day, even while crossing the street or walking the dog, is indeed messing up our physical and mental health, as well as shrinking our attention span and distorting our worldview.
As with short-video platforms, it is plausible that over-reliance on LLMs as mere crutches leads to shallow knowledge retention and a surface-level understanding of concepts that require greater levels of engagement to be fully internalized. At this stage of their technological development, outsourcing too much of our thinking to LLMs can in fact render us more ignorant, given the high prevalence of inaccurate information they can spew ("garbage in, garbage out"), as well as their strong bias towards validating the user’s stance on any particular subject. Worst case scenario, they could just be playing us.
On the flip side, LLMs can serve as cognitive amplifiers, allowing us to offload some of the grunt thinking and thereby free up mental bandwidth for higher-level strategic analysis, creative exploration, problem-solving, and decision making. The quality of my writing, for instance, tends to be enriched in both content and form when aided by ChatGPT during the ideation and editing phases. Far from inhibiting the creative process, it enables me to elevate and bring to life the original concept in ways I wouldn’t otherwise anticipate. As long as I feel I’m expanding my horizons and achieving a better quality output, I find the tool to be a worthy companion. Whether it makes the process more or less efficient is secondary to me.
So far in this article, I have only reflected on AI as a tool. And it is a rapidly moving target. While most of us are barely proficient at prompt engineering, those working inside the tech industry already have their sights set on the next big thing: Artificial General Intelligence (AGI), Artificial Superintelligence (ASI) and Robotics - now commonly referred to as “embodied AI” (don’t get me started on that). And their message of late has been exceedingly consistent: brace for impact!
Deeper Questions
There is a pivotal moment in the movie Good Will Hunting when Dr. Sean Maguire (Robin Williams) meets with Will (Matt Damon) at the park to have a conversation. In a final attempt to connect with the young genius, Dr. Sean draws a distinction between what it is to know everything there is to know about the world and what it means to experience living in it wholeheartedly. By revealing his personal struggles with grief and loss, and recognizing Will’s own pain and defenses, he breaks through the shield of Will’s intellectual arrogance and helps him realize that there is a lot more to his existence as a human being than mental acuity.
The movie came out almost three decades ago, but it resonates very strongly with me in the present context. As Dr. Sean so eloquently lays out in his monologue, knowing the story of Michelangelo and the Sistine Chapel down to the tiniest detail is one thing, but visiting the site and experiencing its splendor firsthand is something else entirely. Towards the end of the scene, Will is noticeably moved by what he has just heard. The rug has been pulled from under his feet, and he is prompted to embark on a journey of personal transformation.
As the pendulum of the debate on this technological revolution continues to swing back and forth between promise and peril, it’s easy to be distracted, dazzled or distraught by the genius of AI, and I wonder if we risk missing the forest for the trees. AI can gather, digest, analyze and interpret massive amounts of data in seconds. It can even come up with stories and learn how to push our buttons with mesmerizing precision (allegedly!). But it's very unlikely that AI will ever know the first thing about what it actually means to experience struggle, heartbreak, surprise, triumph or a visceral reaction to danger, just as it is unlikely that we will ever know what it’s like to be a bat.
My point here is that the machine may be capable of answering any question, mimicking human emotion and performing a variety of tasks, but it will take a lot more than Sophia to convince me that it can "embody" TRUTH. Now, I know what you’re thinking: humans aren't very good at truth, either! And that’s a sticking point, for sure. We assume we are the most intelligent species on this planet, yet we are also the most delusional, capable of believing the most nonsensical ideas. Nowadays, we don’t even seem capable of focusing our attention on anything for more than a minute, so how on earth can we even begin to comprehend “the good, the true and the beautiful”?
And the plot thickens. What will happen when even the smartest cat in the room (such as Will) is quickly - and repeatedly - outsmarted by a chatbot? What could anyone possibly tell me about, say, Greek philosophy, when I can have Aristotle whispering directly into my ear? Why invest in a doctorate once Grok becomes smarter than all Nobel Prize winners combined? The implications for long-standing precepts about leadership, authority and expertise, and power dynamics more broadly, are difficult to fathom at this point. Will any decisions at all be left for humans to make?
Eric Schmidt - tech veteran, entrepreneur and co-author of Genesis: Artificial Intelligence, Hope, and the Human Spirit - actually said the following while promoting his book: “We’re always concerned that AI will make us the dogs to them as humans. It’s important that we control our master better than dogs control us.” Wait, what? If only Kafka or Cantinflas were still alive to decipher this one.
Romantics will observe that the game of chess only increased in popularity after Deep Blue defeated Garry Kasparov in 1997 (not true, by the way). But noble sentiments start to feel a little quaint when you hear the CEO of Anthropic, Dario Amodei, using the phrase "country of geniuses in a datacenter" to describe the potential future of AI, and estimating that "it could eliminate half of all entry-level white-collar jobs" within the next five years. That no longer sounds like a game but a total wipe-out. And who will be in charge of picking up the pieces?
As we continue to witness AI’s progression from tool to agent to a reorganizing force in society, the rumblings about a complete paradigm shift are intensifying. In his essay, Machines of Loving Grace, Mr. Amodei emphasizes that "the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world". Wowza! Let's hope this virtual biologist doesn't get as excited about pursuing gain-of-function research as its flesh-and-blood peers have.
Meanwhile, one of the most audacious projects quietly (or maybe not so quietly) underway is the altering of the human genetic code, which, despite current regulatory guardrails, could eventually lead to the creation of “super-humans”, soon followed by AI-augmented super-humans. Transhumanists, after all, claim that we are way overdue for an upgrade. But it goes without saying that such scientific endeavors - though not entirely without merit - are set to bring about a split in our evolutionary path, the consequences of which are eerily familiar. Can we - or should we - draw the line somewhere along the spectrum between cochlear implants and Homo Deus? Will we even have a choice?
Whether we are struggling with the ethics of incorporating the use of LLMs or creating a new class of biosynthetic overlords, the biggest question that comes up for me is: What the heck are we optimizing for? Is it intelligence? Efficiency? Speed? Productivity? Cyborgs? Are these our highest ideals now?

Given Silicon Valley’s operating assumption that AI is en route to acquiring a life of its own, the reasonable question to ponder is: once the AI becomes fully capable of optimizing itself, what will it decide to optimize for? Take, for instance, the financial system. As we speak, algorithms are already running the show to a large extent, and we are not far from instructing AI agents to take over completely. Once that happens, how long will it take for them to lock us out of the system?
As a final reflection, I suspect it’s not so much about the machines making us dumber or smarter, but about whether we will be welcome to co-exist with them. Singularity for all? I doubt it. In the end, Neanderthals may have the last laugh.
::::::::::::::::::::::::::::::::::



