Outsourced Thinking

As humans, we have an incredible ability to adapt to new levels of convenience. I’m sure it’s rooted in some Darwinian instinct, but once we get a taste of progress, it’s painfully difficult to go back.

You crave business class after experiencing it once. We can never go back to dial-up internet. And can you imagine reading an encyclopedia to research a topic instead of Googling it? Each upgrade quietly rewires us at the expense of previously learned skills and tolerance. 

It makes me wonder: what are we giving up as we adopt AI? 

Over the past few months, I’ve found myself outsourcing things to ChatGPT more and more—for critical and menial tasks alike. For example, I’m using it to conduct Deep Research on highly technical topics that would otherwise take me hours. But just last week, I did something new. I typed half a reply to a private DM, then asked AI to finish my thought. Perhaps the most concerning part of this isn’t that I asked AI for help; it’s how hard it felt to complete my sentence without it.  

Thinking feels hard. It’s as if my neurons are coated in molasses and struggling to connect. It leads me to believe that AI isn’t just here to supplement our thinking; it’s here to supplant it. 

I’m not a technopessimist. As someone who works in the technology industry, I’m pro-innovation. But I am reflecting on what we might be giving up in the name of convenience, and how we can retain the quintessentially human capacity: the ability to think.

Feedback Loops 

The adoption of AI tools tells a clear story: we’ve crossed a threshold, and there’s no going back. When it launched in November 2022, ChatGPT became the fastest-growing consumer app in history, reaching 100 million users in just two months. Today, it sees 700 million weekly active users. And it’s not just individuals—over two million businesses around the world have integrated it into their workflows.

What’s often overlooked is that every interaction with these tools feeds back into the system. ChatGPT and other Large Language Models (LLMs) are fine-tuned using Reinforcement Learning from Human Feedback (RLHF), a process in which human preferences guide the model toward more helpful and aligned responses. Users rank outputs, provide corrections, or implicitly signal quality through continued use. Combined with additional training on publicly available datasets and, in some cases, proprietary data, this feedback loop helps shape future model behavior.
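To make the feedback loop concrete, here is a toy sketch of the preference-ranking step: users compare two candidate responses, and each preference nudges a per-response score that could steer future fine-tuning. The Elo-style scoring rule is an illustrative assumption for this sketch, not the actual training procedure of any real model (real RLHF trains a reward model over many prompts, then optimizes the LLM against it).

```python
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Shift scores toward the preferred response (simple Elo-style rule)."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return winner + delta, loser - delta

# Two candidate answers to the same prompt start with equal scores.
scores = {"answer_a": 1000.0, "answer_b": 1000.0}

# Simulated user feedback: "answer_a" is preferred in 3 of 4 comparisons.
preferences = ["answer_a", "answer_a", "answer_b", "answer_a"]

for preferred in preferences:
    other = "answer_b" if preferred == "answer_a" else "answer_a"
    scores[preferred], scores[other] = elo_update(scores[preferred], scores[other])

print(scores)  # answer_a ends with the higher score
```

The point of the sketch is the loop itself: every comparison a user makes is a training signal, which is why heavy usage and model improvement compound each other.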

That loop has a second-order effect. As the models become more capable, we rely on them more. And as reliance increases, we delegate more of our cognitive load to them. Case in point: the half-written DM I mentioned above. The result isn’t just better AI; it’s a second digital brain available to us 24/7/365—assuming servers and global energy production can scale to meet our demands.

But that convenience comes at a cost. Studies are beginning to show declines in memory recall, weaker critical thinking skills, and lower cognitive engagement when we use AI to complete tasks we used to do ourselves.

Cognitive Atrophy 

Up until now, the bold thesis of this blog post has been nothing more than a musing. However, a recent MIT study (you know, the one that’s been all over social media lately) helps ground the argument in real-world data.

Researchers wanted to examine how AI tools impacted cognitive engagement. Over the course of four months, 54 adults were asked to write a series of essays using either AI, a search engine, or their own unassisted thinking. Researchers measured cognitive engagement through brain activity and linguistic analysis. The findings showed that participants in the AI group exhibited significantly lower engagement than the other two groups. Their brains shifted from active participants to passive delegators: 83% of them couldn’t recall a single sentence from the essays they’d just written. Even after stopping AI use, their brains remained under-engaged and performed worse than those of participants who never used AI. It’s a clear example of how over-reliance on AI atrophies our ability to think.

This isn’t an isolated example. An older, 2023 study conducted among university students in Pakistan and China found a 28% decline in decision-making ability among students using AI. A separate peer-reviewed study found a strong negative correlation between AI tool usage and critical thinking ability. Interestingly, the impacts were particularly profound among younger participants.

The pattern is clear, and my example above is validated: critical thinking, like a muscle, atrophies without regular training. AI tools, which are so wonderfully convenient, make it easier for us to abdicate that effort. Without intentional guardrails, we’ll continue to see a decline in critical thinking skills.

The Prompt Forward 

The MIT study didn’t just show that AI use leads to lower engagement; it also revealed how to get the most out of these tools without sacrificing critical thinking. The group that started by thinking for themselves and only integrated AI later showed the highest brain activity. They were more engaged, retained more information, and expressed greater ownership over their work. In contrast, the group that relied on ChatGPT from the beginning not only produced more formulaic, “soulless” writing, but by the end, many had resorted to copy-and-paste, skipping almost all independent thinking.

For me, that’s the lesson: don’t start with AI. Build in “non-AI time” for the first pass at a problem, whether you’re drafting a document, brainstorming ideas, or writing code. Use the tool as an amplifier, not a crutch. Personally, I’ve designated time in my calendar where I intentionally avoid AI tools. This simple shift keeps the neural “muscles” of critical thinking active while still letting you benefit from the leverage AI provides. Without that discipline, we risk training our brains to sit on the sidelines while the machine does the work.

Epilogue: Maintaining our Humanity 

We’re living in a time of immense change. Political instability is reshaping nations, fueling civil unrest. International law is being undermined by the very leaders who swore to uphold it. And younger generations are questioning the philosophies that define societies. 

It’s disorienting, and rage feels like the dominant emotion of the zeitgeist.

The risk with AI is that it provides a convenient solution for the disorientation. A simple prompt can amplify echo chambers, feed loneliness, and reinforce our biases, all while surreptitiously supplanting our ability to think.

The ability to think is the most innately human characteristic, and I worry that if we give that up, we will lose sight of ourselves at a time in the world when humanity is needed most. 

Key Takeaway: Be intentional about how you use AI. Carve out cycles of non-AI time to preserve your critical thinking.