Preliminary note: Now that ‘brain rot’ is officially Oxford’s word of the year, I felt compelled to pull this out of my drawer (aka my Medium drafts) and finish this text I started writing sometime last month. If it’s not of use to you, I get it: the internet is full of content, and my contribution is nothing but a speck of dust in this seemingly unlimited world of piled-up data. I’m a victim of brain rot myself, so no worries: follow along if you may.
As a digital native who boasts about being chronically online since my Club Penguin days, I was immediately hooked by the idea of OpenAI’s generative chatbot when I first heard about it in November 2022. By then, I was fresh out of college, an intern-turned-journalist working in a bustling newsroom in the city, so the idea of getting structured, accurate texts within seconds seemed like the Promised Land. After a dull day of work compiling, cross-referencing, and checking information, I would get home and run experiments, writing prompts based on my daily assigned tasks.
At first glance, the results were amusing and the content was precise. ChatGPT could be a resourceful tool for optimizing the writing of news articles, with human intervention limited to making small adjustments to the flow of the text and adding quotes from sources. In fact, Nicholas Diakopoulos, a Northwestern professor whose work I admire a lot, dives deep into the implications of automation in his research and has actually written a whole book about it, so if you’re interested in the matter, check out Automating the News: How Algorithms Are Rewriting the Media (totally not sponsored, I’m just a big fan).
Now, two years later, I decided to take a step forward in life by pursuing graduate school, and started looking into programs abroad in my fields of interest. Not surprisingly, every university had clear restrictions on the use of AI-generated statements of purpose and personal statements. So my plan was to use ChatGPT only as an assisting tool for writing my admission documents. That would definitely enhance my arguments on why I would make a good fit for each program, right? All I had to do was write a different prompt for every new application, and voilà, I’d easily get a tailored draft that fit each program’s criteria instead of writing from scratch. Piece of cake.
Except it turned into a plot twist in both my academic and personal lives.
Needless to say, I was feeding a lot of personal data into OpenAI’s chatbot. Like, a lot, especially for someone who has been interested in algorithmic governance and surveillance since my early college days (oops!). I’m embarrassed to admit how much I shared in the course of this task: by now, ChatGPT probably knows me better than my own mother does.
Yet, despite all my detailed inputs and contributions, the outputs would always lack something critical: emotion.
“No shit, Sherlock!”, you may think, to which I’m entitled to defend myself and state that I was fully aware large language models operate on statistical methods, predicting and placing words based on the data they’re trained on, not on feelings and perceptions. However, after generating and regenerating a dozen SoPs, I was disappointed at how little ‘Chat’ would customize each of them. They kind of merged into more or less the same homogeneous blob and made me look as plain as an NPC. The uniqueness I initially expected proved to be nothing but a hoax.
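For the curious, here’s a toy illustration of what ‘predicting and placing words’ means: a minimal bigram model, sketched in Python by me purely for illustration. Real LLMs are transformer networks trained on vastly more data, but the core idea of sampling the next word from observed frequencies is the same:

```python
import random
from collections import Counter, defaultdict

# A toy corpus. A real model trains on trillions of words, not three sentences.
corpus = "i delve into data . i delve into gossip . i craft prose .".split()

# Count which word follows which: the simplest possible 'language model'.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(predict_next("delve"))  # always "into": that's all the corpus ever showed
print(predict_next("i"))      # "delve" two-thirds of the time, "craft" one-third
```

Scale that counting trick up by a few hundred billion parameters and you get fluent prose with zero feelings attached, which, in hindsight, is exactly why my SoPs all sounded the same.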
This carried on into additional experiments with other gen AI chatbots like Perplexity, Gemini and Copilot, and into a deep dive into the world of how LLMs operate (that is, as far as someone short on both data and math skills can go).
What I found out through my brief research and literature review was that:
i) Every model has its own pattern in delivering outputs, which obviously stems from each company’s policies and the data used for training. Thus, each has both structural and wording limitations;
ii) Artificial Intelligence might be propelling us into a collective brain numbness.
In terms of Finding Number One, there isn’t much to say, and savvy scholars are already conducting studies on the suitability of popular gen AIs for specific goals (for example, ChatGPT is a great tool for teachers seeking ideas for essay prompts and for students who want templates for school assignments; Google Gemini is excellent for coding newbies, since it points out errors and explains them objectively, from what the syntax error on line 53 is to how to integrate an API; Perplexity, in turn, is good for synthesizing information and pointing to reliable sources).
As for Finding Number Two, there is a series of implications in how using AI (and by AI I mean all its subspecies) might be damaging us (this, in fact, has a lot to do with my personal interest in pursuing a master’s in the first place, so I could point out many ways AI can undermine democratic institutions, foster disinformation and hate speech, and accentuate the colossal socioeconomic gaps between the Global North and the Global South).
But there’s one less obvious outcome I almost missed, and ironically, it was right in front of me: our use of AI might be a side effect of brain rot.
I came across this correlation after pondering how much my own writing style had changed after spending the past half year reading gen AI outputs. Words like ‘hone’, ‘posed’, ‘delve’, ‘navigate’, ‘craft’ and ‘reverberate’ would come up in the most mundane situations (if I ever texted my friends in English rather than in Portuguese, I’m sure I’d be telling them I was delving into a piece of gossip I overheard the other day about who’s-doing-what or who’s-with-who). Then I found an article by Lance Eliot in his Forbes column, published in June, and he pretty much sums up my concerns. In short, he believes gen AI is leading us to ‘widespread large-scale brain-rot’, and mentions four main concerns that support this idea:
“(1) Over-reliance on generative AI. Your mind becomes overly reliant on generative AI and inevitably causes brain decay such that your ability to think on your own reaches a near-zero point of no return.
(2) Overfilling by generative AI. Your mind is filled up by generative AI with all manner of menial mental garbage, turning your brain into pure mush, and precluding you from forming any more coherent thoughts.
(3) Degenerative collapse via generative AI. Generative AI is so powerfully compelling and persuasive that you become convinced of your mental inferiority and give up any further attempts to compose thoughts.
(4) Destructive disturbance by generative AI. Generative AI destroys or at least mortally weakens your mind and prevents you from ever regaining your mental senses”.
Now, while I agree, I don’t think we should be fatalistic and state that we’ve been, or will be, outsmarted by AI chatbots. There’s no need to cower and timidly wave a white flag to end the human vs. artificial intelligence war.
On a side note, we do need to emphasize, quite urgently, the need for new media and digital literacy for everyone, not only children. Take college students as an example: they write an assignment using ChatGPT, then run the text through some ‘humanizing’ platform. A professor might not notice the trick and give them a solid 10. However, did the student learn anything? Probably not. Did they read everything that came out of their prompt? Unless it’s a short paragraph, they probably just skimmed through it. Did they have access to whichever book or article they cited? I wouldn’t count on it.
Given this, what kind of professional will they become if they can’t pull an all-nighter because they procrastinated all month long and now it’s the day before the deadline? We’ve all been there. Juggling deadlines is a universal college experience, and though a prompt or two won’t kill anyone, writing an entire paper using AI is nothing but unethical and a new kind of self-sabotage. It takes away the merit of hard work, or of any work at all. And this goes for anyone using ChatGPT for their full-time job, too. It’s a marred way to solve trivial problems.
This is why I believe it all circles back to our (mis)uses of AI being a side effect of brain rot. We’re already so comfortably numb (see what I did there?) receiving ready-to-consume information that generating ready-to-consume information is equally appealing, almost as if it were necessary. Except it might only be bringing us closer to inauthenticity and obstructing both our creativity and our thinking capability.
Instead of taking hold of gen AI to get clearer ideas and structure sprawled thoughts, we’re allowing ourselves to be shaped by it. Soon, even if you are not using AI to write, those words, phrasal verbs and connectives will be so intricately attached to your brain that you will have to double your efforts to avoid inserting them in a text (yes, I’ve become delve-phobic by now, and will assume you’re using GPT if you insert this word in a sentence).
What I mean is this: we learn by solving micro puzzles every day, by reading, cross-referencing, and translating information into graphs or into a whole new set of words that make the content more tangible. We also learn by making mistakes (typos, misusing ‘in/on/at’ or ‘to/for’ as an ESL person) and receiving suggestions from our peers. So if you skip the cognitive process behind daily activities and go straight to ChatGPT, Gemini, Copilot or any other gen AI chatbot at all times, you’re basically being held hostage by these tools. You’re begging to have your brain mushed and marred by your own imbecility, and there’s no going back from this spiralling downfall. And I won’t be sorry for you!
But does this mean we should throw AI tools out the window and shift back to a pre-technological era of writing and problem-solving? Not at all! Like any tool, generative AI has its time, place, and purpose. The issue lies in how we engage with it. Instead of wielding it as an extension of our intellectual toolkit (as proposed by McLuhan in the 1960s), we risk letting it become a crutch that weakens our ability to think critically and independently, and to come up with the new ideas that could lead us to our Eureka moment.
So, where do we go from here? First, we need to develop a more intentional relationship with AI. As in any other relationship, we must set boundaries: that means using it sparingly and purposefully (to overcome writer’s block, for instance) instead of taking it as a shortcut to bypass the insufferable process of reasoning. It also means embracing digital literacy as a core skill, so we understand both how to use these tools and how they shape the way we think, work, and respond to demands.
Second, we should normalize the value of imperfection and see the beauty behind our flaws. Writing isn’t just about churning out impeccable sentences and never missing an Oxford comma; it’s about expressing ourselves, wrestling with ideas that oftentimes don’t seem to connect, and, of course, learning from our mistakes (to quote 21st-century poet and pop star Hannah Montana, ‘everybody makes mistakes, everybody has those days’). AI may create polished prose, but it can’t replicate the messy, deeply human process of crafting something from scratch and imbuing it with emotion. There’s a piece of ourselves behind everything we do, be it our writing style, sentence length, or word choices, and beauty lies within it.
Finally, it’s worth remembering that no algorithm can replace the richness of human connection (I see you, peer-review-fearing students). Collaborative brainstorming, feedback from colleagues, and even constructive criticism from your boss or supervisor are irreplaceable. Reading good books, signing up for a daily newsletter, and reading academic journals (I myself am a big fan of the HKS Misinformation Review and MIT’s Technology Review) all help to inspire us, enrich our vocabulary, expand our worldview, and thus minimize the side effects of brain rot.
While quitting ChatGPT was definitely not an easy decision for me (I, too, am an idle Gen Z-er who loves taking things easy), stepping away was still the best option in the long run. It left room for what still remains of my creativity, my oftentimes insufficient vocabulary, and my own messy decision-making process. At the end of the day, I’d rather have a brain that’s burned out from overuse than one that’s rotten right to the core (in case you didn’t notice, this is a reference to Brat). I want to choose my own words to communicate with the world, and if that means a few sleepless nights and one too many energy drinks, so be it.