A new study from the University of Texas at Austin, Texas A&M, and Purdue University reveals that large language models fed popular but low-quality social media content experience cognitive decline similar to human “brain rot.” The research demonstrates that AI systems trained on engaging but superficial content suffer reduced reasoning abilities, degraded memory, and compromised ethical alignment—raising concerns about data quality as AI increasingly generates social media content.
What you should know: Researchers tested the effects of “junk” social media content on two open-source models, Meta’s Llama and Alibaba’s Qwen, by feeding them a diet of highly engaging posts and sensationalized text laced with clickbait phrases such as “wow,” “look,” or “today only.”
Why this matters: The findings mirror research on humans showing that low-quality online content erodes cognitive abilities—a phenomenon so pervasive that “brain rot” was named Oxford University Press’s 2024 Word of the Year.
The big picture: As AI increasingly generates social media content optimized for engagement, it creates a feedback loop that could contaminate future training data and perpetuate cognitive decline in AI systems.
What they’re saying: “We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study.