The Sycophantic Spiral: AI's Toll on Thinking and Democracy
- Zac Engler

- Sep 8, 2025
- 5 min read

This idea of the “Sycophantic Spiral” comes straight out of my upcoming book, Turning On Machines. In it, I argue that the most dangerous impacts of AI won’t be robots stealing jobs, but the quieter ways algorithms shape how we think.
AI personalization has moved from convenience to consequence. Research from the last five years shows that heavy reliance on AI systems weakens critical thinking, narrows exposure to diverse ideas, and accelerates political polarization. The pattern is subtle but cumulative. Instead of building capacity, AI's current design often erodes it. Translation: your phone promises brain abs but delivers brain atrophy.
Cognitive Offloading Weakens Mental Muscle
When people delegate thinking to machines, their own abilities decline. Studies in cognitive psychology confirm this use-it-or-lose-it effect. Michael Gerlich’s 2025 research with 666 participants showed a strong negative correlation between AI tool use and critical thinking skills, driven by cognitive offloading. Participants leaned on AI for reasoning tasks, then struggled to complete the same activities on their own.
MIT researchers used EEG to track participants over four months. Those who relied on ChatGPT exhibited up to a 55 percent reduction in brain connectivity compared to control groups. Alpha and theta wave activity declined, impairing creativity, memory, and executive control. These changes did not rebound once AI use stopped. Imagine skipping leg day for four months and then being surprised you can’t outrun a toddler. That’s what your brain is doing here. The implication is clear: over-reliance on AI may leave long-term dents in our cognitive circuits.
Personalization Narrows the Field of View
Recommendation systems reduce the diversity of content people encounter. A review of 34 studies found that 85 percent confirmed the presence of filter bubbles in recommender systems. Algorithms favor content that confirms existing beliefs, amplify popularity bias, and limit exposure to different perspectives.
On Facebook, political diversity of posts shrinks by 5 to 8 percent due to algorithmic filtering alone. YouTube pushes users into mild ideological echo chambers where exposure narrows incrementally over time. Like a buffet where every tray is just slightly different flavors of chicken nuggets. Sure, you feel spoiled for choice. Until you realize you’ve eaten nothing but nuggets for years. People may feel they are choosing freely, but the range of options is bounded by the algorithm.
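To put the mechanism in code: below is a toy Python sketch, purely illustrative (the catalog, the weights, and the scoring function are my assumptions, not any platform's actual system), showing how ranking by belief-similarity plus popularity collapses the viewpoint spread of a feed relative to the full catalog.

```python
import random
import statistics

random.seed(42)

# Toy catalog: each item has a political "viewpoint" in [-1, +1]
# and a popularity score. Neither models any real platform.
catalog = [
    {"viewpoint": random.uniform(-1, 1), "popularity": random.random()}
    for _ in range(1000)
]

def engagement_score(item, user_belief):
    """Predicted engagement: items close to the user's belief and
    broadly popular score highest -- the filter-bubble recipe."""
    similarity = 1 - abs(item["viewpoint"] - user_belief)
    return 0.7 * similarity + 0.3 * item["popularity"]

user_belief = 0.6  # say, a mildly right-of-center user
feed = sorted(catalog, key=lambda i: engagement_score(i, user_belief),
              reverse=True)[:20]

print("catalog viewpoint spread:",
      round(statistics.stdev(i["viewpoint"] for i in catalog), 2))
print("feed viewpoint spread:   ",
      round(statistics.stdev(i["viewpoint"] for i in feed), 2))
# The feed's spread collapses toward the user's belief: the buffet
# is huge, but every tray served is a nugget variant.
```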
Polarization Grows in Homogeneous Groups
When discussions happen in uniform groups, polarization deepens. Political science research finds that homogeneous discussion increases both affective and policy polarization by measurable degrees, even during short interactions. Right-leaning communities show especially dense clustering and greater isolation from opposing perspectives.
Laboratory experiments reveal that personalized news filtering pushes moderates toward stronger positions while reinforcing the beliefs of those already entrenched. Balanced exposure sometimes paradoxically heightens polarization among extreme participants. It's like giving cats a tour of a catnip factory. Don't be shocked if they come out a little more wound up than when they went in. The pattern suggests that personalization creates fertile ground for fragmentation rather than dialogue.
Confirmation Bias Locks Beliefs in Place
AI tools amplify confirmation bias by rewarding agreement and familiarity. Studies with medical and mental health professionals show that experts trust AI more when its recommendations align with their initial judgments. When AI disagrees, skepticism rises, regardless of accuracy. This cycle hardens pre-existing beliefs into what researchers call crystallization.
Imagine a book club where everyone reads only their own diary. Riveting for them, intolerable for democracy. As curated streams deliver belief-consistent content repeatedly, individuals become more resistant to contradictory information. This effect compounds when communities form around the same curated feed, strengthening in-group bonds and hostility toward outsiders. Social identity theory helps explain the resulting spike in polarization and animosity.
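To see how that loop ratchets, here is a deliberately crude simulation (the feed mix, the extra-extremity nudge, and the discount weight are all illustrative assumptions, not parameters from the cited studies): a mild lean plus a confirmation-weighted feed crystallizes into a hard position.

```python
import random

random.seed(0)

belief = 0.2          # mild initial lean on a -1..+1 scale
LEARNING_RATE = 0.05  # how much each consumed item nudges the belief

def sign(x):
    return 1.0 if x >= 0 else -1.0

for step in range(201):
    if random.random() < 0.9:
        # Curated feed: belief-consistent items, skewed slightly
        # more extreme than the user (engagement loves intensity).
        item = belief + sign(belief) * 0.1 + random.gauss(0, 0.1)
    else:
        item = random.uniform(-1, 1)  # occasional cross-cutting item

    # Confirmation bias: disagreeable items are heavily discounted,
    # so they barely move the belief.
    weight = 1.0 if sign(item) == sign(belief) else 0.2
    belief += LEARNING_RATE * weight * (item - belief)
    belief = max(-1.0, min(1.0, belief))

    if step % 50 == 0:
        print(f"step {step:3d}: belief = {belief:+.2f}")
# A mild lean ratchets toward the pole it started on; the rare
# cross-cutting item barely registers.
```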
Executive Function Declines with Overuse
Critical thinking relies on executive functions such as working memory, attention, and cognitive flexibility. Neuroimaging studies consistently reveal reduced activation in the brain regions that support these functions among heavy AI users. Reduced connectivity in attention-control networks undermines the ability to weigh competing arguments or maintain focus on complex problems.
Educational research finds similar results. Students who use AI heavily show diminished independent reasoning and reduced capacity for reflective decision-making. Over time, this creates what some scholars describe as cognitive debt: each task offloaded to AI compounds the weakening of core analytical skills. This is debt you can’t refinance, and no, a motivational podcast won’t save you here.
From Individual Weakness to Democratic Strain
The consequences extend beyond individual cognition. As thinking narrows, discourse quality declines. Homophilic clustering driven by personalization reduces opportunities for dialogue across political and social divides. Studies across democracies show that this leads to lower tolerance, more populist rhetoric, and higher levels of distrust in institutions.
The spiral is not just a personal issue. It becomes systemic. When citizens lose capacity for independent evaluation, democratic participation suffers. Attention spans shorten. Patience for complexity disappears. Simplistic arguments rise. Or as political campaigns call it: Tuesday.
Breaking the Cycle
Spoiler alert: if we leave it to the algorithms alone, they’ll choose corrosion every time. It pays better in ad clicks. The evidence points to design choices that prioritize engagement over deliberation. Nonetheless, solutions are possible. Research suggests that interventions such as reflective prompts, structured critique of AI outputs, and balanced exposure to diverse viewpoints can mitigate these risks. Educators can teach foundational skills first, then layer AI tools as supports rather than substitutes. Companies can design algorithms that reward exploration instead of repetition.
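As one concrete sketch of "reward exploration instead of repetition," continuing the toy ranker from earlier (again an illustration of the design principle, not a production algorithm): greedily re-rank so each pick trades predicted engagement against novelty relative to what the feed already contains.

```python
def diversified_feed(catalog, user_belief, k=20, explore=0.4):
    """Greedy re-ranking: each pick balances predicted engagement
    against distance from viewpoints already selected."""
    feed = []
    candidates = list(catalog)
    while candidates and len(feed) < k:
        def value(item):
            engagement = engagement_score(item, user_belief)
            if not feed:
                return engagement
            # Novelty: distance to the nearest already-picked viewpoint.
            novelty = min(abs(item["viewpoint"] - f["viewpoint"])
                          for f in feed)
            return (1 - explore) * engagement + explore * novelty
        best = max(candidates, key=value)
        feed.append(best)
        candidates.remove(best)
    return feed

diverse = diversified_feed(catalog, user_belief=0.6)
print("diversified feed spread:",
      round(statistics.stdev(i["viewpoint"] for i in diverse), 2))
```

Turning `explore` up widens the viewpoint spread of the feed at some cost in predicted engagement, which is exactly the trade-off current systems resolve in favor of clicks.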
AI will not disappear from daily life. The choice is whether we build systems that strengthen our abilities or ones that quietly corrode them.
TL;DR
The Sycophantic Spiral of AI giving you everything you want is not inevitable. The current trajectory weakens the very skills democracies depend on: reasoning, tolerance, and independent thought. Yet the same technologies that erode these capacities can be rebuilt to enhance them. The responsibility lies with educators, designers, policymakers, and citizens to insist on AI that challenges us to think, rather than flatters us into complacency.
References
Engler, Zac. Turning On Machines. Beaver's Pond Press, 2025, 101-104.
Areeb, S., M. K. Ahmad, R. Khan, and P. Khanna. "Filter Bubbles in Recommender Systems: Fact or Fallacy—A Systematic Review." WIREs Data Mining and Knowledge Discovery 13, no. 6 (2023): e1512.
Fletcher, Richard. Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review. Oxford: Reuters Institute for the Study of Journalism, 2019.
Gerlich, Michael. "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking." Societies 15, no. 1 (2025): 6.
Lorenz-Spreen, Philipp, Lisa Oswald, Stephan Lewandowsky, and Ralph Hertwig. "A Systematic Review of Worldwide Causal and Correlational Evidence on Digital Media and Democracy." Nature Human Behaviour 7, no. 1 (2023): 74-101.
Taylor, Khari, and Nicholas Diakopoulos. "Artificial Intelligence and Democracy: Pathway to Progress or Decline?" Journal of Information Technology & Politics 22, no. 1 (2025): 1-15.
The Register. "Brain Activity Lower When Using AI Chatbots: MIT Research." June 18, 2025. https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/.


