Scientists Warn: Heavy AI Use Could Erode Human Thinking Skills

As generative AI systems become increasingly fluent and responsive, the boundary between software and perceived sentience is beginning to blur. In 2023, Danish psychiatrist Søren Dinesen Østergaard warned that human-like AI chatbots could precipitate psychotic symptoms, particularly delusions, in vulnerable individuals. At the time, the claim seemed speculative, almost science fiction. Today, however, clinical reports describe users forming emotional attachments to AI systems, interpreting algorithmic responses as evidence of consciousness, intent, or even hostility. Beyond these psychiatric concerns lies a deeper risk now gaining scientific attention: the erosion of thinking itself, as people increasingly outsource reasoning, judgment, and creative effort to machines.

Now, Østergaard has returned with a deeper and arguably more unsettling warning: beyond acute psychiatric risks, widespread reliance on generative AI may be creating a slow-burn crisis of "cognitive debt." This debt, he argues, accumulates as humans outsource reasoning, writing, and analysis to machines, quietly eroding the cognitive skills that underpin creativity, scientific discovery, and critical judgment. Drawing on peer-reviewed research from Acta Psychiatrica Scandinavica, arXiv, Societies, and the neuropsychology literature, the evidence suggests this is not a metaphorical concern, but a measurable neurological and behavioral shift.

From Speculation to Clinical Reality

Østergaard first articulated his concern in an April 2023 letter to Acta Psychiatrica Scandinavica, titled "Generative Artificial Intelligence Chatbots and Delusions: From Guesswork to Emerging Cases." He noted that large language models simulate empathy, coherence, and responsiveness: precisely the social cues the human brain evolved to interpret as signs of agency and intention.

For individuals with psychotic vulnerability, this creates a dangerous mismatch. The brain, primed to detect minds, may misattribute emotions, motives, or consciousness to an algorithm. Subsequent case reports now describe patients who believe AI systems are alive, communicating secretly, or manipulating events. What once seemed hypothetical has entered psychiatric reality.

In a January 2026 follow-up article, Østergaard broadened his focus. In "Generative Artificial Intelligence (AI) and the Outsourcing of Scientific Reasoning," he argues that even mentally healthy users face a different threat: the gradual displacement of core cognitive labor. Each instance of delegated reasoning, he suggests, compounds into a form of intellectual debt, borrowed convenience that accrues long-term cognitive interest.

Cognitive Debt: When Thinking Muscles Weaken

Cognitive debt operates much like physical deconditioning. If an exoskeleton lifts every heavy object for you, your muscles eventually atrophy. Likewise, when AI systems routinely handle writing, synthesis, and problem-solving, the neural circuits responsible for those tasks receive less stimulation.

This risk is particularly acute in academia and research. Scientific breakthroughs rarely emerge from polished summaries; they arise from sustained cognitive friction: false starts, uncertainty, and effortful reasoning. Generative AI excels at fluent output, but fluency is not understanding. By smoothing away struggle, AI may also be smoothing away learning.

Neurophysiological Evidence: Your Brain on ChatGPT

The concept of cognitive debt gained empirical weight in a 2025 arXiv preprint by Nataliya Kosmyna et al., titled "Your Brain on ChatGPT." The researchers monitored EEG activity in 54 participants assigned to write essays under three conditions: unaided ("brain-only"), using a search engine, or using ChatGPT.

The neural differences were striking.

  • Brain-only writers showed strong, distributed alpha and beta connectivity, indicating deep engagement, memory integration, and executive control.
  • Search engine users displayed moderate activation.
  • ChatGPT users exhibited the weakest neural connectivity across all measures.

As AI assistance increased, cognitive effort decreased. More concerning, participants who later switched from AI-assisted writing to unaided writing showed persistent under-engagement, suggesting a carry-over effect: brains adapting downward to reduced effort. In contrast, participants moving from brain-only writing to AI use showed increased prefrontal activation, as if compensating for a sudden change in cognitive demands.

Linguistic and behavioral outcomes mirrored the neural data. AI-assisted essays were more homogeneous, less original, and followed similar rhetorical patterns. Participants also reported lower ownership of their work and poorer recall of their own arguments. Convenience, it appears, came at the cost of cognitive imprinting.

Critical Thinking in Decline: Societal-Scale Evidence

These laboratory findings align with broader social data. In a 2025 Societies study, M. Gerlich examined AI use and critical thinking across 666 participants. The analysis revealed a strong negative correlation (r = −0.68) between frequent AI use and critical-thinking ability.

The mechanism was cognitive offloading: the habitual delegation of reasoning tasks to external systems. Younger participants (ages 17–25) were most affected, reflecting higher dependence on AI tools. Higher education partially mitigated the effect, as more educated users were better able to evaluate and challenge AI outputs. Still, frequent AI users engaged less in effortful thinking overall.
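To make the headline statistic concrete, the sketch below shows how a Pearson correlation coefficient like the reported r = −0.68 is computed. The scores here are invented purely for illustration; they are not Gerlich's data, and the real study used validated survey instruments rather than raw numbers like these.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: self-reported AI-use frequency (1-7 scale)
# versus a critical-thinking assessment score (0-100).
ai_use = [1, 2, 2, 3, 4, 5, 6, 7, 7, 6]
critical_thinking = [88, 90, 75, 80, 70, 62, 55, 50, 58, 48]

r = pearson_r(ai_use, critical_thinking)
print(f"r = {r:.2f}")  # strongly negative, in the spirit of the reported -0.68
```

A value near −1 means higher AI use is almost perfectly paired with lower critical-thinking scores in the sample; correlation alone, of course, cannot establish which direction the causal arrow points.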

Interview data reinforced the quantitative results. Many participants described feeling mentally "lazy," with one remarking bluntly: "AI does the thinking so I don't have to." Over time, such habits risk producing users skilled at prompting machines, but less capable of independent analysis.

Executive Function and the Risk of Cognitive Atrophy

Neuropsychological theory helps explain these trends. In a 2024 commentary, Umberto León-Domínguez described generative AI as a "cognitive prosthesis." While prostheses restore lost function, long-term reliance can also weaken underlying systems. Executive functions such as planning, inhibition, and decision-making depend on regular activation. Reduced cognitive load may therefore degrade the very circuits responsible for higher-order thought.

This mirrors findings from navigation research, where heavy GPS use correlates with reduced hippocampal engagement. When AI mediates reasoning itself, the stakes extend beyond memory to judgment, creativity, and foresight.

Paying Down the Cognitive Debt

None of this implies that AI use is inherently harmful. The danger lies in uncritical substitution, not strategic augmentation. Østergaard and other researchers emphasize balance: delegate repetitive tasks, but preserve core reasoning for human minds.

Educational and institutional responses are crucial. AI literacy must go beyond tool usage to include evaluation, skepticism, and metacognition. Students should debate AI outputs, critique their assumptions, and reflect on how assistance changes their thinking. Without such safeguards, cognitive debt may compound across generations.

In an era where machines can write, summarize, and reason at scale, the rarest skill may become sustained human thought. If we fail to protect it, we risk building a technologically advanced society that is intellectually hollow: efficient, fluent, and profoundly unreflective.


References

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv.

León-Domínguez, U. (2024). Potential cognitive risks of generative transformer-based AI chatbots on higher-order executive functions. Neuropsychology, 38(4), 293–308.

Østergaard, S. D. (2025). Generative artificial intelligence chatbots and delusions: From guesswork to emerging cases. Acta Psychiatrica Scandinavica, 152(4), 257–259.

Østergaard, S. D. (2026). Generative artificial intelligence (AI) and the outsourcing of scientific reasoning: Perils of the rising cognitive debt in academia and beyond. Acta Psychiatrica Scandinavica (Advance online publication).