Too much AI? Study shows high usage may reignite psychological distress

CO-EDP, VisionRI | Updated: 28-03-2025 20:14 IST | Created: 28-03-2025 20:14 IST

A new psychological study "It’s Scary to Use It, It’s Scary to Refuse It: The Psychological Dimensions of AI Adoption-Anxiety, Motives, and Dependency" has found that adopting artificial intelligence tools like ChatGPT can paradoxically reduce fear of AI while simultaneously creating dependency and triggering deeper existential concerns. The findings, published in the peer-reviewed journal Systems, point to a complex emotional landscape in which moderate AI use may alleviate anxiety, but overuse could reignite distress and challenge users' sense of identity.

Researchers Adi Frenkenberg and Guy Hochman from the Baruch Ivcher School of Psychology at Reichman University led the study, which surveyed 242 adults across diverse demographics. Using validated psychometric tools, they examined anticipatory anxiety (fear of future disruptions), annihilation anxiety (fear of losing human uniqueness), dependency behaviors, and motivations tied to AI usage.

The implications are especially relevant as AI tools rapidly penetrate workplaces, classrooms, and homes. According to PwC, 73% of U.S. companies had adopted or planned to adopt AI technologies by 2024, and nearly 70% of global CEOs expected AI to necessitate workforce reskilling. But adoption is uneven and met with caution: 37% of Americans surveyed in 2024 reported never using AI, and nearly half expressed concern or skepticism toward it.

The new study provides a psychological lens on this ambivalence. Participants rated their AI usage and anxiety levels using a suite of instruments, including the AI Anxiety Scale, Anticipatory Anxiety Inventory, and AI Dependency Scale. Results confirmed that psychological responses to AI fall into two broad domains: the “shadow side,” representing forward-looking fears of disruption, and the “abyss side,” representing existential distress linked to identity and autonomy.

Anticipatory anxiety, such as fear of job loss or skill obsolescence, was positively correlated with general AI anxiety. However, greater use of AI tools was linked to lower anticipatory anxiety, suggesting that early exposure may help counter unfounded fears. This supports exposure-based cognitive behavioral models in which anxiety diminishes with familiarity.

But the abyss runs deeper.

Annihilation anxiety, defined as fear that AI will erode the boundaries between humans and machines, was strongly associated with overall AI anxiety. Unlike anticipatory fears, annihilation anxiety initially increased with AI use, then declined at high usage levels—producing an inverted U-shaped curve. Researchers interpret this to mean that initial exposure heightens existential concerns, but familiarity may again foster acceptance.

Even more unexpectedly, dependency on AI was not associated with anxiety at all. High-frequency users reported elevated reliance on AI tools, but not corresponding distress. This disconnect challenges previous assumptions that overuse inherently signals dysfunction. 

Yet the line between useful reliance and harmful dependency remains blurred. The study flagged a positive correlation between AI usage and dependency, reinforcing concerns that habitual engagement could evolve into behavioral addiction. Previous research cited in the paper links compulsive AI use to impaired judgment, loss of autonomy, and reduced mental health, including increased depression and anxiety.

The study also examined motivational factors, finding that frequent AI users report stronger reasons for continued use, including perceived usefulness, intrinsic enjoyment, and skill development. These findings align with self-determination theory, which posits that autonomy, competence, and relatedness drive human motivation.

Organizational leaders, the authors argue, must address both the enablers of and the barriers to AI adoption. Simply promoting tools on the basis of performance gains is not enough. Transparent communication, ethical governance, hands-on training, and phased implementation strategies are essential to foster trust, reduce fear, and prevent over-reliance.

Ethical design also plays a role. Concerns about AI opacity, bias, and data misuse continue to erode user confidence. Machine learning systems have been shown to inherit racial and gender biases, fabricate false information, and make opaque decisions, undermining trust and feeding anxiety. The authors urge developers and regulators to prioritize transparency, explainability, and accountability in AI design to avoid compounding psychological resistance.

In the broader historical context, AI anxiety echoes the technostress observed during earlier revolutions, from the industrial age to the rise of personal computing. Anxieties about dehumanization, displacement, and uncontrollable systems are not new, but they now intersect with systems that mimic cognition itself.

The authors caution that the research is still at an early stage. Their cross-sectional approach offers only a snapshot in time. As AI tools become more ubiquitous, psychological patterns may shift. What feels like dependency now could become routine tomorrow. Conversely, initial anxiety may be replaced by overconfidence or even complacency.

Future studies should track these psychological trends longitudinally and across demographic and cultural lines. For now, their research offers a critical insight: it’s not just what AI can do that matters, but how it makes us feel when it does it.

FIRST PUBLISHED IN: Devdiscourse