With the growing adoption of AI chatbots, more people are now using them for mental health support.
To explore this trend, Cognitive FX surveyed American adults who have used AI chatbots for mental health-related concerns.
Key findings from the survey include:
35.25% turn to AI chatbots mainly due to fear of judgment
Fear of judgment has become a bigger barrier than money or access to care. Just over a third of respondents (35.25%) said they avoid doctors not because help is unavailable, but because they do not feel emotionally safe opening up to another person.
While 32% cited affordability and 22.5% pointed to long waiting times, fear of social stigma ranked highest.
This suggests that even when therapy is accessible, the discomfort of being judged is enough to push people to seek AI conversations instead.
43.75% choose AI as their first response to mental health issues
Nearly half of respondents said an AI chatbot is the first place they turn when mental health issues arise. This indicates a clear shift away from traditional support systems that once played a central role.
While 32.75% said they would turn to friends or family, only 21.75% would approach a doctor first. The preference for AI suggests people want privacy and control before involving anyone who might question their emotions.
One in six respondents reports being discouraged when opening up about mental health
Not every mental health conversation is met with understanding: 16.75% of respondents reported discouraging reactions when sharing their struggles, which can leave people feeling dismissed or misunderstood.
This finding aligns with a 2025 NAMI Workplace Mental Health poll, which found that two in five Americans worry they would be judged if they shared about their mental health at work.
Although 60.5% reported supportive responses, 22.75% reported neutral reactions. Together, these responses explain why some people stop opening up to others and instead turn to AI, where responses feel more predictable.
41.2% have occasionally received wrong advice from AI chatbots
A significant portion of users have encountered problems with AI-generated mental health advice. About 41.2% of respondents report occasionally receiving wrong or misleading guidance from chatbots.
This raises concerns about the reliability of AI tools for something as sensitive as mental health support.
On the other hand, 45.25% report never having noticed misleading advice.
However, the fact that over four in 10 people have received incorrect guidance indicates that AI chatbots are far from foolproof.
When mental health is at stake, even occasional mistakes can lead to serious consequences.
38% of Americans use AI chatbots weekly for emotional support
AI chatbots are no longer used only in moments of crisis. In the survey, 38% of respondents said they rely on these tools weekly for managing their mental health.
This indicates that AI has become part of routine mental health maintenance.
In addition, 21.75% of respondents use AI chatbots daily for mental health, and 22.25% use them monthly. Taken together, 82% of respondents rely on AI for mental health support at least monthly.
These patterns indicate a shift from occasional to routine reliance.
30.5% say financial stress is the biggest mental health trigger
Money-related stress remains the leading cause of mental health struggles among respondents. Financial pressure topped the list, cited by 30.5% of respondents. This reflects how economic uncertainty continues to affect emotional well-being.
Loneliness followed at 21.25%, while family issues and childhood trauma each accounted for around 15%.
Work-life balance affected nearly 10% of people.
All these factors show that daily pressures and long-term stressors play a major role in mental health challenges.
The study aimed to understand why people are choosing AI chatbots for mental health support instead of talking to family, friends, or professionals. Findings were medically reviewed by Dr. Alina Fong.