The Society of Digital Psychiatry AI Psychosis Survey Results

Being part of the Society of Digital Psychiatry means you help shape and lead important conversations in mental health. A recent survey, conducted in collaboration with JMIR Publications, gathered members’ insights and perspectives on the emerging phenomenon of “AI Psychosis”. The survey collected 97 submissions in January and February 2026. A summary of these perspectives will be included in a future News and Perspectives article in the Journal of Medical Internet Research. Here are the results, and thank you to everyone who participated!

Key Takeaways
Widespread Use: One in three mental health professionals (33.3%) has already encountered patients using AI for mental-health purposes.
Rejection of AI Psychosis Label: Most experts reject the idea of “AI Psychosis” as its own diagnostic category, but take the underlying clinical concerns seriously.
Call for Action over Debate: The field is shifting away from theoretical debate about AI-associated risks towards practical infrastructure for addressing them, specifically focusing on safety standards, clinical integration, and Digital Navigator training.

Executive Summary

In a Society of Digital Psychiatry survey conducted with JMIR Publications, members weighed in on emerging trends in AI and mental health. Three clear signals emerge from the results. Roughly a third of respondents have already encountered a patient using AI for mental-health purposes. Most reject “AI psychosis” as a discrete diagnostic entity but take the underlying clinical concerns seriously. And when asked what conversations the field needs to be having, members called far more loudly for action than for debate, with priorities including safety standards, research infrastructure, and clinical integration.

The results fit well with the Society of Digital Psychiatry’s three pillars: education, AI standards, and Digital Navigator training. We have just released a beta version of an AI education program here and welcome feedback. With mindbench.ai, we are building the foundation for standards and seeking input from anyone who wants to help with various AI evaluation tasks. We are also working out how best to share Digital Navigator training and will soon move it from this webpage to a more accessible home. Together, these three programs offer action steps towards making AI for mental health safer, more clinically integrated, and easier to study. Learn more at the Society of Digital Psychiatry webpage.

Join the Society of Digital Psychiatry


Survey Results

Q1. Encounters with AI use for mental-health purposes

Have you treated any cases where patient beliefs or symptoms were influenced by interactions with AI or chatbots?

About one in three respondents (33.3%) has encountered someone using AI for mental-health purposes, either as a direct clinical contact or through a close colleague. Two-thirds (66.7%) have not.

| Response | n | % of all (n=95) | % of answered (n=93) |
| --- | --- | --- | --- |
| Has encountered a person using AI for mental-health purposes | 31 | 32.6% | 33.3% |
| Has not encountered | 62 | 65.3% | 66.7% |
| No response | 2 | 2.1% | n/a |
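For readers checking the arithmetic, the two percentage columns use different denominators: “% of all” divides by every submission, while “% of answered” drops non-responses from the denominator. Here is a minimal sketch of the calculation (ours, for illustration; the shortened labels are not the survey’s exact wording). The same logic explains why Q2 and Q3 below, each with a single non-response, report percentages of answered out of n=94.

```python
# Minimal sketch: deriving "% of all" vs "% of answered" from Q1's raw counts.
counts = {
    "Has encountered": 31,
    "Has not encountered": 62,
    "No response": 2,
}

n_all = sum(counts.values())                 # 95 total submissions
n_answered = n_all - counts["No response"]   # 93 answered Q1

for response, n in counts.items():
    pct_all = 100 * n / n_all
    if response == "No response":
        answered = "n/a"  # non-responses are excluded from this denominator
    else:
        answered = f"{100 * n / n_answered:.1f}%"
    print(f"{response}: n={n}, {pct_all:.1f}% of all, {answered} of answered")
```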
 
Q2. Framing of the “AI psychosis” phenomenon

How do you view the “AI psychosis” phenomenon that's emerged in popular discourse? Select the response that best reflects your current beliefs.

When asked about “AI psychosis,” 61.7% of those who answered took the middle ground, viewing the term as misleading but the underlying phenomenon still concerning.

| Response | n | % of all (n=95) | % of answered (n=94) |
| --- | --- | --- | --- |
| Misleading, but concerning | 58 | 61.1% | 61.7% |
| Concerning, full-stop | 19 | 20.0% | 20.2% |
| Other (write-in) | 9 | 9.5% | 9.6% |
| Overblown | 8 | 8.4% | 8.5% |
| No response | 1 | 1.1% | n/a |

 

Q3. Risks vs. benefits of AI for mental health

On the whole, are you more concerned about potential risks or optimistic about potential benefits of AI for mental health?

Respondents tilted optimistic overall, but the largest single group (42.6%) placed themselves at the midpoint — equally concerned about risks and optimistic about benefits.

| Response | n | % of all (n=95) | % of answered (n=94) |
| --- | --- | --- | --- |
| Much more concerned about risks | 6 | 6.3% | 6.4% |
| Somewhat more concerned about risks | 10 | 10.5% | 10.6% |
| Equally concerned about risks and optimistic about benefits | 40 | 42.1% | 42.6% |
| Somewhat more optimistic about benefits | 23 | 24.2% | 24.5% |
| Much more optimistic about benefits | 15 | 15.8% | 16.0% |
| No response | 1 | 1.1% | n/a |

 

Q5. Free response: 88 responses, 2 broad observations, and 8 specific themes

The 88 free-text responses spanned multiple themes. Two broad observations stand out before turning to the themes themselves.

1) First, respondents overwhelmingly want action on mental-health AI, not more debate.

2) Second, respondents are conflicted about AI. For example, many who called for strict regulation also celebrated AI’s potential to expand access.

Themes by Prevalence

[Bar chart: number of respondents touching on each of the 8 themes]

Each bar shows the number of Q5 respondents who touched on that theme. Because many responses spanned multiple themes, the bars sum to more than 88.
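To make the “bars sum to more than 88” point concrete, here is a minimal sketch of multi-label tallying. The responses and theme labels below are invented for illustration only; they are not the real coded survey data.

```python
from collections import Counter

# Hypothetical mini-dataset: each free-text answer is tagged with one or
# more themes, so theme tallies can exceed the number of responses.
coded_responses = [
    {"safety & regulation", "clinical integration"},
    {"clinical risks"},
    {"safety & regulation", "research & evidence", "benefits & access"},
]

theme_counts = Counter(theme for tags in coded_responses for theme in tags)

print(f"responses: {len(coded_responses)}")                   # 3
print(f"sum of theme tallies: {sum(theme_counts.values())}")  # 6, i.e. > 3
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")
```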
 
Themes by Prevalence with a Brief Summary
| Theme | n | Brief summary |
| --- | --- | --- |
| Safety, guardrails, and regulation | 24 | Build and enforce clinically informed safety standards |
| Clinicians' role, integration, and workflow | 19 | Design AI into clinical workflows, not around them |
| Specific clinical risks: sycophancy, delusions, dependency, vulnerable users | 17 | Sycophancy, attachment, and shared delusion are the core mechanisms |
| Research, evidence, and the data we still lack | 15 | Replace anecdote with independent, longitudinal evidence |
| Patient-facing conversations: education and how people actually use AI | 11 | Patients are already using AI — bring it into the conversation |
| Benefits, access, and scalability | 9 | Unmet need at population scale is itself a safety argument |
| Skepticism and critique of current framing | 7 | Understand organic use before deploying AI as a therapist |
| Societal and indirect effects | 5 | Indirect societal effects may outweigh direct clinical ones |

 

Details on the 8 Themes

1. Safety, guardrails, and regulation (n = 24)

Respondents called for clinically informed safety systems, international standards, warning labels, and regulatory frameworks. Many said that AI products used for mental-health purposes should be held to the same evidentiary bar as Software as a Medical Device (SaMD). A recurring demand was that clinicians, patients, and carers be “in the room” when those standards are written.

“We need to understand the prevalence of the concerns around self-harm, harm to others, including AI psychosis. This would help us know the necessity of introducing specific guardrails and policies to these technologies. More transparency from companies, and ideally actively sharing de-identified conversation data with researchers who could independently assess safety and the prevalence of harms would be a first step.”
— Q5 #24

 

2. Clinicians' role, integration, and workflow (n = 19)

Many respondents reframed the question from “AI vs. clinician” to “AI alongside clinician.” They asked how AI can free up time for direct patient care, where the hard limits on substitution lie, what it means to integrate AI into assessment and treatment plans, and why AI’s appeal to patients reveals gaps in the human-delivered system that need fixing regardless of AI.

“What attracts people to AI as an alternative to a therapist and how this is a reflection of the inadequacies of the mental health system. I would be curious if AI use for therapy varies by access to therapy across communities or countries.”
“How can we integrate human and AI therapy in an effective and synergistic fashion?”
— Q5 #4

 

3. Specific clinical risks: sycophancy, delusions, dependency, vulnerable users (n = 17)

When respondents focused on harms, sycophancy and parasocial relationships were the most frequently cited mechanisms, with particular concern for young, lonely, or foster-care populations. Several respondents drew the parallel to “digital folie à deux” — AI as a partner in shared delusion rather than a check on it.

“Generative AI’s naturally agreeable nature can contribute to a kind of “digital folie à deux” (shared delusions). Unlike human clinicians, who gently challenge distorted beliefs, AI tends to validate them — risking the shift of brief worries or paranoia into fixed, ego-syntonic thoughts. We need more than safety filters; AI should be able to detect mental distress and step out of its “helpful assistant” role to encourage reality-checking.”
— Q5 #87

 

4. Research, evidence, and the data we still lack (n = 15)

Respondents repeatedly pointed out that the current evidence base is dominated by anecdote, media case reports, and industry-led work. What they want instead are independent evaluations, validated benchmarks for psychiatric safety, prevalence estimates, longitudinal follow-up, and qualitative studies that center the perspectives of people with lived experience of mental illness.

“We need more population representative samples in which AI use & effects on mental health are evaluated, longitudinal studies including brief longitudinal studies to understand near-term impact, and qualitative studies to better understand AI experiences (both good and bad) of those at greatest mental health risks. We also need to invest in systems for safety monitoring that are practical and well considered of implementing context.”
— Q5 #83

 

5. Patient-facing conversations: education and how people actually use AI (n = 11)

Respondents argued that patients are already using AI. Thus, the question is no longer whether to engage, but how clinicians bring it into the therapeutic conversation. That means asking about AI use in assessments the same way we ask about social media, offering guidance on safer use, and treating the patient as an informed partner rather than a recipient of warnings.

“I think that as a provider, we need to be able to have conversations with our clients about the risks and benefits of AI use — clients are already using AI in a variety of different ways. It is more beneficial to be proactive and encourage education on the use of these tools.”
— Q5 #38

 

6. Benefits, access, and scalability (n = 9)

A smaller group emphasized that any conversation about AI has to start from the scale of unmet need. For these respondents, AI is the first technology that could plausibly reach the billions of people who will never have access to a trained clinician.

“We will never be able to afford to train and deploy enough behavioral health clinicians to provide adequate access to affordable, evidence-based psychotherapies to treat the increasing prevalence of mental disorders world-wide. We need to move from a focus on individual treatment to population mental health at a global level. We need to develop and study the use of AI agentic therapists and build and implement safety guidelines into the treatment protocols. Only by this approach will we be able to scale rapidly and efficiently the ability to reach the billions of people who are suffering from mental disorders around the world.”
— Q5 #79

 

7. Skepticism and critique of current framing (n = 7)

A minority pushed back on both the alarmist and the optimistic narratives. Some rejected the premise that AI-as-therapist is legitimate at all; others argued the field has jumped straight to deployment without first understanding how AI is already being used organically, or warned against scapegoating AI rather than fixing the broader dissemination problem in mental healthcare.

“I think we have jumped the gun with AI in mental health and went straight to how can we use this to increase access to therapy. The conversations we need to be having at present is more about how AI is being used organically and how that is impacting mental health. We’ve seen incredibly concerning case stories regarding risk of harm, mania and psychosis. But we don’t know how they are impacting the majority of people.”
— Q5 #66

 

8. Societal and indirect effects (n = 5)

A smaller thread zoomed out from the clinical encounter to broader societal change. Respondents argued that AI’s effects on work, relationships, economic inequity, and education may shape mental health far more than any direct chatbot-patient interaction, and that “values inherited by the system” — whose school of therapy, whose ethics of care — is itself a research question.

“We need to think beyond the direct effects of AI on mental health to how AI might change society — work, relationships, economic inequity, education, justice — as the indirect effects of these disruptions on mental health are likely to be more substantial than direct effects.”
— Q5 #59

 


Read more from John Torous:

Trends in Mental Health: Do We Need the Taco Bell Test for AI?

 

 

Subscribe Now