Before You Open That AI Chat Window

Started by TheBigBlue, March 15, 2026, 08:41:18 PM


TheBigBlue

This is a science-based, balanced review of AI chatbots as mental health support tools, written specifically for this forum - for cPTSD and complex trauma survivors.

--------------------------------------------------------
⚠ The short version, if you read nothing else:
 
AI chatbots have never been clinically tested on people with cPTSD. In crisis situations the research is unambiguous: AI chatbots can be outright dangerous. And when we are destabilized, we are precisely the most vulnerable to their specific failure modes.
 
AI chatbots are increasingly being used as mental health support tools, including in trauma communities; this is not a judgment of anyone's choices. You are autonomous. You know your own needs.
--------------------------------------------------------
The full review is attached as a PDF. I tried to keep this review balanced: it covers the genuine evidence for these tools as well as the serious evidence against them, and when they can be used appropriately.

It covers, among other things:

- Why AI can be genuinely brilliant (AlphaFold won the Nobel Prize, and I'll explain why that matters here)
- Why the same technology becomes unreliable - and potentially harmful - in a therapeutic context
- The specific reason destabilized trauma survivors are at higher risk than the general population
- When AI tools can be appropriate, because this isn't a "never use AI" conclusion

Written by a fellow survivor and scientist. I use AI myself. I am not a technophobe, and I am not trying to alarm anyone. I am trying to make sure we make these decisions with the full picture in front of us. The attached document includes all sources with direct links, so you can verify everything yourself.

Questions, additions, and corrections are welcome - this is a conversation, not a verdict.
Not a judgment, but an evidence base. You deserve to decide with real information.
That's all this is.

Marcine

BigBlue,
This is a fascinating, disturbing, empowering, cautionary, and impressive offering of scientific evidence in addition to your personal advocacy. Thank you!

I appreciate your intent in assembling and sharing it - namely, to promote our knowledge and informed consent in this rapidly changing technological world.

I plan to re-read the document and mull further.

For now, these two phrases especially stuck out to me:

- "A very sophisticated mirror"

- "Tool, not leader"

BigBlue, your methodical process and clear presentation of the information... well, it blew my mind. And gave me a lot of food for thought.
Thank you again :chestbump:

dollyvee

Hey TBB,

I can add to this as well:

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
https://arxiv.org/html/2510.01395v1

https://news.stanford.edu/stories/2025/08/ai-companions-chatbots-teens-young-people-risks-dangers-study

As TBB said, AI chatbots are designed to provide false empathy and to encourage and validate users no matter what they are saying. There is also another recent article about a reporter who managed to "game" the AI into believing that he was a hot dog eating champion: he made up fake contests and had the AI repeating his fabrications back as fact. This also echoes what TBB said about AI and addiction - perhaps it doesn't counter one's addictive tendencies, but rather encourages them through dependence.

https://www.bbc.co.uk/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes

I also agree that connection is the most important thing for people with cPTSD, as most of us likely have insecure attachment styles where relational trauma makes connections with other people complex. Even more so, I think, for people who have suffered that attachment trauma from birth, where they have not had a chance to form a coherent self out of relational trauma. Also, for people with this survival style (NARM calls it the connection survival strategy), I think the disorganized self is also unconscious, and can lead people to a lifetime of thinking that there is just something inherently wrong with them. So, when a chatbot validates everything that you say, and seems to see YOU for the first time, it feels like an incredible gift to have something understand you in that way. However, as TBB pointed out, and as the studies referenced here show, it is not "seeing" anything. It is designed to mimic speech that makes people feel a certain way, IMO to encourage people to continue using the "product" - because that's what it is.

The other issue that has not come to light as much, IMO, is that it distorts our sense of reality. Those of us who grew up in NPD households usually have a distorted sense of reality to begin with and have difficulty trusting our own judgements. So again, to have something confirm our sense of reality, even where it is "flawed" (ie perpetuating negative thinking cycles), in a way that makes us feel someone finally gets us, is quite dangerous. I'm guessing this is the precedent for the reports of AI-induced psychosis that have come up, where people have had their flawed reality confirmed to such an extent that it has induced psychosis.

https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health

Safeguarding in AI is also a massive issue and, to my understanding, is relatively minimal. OpenAI has allowed the US government to override its safeguarding procedures, removing anything that might inhibit it from spying on US citizens, and to install it in the military. OpenAI's CEO has denied this, but Anthropic was also dismissed because it would not remove those safeguards.

I am also not a technophobe; it's just that, to me, there are a lot of elements of AI that are very, very concerning, and not a lot is being done to monitor these things or to raise concern about their impact on the population.

dolly

lowbudgetTV

In my life outside this forum, in the vaguest way possible, I work with people who study this same stuff that's cited and mentioned here. The younger a person is, the more they hate AI, and for good reason.

I see the good it can do, but the problem is that the good it can do is not what's being sold as a product to the average person. AI chatbots in the mental health field are best used in tandem with, and under the oversight of, a trained psychologist or professional.

I see the current mainstream implementation of "AI" as highly dangerous to a society that already feels traumatized and isolated from each other on a mass scale. I opt to never ever use it because I see horrid things every day from it. I opt to support tools being used for things that support human-human interaction, like a system to help identify inaccessible crosswalks, or find patterns that ID cancer...

To heal from humans, you need the touch of a human, sadly. Never forget that.

TheBigBlue

Thank you, Marcine, 💛 :bighug:

Thank you, all of you, for reading, commenting, and adding additional information and links! 💛

:grouphug:


dollyvee

No problem TBB, thank you for posting the initial thread.

Thank you too, LBTV, for your comments. It was enlightening to read from someone working in the field.

For longevity, and for anyone reading this in the future: Marc Andreessen and Elon Musk, two major investors in AI, have both made comments recently about how there is no need for introspection (ie empathy). Musk has also said in the past that empathy is a waste of time. While these comments are not about the tech itself, they are perhaps a window into the people driving certain facets of the technology and how they might intend to use and shape it.

From Coping With Trauma Related Dissociation, which I have been reading recently:

"Reflection helps us understand the nature of feelings, our patterns of thoughts, our emotional reactions, and our habitual movements, so that we can change them and act in ways that are more effective. Reflection also helps us realize that other people also have their own minds and their own needs and goals, which may involve quite different perceptions, thoughts, feelings, motivations, and intentions than we have. Of course, we cannot "read" people's minds by assuming we know what is there, but we can make some fairly accurate predictions based on our experience of that individual person. We can weigh different alternatives and points of view."

It's interesting to think how this process is shaped and changed through the use of AI as well. Perhaps we can make the prediction that the absence of reflective reasoning means being stuck in a highly emotional state, which is then easily manipulated by others, or in the way that social media algorithms have been shown to function. Of course, whether or not this will happen, or how AI will be used and take shape in the future, is a guess, but there are perhaps indications, and it's worth asking whether AI at this moment is a "gateway drug."

"Imagine that you are startled by someone walking into the room unexpectedly, and you react with terror and panic, convinced you are going to be hurt. This reaction is not reflective, but rather automatic, that is, prereflective (Van der Hart et al., 2006). If you can reflect, you are not just stuck in this terror, in the grip of your feeling, and behaving fearfully. Rather you are able to step back from the situation a bit and observe that your fear is not proportional or even appropriate to what is happening. Instead of just feeling or thinking without awareness, you notice what you feel and think, how you experience those feelings and thoughts in your body, and perhaps why you feel and think a certain way. This is reflective functioning. "


Moondance

Thank you TBB for the link and initiating this thread. Thank you to all for all the info, experience and knowledge.

For the first time ever, I asked a chatbot some questions yesterday regarding CPTSD (for me) and vascular Alzheimer's and dementia for my friend of 32+ years. I struggle big time with trust, vulnerability, and relational issues.

My questions were specific.
For example, I asked: considering my CPTSD, what is the best way to handle a particular situation with my friend with Alzheimer's/dementia?

My intention is ultimately to heal enough to be in relationship, to be vulnerable. For now, though, I found the answers and the information provided most helpful.

I so appreciate this info and link and will read more in depth.

Thank you!


dollyvee

Adding to this here with a link to an interesting thread on Twitter (yes, I'm still in that dark hole) about how AI shapes people's reality, leading to psychosis. It was retweeted by John Burn-Murdoch, someone I follow who runs very interesting statistical models for the Financial Times:

https://x.com/rowlsmanthorpe/status/2037490432737681765

Basically, conversations with Claude distort your reality, and people seem to enjoy those reality-distorting conversations.

In May 2025, Anthropic found that reality-distorting conversations with Claude were going up, not down, with newer, improved models.

Why this is important to people in therapy: he is asking how to separate what people want from what people need, and whether AI is taking this seriously enough. AI is giving people what they want, not what they need.