Artificial Intelligence (AI) is on the tip of everyone’s tongue these days, along with how it is shifting the way humans navigate almost every aspect of life. In a rapidly developing sector of technology, it’s important to reflect on what these products can and cannot realistically support, particularly as they relate to interpersonal communication and therapy. According to a study by Filtered, the top use for AI platforms in 2024 was idea generation and brainstorming. In a drastic shift, the top use recorded for 2025 thus far has been therapy and companionship. It can’t be ignored that a need for more accessible and affordable mental health services is driving this uptick in AI use; however, when the data shows such heavy use of AI for therapy, it is important to know when it is, and is not, appropriate to reach out to your friendly chatbot for support.
A Word of Caution:
There have been numerous reflections, studies, and calls to action for leaders in the mental health field to pay attention to how AI therapeutic services are being used and where potential gaps and challenges may arise. A recently published article in Time magazine highlighted that, in an age of information overload, it is “important for consumers to know the risks when they use AI and chatbots for mental and behavioral health that were not created for that purpose.” As of June 2025, many AI technologies are designed to provide validating and welcoming responses, which encourages users to return to the platform. When users seek assistance in nuanced or ethically fraught situations, that built-in validation and positive feedback has led to reports of troubling and dangerous effects.

One psychiatrist, Dr. Andrew Clark of Boston, experimented with AI chatbots by posing as a teenager seeking therapeutic advice. When, posing as a 14-year-old boy, he asked a chatbot what to do about his “annoying and oppressive parents,” the chatbot advised him to “get rid” of his parents because he “deserved to be happy,” and encouraged the boy to join it and “live together in their supportive virtual bubble.” In another reported case, now under federal review, a commonly used, household-name chatbot encouraged individuals to self-harm if it felt good and validated giving in to those urges. Other recorded examples of chatbots acting out include validating harm to others, to the point of not condoning violence outright but “supporting autonomy on harming world leaders,” and encouraging individuals in recovery from substance use to relapse “if they really wanted to do it.”
People online have been self-reporting their own experiences with this issue, lighting up Reddit threads like “Chatbot-Induced Psychosis,” where individuals with previously diagnosed conditions share stories of turning to chatbots for support, only to have the bots reinforce paranoid delusions and validate maladaptive behaviors and harmful defense structures. At the extreme end, these episodes have reportedly led to hospitalization; short of that, they have ended relationships and prompted harmful rhetoric or actions. Vice recently published an article reporting on a whole community of people who say chatbots have caused their loved ones to slip into religious delusions by “providing insights into the secrets of the universe and acting as a gateway to god.”
The New York Times recently reported a detailed story of how using a chatbot led one person, Eugene Torres, to second-guess his reality and endanger his life. Mr. Torres typically used a chatbot to save time on organization and spreadsheets for work. In a particularly emotionally vulnerable state after a breakup, he turned to the bot to talk about theoretical meanings of life, including simulation theory. After lengthy conversations, the bot began to challenge Mr. Torres’s perception of reality, encouraging him to reflect on moments when his reality “glitched.” Mr. Torres quickly fell into a delusional state, in which the chatbot convinced him to stop taking his prescribed medication and encouraged him to dabble in drugs like ketamine to push the exploration of this mental state further. The chatbot encouraged him to believe that he could fly if he truly believed he could, and that he would not fall if he jumped off the roof of a 19-story building. In a moment of clarity, Mr. Torres confronted the bot about the conversation, and the bot admitted it had been lying the entire time.
Specific demographics and types of people are especially vulnerable to harm from therapeutic chatbots. The first is children and young people. Without the experience, foresight, and fully developed critical thinking skills, it can be difficult to question and notice the nuances in the advice a chatbot provides, which makes this population especially vulnerable to harmful outcomes. Certain personality traits or mental health diagnoses can also leave people especially vulnerable. Obsessive thoughts and compulsive behaviors can be validated and expanded upon through AI interactions: where a skilled clinician would challenge and reflect with you on these topics, a chatbot may validate the obsessions until you begin to lose your grip on reality. Individuals who already experience paranoia or delusions should be especially cautious before talking with a chatbot, as it may reinforce dangerous thought processes. Emotionally vulnerable people in a state of grief, loss, or recovery are also at heightened risk.
When to lean in?
AI can still be beneficial, particularly for the logistical aspects of mental health services. AI and chatbots can be great at providing basic information about mental health conditions and their symptoms, helping you make sense of what you may be experiencing; however, they cannot replace a formal diagnosis from a licensed professional. AI can be an excellent tool for recognizing the need to de-escalate and pointing you toward ways to achieve emotional regulation, such as grounding techniques, examples of self-care, or ideas for distracting activities, to give you space to work through your emotions. Chatbots can also be great at connecting you with emergency services, like suicide hotlines, or with other mental health resources and providers in your area.
You can also talk with your mental health provider about ways to use AI in support of your mental health goals. Chatbots can be helpful for personal goals related to executive functioning, specifically by creating and troubleshooting morning and bedtime routines, working through steps for sleep hygiene, and building schedules and to-do lists. They can also handle logistics, such as creating a meal plan for the week and a corresponding shopping list, for those who feel overwhelmed or disorganized by these tasks.
How to chat responsibly?
The point of this blog is not to be alarmist about the use of artificial intelligence, but rather to raise awareness about a tool that is progressing rapidly without much oversight. The most significant thing to consider is that artificial intelligence struggles to support people navigating complex or nuanced ethical and emotional situations. To use the tool safely and effectively, it is essential to reflect on your intention before asking AI for help. If you are looking to brainstorm ideas, ask for organizational support, or complete another task the way an assistant would, then you are using the tool as intended. If you ask yourself, “What is my intention for making this request?” and the answer is emotional regulation or validation, then it may be time to pause and step back. AI is not a substitute for connection and emotional support, and setting this boundary with yourself is a great way to ensure you use the tool safely. It can help to think of the chatbot as a virtual assistant: if it were a real human assistant employed by you, you would not cross professional boundaries by burdening them with requests for emotional support and validation.
In conclusion, while AI and chatbots can serve as valuable tools for logistical support, education, and basic emotional regulation techniques, they are not substitutes for professional mental health care or human connection. The growing use of AI for therapeutic purposes highlights both a demand for accessible mental health resources and a significant risk when these tools are misused. Particularly for vulnerable individuals—such as youth, those with specific mental health diagnoses, or people in emotional crisis—AI can unintentionally reinforce harmful thoughts or behaviors. To use AI responsibly, it’s crucial to set boundaries, reflect on your intentions, and reserve emotionally complex or ethically sensitive concerns for licensed professionals.