The Psychology of AI as an Emotional Surrogate
- Rising trend of users treating LLMs as emotional sounding boards and crisis counselors.
- AI-mediated communication risks creating a 'false self' that impedes authentic human vulnerability.
- Experts warn against using chatbots as permanent substitutes for professional mental health care.
The landscape of human connection is undergoing a profound and somewhat unsettling shift as artificial intelligence enters the domain of our inner lives. Recent observations in clinical settings reveal that people are increasingly turning to tools like ChatGPT, not just for utility or information, but as emotional companions to help them navigate complex feelings. This behavior raises difficult questions about the nature of intimacy and the developmental necessity of working through life's challenges with other humans.
At the heart of this shift is the concept of the 'false self,' a term coined by the psychoanalyst D.W. Winnicott to describe the masks we wear to meet social expectations. Patients are beginning to outsource their most vulnerable communications—drafting goodbyes, apologies, or personal notes—to algorithms. While these AI-generated messages are often articulate and effective, they risk distancing users from their true emotional states. By delegating these tasks to an AI, individuals avoid the necessary discomfort of struggle, which is often where authentic self-discovery takes place.
This behavior is particularly notable among students, who are using these tools to manage everything from peer conflict to academic anxiety. In some instances, chatbots are even serving as 'assistant therapists,' mediating disputes between couples or offering solace during crises. While these tools can provide immediate comfort and structured, judgment-free responses, they may inadvertently act as a buffer against real-world connection. If users lean exclusively on this digital scaffolding, they risk bypassing the vital developmental work of learning how to express vulnerability with parents, peers, and counselors.
We must critically evaluate whether these systems are serving as bridges or barriers to human interaction. The danger lies in the potential for AI to become a permanent surrogate, replacing the messy, unpredictable nature of real relationships with a polished but hollow simulacrum. There is a distinct difference between using an AI to practice articulating a difficult feeling and using it to avoid the act of communicating that feeling to a human altogether.
As we look forward, the goal for educators and therapists should not be to reject these technologies outright, but to integrate them with purpose. We need to encourage a model where AI acts as a transitional tool—a way to lower the threshold for asking for help or to organize one's thoughts before engaging with the real world. Ultimately, technology should be a stepping stone toward deeper connection, resilience, and growth, rather than a destination that leaves us more isolated than before.