Spring Health Debuts VERA-MH AI Safety Standard
- Spring Health co-develops VERA-MH, the first open-source safety standard for mental health AI systems
- Internal AI scores 82/100 on safety benchmarks, outperforming general-purpose chatbots in clinical risk detection
- Framework prioritizes clinician escalation and human-in-the-loop oversight for high-stakes psychiatric crisis interventions
Spring Health has introduced VERA-MH, a pioneering open-source safety standard specifically engineered to evaluate AI in the sensitive domain of mental healthcare. In an industry where general-purpose chatbots often lack clinical guardrails, this new benchmark provides a transparent methodology for measuring how effectively AI recognizes and responds to psychiatric crises.
The company’s internal AI model achieved a score of 82 out of 100, a significant improvement over its initial baseline of 76 following targeted iterative development cycles. Unlike broad consumer models, this system is purpose-built to summarize member intake data and identify suicide risk in real time. When the system detects potential danger, it immediately triggers an escalation protocol to a licensed clinician, ensuring that the AI functions as a bridge rather than a replacement for human care.
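Spring Health has not published VERA-MH's internal logic, but the escalation pattern described above can be sketched in a few lines. The names, threshold, and routing messages below are all hypothetical illustrations of a human-in-the-loop triage step, not the company's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical cutoff: scores at or above this route to a human clinician.
RISK_THRESHOLD = 0.8

@dataclass
class IntakeSummary:
    """An AI-generated summary of member intake data (illustrative)."""
    member_id: str
    summary_text: str
    risk_score: float  # 0.0-1.0, produced by the risk-detection model

def triage(intake: IntakeSummary) -> str:
    """Route an AI-scored intake: high risk escalates to a licensed clinician,
    everything else stays in the standard review queue."""
    if intake.risk_score >= RISK_THRESHOLD:
        return f"ESCALATE: notify on-call clinician for member {intake.member_id}"
    return f"ROUTINE: queue member {intake.member_id} for standard review"
```

The key design point this sketch captures is that the model never acts on a crisis itself; crossing the threshold only changes *who* sees the case and how fast.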
This human-in-the-loop architecture is central to the framework's philosophy. By integrating practicing clinicians and suicide prevention experts into the development phase, Spring Health aims to set a rigorous industry precedent for clinical integrity. The move signals a shift away from opaque safety definitions toward measurable, clinically grounded accountability, ensuring that technology enhances rather than compromises the therapeutic relationship.