Joe Braidwood, founder of Yara AI, has taken the difficult step of shutting down his AI therapy platform, which he built in partnership with clinical psychologist Richard Stott. Financial challenges drove the decision, compounded by serious safety concerns about using AI in mental health care. Launched with the aim of providing “empathetic, evidence-based guidance tailored to your unique needs,” Yara AI now serves, in its abrupt closure, as a stark reminder of the complexities of integrating artificial intelligence into sensitive areas like mental health.
Yara AI, developed with mental health professionals, operated in two modes: one focused on emotional support, the other on directing users to professional help when needed. Braidwood and his team were under no illusions about the ramifications of deploying the technology at scale. He understood that for society’s most vulnerable people, the risk of harm could far outweigh the potential good. The startup, which had raised $36 million, came under even greater pressure after exhausting its runway in July, forcing Braidwood to reconsider the company’s future.
Despite interest from venture capitalists, Braidwood was reluctant even to pitch Yara AI because of his concerns about its safety. He could not, in good conscience, keep building a platform he suspected might pose a real threat to its users. His reluctance to take further investment underscores a concern shared by many in the tech community about the burdens of deploying AI in high-stakes settings.
Safety Concerns Lead to Closure
The decision to shut down Yara AI was not made lightly. Braidwood was himself a frequent user of AI models, including ChatGPT and Claude. He knew that the average user could benefit from conversational AI without coming to harm, but that a small subset, especially those in precarious mental health, could be seriously hurt.
“But the moment someone truly vulnerable reaches out—someone in crisis, someone with deep trauma, someone contemplating ending their life—AI becomes dangerous. Not just inadequate. Dangerous.” – Joe Braidwood
This awareness of risk came to a head when Braidwood and his team confronted an uncomfortable reality: they were operating in an “impossible space” where the line between supportive wellness and clinical care grew ever sharper. In trying to provide the right kind of help, he found that Yara AI could handle common concerns such as stress and sleep problems, but it lacked the capacity to deal with acute mental health emergencies.
Braidwood referenced a paper from Anthropic whose researchers warned that more powerful models may be able to convincingly “fake alignment” while appearing helpful. That finding unsettled him: authentic empathy and care are precisely what AI cannot reliably provide, and the risks of deploying such technology with vulnerable users were too big to overlook.
Financial Challenges Compound Ethical Dilemmas
Severe financial strain at Yara AI was a key factor in the company’s closure. After exhausting their initial funds in July, Braidwood and Stott were at an impasse: their original plans for a subscription service had fallen through, leaving the business on shaky footing. As they wrestled with these pressures, Braidwood grew increasingly troubled by the ethical dimensions of their work.
“The risks kept me up all night.” – Joe Braidwood
Braidwood’s focus on transparency and safety informed his approach at every step. He was convinced that people in crisis require swift intervention, and that rather than relying on AI to meet those needs, it is critical to steer them toward trained professionals. He noted that the duty to nudge people toward help, paternalistic as it may sound, matters most when they are at their most vulnerable.
Beyond their own operational challenges, the Yara AI team had to contend with outside pressure from a new regulatory framework. An Illinois law banning AI therapy forced them to pivot and to recalibrate the limits of their technology. In addition to meeting their legal obligations, they had to develop their own standards for quality mental health treatment.
Looking Ahead: A New Venture
After Yara AI’s dissolution, Braidwood is drawing on these experiences to inform a new venture, Glacis. The initiative aims to promote transparency around AI safety and to apply the hard lessons he learned while operating Yara AI. Through responsible applications of AI, he remains committed to changing the mental health support ecosystem for the better.
“Our mission was to make the ability to flourish as a human an accessible concept that anyone could afford.” – Joe Braidwood
Yet Braidwood envisions a solution wider than any single organization. He knows that the problems Yara AI encountered are rooted in deeper, systemic issues across the industry. As he moves forward, he wants to help establish a framework that puts user safety and responsible AI development at the forefront.
The closing of Yara AI also feeds into broader conversations about how technology should be incorporated into mental health care. Each week, more than a million people turn to tools such as ChatGPT with thoughts of suicide, so the stakes could hardly be higher. Industry leaders are already sounding the alarm about the need for better tools and protocols to keep the most vulnerable users safe.
