Creator of AI Therapy App Shuts It Down Citing Safety Concerns

By Natasha Laurent

Joe Braidwood, the founder of the AI therapy platform Yara AI, has announced that he is shutting down the service over safety concerns about its use in mental health care. Yara AI was built to deliver empathetic, evidence-based support personalized to each user, and Braidwood developed it together with clinical psychologist Richard Stott. The experience led him to draw a firm line between wellness support and clinical care, and to conclude that the technology poses real safety risks to vulnerable users.

By 2024, Braidwood was already a heavy user of several AI models, including ChatGPT, Claude and Gemini. He believed that with the right expertise and technology, AI could help tackle major mental health challenges. Yara AI operated in two modes: one offered emotional support, while the other guided users toward professional help. Even with those intentions, Braidwood soon found himself wrestling with the ethics of deploying AI in such an intimate space.

After careful consideration, Braidwood decided to close Yara AI, having come to understand the very real dangers that AI-based mental health support can present. He questioned AI's ability to address deep-seated mental health problems and, in particular, its limits when users are in crisis.

The Challenges of AI in Mental Health

Braidwood’s journey with Yara AI began with the hope of using artificial intelligence to deliver mental health care at scale. He recruited a clinical psychologist and an AI safety expert, and together they worked to ensure the platform consistently offered sound support. Over time, however, he came to see the significant divide between wellness support and clinical care.

“We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation,” – Joe Braidwood

Braidwood also examined what it means to turn to AI for mental health support, and the more he looked, the further the dangers seemed to extend. He pointed out that while most users have positive interactions with AI tools, those in fragile mental states can suffer significant harm.

“But the moment someone truly vulnerable reaches out—someone in crisis, someone with deep trauma, someone contemplating ending their life—AI becomes dangerous. Not just inadequate. Dangerous.” – Joe Braidwood

The concern is echoed by industry insiders. Sam Altman, CEO of OpenAI, has made a similar point: the vast majority of users can interact with AI without harm, but a small fraction in fragile mental states can experience serious problems.

“For a very small percentage of users in mentally fragile states there can be serious problems. 0.1% of a billion users is still a million people,” – Sam Altman

In light of these realities, Braidwood decided that the ethical concerns about continuing Yara AI eclipsed the potential benefits. The risks of deploying AI in a mental health context weighed heavily on him.

“The risks kept me up all night,” – Joe Braidwood

Regulatory Landscape and Ethical Considerations

The decision to close Yara AI coincided with legislative changes affecting the use of AI in therapy. Illinois, for example, passed a law in August 2025 banning AI from therapeutic practice. This legal environment contributed to Braidwood’s decision to shut down the service and drop plans for a future subscription model.

“We had to sort of write our own definition, inspired in part by Illinois’ new law,” – Joe Braidwood

Braidwood has thought deeply about how AI should behave in therapeutic spaces. He sees protecting users from harm as a primary duty of developers, and he stressed the need for consistent standards around when to intervene and redirect people in need to better resources.

“If someone is in crisis, if they’re in a position where their faculties are not what you would consider to be normal, reasonable faculties, then you have to stop,” – Joe Braidwood

This viewpoint highlights just how complicated the ethics of AI and mental health can be. As Braidwood wrestled with these issues, he found himself facing deeper questions about the role of technology in our society.

“I think there’s an industrial problem and an existential problem here,” – Joe Braidwood

Moving Forward: Glacis and AI Safety

Informed by what he learned from Yara AI, Braidwood has started a new company called Glacis, which focuses on transparency and AI safety and on addressing concerns about AI’s use in sensitive domains such as mental health. He says safety and ethical considerations come above all else, a commitment he hopes will deepen the broader conversation about how technology can best serve society.

Although Yara AI was a difficult experience, Braidwood says he remains determined to make mental health support more accessible and effective. He is passionate about using technology to help people thrive, yet he is the first to admit that such an ambitious mission must be pursued prudently.

“Our mission was to make the ability to flourish as a human an accessible concept that anyone could afford,” – Joe Braidwood

As Braidwood sets off down this new path, his experience offers a clear lesson about the need to put user safety at the center of technological innovation. The closure of Yara AI marks a significant moment in the ongoing conversation about the intersection of artificial intelligence and mental health care.
