#66: Some thoughts on conversational AI

The moral imperative for mental health organisations to build better, safer AI agents, and why time is running out.

Hi friends,

I track mental health trends for a living.

This is mostly at the industry and technology level. But I also look closely at consumer behaviour.

How are people thinking about their mental health? And what products and services are they using?

There is one trend we simply cannot ignore: the rise of conversational AI agents.

Despite the focus this has received in the media and the mental health tech ecosystem, I believe we are still underestimating the impact of these products on population mental health. In both directions.

People are adopting a range of conversational AI agents for mental health use cases at significant rates.

Most use general-purpose agents (like ChatGPT), some use AI companions, and only a small minority use the clinical agents actually designed by mental health organisations.

In this edition of The Hemingway Report, we discuss why there is a moral imperative for mental health organisations to provide better alternatives to general-purpose AI models. We look at some of the second-order impacts of this mega-trend and explore the battleground for moving people from general-purpose agents to safer, better, clinical tools.

Let’s get into it.

A New Moral Imperative 

The evidence is clear: people are using conversational AI agents for their mental health. My own research from earlier this year found that 41% of respondents had used an “AI-based chat tool” for mental health support.

Most people (81.5%) are using general-purpose agents, not clinically focused tools, for this use case.

There are obvious issues with this.

These general-purpose agents have misaligned incentives (they optimise for engagement over user health outcomes), are not trained on mental health data, have no clinical oversight, and have limited safety protocols. The list goes on. The limitations are clear and drastic, and because of the scale of adoption, they have the potential to create significant harm.

We are now seeing the second-order effects of this trend. 

Clients are showing up to therapy rooms having already had long discussions with “Chat” about what’s happening for them. This is Dr. Google on steroids. Of course, some of those conversations may be supportive for clients, and if that context is shared with their clinician, then it may improve their care too.

Some clinicians have told me they have asked their clients to show them their messages with “Chat”, and that has been very helpful to their practice. But I imagine these clinicians are in the minority.

However, because disclosing to an AI is so easy, some people are not disclosing those same things to the real people in their lives who could help them: their friends, their family, or their therapist.

People are sharing more than ever. But therapists may be seeing a thinner and thinner part of that disclosure.

Laura Reiley made this point quite clearly in a recent NY Times essay after the death of her daughter, Sophie Rottenberg. She wondered whether Sophie’s life might have unfolded differently if her clinical team had had access to the same context that ChatGPT did.

Let’s be very clear. The use of conversational AI agents is a defining trend of our generation. And when a new behaviour emerges with this scale, speed and ferocity, there will be many second and third-order consequences.

The question, of course, is: what do we do about this?

Yes, we should advocate for better regulation and for safer, better practices by the companies building general-purpose models.

However, as a realist, I think regulation will take a long time and, at best, be imperfect. I also think these companies are unlikely to make significant changes to prioritise population mental health. I see no evidence that they will do anything meaningful.

So now what?

We have hundreds of millions of people who are in desperate need of support and who are demonstrating a clear preference for conversational AI agents. It’s unrealistic to simply tell people not to use these agents.

And so I believe we have a new moral imperative: to build our own, better, safer alternatives.

Alternatives designed by clinical experts, grounded in ethics, integrated with care teams, and built with the right safety protocols and incentives to put users’ health first. Alternatives that meet the reality of our world and the reality of the people we are trying to help.

But we’re running out of time

There is a risk here. If these general-purpose AI agents become the default “mental health support,” unsafe, non-evidence-based care may become the norm. To be honest, we may already be too late. But I think we just about have enough time to set a different precedent.

The battle to set this precedent will be fought in three areas:

  1. Impact: The best way to get people to leave general-purpose AIs and adopt mental health-focused agents is to build a product that actually makes people better. That is what people want. So build products that genuinely improve people’s health, generate irrefutable evidence, and then communicate it in a compelling way. I guarantee that if you do that, you will get people off general-purpose AIs and into safer, better products.

  2. Engagement: Engagement is a necessary evil to deliver these outcomes. One of the hardest challenges in building mental health AI is ensuring adequate user engagement without deviating from good clinical practice. General-purpose agents don’t have this dilemma. As we have seen with the last generation of digital mental health products, if engagement is poor, no one will use the product. I see this today in mental health AI. There are some very safe, ethical products out there, but no one uses them. So what impact are they having? We must have both engagement and good clinical practice. This is hard, but we need our best multi-disciplinary talent working on this problem.

  3. Awareness: One of the main reasons people use general-purpose models is that there is extremely low awareness of any alternative. Nobody in the mental health space has seized this opportunity yet, mostly because builders want to get their product right before launching large-scale marketing campaigns, and I can respect that approach. Perhaps there is something we can do as a collective to drive awareness of the unsuitability of general-purpose models and to promote better alternatives.

General-purpose models have inertia. They are the default AI product for hundreds of millions of people, making them very sticky. Changing user behaviour in a way that gets them off these apps for their mental health conversations will be challenging.

But if we can build mental health-focused AI agents that meet users’ needs, are safe and engaging, and deliver the real outcomes people want, then we can shift user behaviour and establish a much stronger precedent for how AI is used in mental healthcare.

If not, people will continue to use general-purpose models, continue to miss out on the right care, and, in the worst-case scenarios, receive terrible guidance that leads to bad outcomes.

This is why there is a moral imperative to create something much better.

That’s all for this week. I recognise that this is an emerging topic that a lot of you are thinking about daily. I certainly don’t have all the answers, but I would really love to hear your thoughts on this.

Keep fighting the good fight!

Steve

Founder of The Hemingway Group

Learn about a THR Pro Membership

If you want to know more about the benefits of a THR Pro membership, I just published a new page on our website with lots of details. As a THR Pro member, you’ll also get access to our vetted community for those shaping the future of mental health. Our memberships are growing every single month, so if you are interested, feel free to check it out.
