Research Report: Mental Health in an AI World

Results from our research with 460 real people

AI in mental health comes with a lot of buzz and a lot of hope, but also a lot of concern.

Despite the attention it’s received, there has been little data on how people are actually engaging with AI for their mental health.

Who’s using these tools? At what rates? Why? How? And how do they really feel about their experience?

I wanted to try to shed light on that.

So I surveyed 460 people, asking them over twenty questions about how they use AI agents for their mental health.

What we found was super interesting.

To start, we found that AI agents are now a common source of mental health support: 41% of respondents reported having used a conversational AI agent for this purpose.

We also learned that most of this usage (81.5%) involves AI agents that were not designed for mental health purposes (ChatGPT, Claude, etc.), which is, of course, concerning.

In this Pro edition of The Hemingway Report, I share all of our findings in a 31-page report. 

It covers:

  • Who is using AI agents for their mental health and why?

  • What are the most popular AI agents?

  • How are they using these tools? For what use-cases?

  • What do people like and dislike about using AIs for their mental health?

  • How do people compare AI agents to traditional talk therapy?

  • Do people trust these tools, and what impacts this trust?

  • Quotes from respondents shedding light on their lived experience with these tools.

  • And much more.

Let’s get into it.

This article is for THR Pro members only

Consider becoming a THR Pro member to access this article, as well as more insights, analysis, and trends on the mental health industry.
