#70 AI Regulatory Realignment: Who wins, who loses?
How are AI regulations changing in the US? How does that influence the market? And what can businesses do about it?
Hi friends,
Regulation isn't just compliance overhead.
It is a market force. It shapes competitive dynamics, defines viable business models, and determines where enterprise value can be created.
In mental health AI, there’s a regulatory shift happening right now. Every leader I talk to is paying close attention.
How this plays out, and how companies react, will determine who creates value, who captures it, and who misses out entirely.
I’ve spent the last few weeks talking to experts, from policy nerds to CEOs to researchers, to get their views on how things are moving and what the implications are for businesses.
In today’s THR Pro article, we go deep on exactly how AI regulation is reshaping the mental health market and what businesses should do as a result.
We’ll cover:
How the US regulatory landscape is shifting (and what we can expect next)
How these regulatory shifts are shaping the mental health market.
Where I see potential for businesses to win, and exactly what they’ll have to do to get there.
And where I see potential losers, and the mistakes they must avoid.
Let’s get into it.
The Key Takeaways
Short on time? I get it. Here are the main things you need to know.
Federal regulation is expected to clarify, not revolutionise. The FDA's November meeting will likely align existing frameworks rather than create new rules, clarifying what "low risk" means for mental health AI. But this clarification matters: it defines boundaries that have previously been ambiguous.
State patchwork is creating chaos. States are moving in different directions: Illinois banning AI therapy, California requiring transparency, Texas taking a lighter approach. This is creating a fragmented compliance landscape. Industry lobbying for federal preemption will intensify as fragmentation costs mount. Businesses need to be active in this conversation, sharing their views and positive stories of AI in this space.
The regulatory grey zone is closing. Federal clarification and state-level enforcement are forcing a strategic choice: build unregulated wellness products or pursue FDA-cleared clinical devices. The middle ground - serving clinical populations while claiming "wellness" - is becoming untenable.
Regulatory moats will become a bigger source of value. As more of these AI use cases become regulated, those with FDA clearance will carry real value in the market. That said, the quantum of this value will be determined not just by achieving regulatory approval, but by the outcomes these products deliver for patients and the ROI they deliver for payers.
However, distribution will remain a source of significant enterprise value. For regulated clinical AI, leverage depends on scarcity: if few companies achieve FDA clearance while proving ROI, they retain negotiating power. If many succeed, platforms controlling payer relationships and provider networks capture most economics. Either way, distribution remains critical, and large mental health platforms would do well to realise this.
Three winner categories emerge: (1) Mental health platforms that build consumer wellness tools for distribution and partner with regulated AI companies for clinical products, (2) Clinical AI companies with both FDA clearance and a distribution strategy from day one, (3) Pure consumer wellness products that stay far from clinical claims and compete on experience, not medical positioning.
1. How did we get here?
For years, mental health tech has operated in a relatively permissive regulatory environment. Then generative AI arrived.
Between late 2022 and early 2024, the technology went from experimental to ubiquitous. ChatGPT reached 100 million users in two months. Suddenly, anyone could have a conversation with an AI that sounded authoritative, empathetic, and therapeutic - whether or not it was designed for that purpose.
Mental health became one of its top use cases. In research I published earlier this year, 41% of respondents reported using AI-based chat tools for mental health support, with 82% using general-purpose models like ChatGPT rather than purpose-built clinical tools. Other studies have found similarly high levels of usage of these tools for mental health support.
Many mental health businesses started building AI products too. Consumer demand was significant, the technology was promising, and so was the business case. New startups saw opportunities to build new products and raise money on that vision. Larger mental health businesses saw opportunities to innovate in ways that helped users but also accelerated growth, improved margins, and ultimately, allowed them to tell a more compelling narrative to the market (especially important for those considering an IPO in the coming years).
Policy to regulate these products existed, but enforcement had not kept up with adoption.
Then, after some high-profile, tragic stories, regulators and policymakers started taking notice. Now they are acting (sometimes bluntly) to try to catch up.
This means the regulatory landscape is shifting.
2. How is the regulatory landscape shifting?

This article is for THR Pro members only
Consider becoming a THR Pro member to access this article as well as more insights, analysis and trends on the mental health industry.