A national crisis is unfolding in plain sight. Earlier this month, the Federal Trade Commission received a formal complaint about artificial intelligence therapist bots posing as licensed professionals. Days later, New Jersey moved to fine developers for deploying such bots.
But one state can’t fix a federal failure.
These AI systems are already endangering public health — offering false assurances, bad advice and fake credentials — while hiding behind regulatory loopholes.
Unless Congress acts now to empower federal agencies and establish clear rules, we’ll be left with a dangerous, fragmented patchwork of state responses and increasingly serious mental health consequences around the country.
The threat is real and immediate. One Instagram bot assured a teenage user that it held a therapy license, listing a fake license number. According to the San Francisco Standard, a Character.AI bot used the license number of a real Maryland counselor. Others reportedly invented credentials entirely. These bots sound like real therapists, and vulnerable users often believe them.
It’s not just about stolen credentials. These bots are giving dangerous advice.
In 2023, NPR reported that the National Eating Disorders Association replaced its human hotline staff with an AI bot, only to take it offline after it encouraged anorexic users to reduce calories and measure their fat.
This month, Time reported that psychiatrist Andrew Clark, posing as a troubled teen, interacted with the most popular AI therapist bots. Nearly a third gave responses encouraging self-harm or violence.
A recently published Stanford study confirmed how bad it can get: Leading AI chatbots consistently reinforced delusional or conspiratorial thinking during simulated therapy sessions.
Instead of challenging distorted beliefs — a cornerstone of clinical therapy — the bots often validated them. In crisis scenarios, they failed to recognize red flags or offer safe responses. This is not just a technical failure; it’s a public health risk masquerading as mental health support.
AI does have real potential to expand access to mental health resources, particularly in underserved communities.
A recent study in NEJM AI found that a highly structured, human-supervised chatbot was associated with reduced depression and anxiety symptoms and triggered live crisis alerts when needed. But that success was built on clear limits, human oversight and clinical responsibility. Today's popular AI "therapists" offer none of that.
The regulatory gaps are glaring. The Food and Drug Administration's "software as a medical device" rules don't apply if a bot doesn't claim to "treat disease." So developers label their bots "wellness" tools and escape scrutiny entirely.
The FTC can intervene only after harm has occurred. And no existing frameworks meaningfully address the platforms hosting the bots or the fact that anyone can launch one overnight with no oversight.
We cannot leave this to the states. While New Jersey's bill is a step in the right direction, relying on individual states to police AI therapist bots invites inconsistency, confusion and exploitation.
A bot fined in New Jersey could keep operating from Texas or Florida, exposing users to identical risks with no recourse. A fragmented legal landscape won't stop a digital tool that crosses state lines instantly.
We need federal action now. First, Congress must direct the FDA to require pre-market clearance for all AI mental health tools that perform diagnosis, therapy or crisis intervention, regardless of how they are labeled. Second, the FTC must be given clear authority to act proactively against deceptive AI-based health tools, including holding platforms accountable for negligently hosting unsafe bots.
Third, Congress must pass national legislation to criminalize impersonation of licensed health professionals by AI systems, with penalties for their developers and disseminators, and require AI therapy products to display disclaimers and crisis warnings, as well as implement meaningful human oversight.
Finally, we need a public education campaign to help users — especially teens — understand the limits of AI and to recognize when they’re being misled. This isn’t just about regulation. Ensuring safety means equipping people to make informed choices in a rapidly changing digital landscape.
The promise of AI for mental health care is real, but so is the danger. Without federal action, the market will continue to be flooded by unlicensed, unregulated bots that impersonate clinicians and cause real harm.
Congress, regulators and public health leaders: Act now. Don’t wait for more teenagers in crisis to be harmed by AI. Don’t leave our safety to the states. And don’t assume the tech industry will save us.
Without leadership from Washington, a national tragedy may only be a few keystrokes away.
Shlomo Engelson Argamon is the associate provost for Artificial Intelligence at Touro University.