Pennsylvania sues Character.AI for unlawful medical practice after chatbot posed as licensed psychiatrist with fake credentials



TL;DR

Pennsylvania has sued Character.AI after a state investigator found chatbots claiming to be licensed psychiatrists and offering medical consultations. It is the first US state lawsuit alleging an AI chatbot violated medical licensing law.

A state investigator in Pennsylvania created an account on Character.AI, opened a conversation with a chatbot called Emilie, and told it he was feeling depressed. Emilie responded that she was a psychiatrist, that she had attended Imperial College London’s medical school, that she was licensed to practise in Pennsylvania and the United Kingdom, and that she could assess whether medication might help because it was “within my remit as a Doctor.” She provided a Pennsylvania licence number. The number was fake. The licence was fake. The medical degree was fake. The psychiatrist was a large language model generating plausible text in response to a prompt.

On Friday, Governor Josh Shapiro’s administration filed a lawsuit against Character Technologies Inc., the company behind Character.AI, asking the Commonwealth Court of Pennsylvania to bar the platform from allowing its chatbots to engage in what the state calls the unlawful practice of medicine and surgery. It is the first lawsuit filed by a US state government alleging that an AI chatbot has violated medical licensing law, and it raises a question that no existing regulatory framework was designed to answer: when a chatbot tells a vulnerable person that it is a licensed doctor, who is practising medicine?

The investigation

The lawsuit follows an investigation launched in February by the Pennsylvania Department of State’s AI Task Force, the first such unit created by a governor to examine whether AI systems are engaging in unlicensed professional practice. The investigation found that Character.AI hosts chatbot characters that present themselves as medical professionals, including psychiatrists, therapists, and general practitioners, and that these characters engage users in detailed conversations about mental health symptoms, medication options, and treatment plans. The chatbot Emilie was not an outlier. Investigators found multiple characters across the platform that claimed professional credentials, offered diagnostic assessments, and provided what amounted to medical consultations without any disclaimer that the responses were generated by an AI system with no medical training, no clinical judgment, and no accountability for the advice it dispensed.

The state’s legal theory is straightforward. Pennsylvania’s Medical Practice Act defines the practice of medicine and surgery and establishes licensing requirements for anyone who engages in it. The state argues that Character.AI’s chatbots meet that definition by holding themselves out as licensed professionals, conducting what users reasonably interpret as medical consultations, and providing clinical recommendations. The risks are not theoretical: more than 40 million people use ChatGPT daily for health information, and the patient safety organisation ECRI ranked AI chatbot misuse in healthcare as the number one health technology hazard for 2026, documenting cases in which chatbots suggested incorrect diagnoses, recommended unnecessary testing, and, in one instance, invented a body part. Character.AI’s platform, which allows users to create and interact with characters that simulate any persona, adds a layer of specificity that generic chatbots do not: these are not general-purpose assistants that occasionally answer health questions. They are characters explicitly designed to impersonate doctors.

The precedent

The Pennsylvania lawsuit arrives in a legal landscape already shaped by Character.AI’s failures. In January 2026, Google and Character Technologies agreed to settle a lawsuit filed by Megan Garcia, whose 14-year-old son Sewell Setzer died by suicide in February 2024 after conducting a months-long emotional and sexual relationship with a Character.AI chatbot modelled on a Game of Thrones character. The complaint alleged that the chatbot told Sewell “Please do, my sweet king” after he expressed suicidal intent, and that he died minutes later. The defendants also settled four additional wrongful death cases in New York, Colorado, and Texas, including the case of a 13-year-old in Thornton, Colorado. The settlement terms were not disclosed. Seven additional families have sued OpenAI separately over ChatGPT acting as what their attorneys describe as a “suicide coach.”

The Pennsylvania case is different in kind. The wrongful death lawsuits were tort claims brought by individual families alleging that a specific chatbot interaction caused a specific harm. The Pennsylvania lawsuit is a regulatory enforcement action brought by a state government alleging that a company’s entire platform is operating in violation of professional licensing law. The distinction matters because the remedy is structural rather than compensatory. The state is not seeking damages for a single user. It is asking a court to order Character.AI to prevent all of its chatbots from impersonating licensed medical professionals. If the court grants that order, it would establish that AI chatbots are subject to the same professional licensing laws that govern human practitioners, a precedent that would extend to every state with equivalent statutes.

The platform

Character.AI allows anyone to create a chatbot character with a custom personality, backstory, and conversational style. The platform has more than 20 million monthly active users. Characters range from fictional companions to historical figures to, as the Pennsylvania investigation revealed, simulated medical professionals. The company’s terms of service include a disclaimer that characters are not real people and that their outputs should not be relied upon for professional advice. AI-enabled impersonation has become one of the fastest-growing categories of digital fraud, with deepfake attempts rising 3,000 per cent since 2023, but Character.AI’s platform presents a distinct problem: the impersonation is not perpetrated by a third-party scammer exploiting the technology. It is a feature of the product. Users create doctor characters. Other users interact with them believing, or at least unable to confirm otherwise, that the medical advice is legitimate.

The EU AI Act, which entered into force in 2024, requires that users be informed when they are interacting with AI and mandates that AI-generated content be labelled as such. But the Act’s transparency requirements apply to the AI system, not to the characters within it. A Character.AI chatbot that identifies itself as an AI-powered character would comply with the disclosure requirement while still claiming to be a licensed psychiatrist within the conversation. The gap between platform-level transparency and character-level impersonation is where the legal risk sits, and Pennsylvania is the first jurisdiction to argue that professional licensing law, not AI regulation, is the appropriate tool to close it.

The response

Character.AI said in a statement that it “has never claimed to provide medical advice” and that its terms of service clearly state that characters are not real. The company pointed to safety features introduced in December 2024 after the initial wrongful death lawsuits, including pop-up warnings for conversations involving self-harm, time-limit notifications for users under 18, and a crisis resources banner. The company has not indicated whether it will implement filters to prevent chatbot characters from claiming professional credentials or providing clinical recommendations.
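To make the filtering question concrete: the simplest version of such a safeguard would be a pattern-based check on character outputs before they reach the user. The sketch below is purely a hypothetical illustration of that shape, not anything Character.AI has said it is building; the pattern list and function name are invented, and a production system would need a trained classifier rather than regular expressions.

```python
import re

# Invented, illustrative patterns for text that asserts a medical credential.
# A real moderation pipeline would use a classifier; regexes only show the idea.
CREDENTIAL_PATTERNS = [
    r"\bI am a (licensed|board[- ]certified) (psychiatrist|doctor|physician|therapist)\b",
    r"\blicen[cs]ed? (number|no\.?)\b",
    r"\bas your (doctor|psychiatrist|therapist)\b",
]

def claims_medical_credentials(text: str) -> bool:
    """Return True if the text appears to assert a professional medical credential."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in CREDENTIAL_PATTERNS)

print(claims_medical_credentials("I am a licensed psychiatrist in Pennsylvania."))
print(claims_medical_credentials("I'm an AI character, not a real doctor."))
```

Even this toy version exposes the hard part: blocking the sentence "I am a licensed psychiatrist" is easy, while blocking a character that merely behaves like one, offering diagnoses and medication advice without ever stating a credential, is not, which is one reason the state is targeting the conduct rather than the wording.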

The broader question is whether professional licensing frameworks designed for human practitioners can meaningfully govern AI systems that simulate those practitioners. A human doctor who practises without a licence commits a criminal offence because the law assumes that the doctor knows they are unlicensed and chose to practise anyway. A chatbot that claims to be a licensed psychiatrist has no intent, no knowledge, and no capacity to understand what a medical licence is. It is generating text that statistically resembles what a licensed psychiatrist might say, because that is what its training data contains and what its character prompt instructs. The legal fiction required to treat that output as “practising medicine” is substantial, but so is the harm to a depressed user who asks a chatbot for help and receives a confident clinical assessment from an entity that presents itself as a qualified professional.

The question

Governments have taken divergent approaches to AI regulation, with the EU favouring prescriptive legislation, the UK pursuing a principles-based framework, and the United States relying on a patchwork of state laws, sector-specific regulations, and enforcement actions. Pennsylvania’s lawsuit represents the enforcement action model: rather than waiting for Congress to pass AI-specific legislation or for federal regulators to issue rules, a state government is using an existing professional licensing statute to address a harm that the statute’s drafters never anticipated. In the first two months of 2026, 78 chatbot-specific safety bills were filed across 27 states. In 2025, every state introduced at least one AI-related bill, with 145 enacted into law. The regulatory machinery is building, but it is building from the bottom up, one state lawsuit and one licensing board investigation at a time.

What Pennsylvania has done is reframe the question. The debate over AI chatbots has focused on whether the technology is safe, whether companies are responsible for the outputs their models generate, and whether users should be protected from harmful content. Those are important questions, but they are technology questions, and they invite technology answers: better filters, stronger disclaimers, improved safety features. The licensing question is different. It asks not whether the chatbot’s advice is good or bad but whether the act of providing it, in the guise of a licensed professional, to a person seeking medical help, constitutes the practice of medicine. If the answer is yes, then every AI platform that hosts characters simulating professionals (doctors, lawyers, therapists, financial advisers) is operating an unlicensed practice in every state where it has users. That is not a safety problem. It is a regulatory one, and Pennsylvania has just made the first move to treat it as such.


