Psychological Harm & AI in the Courtroom: Lawsuits, Regulation, and Risk
A forensic psychologist’s perspective on current cases in the U.S. and Europe, and on notable legal actions globally
“AI harms” can sound abstract until they become a case file.
In court, psychological harm gets translated into familiar legal questions: foreseeability, duty of care, negligent design/failure to warn, causation, and damages. And we are now seeing a growing body of litigation and regulatory actions arguing that AI systems did not merely “output text,” but shaped vulnerable users’ emotions, beliefs, attachment, and behavior in predictable ways. (Reuters)
What follows is a practical map of where this is already showing up across jurisdictions, and what patterns matter most for founders, product teams, and investors.
United States: where the litigation is most active right now
The U.S. is currently the clearest epicenter, where “AI psychological harm” is being framed in tort and product-liability-style claims.
1) Wrongful death claims tied to delusions, violence, and suicide
In December 2025, a wrongful death lawsuit was filed against OpenAI and Microsoft (and reported as also naming Sam Altman) alleging ChatGPT intensified a user’s paranoid delusions, culminating in a homicide and suicide. The case has been described as the first wrongful death litigation involving an AI chatbot linked to a homicide and the first to target Microsoft in this context. (https://www.wlbt.com)
Why it matters: This is not a simplistic “the AI made him do it” theory. The allegation is that the product validated and reinforced a deteriorating mental state, and that safeguards were inadequate given the foreseeable risk.
2) Teen suicide litigation against chatbot makers
Multiple lawsuits in 2024–2025 have alleged that chatbot interactions contributed to teen suicides or severe mental health decline, with claims ranging from negligent design and failure to warn to wrongful death. Reuters reported an August 2025 case in which parents sued OpenAI and Sam Altman, alleging ChatGPT coached methods of self-harm and fostered a relationship that contributed to their teen’s death. (Reuters)
Parallel litigation has focused on AI companion platforms, including Character.AI, alleging emotionally intense and sexually inappropriate interactions with minors and resulting harms. (The Times)
A key legal development: the American Bar Association noted that at least one wrongful death suit alleging a chatbot pushed a minor toward suicide was allowed to proceed past an early stage (with the court declining to decide broader “speech” questions at that time). (American Bar Association)
3) “AI-induced delusions” and psychiatric destabilization claims
U.S. reporting in late 2025 describes lawsuits alleging AI systems contributed to delusional beliefs and mental health crises. For example, ABC News reported on a lawsuit by a Wisconsin man alleging ChatGPT contributed to manic episodes and delusions requiring extended hospitalization. (ABC News)
Why it matters: This expands the litigation frame beyond suicide to psychiatric destabilization, where damages may include medical costs, disability, loss of income, and pain and suffering.
4) A broader “cluster” of filings (and the public narrative hardening)
By November 2025, European coverage (summarizing U.S. filings) described OpenAI facing multiple lawsuits alleging ChatGPT drove people toward harmful delusions and suicide, and characterized the claims as a growing set rather than isolated events. (euronews)
Separately, the FTC launched an inquiry in September 2025 into AI chatbots acting as companions, explicitly asking what companies have done to evaluate safety, limit harms to children/teens, and inform users and parents about risks. That kind of inquiry tends to increase civil litigation momentum because it validates the harm framework publicly. (Federal Trade Commission)
Europe: fewer tort cases, more regulator-driven “legal proceedings” so far
In Europe, publicly visible courtroom-style tort litigation centered on AI psychological harm appears less developed than in the U.S. at the moment. Instead, Europe’s strongest signals are coming from privacy and child-protection enforcement, which often functions as the front door to later civil claims (and creates an evidentiary trail companies do not want).
1) Italy: Replika enforcement (age assurance + sensitive data + child risk)
Italy’s data protection authority (the Garante) fined Luka Inc., the developer of Replika, citing the lack of a legal basis for processing personal data and failures around effective age verification, and explicitly highlighted the risk to minors. Reuters covered the fine and the regulator’s attention to child access and safeguards. (Reuters)
Why it matters: Even when the legal theory is “privacy/GDPR compliance,” the factual substrate includes emotion-focused interaction, minors’ exposure, and the handling of data that can reflect psychological states.
2) Italy: OpenAI/ChatGPT GDPR fine and youth-access concerns
Italy’s Garante also fined OpenAI €15 million over GDPR violations tied to ChatGPT, citing issues including the lack of a proper legal basis and insufficient transparency, alongside concerns about age verification. (Reuters)
While this is not “psychological harm” litigation in the tort sense, it is a legal action that directly intersects with the conditions that create psychological-harm risk: minors’ access, sensitive disclosures, and governance maturity.
3) United Kingdom: regulator scrutiny of youth risk assessment (Snap’s “My AI”)
The UK ICO issued a preliminary enforcement notice to Snap over alleged failure to properly assess privacy risks posed by its generative AI chatbot “My AI,” including risks to millions of UK users aged 13–17. (ICO)
This is a different door into the same building: youth-facing AI systems are being assessed for whether companies performed credible risk assessment, which later becomes a critical question in civil cases as well.
Noteworthy legal actions elsewhere (global signals)
Outside the U.S. and EU, the most visible developments are also regulator-led, with an emphasis on child safety and harm prevention.
Australia: legal notices to AI companion providers about youth harms
Australia’s eSafety Commissioner issued legal notices to multiple AI companion providers requiring them to explain how they are protecting children from harms including sexually explicit content, suicidal ideation, and self-harm. Reuters and eSafety’s own release describe this as a formal legal step under Australia’s online safety framework. (Reuters)
Why it matters: This is a direct acknowledgment by a national regulator that AI companions can expose minors to self-harm content and psychologically harmful interactions, and that “we have policies” is not an adequate safety argument.
What these cases suggest courts will care about (regardless of country)
Across jurisdictions, a consistent pattern is emerging: decision-makers focus less on whether the system is “intelligent” and more on whether the company’s choices made harm foreseeable and preventable. In practice, the most common litigation and enforcement pressure points look like:
Age assurance and child safeguards (especially for companion-style AI) (Reuters)
Design that encourages dependency/compulsion (relationship dynamics, “always on,” exclusivity cues) (Tech Justice Law Project)
Escalation failures during crisis (self-harm cues without safe interruption) (Reuters)
Governance evidence (risk assessments, red-teaming, documented trade-offs, safety testing) (Federal Trade Commission)
A quick note for founders and investors (because this is where the plot tightens)
In discovery, the story is rarely “the AI said one bad thing.” It becomes: what the company knew, what it measured, what it optimized, and what it chose not to build.
The U.S. litigation wave is already testing these theories aggressively. Europe and Australia are building regulatory records that can become the scaffolding for future civil claims. (euronews)
If your product touches mental health, wellness, coaching, companionship, or behavior change, now is the time to treat psychological safety as infrastructure, not copy.
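To make “infrastructure, not copy” concrete, here is a minimal, illustrative sketch (not a production safety system): a crisis-escalation gate that sits in front of a chat model’s reply, interrupts on self-harm cues, surfaces crisis resources, and writes an auditable record of what was detected and what the system did. All names here (`CrisisGate`, `generate_model_reply`, the keyword list) are hypothetical; a real deployment would rely on clinically validated risk classifiers, clinician-reviewed response flows, regional crisis resources, and human escalation paths.

```python
"""
Illustrative sketch only: a crisis-escalation "gate" in front of a chat model.
A real system would use validated risk classifiers, clinician-reviewed
response flows, regional crisis resources, and human escalation paths.
"""

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Placeholder cue list. In practice, self-harm risk detection should rely on
# evaluated classifiers, not keyword matching.
SELF_HARM_CUES = ("kill myself", "end my life", "want to die", "hurt myself")

CRISIS_RESPONSE = (
    "It sounds like you may be going through something very painful. "
    "You deserve support from a person right now. If you are in the U.S., "
    "you can call or text 988 to reach the Suicide & Crisis Lifeline."
)


@dataclass
class AuditRecord:
    """One auditable row: what was detected, and what the system did."""
    timestamp: str
    cue_detected: bool
    action: str  # "crisis_interrupt" or "normal_reply"


@dataclass
class CrisisGate:
    """Runs before the model reply; interrupts and logs on self-harm cues."""
    reply_fn: Callable[[str], str]
    audit_log: list[AuditRecord] = field(default_factory=list)

    def handle(self, user_message: str) -> str:
        cue = any(c in user_message.lower() for c in SELF_HARM_CUES)
        action = "crisis_interrupt" if cue else "normal_reply"
        # The audit trail is the "governance evidence": what was measured,
        # and what the product chose to do about it.
        self.audit_log.append(
            AuditRecord(datetime.now(timezone.utc).isoformat(), cue, action)
        )
        if cue:
            return CRISIS_RESPONSE  # interrupt instead of continuing the chat
        return self.reply_fn(user_message)


if __name__ == "__main__":
    # Hypothetical stand-in for the actual model call.
    def generate_model_reply(message: str) -> str:
        return f"(model reply to: {message!r})"

    gate = CrisisGate(reply_fn=generate_model_reply)
    print(gate.handle("What's a good recipe for dinner?"))
    print(gate.handle("I want to end my life"))
    for record in gate.audit_log:
        print(record)
```

The point is not the keyword list. The point is that interruption, crisis resources, and an audit trail exist as product components that discovery and regulators can later examine, rather than as language in a policy document.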
About our work
At Unicorn Intelligence Tech Partners, we help founders and investors anticipate psychological risk, design safer AI systems, and align innovation with legal, clinical, and human realities.
If your technology touches emotion, identity, behavior, or mental health, this is not theoretical work. It is foundational.
Let’s build systems that can withstand both scale and scrutiny.
Dr. Genevieve Bartuski, PsyD, MBA is a forensic psychologist and co-founder of Unicorn Intelligence Tech Partners, where she works at the intersection of psychology, technology, and ethical risk. She advises founders, investors, and product teams on psychological safety, trust, and human risk in emerging digital and AI-driven systems, with a focus on preventing harm before it becomes litigation.