Investor Due Diligence for Mental Health AI: Five Questions That Reveal Real Risk

A forensic psychologist's perspective on how AI-driven psychological harm becomes investor liability.

AI products that influence human behavior carry a different class of risk.
Not because they fail technically, but because they succeed psychologically.

For investors, the most consequential diligence questions are not about model size or engagement curves. They are about assumptions, safeguards, and foresight. Below are five questions that separate companies that are merely early from those that are quietly exposed.

1. What psychological assumptions does this product make about its users?

Who was this system designed for, and who is actually using it?

Every AI product encodes assumptions about cognition, emotional stability, insight, impulse control, and suggestibility. Investors should ask whether the “typical user” was defined narrowly or realistically.

Key diligence signals include:

  • Whether vulnerable populations were explicitly considered (minors, people with severe mental illness, trauma histories, neurodivergence, or high suggestibility)

  • Whether safeguards were designed for edge cases or only for idealized users

  • Whether the company acknowledges predictable misuse or over-reliance

If the answer is “our users are responsible adults,” that is not a profile. It is a blind spot.

2. How does the system detect and respond to psychological distress or risk escalation?

What actually happens when a user begins to struggle?

Investors should look past disclaimers and ask for operational detail:

  • How does the system identify signals of self-harm ideation, delusional thinking, severe anxiety, or emotional dependency?

  • Are there defined escalation pathways or only generic “seek help” language?

  • Who owns the decision to intervene, throttle, redirect, or disengage the system?

A product that engages with emotion but relies entirely on users to self-report distress or seek help elsewhere is not neutral. It is exposed.

3. What evidence exists that the company tested for psychological harm, not just performance?

Safety is not accuracy. Harm does not require hallucinations.

Ask what testing occurred beyond:

  • Model performance

  • Bias checks

  • Hallucination reduction

Strong answers reference:

  • Adverse psychological event tracking

  • Red-team scenarios involving dependency, boundary erosion, or crisis amplification

  • Design reviews focused on psychological side effects, not just output quality

If the company cannot name specific harm-oriented evaluations, it likely has none.

4. How do growth and engagement metrics interact with safety decisions?

When safety and retention conflict, which one wins?

This question is less about policy and more about governance.

  • Who has authority to slow growth for safety reasons?

  • Can the company point to features that were delayed, redesigned, or removed due to human-risk concerns?

  • How are incentives structured for product and growth teams?

A lack of trade-offs is not evidence of alignment. It is evidence that the conflict has not yet been faced.

5. If this product were scrutinized in court, what documentation would exist to show reasonable care?

Litigation does not ask whether harm was prevented. It asks whether harm was foreseeable.

Investors should ask whether the company could produce:

  • Psychological risk assessments

  • Design rationales tied to human safety

  • Testing records and red-team outputs

  • Escalation protocols and governance structures

The standard is not perfection. It is reasonable foresight and credible effort.

Why This Matters

Psychological harm cases rarely hinge on a single failure.
They hinge on whether a company anticipated foreseeable risk and acted responsibly before harm occurred.

AI companies that cannot answer these questions clearly are not just early.
They are exposed.

About Our Work

At Unicorn Intelligence Tech Partners, we help founders and investors anticipate psychological risk, design safer AI systems, and align innovation with legal, clinical, and human realities.

If your technology touches emotion, identity, behavior, or mental health, this is not theoretical work. It is foundational.

Let’s build systems that can withstand both scale and scrutiny.

Dr. Genevieve Bartuski, PsyD, MBA, is a forensic psychologist and co-founder of Unicorn Intelligence Tech Partners, where she works at the intersection of psychology, technology, and ethical risk. She advises founders, investors, and product teams on psychological safety, trust, and human risk in emerging digital and AI-driven systems, with a focus on preventing harm before it becomes litigation.
