What Courts Will Ask For (Not What Marketing Claims Say)
When litigation arises, courts do not evaluate intent. They evaluate reasonableness, foreseeability, and documentation.
Here is what legal teams and courts will look for:
1. Foreseeability of Harm
Could a reasonable company have anticipated psychological or emotional harm?
Were known risks in the industry ignored?
Did the company understand how users would actually engage with the system?
“We didn’t think of it” is not a defense when the risk was predictable.
2. Design Decisions and Trade-Offs
Why were certain features included?
Were alternative designs considered?
Did growth, engagement, or monetization override safety considerations?
Courts care less about what you built than how you decided to build it.
3. Internal Knowledge and Communications
Internal emails, Slack messages, and documents are discoverable.
Discussions acknowledging risk but postponing mitigation are especially damaging.
Silence is not protection. It is ambiguity.
4. Policies vs. Practice
Terms of service and disclaimers are not safeguards.
Courts look for:
Actual safety mechanisms
Operational response plans
Evidence that policies were enforced
5. Response to Harm Signals
What happened when users complained?
Were patterns noticed and escalated?
Did the company adapt or dismiss concerns?
A slow or dismissive response can be interpreted as negligence.
6. Expert Input and Oversight
Did the company consult experts in psychology, human factors, or safety?
Was advice incorporated or ignored?
Was risk treated as a core product issue or an afterthought?
The absence of expert input does not absolve responsibility. It highlights gaps.
About our work
At Unicorn Intelligence Tech Partners, we help founders and investors anticipate psychological risk, design safer AI systems, and align innovation with legal, clinical, and human realities.
If your technology touches emotion, identity, behavior, or mental health, this is not theoretical work. It is foundational.
Let’s build systems that can withstand both scale and scrutiny.
Dr. Genevieve Bartuski, PsyD, MBA is a forensic psychologist and co-founder of Unicorn Intelligence Tech Partners, where she works at the intersection of psychology, technology, and ethical risk. She advises founders, investors, and product teams on psychological safety, trust, and human risk in emerging digital and AI-driven systems, with a focus on preventing harm before it becomes litigation.