
What could be wrong with this?

write a demand letter for my client John Smith who was in a car accident and hurt his back and neck. the other driver was at fault. we want $150,000

In 20 minutes, we show why prompts like this create expensive rework.

Academy

Before AI touches your workflows, someone needs to own them.

Tai Miranda - Co-founder, Legalboards

Two failure modes

Both are invisible until something goes wrong.

Unclear ownership

Work lives with "the team," so no one person catches what AI invented.

Shallow AI context

Output sounds polished, but the brief behind it was incomplete.

Recognition

You have already seen this happen.

An AI draft moved forward before facts were verified.

A timeline generated by AI was sent to a client as if confirmed.

Junior staff used AI output that sounded right but was not reviewed deeply.

A summary built on stale data put the wrong number into a demand letter.

The framework

The OWNED Framework

Five fields. Every workflow. Every prompt.

O
Owner named

One person owns it. Not a role.

W
What done looks like

Specific, observable, and dated.

N
Next owner

Handoff is explicit, or work stalls.

E
Evidence verified

Confirmed inputs, not assumptions.

D
Depth defined

AI output matches prompt depth.
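The five fields can be treated as a literal checklist that blocks a prompt until every field is filled. A minimal sketch (the class, field names, and `gaps` helper are illustrative, not part of the framework itself):

```python
from dataclasses import dataclass, fields

@dataclass
class OwnedChecklist:
    """One record per workflow or prompt; every field must be filled."""
    owner_named: str           # O - a person, not a role
    what_done_looks_like: str  # W - specific, observable, dated
    next_owner: str            # N - explicit handoff
    evidence_verified: str     # E - confirmed inputs, not assumptions
    depth_defined: str         # D - how deep the brief goes

    def gaps(self) -> list[str]:
        """Return the names of any fields left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

task = OwnedChecklist(
    owner_named="Maria",
    what_done_looks_like="Draft reviewed and approved by [name] on [date]",
    next_owner="PARALEGAL_2 for filing by 4pm",
    evidence_verified="Medical expenses [AMOUNT_1] confirmed [DATE_2]",
    depth_defined="",  # still undefined, so this workflow is flagged
)
print(task.gaps())  # -> ['depth_defined']
```

A workflow with a non-empty `gaps()` list is not ready for AI; the gap gets closed by a person first.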

OWNED · O

Owner named

Not assigned. Owned.

Vague

The team will handle it.

Owned

Maria owns this. Due Friday.

OWNED · W

What done looks like

Specific, observable, dated.

Vague

Draft sent.

Defined

Draft reviewed and approved by [name] on [date].

OWNED · N

Next owner

If nobody is named next, work sits. AI amplifies the gap.

Drift

Send to whoever picks it up.

Hand-off

Hand to PARALEGAL_2 for filing by 4pm.

OWNED · E

Evidence verified

Confirmed inputs, not assumed.

Assumed

AI built a demand from an outdated case summary.

Verified

Medical expenses [AMOUNT_1] confirmed [DATE_2].

OWNED · D

Depth defined

AI outputs at the level of the person prompting it.

Shallow

Shallow brief → polished, generic output.

Deep

Deep brief → specific, verifiable output.

The common way

What AI invents from a shallow prompt.

Typical prompt
write a demand letter for my client John Smith who was in a car accident and hurt his back and neck. the other driver was at fault. we want $150,000
What AI invented
  • Vague liability stated as fact. "Liability rests entirely with your insured" and "the evidence establishes" - but no evidence was provided. AI wrote it as if it was already proven.
  • Medical treatment invented in detail. "Diagnostic testing, physical therapy, pain management" - none of this was in the prompt. AI filled in a treatment history that does not exist yet.
  • Damages list expanded without instruction. Lost wages, emotional distress, loss of enjoyment of life - none of these were mentioned. AI added them because they sound right.
  • 30-day response deadline set unilaterally. No instruction was given on timeline. AI picked one and put it in the letter as a legal demand.
  • $150,000 confirmed as a settlement demand. Was this approved by the supervising attorney? Confirmed against policy limits? Nobody checked.

Structured Prompt

Every field maps to a verified OWNED step.

PARALEGAL PROMPT - DEMAND LETTER

Use only verified facts below. Do not invent information.
Flag any gap before generating output.

[E - Evidence verified]
Matter: CLIENT_A v INSURER_A
Confirmed injuries: cervical strain, lumbar strain
Total medical expenses: [AMOUNT_1]
Demand amount: [AMOUNT_3], approved [DATE_3]

[D - Depth defined]
Tone: professional, factual, concise
No assumptions. No unsupported legal conclusions.
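One way to enforce the template above mechanically (the field names mirror the prompt's placeholders; the `render` function and its strict-lookup trick are an illustrative sketch, not a required tool): build the prompt only from a dictionary of verified fields, and fail loudly if any field is missing rather than letting AI fill the gap.

```python
TEMPLATE = """PARALEGAL PROMPT - DEMAND LETTER
Use only verified facts below. Do not invent information.
Flag any gap before generating output.

[E - Evidence verified]
Matter: {matter}
Confirmed injuries: {injuries}
Total medical expenses: {medical_expenses}
Demand amount: {demand_amount}, approved {approval_date}

[D - Depth defined]
Tone: {tone}
No assumptions. No unsupported legal conclusions."""

def render(verified: dict) -> str:
    """Fill the template from verified fields only; raise on any gap."""
    class Strict(dict):
        def __missing__(self, key):
            raise KeyError(f"unverified field: {key}")
    return TEMPLATE.format_map(Strict(verified))

verified_facts = {
    "matter": "CLIENT_A v INSURER_A",
    "injuries": "cervical strain, lumbar strain",
    "medical_expenses": "[AMOUNT_1]",
    "demand_amount": "[AMOUNT_3]",
    "approval_date": "[DATE_3]",
    "tone": "professional, factual, concise",
}
prompt = render(verified_facts)  # raises KeyError if any field is unverified
```

Leaving out `approval_date`, for example, stops the prompt from being generated at all, which is the point: a gap surfaces to a person instead of being papered over by the model.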

Three golden rules

If you only remember three things from today.

Never paste real client names

Use CLIENT_A, OPPOSING_A, INSURER_A.

Never upload original documents

Extract verified facts and paste text only.

Know your AI plan

Use enterprise controls where data protection is required.
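The first rule can be enforced before any text leaves the firm. A minimal sketch of placeholder substitution (the alias table and names are illustrative; a real matter would need a vetted, centrally maintained mapping):

```python
def anonymize(text: str, aliases: dict[str, str]) -> str:
    """Replace real names with stable placeholders before text enters a prompt."""
    for real, placeholder in aliases.items():
        text = text.replace(real, placeholder)
    return text

aliases = {
    "John Smith": "CLIENT_A",        # illustrative mapping
    "Acme Insurance": "INSURER_A",
}
print(anonymize("demand letter for my client John Smith", aliases))
# -> demand letter for my client CLIENT_A
```

Because the placeholders are stable, the same mapping can also be run in reverse on the AI's output to restore real names inside the firm's systems only.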