📩 Silicourt Valley – Issue #3

The bot’s not on trial. You are.

Can You Trust AI With Legal Citations?

Spoiler: Not really. But here's how to work with that reality—and why it matters more than you think.

Last week, a lawyer got sanctioned for submitting fake AI-generated cases. This week, I tested three major AI tools with the same legal question. The results will save you from making the same expensive mistake.

🧠 I Tested 3 AI Tools With the Same Question

"What are the elements of defamation under New York law?"

Simple question. Should be straightforward, right? Here's what actually happened:

🤖 ChatGPT: The Overconfident Liar

Got the elements right, then confidently cited "Johnson v. Daily News"—a completely fake case. Zero hesitation, zero disclaimers.
  • Good for: Brainstorming, first drafts, creative writing
  • Bad for: Anything that needs to be true

The danger: It sounds so confident that you'll trust it. Don't.

🤝 Claude: The Careful One

Great explanations in plain English, but refused to cite cases without major prodding. At least it knows its limits.
  • Good for: Client communication, explaining complex concepts
  • Bad for: Research, getting specific authorities

The upside: When it doesn't know something, it tells you.

🦾 CoCounsel: The Robot

Cited real cases with links: Steinhilber, Gross, Liberman. Boring. Accurate. We love that.
  • Good for: Real research, case law verification
  • Bad for: Anything requiring nuance or creativity

The trade-off: Accuracy over personality. Sometimes that's exactly what you need.

⚠️ The Real Kicker

OpenAI just got sued because ChatGPT invented sexual misconduct allegations against two radio hosts. The case got dismissed, but OpenAI didn't even deny that their AI made it all up.

If ChatGPT can fabricate serious allegations out of thin air, it can definitely fabricate your case citations. The same technology that invented "Johnson v. Daily News" could just as easily create a Supreme Court decision that never existed.

Bottom line: Each tool has different failure modes. Pick the right tool for the job, and always verify citations independently.

🛠 Tool Worth Trying: Paxton AI

AI-powered search inside Westlaw that answers questions in plain English. Right now it's like having an eager law student who sometimes forgets what "binding precedent" means.

What works: Great for initial research on straightforward legal questions. Saves time on basic statutory lookups.

What doesn't: Struggles with complex precedent analysis and jurisdiction-specific nuances.

How to use it: Start with simple, factual queries before moving to complex legal standards. When it works, it's genuinely helpful. When it doesn't, you're already in Westlaw to fix it.

🎯 Prompt That Works

Review this clause for issues with indemnity, jurisdiction, or liability limits under [state] law. Give me red flags and fixes.

Tested in GPT-4 and Paxton. Both responded fast. Only one made up a statute. (You can guess which.)
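If you'd rather script the test than paste the prompt into a chat window, here's a minimal sketch using the OpenAI Python SDK. The details are my own assumptions, not part of the test above: it expects the openai package installed, an OPENAI_API_KEY in your environment, and a placeholder clause and state you'd swap for your own anonymized material. It only covers the GPT side; Paxton I ran through its own interface.

```python
# Minimal sketch: sending the clause-review prompt to GPT-4 via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in your environment.
# STATE and CLAUSE are placeholders -- swap in your own (anonymized) material.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATE = "New York"   # placeholder jurisdiction
CLAUSE = "..."       # paste the anonymized clause you want reviewed

prompt = (
    f"Review this clause for issues with indemnity, jurisdiction, or liability "
    f"limits under {STATE} law. Give me red flags and fixes.\n\nClause:\n{CLAUSE}"
)

response = client.chat.completions.create(
    model="gpt-4",  # or whichever model you're testing
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# Anything in the output that looks like a citation or statute still needs
# to be verified in a real legal database before you rely on it.
```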

⏱ Quick Win

Build your AI testing lab:

  1. Write 3-5 anonymized scenarios from old cases (strip all client identifiers)

  2. Use these to test prompts safely without risking real client work

  3. Document what works and what fails spectacularly

Why this matters: You get a safe space to experiment, plus reusable scenarios for training associates and evaluating new tools. One setup, multiple uses.
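For step 3, a structured log beats scattered notes. Here's a minimal sketch in Python for recording each test run; the file name, fields, and verdict labels are suggestions I made up, not part of any particular tool.

```python
# Minimal sketch of an AI "testing lab" log: one JSON file of test runs,
# so you can compare how different tools handled the same anonymized scenario.
# File name, fields, and the verdict wording are all just suggestions.
import json
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_test_log.json")

def log_test(scenario: str, tool: str, prompt: str, output: str, verdict: str) -> None:
    """Append one test result. The verdict is your own call, e.g. 'accurate' or 'hallucinated citation'."""
    entries = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    entries.append({
        "date": date.today().isoformat(),
        "scenario": scenario,   # anonymized -- no client identifiers
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "verdict": verdict,
    })
    LOG_FILE.write_text(json.dumps(entries, indent=2))

# Example usage:
# log_test(
#     scenario="indemnity clause, commercial lease",
#     tool="ChatGPT (GPT-4)",
#     prompt="Review this clause for issues with indemnity...",
#     output="<paste the tool's answer here>",
#     verdict="cited a statute that does not exist",
# )
```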

📡 This Week in Legal Tech

🧠 Theo AI raised $4.2M to predict litigation outcomes. Your next settlement strategy might start with a probability score. Or it might be expensive nonsense. Time will tell.
Read more →

⚖️ Florida judge says AI chatbot output isn’t protected speech. Character.AI’s wrongful death suit moves forward. First real test of AI liability.
Read more →

📝 Draft ABA guidance on AI ethics is circulating. Lots of words about "competence" and "supervision." Translation: You're still responsible when your AI goes sideways.

🧾 House spending bill includes controversial AI provisions. The usual congressional dance of writing rules for tech no one understands.
Read more →

🔍 Your Move

Test this prompt with your AI tool of choice:

You are a [practice area] attorney in [your state]. Review this clause for potential issues with indemnity, jurisdiction, or liability limits. List red flags and suggest fixes.

Then verify every citation it gives you. Screenshot the fake ones and reply—I'll feature the best (worst?) examples in a future issue.

💬 Bottom Line

AI will hallucinate. That's not changing anytime soon. The question isn't whether you trust it—it's whether you know how to use it without getting burned.

The lawyers getting sanctioned aren't the ones using AI. They're the ones using it wrong.

Got AI horror stories or prompts that actually work? Reply—I'll share the good ones.

Silicourt Valley

Forward this to someone who thinks AI citations are "probably fine."

🔐 (Don’t Forget) Coming Soon: Silicourt Pro 

Silicourt Valley will always be free. But I’m building a paid version for professionals who want to: 
  • Save time with a searchable legal prompt library 

  • Get access to real-world workflows + templates 

  • Go deeper on the tech without getting lost in it 

No paywall yet. But when colleagues start asking why you're not using these tools, you'll want to be ready.



Disclaimer: The content provided in this newsletter is for informational and educational purposes only and does not constitute legal advice. Use of any information from Silicourt Valley does not create an attorney-client relationship. Readers should conduct their own due diligence or consult with a qualified professional before relying on any information or tools discussed herein. All views are those of the author and do not reflect the opinions of any affiliated institutions or employers.
