Hacker News

guerython · yesterday at 2:40 PM

This is exactly why every AI citation we publish goes through a blocker. We dump the AI transcript plus the generated case numbers into a little script that hits the official court database and only passes through citations that come back with the same case id, party names, and paragraph text. If the extra lookup fails, the citation has to be marked as a hallucination, logged in the docket, and a human has to go re-verify it against the actual law reports before we file anything. Treat the LLM like a drafting helper, not an authority, and make human verification the gate that moves a draft from "AI promised" to "judicially safe." We also keep a micro audit trail, so if a clerk says "the AI gave me this," we can replay the prompt and see exactly which citation check failed. What guard rails have other people put in front of AI-written judgements?
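The core of such a gate is small: refuse any citation whose case id, party names, or paragraph text doesn't match the official record. A minimal sketch, assuming a `lookup` callable that wraps whatever official court-database API you have access to (all names here are hypothetical, not any real court system's API):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class Citation:
    case_id: str
    party_names: str
    paragraph_text: str

def verify_citation(
    cited: Citation,
    lookup: Callable[[str], Optional[Citation]],
) -> bool:
    """Pass a citation through only if the official record agrees on all fields.

    `lookup` takes a case id and returns the official record, or None if
    no such case exists. Any mismatch or miss is treated as a hallucination.
    """
    official = lookup(cited.case_id)
    if official is None:
        return False  # case id doesn't exist at all: flag for human review
    return (
        official.party_names == cited.party_names
        and official.paragraph_text == cited.paragraph_text
    )
```

In practice `lookup` would be an HTTP call to the court database, and a `False` result would trigger the docket log entry and the mandatory human re-check rather than silently dropping the citation.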


Replies

bsimpson · yesterday at 6:45 PM

I recently saw an interview with Anders Hejlsberg of TypeScript (and a long pedigree before that). The interviewer asked him about the role of AI in his work. I believe the context was porting TypeScript's tooling to Go.

His trick is to use AI to build the tools that do the work, not to ask it to do the work itself. If you say "hey Mr. AI, please port this code to Go," it'll give you back a big bag of code that you have no insight into. If it hallucinated something, you wouldn't know without auditing the whole massive codebase.

If instead you let AI build a small tool to aid the work, your audit surface is much smaller - you just need to make sure the helper tool is correct. It can then operate deterministically over the much larger codebase.
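In that spirit, a toy illustration of the shape of such a helper: instead of asking the model to port code, ask it for a small, auditable checker, say one that diffs the top-level function inventories of the original and the port. This crude regex sketch is my own invention for illustration, not anything from the actual TypeScript-to-Go effort:

```python
import re
from typing import Set

def ts_function_names(src: str) -> Set[str]:
    """Names of top-level `function foo(...)` declarations in TypeScript source."""
    return set(re.findall(r"\bfunction\s+([A-Za-z_]\w*)\s*\(", src))

def go_function_names(src: str) -> Set[str]:
    """Names of top-level `func Foo(...)` declarations in Go source
    (ignores methods with receivers, which this toy doesn't handle)."""
    return set(re.findall(r"\bfunc\s+([A-Za-z_]\w*)\s*\(", src))

def missing_in_port(ts_src: str, go_src: str) -> Set[str]:
    """Functions declared in the TS source with no same-named Go counterpart."""
    return ts_function_names(ts_src) - go_function_names(go_src)
```

The point isn't that this 15-line tool is sophisticated; it's that you can read and verify all of it, and then trust it to run deterministically over millions of lines that you could never audit by hand.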