Hacker News

TonyAlicea10, today at 12:30 AM

When designing AI-based user experiences I refer to this as provenance. It's a vital aspect of trust, reliability, compliance, and more. If a software system includes LLM output like this but doesn't surface the provenance of that output for human evaluation and verification, then it's at best a poor user experience, and at worst a dangerous one.
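One minimal way to make provenance first-class is to attach source spans to every generated claim, so the UI can flag anything the model asserts without evidence. The structure below is an illustrative sketch, not from the comment; all names (`SourceSpan`, `GeneratedClaim`, the document IDs) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SourceSpan:
    # Where in the original record (e.g. a visit transcript) a claim came from.
    document_id: str
    start_char: int
    end_char: int

@dataclass
class GeneratedClaim:
    # One sentence of model output, linked back to its supporting evidence.
    text: str
    sources: list[SourceSpan] = field(default_factory=list)

    @property
    def has_provenance(self) -> bool:
        return len(self.sources) > 0

def unverifiable(claims: list[GeneratedClaim]) -> list[GeneratedClaim]:
    """Claims the UI should flag because they cite no source material."""
    return [c for c in claims if not c.has_provenance]

claims = [
    GeneratedClaim("Patient reports mild headaches.",
                   [SourceSpan("visit-2024-03-01", 120, 158)]),
    GeneratedClaim("Patient denies chest pain."),  # no source: should be flagged
]
flagged = unverifiable(claims)
```

With this shape, a reviewer (human or automated) can jump straight from a flagged claim to the exact passage it should be grounded in, rather than re-reading the whole record.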


Replies

autoexec, today at 1:12 AM

At the same time, do you really want every conversation you have with your doctor recorded, handed over to third-party companies, and stored forever with your medical file? And what doctor has time to sit down and re-listen to your visit to verify the AI didn't screw up somewhere? If your doctor isn't going to be checking those recordings for accuracy, who would? Overseas contractors? At what point does it become a bigger waste of time and money to babysit an incompetent AI than to just not use one in the first place?

There are some good uses for AI, but I'm not convinced that this (or many other cases where accuracy matters) is one of them.
