This, to me, is the most important point in the whole text:
"We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants."
I hadn't thought about it like that, but I think it's a great way to legislate.
The fiduciary model is the only regulatory framework that actually scales. In insurance (my field), we see the difference daily: a captive agent works for the carrier while an independent broker often has a pseudo-fiduciary duty to the client. If we applied that to data, your AI assistant would legally have to prioritize your privacy over the vendor's ad revenue. Right now, the incentives are completely inverted.
We've needed that in software (not just AI) for a long time.
Not a popular take, especially within the HN crowd.
That said, it needs to be carefully scoped. As he indicated, only certain professions need fiduciaries.
Anyone who remembers working in an ISO 9001 environment can understand how incredibly bad it can get.