Hacker News

jasonjayr | today at 4:52 AM

The killer app was conceived as early as the 1980s: an agent running on your own computer, organizing your files, your schedule, your messages, your bills, your bank accounts, and so on. All the routine drudgery in your life could be offloaded to a smart agent that, based on your preferences, would bring you the information you needed through natural language queries, contextualized to what you were doing at the time, when you needed it.

What's being delivered now is an agent running on someone else's computer, copying your data to someone else's database, with zero responsibility or mandate to protect that data and not share it with anyone else (in fact, they almost always promise to share it with their thousand partners), offering suggestions and preferences based on someone else's so-called recommendations, influenced by whoever pays the agent's operators, plus increasing pressure to make using someone else's computers and agents the only way to interact with other people and systems.

There is no doubt that LLMs can do amazing things, but the current environment makes it nearly impossible to do anything with them that doesn't let someone else inspect, influence, and even restrict everything you are doing with these systems.


Replies

Animats | today at 11:50 AM

> What's being delivered now is an agent running on someone else's computer, copying your data to someone else's database, with zero responsibility or mandate to protect that data and not share it with anyone else (in fact, they almost always promise to share it with their thousand partners), offering suggestions and preferences based on someone else's so-called recommendations, influenced by whoever pays the agent's operators, plus increasing pressure to make using someone else's computers and agents the only way to interact with other people and systems.

If we're going to have AI regulation, this is where to start. If a company's AI service acts for a user, the company should have non-disclaimable financial responsibility for anything that goes wrong. There's an area of law called "agency", which covers an employer's liability for the actions of its employees. The law of agency should apply to AI agents. One court has already done that: an airline's AI gave wrong but reasonable-sounding advice on fares, a customer made a decision based on that advice, and the court held that the AI's advice was binding on the company, even though it cost the company money.

This is something lawyers and politicians can understand, because there's settled law on this for human agents.

jeswin | today at 5:58 AM

A few decades back, a lot of computer use was email, and it was stored on someone else's servers, with everyone from the server operators along the route to the government potentially having access to it. Even HTTPS is a relatively recent thing.

I guess what I'm saying is: we've always had this problem.
