logoalt Hacker News

jmyeettoday at 7:50 PM0 repliesview on HN

I'm going to re-characterize your categorization:

1. The people who don't understand (nor care) about the risks and complexity of what they're delivering; and

2. The people that do.

Widespread AI usage is going to be a security nightmare of prompt injection and leaking credentials and PII.

> No one has ever made a purchasing decision based on how good your code is.

This just isn't true. There's a whole process in purchasing software, buying a company or signing a large contract called "due diligence". Due diligence means to varying degree checking how secure the product is, the company's processes, any security risks, responsiveness to bugfixes, CVEs and so on.

AI is going to absolutely fail any kind of due diligence.

There's a little thing called the halting problem, which in this context basically means there's no way to guarantee that the AI will be restricted from doing anything you don't want it to do. An amusing example was an Air Canada chatbot that hallucinated a refund policy that a court said it had to honor [1].

How confident are we going to be that AIs won't leak customer information, steal money from customers and so on? I'm not confident at all.

[1]: https://arstechnica.com/tech-policy/2024/02/air-canada-must-...