Hacker News

samuelknight, last Monday at 10:46 PM

If that was a jab at my writing, then yes, I am absolutely being sincere, because I am an expert on this topic. LLMs went from being OK at one-shotting a function to being so good at hacking that it's difficult to evaluate them. Prospective customers get back to us after a demo and tell us about the exploits it found on their services that are so obscure and technical that they wouldn't have thought to look for them.


Replies

streetfighter64, yesterday at 9:05 AM

> Prospective customers get back to us after a demo and tell us about the exploits it found on their services that are so obscure and technical that they wouldn't have thought to look for them.

Um, have you actually verified that those are real exploits, then? "Vague and technical" sounds exactly like a description of AI slop...
