Hacker News

rsync · yesterday at 7:07 PM

"... having Claude Code, today, analyze a codebase and suggest where and how to fuzztest it ..."

I recently directed ChatGPT, through the web interface, to create a Firefox extension to obfuscate certain HTTP queries, and was refused because:

"... (the) system is designed to draw a line between privacy protection and active evasion of safeguards."

Why would this same system empower fuzzing a binary (or other resource), and why would it allow me to work toward generating an exploit?
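For concreteness, the kind of "fuzz testing" being discussed is just feeding a target function large volumes of random or mutated input and watching for crashes. A minimal sketch (the `parse_header` target is hypothetical, chosen purely for illustration):

```python
import random
import string

def parse_header(data: str) -> dict:
    # Hypothetical function under test: parses "Key: Value" lines.
    result = {}
    for line in data.splitlines():
        key, _, value = line.partition(":")
        result[key.strip()] = value.strip()
    return result

def fuzz(iterations: int = 1000, seed: int = 0) -> int:
    # Throw random printable strings at the parser and count crashes.
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        data = "".join(
            rng.choice(string.printable)
            for _ in range(rng.randint(0, 64))
        )
        try:
            parse_header(data)
        except Exception:
            crashes += 1
    return crashes
```

Real fuzzers (libFuzzer, AFL++) add coverage guidance and input mutation, but the dual-use question is the same: the harness that finds a defender's crash also finds an attacker's.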

Do the users just keep rephrasing the directive until the model acquiesces? Or does the API not have the same training wheels as the web interface?


Replies

ogig · yesterday at 7:14 PM

This very question was put to Nicholas Carlini of Anthropic in this talk: https://www.youtube.com/watch?v=1sd26pWhfmg

The answer is nuanced and worth watching in full, but the gist is that they don't know where to draw the line. Defenders need tools as good as the attackers'. Attackers will jailbreak models while defenders might not, so is the safeguard a net positive in that case? Carlini actively asks the audience and the community for help in deciding how to proceed.
