
VadimPR · today at 2:33 PM

These security failures from Anthropic lately reveal the caveats of relying solely on AI to write code - the safety an experienced engineer provides is not matched by an LLM just yet, even if the LLM can seemingly write code that is just as good.

Or in short, if you give LLMs to the masses, they will produce code faster, but the overall quality will degrade. Microsoft and Amazon found this out quickly. Anthropic's QA process is better equipped to handle this, but cracks are still showing.


Replies

FuckButtons · today at 4:59 PM

To a certain extent, I do wonder if just letting Claude do everything and then using the bug reports and CVEs they find as training data for an RL environment might be part of the plan. "Here's what you did, here's what fixed it, don't fuck up like that again."
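A minimal sketch of what that feedback loop could look like, purely as an assumption - the record fields and reward values here are hypothetical, not anything Anthropic has described:

    # Hypothetical sketch: turn (buggy code, fix) pairs from bug reports
    # into RL-style training examples with a simple reward signal.
    from dataclasses import dataclass

    @dataclass
    class BugReport:
        prompt: str      # task the model was originally given
        buggy_code: str  # what the model wrote
        fixed_code: str  # the patch that resolved the bug/CVE

    def to_training_examples(reports):
        examples = []
        for r in reports:
            # Penalize the original completion, reward the fixed one.
            examples.append({"prompt": r.prompt, "completion": r.buggy_code, "reward": -1.0})
            examples.append({"prompt": r.prompt, "completion": r.fixed_code, "reward": 1.0})
        return examples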

squeegmeister · today at 2:45 PM

Anthropic has a QA process? I run into bugs on the regular, even on the "stable" release channel.