Just a cautionary case of porting to Rust using AI
https://blog.katanaquant.com/p/your-llm-doesnt-write-correct...
I think there's a different lesson to take from those cases: the LLM will build to whatever you give it a feedback loop for.
If you only give it the logical tests, it won't consider speed at all. If you also include tests that measure speed and ask the LLM to match the original's performance, it'll do that too (a sketch of what such a test might look like is below).
It's the same class of error as everything else with LLMs: it has no common-sense context for the things people consider important, and if you don't enforce the boundaries, it will ignore them.
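For example, the performance expectation can be expressed as a test the model has to keep green alongside the correctness tests. A minimal sketch in Rust, where `parse_records` and the 50 ms budget are hypothetical stand-ins for whatever function was ported and whatever the original implementation achieves:

    use std::time::Instant;

    // Hypothetical function under test; stands in for the ported code.
    fn parse_records(input: &str) -> usize {
        input.lines().count()
    }

    #[test]
    fn parses_correctly() {
        // The "logical" test: correctness only.
        assert_eq!(parse_records("a\nb\nc"), 3);
    }

    #[test]
    fn parses_fast_enough() {
        // The performance feedback loop: fail if the port regresses past a
        // budget taken from the original implementation.
        let input = "x\n".repeat(1_000_000);
        let start = Instant::now();
        let n = parse_records(&input);
        assert_eq!(n, 1_000_000);
        assert!(
            start.elapsed().as_millis() < 50,
            "port too slow: {:?}",
            start.elapsed()
        );
    }

In practice you'd reach for a real benchmark harness rather than a wall-clock assertion, but the point is the same: if speed isn't in the loop, the model has no reason to optimize for it.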
Discussed here if anyone's interested:
LLMs work best when the user defines their acceptance criteria first - https://news.ycombinator.com/item?id=47283337 - March 2026 (422 comments)
Also, passing tests doesn't mean something works.
The Claude Code C compiler passed 100% of the gcc test suite and still couldn't run a hello world...
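Which is an argument for end-to-end smoke tests on top of the unit suite: actually build something with the artifact and run it. A minimal sketch in Rust, where the compiler path ./target/release/mycc and its flags are made up for illustration:

    use std::fs;
    use std::process::Command;

    #[test]
    fn hello_world_actually_runs() {
        // End to end: compile a real program with the compiler under test...
        fs::write(
            "hello.c",
            "#include <stdio.h>\nint main(void) { puts(\"hello\"); return 0; }\n",
        )
        .unwrap();

        let compile = Command::new("./target/release/mycc")
            .args(["hello.c", "-o", "hello"])
            .status()
            .unwrap();
        assert!(compile.success(), "compiler exited with an error");

        // ...then run the resulting binary and check what it prints.
        let run = Command::new("./hello").output().unwrap();
        assert!(run.status.success());
        assert_eq!(String::from_utf8_lossy(&run.stdout), "hello\n");
    }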