They needed to announce something after the Anthropic slop rewrite of Bun.
In an ideal world they would allocate 50% of compute to finding errors in that rewrite and publish how bad Claude is, but that would undermine confidence in slop in general, so that is not going to happen.
The best way I've found to work with LLMs is another OpenAI project, Symphony (which I implemented for Linear/GitHub and OpenCode[0]).
It integrates with your issue tracker and makes the tracker the UI for the LLM. It also clones the repo for every ticket, and can set up fixtures/etc. I can work on multiple items at a time, which is fantastic because otherwise you have to wait for the LLMs a lot.
Can someone recommend an IDE that can be used with a self-hosted model (via an OpenAI-compatible API or similar)?
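Not an IDE recommendation, but for context: most editors that let you set a custom base URL can talk to any server that speaks the OpenAI-compatible chat completions protocol (self-hosted servers like llama.cpp's llama-server or vLLM expose it). A minimal sketch of what that request body looks like, assuming a hypothetical local endpoint at localhost:8000 and a placeholder model name:

```python
import json

# Hypothetical base URL for a self-hosted server exposing the
# OpenAI-compatible API (e.g. llama.cpp's llama-server or vLLM).
BASE_URL = "http://localhost:8000/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,  # whatever name your local server registers
        "messages": [{"role": "user", "content": prompt}],
    }


body = build_chat_request("local-model", "Explain this function.")
print(json.dumps(body))
```

Any editor or plugin that accepts a base URL override sends essentially this payload, so the model name and endpoint are the only pieces you normally have to configure.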