Bold fucking claims for a "paper" that bolts an awkward architectural tumor onto an LLM and proves only that it doesn't completely die on a purely synthetic task.
That's further than most "AI psychosis" papers go, but still not remotely far.
And "makes these treasured black boxes irrelevant"?
With claims this wild, either demo a generational improvement on a live model or GTFO.
I've been here over a decade longer than you, sport. No need to bully people out when you're only 8 months in. I will post an update here when the model is live. Expect no further engagement.