If I hadn’t empirically proven it, I would agree with you. Computation and semiotics are two fields that want nothing to do with each other. The SRT has been through several stages of quantitative validation. For the first time in 150 years, semiotics is not just a philosophy; it is proven to have computational value. The SRT bolts onto any model and improves it. I’ll be sure to link you to the benchmarks when they're published. The relevance is that it makes these treasured black boxes irrelevant.
Bold fucking claims for a "paper" that bolts an awkward architectural tumor onto an LLM and proves only that it doesn't completely die on a purely synthetic task.
That's further than most "AI psychosis" papers go, but still not in any way far.
And "makes these treasured black boxes irrelevant"?
With wild claims like this, either demo a generational improvement on a live model or GTFO.