
clejack · today at 2:06 PM

Yes, I recently got access to an annotation platform for LLMs, and I've found many projects there dedicated to generating chain-of-thought (CoT) outputs.

These CoT outputs are the same sort of illusion as the general output. Annotators feed the models scripts of what it looks like to solve problems, so the models generate outputs that look like problem solving.
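For context, one of those annotation records might look roughly like the sketch below. The schema is hypothetical (invented for illustration, not from any actual platform), but it shows the key point: a human writes the "reasoning" steps by hand, and the model is trained to reproduce text in that shape rather than to derive it.

    # Hypothetical schema for a scripted chain-of-thought training sample.
    # The annotator authors every step; the model just learns the pattern.
    cot_sample = {
        "prompt": "Alice has 3 boxes of 12 apples and gives away 10. How many remain?",
        "reasoning": [
            "Compute the total: 3 boxes * 12 apples = 36 apples.",
            "Subtract what was given away: 36 - 10 = 26 apples.",
        ],
        "final_answer": "26",
    }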

I can't remember if I mentioned it here before, but an LLM seems to be an extremely powerful synthesis machine. If you give it all of the individual components of a complex problem, one that humans might find intractable due to scope or bias, it may be able to crack it.