Hacker News

culopatin · last Friday at 1:50 AM

Do you think these models will keep improving if we stop struggling through problems ourselves and posting our findings on forums?


Replies

vunderba · last Friday at 2:37 AM

Yeah, I think that's a legitimate concern. Even with sufficient training data today, it's hard to know how far these systems can actually generalize their problem-solving abilities once they become data-starved in the future, whether because genuinely new training data grows scarce or because any potential new data is contaminated by LLM radiation.

Too bad we don't have a portal gun to access an infinite number of parallel universes where large language models were never invented, as a source of unlimited fresh training data and unlimited Palpatine power.

adammarples · last Friday at 9:16 AM

I don't think so, because Anthropic now has your question, the steps it tried, and the solution that finally worked, all in text form, already on their servers thanks to your Claude session. Claude usage is itself a goldmine of training data.
