Hacker News

apublicfrog last Sunday at 9:58 PM

> It's a very dangerous gamble. Today incredible value is available for nearly everyone. But it may stop without any warning, for reasons outside our control.

What stops you from running the best open-weight LLMs currently available on consumer-grade hardware for the rest of time? They're good enough for 95% of use cases, and they don't have a use-by date. From what I can see, the "danger" is not having the next tier that comes out, but the impact of that is very low.


Replies

giobox last Sunday at 10:01 PM

> they don't have a use-by date

For quite a lot of use cases, the current systems arguably do get worse over time if not continually updated. The knowledge cutoff date will start to hurt more and more as the weights age in a hypothetical scenario where you are stuck with them forever.

Coding, one of the most popular use cases today, would suffer if the model only understood, say, the Java of several versions ago.

https://en.wikipedia.org/wiki/Knowledge_cutoff

turtlebits last Sunday at 10:17 PM

FOMO. A new model comes out weekly and the HN crowd debates the minutiae of the changes.

Pockets are too deep; it will only change once everyone is out of money.

lxgr last Sunday at 10:46 PM

They’re really not good enough, unless you consider 64 GB of memory or more to be consumer grade.
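The memory figure isn't arbitrary: weight storage alone scales linearly with parameter count and quantization level, before you account for KV cache or activations. A rough back-of-envelope sketch (the function name and numbers are illustrative, not from the thread):

```python
# Lower-bound memory estimate for holding LLM weights in RAM/VRAM.
# bits_per_weight is set by the quantization level (16 = fp16, 8 = int8,
# 4 = typical 4-bit quant). Real runtimes also need memory for the KV
# cache and activations, so actual usage is higher than this.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# An 8B model at 4-bit quantization fits on an ordinary laptop:
print(weight_memory_gb(8, 4))   # 4.0 GB
# A 70B model at 4-bit needs ~35 GB for weights alone:
print(weight_memory_gb(70, 4))  # 35.0 GB
```

By this arithmetic, the larger open-weight models people compare against frontier services do land in the 32-64+ GB range once you include cache overhead, which is exactly the "is that consumer grade?" question.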

root_axis yesterday at 2:42 AM

> They're good enough for 95% of use cases

They're not at all, not even close. Especially when you consider the use cases for people who are paying for LLM services today.

nightski last Sunday at 10:33 PM

Hardware. Frontier labs are driving up demand so much that it's priced significantly above cost, making it far less affordable. Just look at Nvidia's profit margins.

suika last Sunday at 10:06 PM

The use cases in the future will be nothing like the use cases from today.

avazhi yesterday at 12:37 AM

> What stops you from running the best open-weight LLMs currently available on consumer-grade hardware for the rest of time?

Uh… the hardware requirements? And stop acting like some dog shit 8B model the average Joe can run on a laptop is even close to being comparable to what Claude or even Codex can currently do.

I have pretty good hardware and I’ve tinkered with the best sub-150B models you can use and they are awful compared to Anthropic/OAI/Grok.

ai_fry_ur_brain last Sunday at 11:40 PM

95% of use cases? What are you smoking.
