Hacker News

swazzy · yesterday at 5:35 AM

similar vibes as "640k ought to be enough for anybody"


Replies

Philip-J-Fry · yesterday at 9:36 AM

I think the difference is that with LLMs, in a lot of cases you do see some diminishing returns.

I won't deny that the latest Claude models are fantastic at one-shotting loads of problems. But we have an internal proxy in front of a load of models running on Vertex AI, and I accidentally started using Opus/Sonnet 4 instead of 4.6. I genuinely didn't notice until I checked my configuration.

AI models will get to the point where, for 99% of problems, something like Gemma is gonna work great for people. Pair it up with an agentic harness on the device that lets it open apps and click buttons and we're done.

I still can't fathom that it's 2026, years into the AI boom, and I still can't ask Gemini to turn shuffle mode on in Spotify. I don't think model intelligence is as much of an issue as people think it is.

shermantanktop · yesterday at 6:02 AM

Well you can do a lot with 640k…if you try. We have 16G in base machines and very few people know how to try anymore.

The world has moved on, that code-golf time is now spent on ad algorithms or whatever.

Escaping the constraint delivered a different future than anticipated.

pdpi · yesterday at 8:50 AM

Look at the whole history of computing. How many times has the pendulum swung from thin to fat clients and back?

I don't think it's even mildly controversial to say that there will be an inflection point where local models get Good Enough and this iteration of the pendulum will swing to fat clients again.

flir · yesterday at 7:29 AM

Assuming improvements in LLMs follow a sigmoid curve, even if the cloud models are always slightly ahead in terms of raw performance it won't make much of a difference to most people, most of the time.
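The sigmoid assumption can be made concrete with a toy logistic curve. This is a minimal sketch, not a measurement: the midpoint year, steepness, and ceiling are all illustrative assumptions. The point is just that, on any such curve, year-over-year gains shrink once you pass the midpoint.

```python
import math

def capability(year, midpoint=2025.0, steepness=1.2, ceiling=100.0):
    """Toy logistic (sigmoid) curve of model capability over time.

    All parameters are illustrative assumptions, not measurements.
    """
    return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

# Year-over-year gains: large near the midpoint, shrinking after it.
gains = [capability(y + 1) - capability(y) for y in range(2022, 2030)]
```

Under these made-up parameters, the gain peaks around the midpoint and then tails off, which is the "diminishing returns" shape the argument relies on.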

The local models have their own advantages (privacy, no as-a-service model) that, for many people and orgs, will offset a small performance gap. And, of course, you can always fall back on the cloud models should you hit something particularly chewy.

(All IMO - we're all just guessing. For example, good marketing or an as-yet-undiscovered network effect of cloud LLMs might distort this landscape).

iso1631 · yesterday at 12:40 PM

More than "a 3-year-old laptop is fine".

My ThinkPad is nearly 10 years old. I upgraded it to 32GB of RAM and have replaced the battery a couple of times, but apart from that it's absolutely fine.

If AI that was leading edge in 2023 can run on a 2026 laptop, then presumably AI that is leading edge in 2026 will run on a 2029 laptop. And given that 2023's models were world-changing, that capacity is already on today's laptops.

Either AI grows exponentially, in which case none of this matters because all work will be done by AI by 2035, or it plateaus in, say, 2032, in which case by 2035 those models will run on a typical laptop.