Hacker News

storus · yesterday at 2:43 AM · 3 replies

With local inference on the pretty decent local models we have nowadays (Qwen-3.5 and better), it's not much of a concern anymore.


Replies

walthamstow · yesterday at 12:01 PM

Sure, if you've got a £5k laptop

IncreasePosts · yesterday at 7:07 PM

Sure it is. There's still an opportunity cost in spending tokens (time/energy) creating a library from scratch vs. using a preexisting, well-understood API.

Bishonen88 · yesterday at 7:12 AM

What percentage of people are using local models for anything serious? I reckon single digits, if even that. And in a corporate work environment, probably close to zero.