Hacker News

testing22321 today at 2:44 PM

I see all these LLM posts about whether a certain model can run locally on certain hardware, and I don’t get it.

What are you doing with these local models that run at x tokens/sec?

Do you have the equivalent of ChatGPT running entirely locally? What do you do with it? Why? I honestly don’t understand the point or use case.


Replies

svachalek today at 4:17 PM

1. There are small local models that match the capabilities of frontier models from a year ago

2. They aren't harvesting your data for government files or training purposes

3. They won't be altered overnight to push advertising or a political agenda

4. They won't have their pricing raised at will

5. They won't disappear as soon as their host wants you to switch

samuel today at 2:50 PM

Chat is certainly an option, but the real deal is agents, which have access to far more sensitive information.

dec0dedab0de today at 3:18 PM

Most of the LLM tooling can handle different models. Ollama makes it easy to install and run different models locally, so you can configure aider, VS Code, or whatever you're using to connect to ChatGPT to point at your local models instead.
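This works because Ollama exposes an OpenAI-compatible HTTP API on localhost (port 11434 by default), so any tool that lets you override the API base URL can talk to it. A minimal stdlib-only sketch of what those tools do under the hood, assuming you've pulled a model such as llama3 (`ollama pull llama3` -- model name is just an example):

```python
import json
import urllib.request

# Ollama's default OpenAI-compatible chat endpoint.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a chat-completion request in the OpenAI wire format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Why run models locally?")
# To actually send it (requires a running Ollama server):
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Since the wire format matches OpenAI's, most clients only need `base_url` swapped to point at localhost; no code changes beyond that.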

None of them are as good as the big hosted models, but you might be surprised at how capable they are. I like running things locally when I can, and I also like not worrying about accidentally burning through tokens.

I think the future is multiple locally run models that call out to hosted models when necessary. I can imagine every device shipping with a base model and using LoRAs to learn about the user's needs, with companies and maybe even households running their own shared models to do the heavier lifting, while companies like OpenAI and Anthropic continue to host the most powerful and expensive options.
