Hacker News

kgeist · today at 9:15 AM · 0 replies

>No mention of the fact that Ollama is about 1000x easier to use

I remember that changing the context size from the default, unusably small 2k to something bigger that the model actually supports required creating a new model file in Ollama if you wanted the change to persist. (The alternative was to set an env var before running Ollama; but if you're going that low-level anyway, why not just launch llama.cpp?) How is that easier? Did they change this?
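For reference, the workaround described above looked roughly like this (the base model name and context size here are illustrative, not from the original comment): write a Modelfile that overrides the `num_ctx` parameter, then build a derived model from it with `ollama create`.

```
FROM llama3
PARAMETER num_ctx 8192
```

Then run `ollama create llama3-8k -f Modelfile` and use `llama3-8k` instead of the base model. The env-var route mentioned above is setting the context length on the server process before launch, which applies globally rather than per-model.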

I remember people complaining that model X was "dumb" simply because Ollama capped the context size at a ridiculously small number by default.

IMHO, trying to model Ollama after Docker actually makes it harder for casual users, and power users will have an easier time with llama.cpp directly.