Hacker News

anakaine · today at 7:10 AM

Llama.cpp now ships with a GUI by default. It previously lacked one; times have changed.


Replies

nikodunk · today at 7:21 AM

Having read the above article, I just gave llama.cpp a shot. It's as easy as the author says now, though definitely not as well documented. My quickstart:

brew install llama.cpp

llama-server -hf ggml-org/gemma-3n-E4B-it-GGUF --port 8000

Go to localhost:8000 for the Web UI. On Linux it accelerates correctly on my AMD GPU, which Ollama failed to do, though of course everyone's mileage seems to vary on this.
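Besides the Web UI, llama-server also exposes an OpenAI-compatible HTTP API on the same port, so you can script against it once the model is loaded. A minimal sketch using only the standard library (the `/v1/chat/completions` path and payload shape follow the OpenAI chat-completions convention; the port matches the `--port 8000` flag above):

```python
import json
from urllib.request import Request, urlopen


def build_chat_request(prompt: str, base_url: str = "http://localhost:8000") -> Request:
    """Build an OpenAI-style chat-completions request for llama-server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """Send the prompt to a running llama-server and return the reply text."""
    with urlopen(build_chat_request(prompt, base_url)) as resp:
        body = json.load(resp)
    # Responses follow the OpenAI schema: choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

With the server from the quickstart running, `chat("Hello")` returns the model's reply as a string.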

OtherShrezzing · today at 7:12 AM

While that might be true, for as long as its name is “.cpp”, people are going to think it’s a C++ library and avoid it.

mijohara · today at 7:22 AM

Frankly, I think the CLI UX and documentation are still much better for Ollama.

It makes a bunch of decisions for you so you don't have to think much to get a model up and running.