Llama.cpp now ships with a web GUI by default. It previously lacked one. Times have changed.
While that might be true, for as long as its name is “.cpp”, people are going to think it’s a C++ library and avoid it.
Frankly, I think the CLI UX and documentation are still much better for Ollama.
It makes a bunch of decisions for you so you don't have to think much to get a model up and running.
Having read the above article, I just gave llama.cpp a shot. It is as easy as the author says now, though definitely not documented as well. My quickstart:
brew install llama.cpp
llama-server -hf ggml-org/gemma-3n-E4B-it-GGUF --port 8000
Go to localhost:8000 for the web UI. On Linux it accelerates correctly on my AMD GPU, which Ollama failed to do, though of course everyone's mileage seems to vary on this.
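Beyond the web UI, llama-server also exposes an OpenAI-compatible HTTP API at /v1/chat/completions on the same port, so any OpenAI-style client works against it. A minimal stdlib-only Python sketch (the `chat` helper and its default base URL are my own names, not part of llama.cpp; llama-server generally ignores the `model` field since it serves whatever model it was started with):

```python
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": "local",  # placeholder; llama-server serves its loaded model regardless
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST one chat turn to a running llama-server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With the server from the quickstart running, `chat("Hello")` returns the model's reply as a string.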