Hmm..
pacman -Ss ollama | wc -l
16
pacman -Ss llama.cpp | wc -l
0
pacman -Ss lmstudio | wc -l
0
Maybe some day.

yay -S llama.cpp
I just installed llama.cpp on CachyOS after reading this article. It's noticeably faster than Ollama and gives more direct control.
llama.cpp moves too quickly to be added as a stable package. Instead, you can get it directly from AUR: https://aur.archlinux.org/packages?O=0&K=llama.cpp
There are packages for Vulkan, ROCm and CUDA. They all work.
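For anyone following along, a minimal sketch of the install flow described above, assuming a yay-based AUR helper. The backend-specific package names are how the AUR variants are commonly named; double-check against the AUR search linked above, since AUR package names can change.

```shell
# Pick ONE package matching your GPU backend (names assumed; verify on AUR):
yay -S llama.cpp           # CPU-only build
# yay -S llama.cpp-vulkan  # Vulkan backend (most GPUs)
# yay -S llama.cpp-hip     # ROCm/HIP backend (AMD)
# yay -S llama.cpp-cuda    # CUDA backend (NVIDIA)

# Then serve a local GGUF model with the bundled OpenAI-compatible server:
llama-server -m /path/to/model.gguf --port 8080
```

Since these are AUR builds, yay compiles them locally, so a rebuild is needed when llama.cpp updates; that is the trade-off for tracking such a fast-moving project.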