llama.cpp moves too quickly to be maintained as a stable package. Instead, you can install it directly from the AUR: https://aur.archlinux.org/packages?O=0&K=llama.cpp
Packages are available for the Vulkan, ROCm, and CUDA backends, and all of them work.
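As a sketch of the manual AUR workflow (the package name `llama.cpp-vulkan` below is an assumption for illustration; check the AUR search page linked above for the exact name matching your backend):

```shell
# Clone the AUR repo for the backend you want (name assumed here;
# substitute the actual package name from the AUR search results).
git clone https://aur.archlinux.org/llama.cpp-vulkan.git
cd llama.cpp-vulkan

# Review the PKGBUILD before building, as is good practice for AUR packages.
less PKGBUILD

# Build the package and install it with pacman (-s pulls in build deps).
makepkg -si
```

An AUR helper such as `yay` or `paru` can replace these steps with a single install command, if you already use one.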