Hmm.
I had to rebuild llama.cpp from source with the SYCL and CPU-specific backends.
Started with a barebones Ubuntu Server 24.04 LTS install with the HWE kernel, pulled in the Intel dependencies for hardware support (oneAPI, libze), then built llama.cpp with the Intel compilers (icx/icpx) with the SYCL and native-CPU (host-tuned) backends enabled.
In short, I built it mostly by following Intel's own instructions.
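For anyone trying to reproduce this, the steps above roughly correspond to the following sketch, based on llama.cpp's published SYCL build instructions. Exact paths, repo URL, and CMake flags are assumptions and may differ on your system or with newer llama.cpp versions:

```shell
# Load the oneAPI environment (icx/icpx compilers, SYCL / Level Zero runtime).
# Assumes the default oneAPI install location.
source /opt/intel/oneapi/setvars.sh

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# GGML_SYCL=ON enables the SYCL (Intel GPU) backend; GGML_NATIVE=ON tunes
# the CPU backend for the host machine (-march=native style optimizations).
cmake -B build \
  -DGGML_SYCL=ON \
  -DGGML_NATIVE=ON \
  -DCMAKE_C_COMPILER=icx \
  -DCMAKE_CXX_COMPILER=icpx \
  -DCMAKE_BUILD_TYPE=Release

cmake --build build --config Release -j
```

This is just a sketch of the build recipe, not a verified script; check llama.cpp's SYCL backend docs for the current flags.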