Hacker News

speedgoose (today at 6:57 AM)

I follow the llama.cpp runtime improvements, and it's also true for this project. They may rush a bit less, but you also have to wait a few days after a model release to get a working runtime with most features.


Replies

Maxious (today at 7:28 AM)

Model authors are welcome to add support to llama.cpp before release, as IBM did for Granite 4: https://github.com/ggml-org/llama.cpp/pull/13550