
zozbot234 · today at 8:05 AM · 0 replies

Arguably it makes more sense technically to get the model support into llama.cpp, which already provides many options for split GPU+CPU inference.
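For context, the split the comment refers to is exposed in llama.cpp through the `--n-gpu-layers` (`-ngl`) flag, which offloads a chosen number of transformer layers to the GPU while the rest run on the CPU. A minimal sketch (the model path is a placeholder, not a real file):

```shell
# Hypothetical model path; -ngl is llama.cpp's --n-gpu-layers flag.
# Offload 20 layers to the GPU and keep the remainder on the CPU:
./llama-cli -m models/example-7b.Q4_K_M.gguf -ngl 20 -p "Hello"

# -ngl 0 runs inference entirely on the CPU; a value at or above the
# model's layer count offloads everything to the GPU.
```

Because the split is per-layer, the same binary scales from CPU-only machines to fully GPU-resident inference by changing one parameter.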