Hacker News

kamranjon — last Saturday at 6:32 PM

I wouldn't say that they aren't useful for inference (there are pretty clear performance improvements even from the Asahi effort you linked). It's just that you have to convert the model ahead of time to be compatible with the ANE, which is explained in the README docs for whisper.cpp that I linked above.
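
For illustration, that ahead-of-time step is roughly this kind of coremltools conversion (a minimal sketch with a stand-in module, not whisper.cpp's actual conversion script):

    # Minimal sketch of the ahead-of-time step: trace a PyTorch module and
    # convert it to Core ML targeting the Neural Engine. TinyEncoder is a
    # stand-in, not Whisper's real encoder.
    import torch
    import torch.nn as nn
    import coremltools as ct

    class TinyEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv1d(80, 384, kernel_size=3, padding=1)

        def forward(self, mel):
            return self.conv(mel)

    encoder = TinyEncoder().eval()
    example = torch.randn(1, 80, 3000)  # Whisper-style input: 80 mel bins x 3000 frames
    traced = torch.jit.trace(encoder, example)

    mlmodel = ct.convert(
        traced,
        convert_to="mlprogram",
        inputs=[ct.TensorType(name="mel", shape=example.shape)],
        compute_precision=ct.precision.FLOAT16,   # the precision the ANE runs
        compute_units=ct.ComputeUnit.CPU_AND_NE,  # let Core ML schedule onto the ANE
    )
    mlmodel.save("encoder.mlpackage")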

I would say, though, that this likely rules them out for training purposes.


Replies

zozbot234 — last Saturday at 6:42 PM

Note that I was only commenting on modern quantized LLMs, which mostly avoid formats like FP16 or INT8, preferring lower precision wherever feasible. When in-memory model values must be padded to FP16/INT8, this slashes your effective use of memory bandwidth, which is what determines token generation speed. So the only real benefit is in the prompt pre-processing phase, and even then it's lower power use compared to the GPU, not higher speed.
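
Rough arithmetic to see why (all numbers assumed for illustration): decode speed is roughly memory bandwidth divided by the weight bytes streamed per token, so padding 4-bit weights up to FP16 costs you about 4x:

    # Back-of-the-envelope: every weight is read once per generated token,
    # so tokens/s ~= bandwidth / weight bytes. All numbers are assumptions.
    def tokens_per_sec(params_billions, bits_per_weight, bandwidth_gb_s):
        bytes_per_token = params_billions * 1e9 * bits_per_weight / 8
        return bandwidth_gb_s * 1e9 / bytes_per_token

    BW = 100.0  # GB/s, an assumed memory bandwidth
    print(tokens_per_sec(7, 4, BW))   # 4-bit 7B model: ~28.6 tok/s
    print(tokens_per_sec(7, 16, BW))  # padded to FP16: ~7.1 tok/s, 4x slower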

conradev — last Saturday at 6:58 PM

My understanding is that model throughput is fundamentally limited at some point by the ANE being narrower than the GPU.

At that point, the ANE loses because you have to split the model into chunks and only one fits at a time.
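
A toy model of that chunking penalty (the chunk count and reload cost are assumptions, just to show the shape of the problem):

    # Toy model: if only one weight chunk is resident at a time, each decoded
    # token pays a reload cost per chunk on top of the compute time.
    def effective_tok_s(compute_tok_s, n_chunks, reload_s_per_chunk):
        seconds_per_token = 1.0 / compute_tok_s + n_chunks * reload_s_per_chunk
        return 1.0 / seconds_per_token

    print(effective_tok_s(30.0, 1, 0.0))    # fits in one chunk: 30.0 tok/s
    print(effective_tok_s(30.0, 8, 0.005))  # 8 chunks, 5 ms reload each: ~13.6 tok/s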
