Hacker News

jwr · yesterday at 8:42 AM

As a heavy user of MacWhisper (for dictation), I'm looking forward to better speech-to-text models. MacWhisper with the Whisper Large v3 Turbo model works fine, but latency adds up quickly, especially if you use online LLMs for post-processing (which really improves things a lot).
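To make the post-processing step concrete, here is a toy rule-based sketch of the kind of cleanup an LLM does on dictated text (the filler-word list is made up for illustration; a real LLM handles far more than this):

```python
import re

def clean_transcript(text: str) -> str:
    """Toy stand-in for LLM post-processing of dictated text."""
    # Strip common filler words (hypothetical list, not from any real tool)
    for filler in ("um", "uh", "erm"):
        text = re.sub(rf"\b{filler}\b,?\s*", "", text, flags=re.IGNORECASE)
    # Collapse whitespace left behind by the removals
    text = re.sub(r"\s+", " ", text).strip()
    # Capitalize the start of each sentence
    parts = re.split(r"(?<=[.!?])\s+", text)
    return " ".join(p[:1].upper() + p[1:] for p in parts)

print(clean_transcript("um, so i think, uh, this works. yeah it does"))
# → So i think, this works. Yeah it does
```

The point of offloading this to an online LLM instead is that it can also fix homophones and restructure run-on dictation, at the cost of the round-trip latency jwr mentions.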


Replies

atiorh · yesterday at 8:09 PM

MacWhisper supports models like Parakeet v2 that are about 10x faster with the same accuracy (they were the first to ship them, 6-9 months ago). Have you tried those?

kavith · yesterday at 9:31 AM

Not sure if this will help, but I've set up Handy [1] with Parakeet V2 for STT and gpt-oss-120b on Cerebras [2] for post-processing, and I'm happy with the performance of this setup!

[1] https://handy.computer/

[2] https://www.cerebras.ai/
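For anyone wanting to replicate the post-processing half of this setup: Cerebras exposes an OpenAI-compatible chat endpoint, so the request is just a standard chat-completions body. A minimal sketch follows; the system prompt, model string, and temperature are assumptions for illustration, not kavith's actual configuration:

```python
import json

# Hypothetical cleanup instruction; tune to taste.
SYSTEM_PROMPT = (
    "Clean up this dictated text: fix punctuation and casing and remove "
    "filler words, but keep the wording otherwise unchanged."
)

def build_request(raw_transcript: str, model: str = "gpt-oss-120b") -> str:
    """Build the JSON body for a /chat/completions call."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_transcript},
        ],
        "temperature": 0,  # deterministic cleanup, not creative rewriting
    })

# POST this body (with an Authorization: Bearer <key> header) to the
# provider's chat completions endpoint; the cleaned text comes back in
# the first choice's message content.
```

Running the STT step locally (Parakeet) and only the cleanup remotely keeps the latency hit to a single short round trip.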

regularfry · yesterday at 9:19 AM

If you haven't already, give the models that Handy supports a try. They're not Whisper-large quality, but some of them are very fast.

kermitime · yesterday at 10:56 AM

The Parakeet TDT models that FluidAudio has optimized for Core ML are hands down the fastest local models I've tried; worth checking out!

(Offloading to the NPU is where the edge is.)

https://huggingface.co/FluidInference/parakeet-tdt-0.6b-v2-c...

https://github.com/FluidInference/FluidAudio

The devs are responsive, active, and friendly on their Discord, too. You'll find discussions on all the latest whizbangs: VAD, TTS, EOU, etc.

smcleod · yesterday at 12:18 PM

Handy with Parakeet v2 is excellent.