> Limitations
> Timestamps/Speaker diarization. The model does not feature either of these.
What a shame. Is WhisperX still the best choice if you want timestamps/diarization?
WhisperX is not a model but a software package built around Whisper and some other models, including diarization and alignment ones. Something similar will be built around the Cohere Transcribe model, maybe even just an integration into WhisperX itself.
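Roughly, it chains those models together like this. A minimal sketch of the usual WhisperX Python pipeline (exact function names and arguments can differ between versions; the file path and HF token are placeholders):

    import whisperx

    device = "cuda"
    audio = whisperx.load_audio("audio.mp3")  # placeholder input file

    # 1. Transcribe with the Whisper backend
    model = whisperx.load_model("large-v2", device)
    result = model.transcribe(audio)

    # 2. Align against a phoneme model to get word-level timestamps
    align_model, metadata = whisperx.load_align_model(
        language_code=result["language"], device=device
    )
    result = whisperx.align(result["segments"], align_model, metadata, audio, device)

    # 3. Diarize with a separate model and attach speaker labels to the words
    diarize_model = whisperx.DiarizationPipeline(use_auth_token="HF_TOKEN", device=device)
    diarize_segments = diarize_model(audio)
    result = whisperx.assign_word_speakers(diarize_segments, result)

    for segment in result["segments"]:
        print(segment["start"], segment["end"], segment.get("speaker"), segment["text"])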
For podcasts there is this https://news.ycombinator.com/item?id=47584376
I would try Qwen-ASR: https://qwen.ai/blog?id=qwen3asr
See the very bottom of the page for a transcription with timestamps.
There is also: https://github.com/linto-ai/whisper-timestamped
It doesn't use an extra model (so it supports every language that works with Whisper out of the box and uses less memory); it works by applying Dynamic Time Warping to the cross-attention weights.
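The core idea, stripped down to a toy sketch (this is not whisper-timestamped's actual code; the attention matrix and the 20 ms frame duration below are placeholder assumptions): DTW finds a monotonic path through the token-by-frame cross-attention matrix, and each token's timestamps are read off that path.

    import numpy as np

    def dtw_path(cost):
        """Return the monotonic (token, frame) alignment path minimizing total cost."""
        n_tokens, n_frames = cost.shape
        acc = np.full((n_tokens + 1, n_frames + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n_tokens + 1):
            for j in range(1, n_frames + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(
                    acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]
                )
        # Backtrack from the bottom-right corner to (0, 0)
        i, j, path = n_tokens, n_frames, []
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]

    # attention[t, f]: cross-attention weight of decoded token t on audio frame f
    # (in practice averaged over selected heads/layers and smoothed).
    attention = np.random.rand(5, 50)      # placeholder values
    path = dtw_path(-attention)            # high attention = low alignment cost

    frame_duration = 0.02                  # assumed seconds per encoder frame
    for token in range(attention.shape[0]):
        frames = [f for t, f in path if t == token]
        start = frames[0] * frame_duration
        end = (frames[-1] + 1) * frame_duration
        print(f"token {token}: {start:.2f}s - {end:.2f}s")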
Even in the commercial space, there's a lack of production-grade ASR APIs that support diarization and word-level timestamps.
My experiences with Google's Chirp have been horrendous: it sometimes skips sections of speech entirely, hallucinates speech where the audio contains noise, and produces unreliable word-level timestamps. And all this is even when using their new audio prefiltering feature.
AWS works slightly better, but also has trouble keeping word-level timestamps in sync.
Whisper is nice but hallucinates regularly.
OpenAI’s new transcription models are delivering accurate output but do not support word-level timestamps…
A lot of this could be worked around by sending the resulting transcripts through a few layers of post-processing, but… I just want to pay for an API that is reliable and saves me from doing all that work.
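For reference, the kind of post-processing layers I mean, assuming Whisper-style segment output (the avg_logprob / no_speech_prob fields are from Whisper's verbose output; the thresholds here are arbitrary):

    def clean_segments(segments, min_avg_logprob=-1.0, max_no_speech_prob=0.6):
        cleaned, last_text = [], None
        for seg in segments:
            text = seg["text"].strip()
            # Layer 1: drop segments the model itself is unsure about
            if seg.get("avg_logprob", 0.0) < min_avg_logprob:
                continue
            # Layer 2: drop "speech" detected over what is probably just noise
            if seg.get("no_speech_prob", 0.0) > max_no_speech_prob:
                continue
            # Layer 3: collapse the classic repeated-line hallucination
            if text == last_text:
                continue
            cleaned.append(seg)
            last_text = text
        return cleaned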