Yeah that makes sense, chunking on silence would sidestep the latency issue pretty cleanly. I've been running it through a basic FastAPI wrapper, so it just takes whatever audio blob gets thrown at it, no chunking logic on the server side. Might be worth adding a VAD pass before sending to Whisper though, that would cut down on processing dead air too.
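For anyone curious what that pre-pass might look like: here's a minimal sketch of an energy-based silence trim you could run before handing audio to Whisper. The frame size and threshold are made-up illustrative values (a real setup would use something like webrtcvad or a learned VAD model rather than a fixed energy cutoff), and `trim_silence` is a hypothetical helper name, not anything from an actual library.

```python
import numpy as np

def trim_silence(audio: np.ndarray, sample_rate: int = 16000,
                 frame_ms: int = 30, threshold: float = 1e-4) -> np.ndarray:
    """Drop leading/trailing frames whose mean energy is below threshold.

    Illustrative energy-threshold VAD; a production pre-pass would use
    webrtcvad or a learned model instead of a hard-coded cutoff.
    """
    frame_len = sample_rate * frame_ms // 1000
    n_frames = len(audio) // frame_len
    frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    voiced = np.flatnonzero(energy >= threshold)
    if voiced.size == 0:
        return audio[:0]  # all silence: nothing to transcribe, skip Whisper
    start = voiced[0] * frame_len
    end = (voiced[-1] + 1) * frame_len
    return audio[start:end]
```

In the FastAPI handler you'd run this on the decoded blob and only forward the trimmed array (or bail early on an empty result), which saves both upload-to-model latency and dead-air inference time.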
Maintainer of WhisperKit here, confirming we do exactly that for longform. We search for the longest "low energy" silence in the second half of the audio window and set the chunking point to the middle of that silence. It uses a version of the WebRTC VAD algorithm, and it significantly speeds up longform because we can run a large number of concurrent inference requests through CoreML's async prediction API. Whisper is also pretty smart about silent portions: the encoder output tells the decoder whether there are any words at all in the chunk, and it simply stops predicting tokens after the prefill step. Although with a good VAD model you could skip the ~100ms encoder run entirely, which our recently open-sourced pyannote CoreML pipeline can do.
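The chunk-point search described above can be sketched in a few lines: scan the second half of the window for the longest run of low-energy frames, then cut at the middle of that run. This is a sketch of the idea in Python, not WhisperKit's Swift implementation, and the frame size and energy threshold are assumed placeholder values rather than the real tuned ones.

```python
import numpy as np

def chunk_point(audio: np.ndarray, sample_rate: int = 16000,
                frame_ms: int = 20, threshold: float = 1e-4) -> int:
    """Return a sample index to cut at: the middle of the longest
    low-energy run in the second half of the window.

    Frame size and threshold are illustrative, not WhisperKit's values.
    """
    frame_len = sample_rate * frame_ms // 1000
    n_frames = len(audio) // frame_len
    half = n_frames // 2
    frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)

    best_start, best_len = None, 0
    run_start, run_len = None, 0
    # Only consider the second half, so every chunk keeps a decent minimum size.
    for i in range(half, n_frames):
        if energy[i] < threshold:
            if run_start is None:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_start, run_len = None, 0

    if best_start is None:
        return n_frames * frame_len  # no silence found: cut at the window end
    mid_frame = best_start + best_len / 2
    return int(mid_frame * frame_len)
```

Cutting in the middle of the silence (rather than at its edge) gives both the current chunk and the next one a margin of quiet audio, so neither side starts or ends mid-word.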