Maintainer of WhisperKit here, confirming we do exactly that for longform. We search for the longest "low energy" silence in the second half of the audio window and set the chunking point to the middle of that silence. It uses a version of the WebRTC VAD algorithm, and it significantly speeds up longform transcription because we can run a large number of concurrent inference requests through CoreML's async prediction API. Whisper is also pretty smart about silent portions, since the encoder will tell it if there are any words at all in the chunk, and it simply stops predicting tokens after the prefill step - although you could save the ~100ms encoder run entirely with a good VAD model, which our recently open-sourced pyannote CoreML pipeline can provide.
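For anyone curious what "longest low-energy silence in the second half, cut at its middle" looks like in practice, here's a rough sketch. This is not WhisperKit's actual implementation (that uses a WebRTC-style VAD rather than a raw energy threshold, and is in Swift); the frame size and energy threshold below are made-up illustrative values:

```python
import numpy as np

def chunk_point(audio: np.ndarray, sample_rate: int = 16000,
                frame_ms: int = 20, energy_thresh: float = 1e-4) -> int:
    """Return a sample index to chunk at: the midpoint of the longest
    low-energy run in the second half of the window. Falls back to the
    end of the window if no low-energy frame is found."""
    frame = sample_rate * frame_ms // 1000
    half = len(audio) // 2
    tail = audio[half:]
    n_frames = len(tail) // frame
    # Mean energy per frame over the second half of the window
    energies = (tail[:n_frames * frame].reshape(n_frames, frame) ** 2).mean(axis=1)
    low = energies < energy_thresh

    # Scan for the longest consecutive run of low-energy frames
    best_len, best_start = 0, None
    run_len, run_start = 0, 0
    for i, is_low in enumerate(low):
        if is_low:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_len, best_start = run_len, run_start
        else:
            run_len = 0

    if best_start is None:
        return len(audio)  # no silence found: cut at the window boundary
    mid_frame = best_start + best_len // 2
    return half + mid_frame * frame
```

Cutting in the middle of the silence (rather than at its edge) leaves a margin on both sides, so neither chunk starts or ends mid-word.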
Oh nice, the pyannote CoreML port is interesting. Last time I looked at pyannote it was PyTorch-only, so getting it to run efficiently on Apple Silicon was kind of a pain. Does the CoreML version handle diarization or just voice activity detection?