The blog post was published a couple months ago, and it looks like there hasn't been a follow-up release with the fully trained model. I'm not sure if there's much to take away from an early checkpoint besides the unique architectural choices they made in their model for faster inference.
Some smaller models from the LFM2.5 family were published on Hugging Face at the end of March, about a month ago. Presumably the larger model just takes longer to post-train, and it will follow those smaller LFM2.5 models in the near future.