Yes. The models are good, the models are fast, and the internal tooling has caught up at this point too. There's still a lot of UI/UX/tooling work being sorted out, plus VCS integrations and deeper problems I probably can't talk about, but I'd say most people's frustrations are about the rate of change much more than the actual abilities.
One thing that's interesting is a bunch of internal thought leaders who swear by the Flash models over the Pro models. Whether they're right or not doesn't really matter; the interesting bit to me is that we've reached a point where "better" models aren't necessarily more useful, and that faster models with more work on the harnesses may be a better trade-off.
>One thing that's interesting is a bunch of internal thought leaders who swear by the Flash models over the Pro models.
I've seen people outside Google favoring the Gemini Flash models over Pro as well.
There are also some benchmarks where the Flash models score higher, so yes, apparently speed does matter.
You’re absolutely kidding yourself if you genuinely believe that.
> a bunch of internal thought leaders who swear by the Flash models over the Pro models
I'm coming around on this too. deepseek-v4-flash is impressive.