For simple or one-off workflows, that's a good approach.
For long-running, repeatable workflows (e.g. you want to leave your agent running overnight, run the same workflows over and over across different projects, or build more autonomous Devin-like workflows), a robust dedicated workflow engine is needed, in my personal opinion. The same goes if you want audit trails/observability, or vetted workflows (i.e. not having the LLM write them, or having the LLM write them and reviewing them) without having to read through scripts; if you have more complex requirements like different models/providers for different workflow stages, or the things I mentioned previously (context, plans, verification, etc.); or if you have more complex orchestration needs (swarms, fork/join, parallel pipelines, routing/branching, error recovery, etc.).
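To make the fork/join and per-stage-model points concrete, here's a minimal sketch of the kind of declarative workflow a dedicated engine could run. Everything here (`Stage`, `run_workflow`, the model labels) is hypothetical for illustration, not any real tool's API:

```python
# Minimal sketch of a declarative workflow: each stage can name a different
# model/provider, independent stages fork in parallel, and dependent stages
# join on their results. All names here are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    model: str                      # different model/provider per stage
    run: callable = None            # the actual work; stubbed with lambdas below
    depends_on: list = field(default_factory=list)

def run_workflow(stages):
    """Run stages in dependency order; independent stages fork and join."""
    done, results = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(stages):
            ready = [s for s in stages
                     if s.name not in done
                     and all(d in done for d in s.depends_on)]
            futures = {pool.submit(s.run, results): s for s in ready}
            for fut, s in futures.items():
                results[s.name] = fut.result()   # join point; errors surface here
                done.add(s.name)
    return results

stages = [
    Stage("plan",   model="frontier-large", run=lambda r: "plan"),
    Stage("code",   model="frontier-large", run=lambda r: r["plan"] + "->code",
          depends_on=["plan"]),
    Stage("test",   model="small-cheap",    run=lambda r: r["plan"] + "->tests",
          depends_on=["plan"]),               # forks in parallel with "code"
    Stage("review", model="frontier-large",
          run=lambda r: f'review({r["code"]}, {r["test"]})',
          depends_on=["code", "test"]),       # join
]
print(run_workflow(stages)["review"])
```

The point of the declarative shape is that the workflow itself becomes a vettable, auditable artefact: you review the stage graph once, rather than re-reading a script the LLM improvised each run.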
For most users running Claude/Codex themselves on smallish projects, I think it's unnecessary, but as you scale up, I feel that more powerful tools are needed. In corporate settings, repeatable workflows with audit trails, artefact management, and job-queue-based task management also start to matter.
I also feel that a workflow engine as an internal, behind-the-scenes system in a GUI-centric vibe-coding tool could help raise the ceiling compared to existing tools, but I've yet to test that hypothesis. It takes the mistakes out of the user's hands: the engine follows proven workflows whether you ask it to or not, keeping skills for context/knowledge, not for orchestration.
Something else I've been experimenting with a little, though not enough yet to have an opinion, is running small language models locally for orchestration while frontier models do the actual work.
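A rough sketch of that split, with the local model's role stubbed out: the small local model only decides *which* pipeline a task belongs to (cheap, private, always-on), and a frontier model does the expensive work. The function names and pipeline labels are my own assumptions for illustration:

```python
# Sketch of local-orchestrator / frontier-worker split. The routing logic
# here is a trivial stand-in for what a small local model would output;
# nothing below is a real API.

def local_orchestrator(task: str) -> str:
    """Stand-in for a small local model: classify the task into a pipeline."""
    if "fix" in task or "bug" in task:
        return "debug-pipeline"
    if "refactor" in task:
        return "refactor-pipeline"
    return "general-pipeline"

def dispatch(task: str) -> str:
    """Hand the task to a frontier model via the chosen pipeline (stubbed)."""
    pipeline = local_orchestrator(task)
    return f"frontier-model <- {pipeline}: {task}"

print(dispatch("fix the login bug"))
```

The appeal is that orchestration decisions are frequent but simple, so a small local model keeps latency and cost down while the frontier calls are reserved for the hard work.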