But planning like this is absolutely something AI can do. In fact, it's exactly where we start on our team when it comes to using AI agents. We take a ticket with just a simple title that somebody threw in there, and we ask the AI to spin up a bunch of research agents to understand the ticket, plan, and ask itself those same questions.
Funnily enough, all the questions you posed are exactly what the agent asks itself right away; it then goes and tries to answer and validate each one, sometimes with input from the user. But I think this planning mechanism is critical: it lets the AI build up an understanding that a human can validate before implementation begins.
And by planning I don't necessarily mean plan mode in your agent harness of choice. We use a custom /plan skill in Claude Code that orchestrates all of this with multiple agents, validation loops, and specific prompts that weed out ambiguities by asking clarifying questions via the AskUserQuestion tool.
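To make that concrete, here's a minimal sketch of what that kind of orchestration loop can look like, in Python. Everything in it is a hypothetical stand-in, not our actual skill: `spawn_research_agents`, `draft_plan`, and `ask_user` are placeholders for the sub-agent fan-out, the model call, and the clarifying-question tool respectively.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    title: str
    description: str = ""

@dataclass
class Plan:
    steps: list[str]
    open_questions: list[str] = field(default_factory=list)

def spawn_research_agents(ticket: Ticket, n: int = 3) -> list[str]:
    # Real version: parallel sub-agents each researching one angle of the
    # ticket (codebase, prior art, edge cases). Stubbed so the sketch runs.
    return [f"finding {i} about {ticket.title!r}" for i in range(n)]

def draft_plan(findings: list[str]) -> Plan:
    # Real version: a model call that drafts ordered steps and flags
    # anything unresolved as an open question. Stub: one step per finding;
    # the ambiguity clears once user answers are mixed into the findings.
    answered = any(f.startswith("answer:") for f in findings)
    questions = [] if answered else ["What does 'done' look like here?"]
    return Plan(steps=[f"address: {f}" for f in findings],
                open_questions=questions)

def ask_user(question: str) -> str:
    # Stand-in for the clarifying-question tool: block on a human
    # instead of letting the agent guess.
    return "answer: " + input(f"{question} ")

def plan_ticket(ticket: Ticket, max_rounds: int = 3) -> Plan:
    findings = spawn_research_agents(ticket)
    plan = draft_plan(findings)
    # Validation loop: keep resolving ambiguities with the human until
    # the plan has no open questions (or we hit the round limit).
    for _ in range(max_rounds):
        if not plan.open_questions:
            break
        answers = [ask_user(q) for q in plan.open_questions]
        findings += answers
        plan = draft_plan(findings)
    return plan  # a human still signs off before implementation starts
```

The key design choice is that the loop terminates on "no open questions left", not on "the model sounds confident", which is what forces the ambiguities to surface.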
The result is that really fuzzy requirements come out clear. We automate all of this through Linear, but you could use your ticket tracker of choice.
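The tracker end of it is just event plumbing. Here's a rough sketch of the trigger side, reusing `plan_ticket` and `Ticket` from above; the payload field names are my approximation, not necessarily Linear's exact webhook schema:

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/webhooks/linear")
def on_ticket_event():
    event = request.get_json(force=True)
    # Approximate field names; check the tracker's real webhook schema
    # and verify the signature header before trusting this in production.
    if event.get("type") == "Issue" and event.get("action") == "create":
        issue = event.get("data", {})
        ticket = Ticket(title=issue.get("title", ""),
                        description=issue.get("description") or "")
        plan = plan_ticket(ticket)
        print(f"Drafted a {len(plan.steps)}-step plan for {ticket.title!r}")
    return "", 200
```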
Absolutely. Eventually the AI will just talk to the CEO / the board to get general direction, and everything else will fall out of that. The level of abstraction the agents can handle is on a steady upward trajectory.