> So if Bob can do things with agents, he can do things.
I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether their output is actually right.
That's the open question to me: how people develop the judgment needed to direct agents and evaluate what they produce.
There's a long, detailed, often-repeated answer to your open question in the article.
Namely, if you can't do it without the AI, you can't tell when it's given you plausible-sounding bullshit.
So Bob just wasted everyone's time and money.