The premise of your link is founded on the energy associated with a single prompt. The source in your link for that energy claim links to a blog post that then links back to an earlier blog post from the original author of the link you provided (it's basically a circular reference).
Basically, there are a lot of words in your initial link, but they all hinge on the reader taking the stated energy assumption for a single (undefined) prompt at face value. If that initial assumption is wrong (at minimum, it's poorly defined in your link), all further conclusions are invalid. Many a scientific publication has pulled this same trick =].
They don't define what a query is when they are talking about AI power usage. If we want to get serious, we'd tie usage to tokens since we can actually track token usage.
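To illustrate the point about tokens being the trackable unit, here's a back-of-envelope sketch. The energy-per-token constant below is a made-up placeholder, not a measured figure; the point is only that a per-token estimate scales with response length, which a flat "per query" number hides.

```python
# Hypothetical back-of-envelope estimate. The constant below is a
# placeholder, NOT a real measurement -- the point is that token counts
# are trackable while "a query" is undefined.
ENERGY_PER_OUTPUT_TOKEN_J = 0.3  # assumed joules per output token

def response_energy_joules(output_tokens: int) -> float:
    """Scale an assumed per-token cost by the actual token count."""
    return output_tokens * ENERGY_PER_OUTPUT_TOKEN_J

# A short reply and a long one differ by an order of magnitude:
print(response_energy_joules(100))   # short answer -> 30.0 J (under the assumption)
print(response_energy_joules(2000))  # long answer  -> 600.0 J (under the assumption)
```

Whatever the true per-token figure turns out to be, framing usage this way at least makes the unit of measurement explicit.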
>The source in your link for that energy claim links to a blog post that then links back to an earlier blog post from the original author of the link you provided (it's basically a circular reference).
Huh? The latter blog post does link to the former, but not as a source for that claim. It cites an Altman blog, an estimate from EpochAI, an article in the MIT Technology Review (albeit one that estimates 3x higher), and a paper put out by Google. It's surprisingly well cited, and I don't know how you came away from it thinking it was a circular reference. The Google study is in the subheading!