>The source in your link for that energy claim links to a blog post that then links back to an earlier blog post from the original author of the link you provided (it's basically a circular reference).
Huh? The latter blog post does link to the former's blog, but not as a source for that claim. It cites an Altman blog post, an estimate from EpochAI, an article in the MIT Technology Review (albeit one that estimates 3x higher), and a paper put out by Google. It's actually surprisingly well cited, and I don't know how you came away from it thinking it was a circular reference. The Google study is right there in the subheading!
Order of operations:
1) I click your link
2) I click the link associated with the 0.3 Wh energy claim in the section "The full cost of a prompt".
3) The link from 2) takes me to a blog post by Hannah Ritchie. In that post, I click the link associated with the following excerpt:
"Third, as a result, more recent estimates suggested that the assumptions I relied on (h/t to Andy Masley’s work on this) — that one standard query used 3 watt-hours (Wh) of electricity — were possibly an order of magnitude too high. In this case, I was happy to be conservative and overestimate the energy use."
4) That link takes me back to an earlier post by the author of your original link.
None of this quantifies cost per token, which is a much more relevant metric than whatever "cost per text-based query" means, since that measure is both quite broad and quite model-dependent.
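For what it's worth, converting a per-query figure into a per-token one is trivial arithmetic once you pin down an average token count per query, and that token count is exactly the broad, model-dependent part. A throwaway sketch (the 500-token average is my made-up assumption for illustration, not a figure from any of the linked posts):

```python
# Sketch: per-query energy -> per-token energy.
# The token count is an assumed placeholder, not a sourced figure.

WH_PER_QUERY = 0.3               # the 0.3 Wh/query estimate discussed above
ASSUMED_TOKENS_PER_QUERY = 500   # hypothetical average tokens per query

wh_per_token = WH_PER_QUERY / ASSUMED_TOKENS_PER_QUERY
print(f"{wh_per_token * 1000:.2f} mWh per token")  # prints "0.60 mWh per token"
```

Change the assumed token count and the per-token figure moves proportionally, which is the whole point: "cost per query" hides the variable that actually drives the number.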