Hacker News

today at 4:35 PM · 1 reply

Let's not gloss over the electrical supply. These chips won't work for free.


Replies

jampekka · today at 5:01 PM

LLM inference uses on the order of 1 Wh per query. That's under 10 meters of driving on an EV or running air conditioning for under 5 seconds.

https://hannahritchie.substack.com/p/ai-footprint-august-202...
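The comparison checks out as a back-of-envelope calculation. A minimal sketch, assuming a mid-size EV consumes roughly 170 Wh/km and a window air conditioner draws roughly 1200 W (both figures are assumptions, not from the comment or the linked article):

```python
# Sanity-check the 1 Wh per query comparison. Assumed figures:
QUERY_WH = 1.0        # LLM inference energy per query (order of magnitude)
EV_WH_PER_KM = 170.0  # assumed mid-size EV consumption
AC_WATTS = 1200.0     # assumed air-conditioner power draw

# Distance an EV covers on 1 Wh, in meters
ev_meters = QUERY_WH / EV_WH_PER_KM * 1000

# Seconds of AC runtime on 1 Wh (1 Wh = 3600 J)
ac_seconds = QUERY_WH * 3600 / AC_WATTS

print(f"{ev_meters:.1f} m of EV driving, {ac_seconds:.1f} s of AC")
# → 5.9 m of EV driving, 3.0 s of AC
```

Both results land under the "10 meters" and "5 seconds" bounds quoted above.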
