This has nothing to do with the cost of storage. Surprisingly, you are not better informed than Anthropic about the economics of serving AI model inference.
A sibling comment explains:
https://news.ycombinator.com/item?id=47886200