If $40k is the barrier to entry for impressive, that doesn't really sell the use case of local LLMs very well.
For the same price in API calls, you could fund AI driven development across a small team for quite a long while.
Whether that remains the case once those models are no longer subsidized is TBD. But as of today the comparison isn't even close.
Sure, but now double the team size. Double it again.
Suddenly that $40k is quite reasonable, because you'll never pay another dollar for at least 2-3 years.
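A quick back-of-envelope sketch of that break-even argument. All figures here are assumptions for illustration (a $40k one-time rig and a $200/month API spend per developer are not numbers from this thread), but the shape holds: doubling the team halves the payback period.

```python
# Break-even sketch: one-time hardware cost vs. recurring API spend.
# Both figures are assumed for illustration, not taken from the thread.
HARDWARE_COST = 40_000       # one-time local rig (assumption)
API_PER_DEV_MONTH = 200      # monthly API spend per developer (assumption)

def breakeven_months(team_size: int) -> float:
    """Months until cumulative API spend matches the one-time hardware cost."""
    return HARDWARE_COST / (API_PER_DEV_MONTH * team_size)

for team in (5, 10, 20):
    print(f"team of {team:2d}: ~{breakeven_months(team):.0f} months")
```

Under these assumed numbers, a 5-person team takes ~40 months to break even, a 20-person team only ~10 — which is the "double the team size" point in miniature.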
With an M3 Max with 64GB of unified RAM you can code with a local LLM, so the bar is much lower.
It's not. I've got a single one of those 512GB machines and it's pretty damn impressive for a local model.
It's what a small business might have paid for an on-prem web server a couple of decades ago, before the cloud caught on. I figure if a legal or medical practice saw value in LLMs, it wouldn't be a big deal to shove $50k into a closet.