I find it interesting that Anthropic is in this position and not OpenAI. Where did OpenAI go wrong? Lack of focus and overambition in some of their spending commitments?
Wasn’t the only thing OpenAI did throwing a half-baked model out for the public to go ham on? I was at Google when they did this, and we already had working LLMs internally; they just weren’t good enough to release without PR backlash. I don’t see why such a flimsy “advantage” should have led to anything other than a moment in the spotlight. The “we have no moat, and neither does OpenAI” essay was published very shortly afterwards.
If anything you ought to expect them to be behind, since they took the position of making all the mistakes first so others (who already had the same or better tech) didn’t have to.
>Where did OpenAI go wrong?
OpenAI was Anthropic. Everyone involved in actually developing GPT jumped ship when Altman performed his coup.
Doesn’t seem that complicated. OpenAI basically had to do a lock-in deal with Microsoft/Azure at the time, and they pioneered this circular-funding hyperscaler deal structure, so there were some rough edges.
Anthropic (all ex-OpenAI) knew the negatives of the deal, so they made a slightly better deal with AWS, not a full lock-in. They also grounded it in hardware from the start, i.e. being the flagship customer for Trainium and the flagship external customer for TPUs.