The good news is that, apart from the models themselves, we don't need much from these companies:
- Use Opencode or similar open-source harnesses in place of their proprietary ones. This isn't very practical today: heavily subsidized subscriptions are hard to compete with. But those subsidies won't last, and as inference gets cheaper, working with open-source clients should become entirely viable.
- Use OpenRouter or similar services to abstract away the model provider. That makes AI companies interchangeable and erodes much of whatever moat they might have.
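To make the interchangeability point concrete, here is a minimal sketch of talking to OpenRouter's OpenAI-compatible endpoint with only the standard library. The model IDs in the loop are illustrative examples of OpenRouter's `provider/model` naming, and `OPENROUTER_API_KEY` is assumed to be set in the environment; swapping vendors is a one-string change.

```python
import json
import os
import urllib.request

API_BASE = "https://openrouter.ai/api/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload.

    Only the model string differs between providers; the rest of the
    request shape is identical, which is what makes vendors swappable.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload: dict) -> dict:
    """POST the payload to OpenRouter (requires OPENROUTER_API_KEY)."""
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Illustrative model IDs: switching AI companies is one string.
for model in ("anthropic/claude-sonnet-4", "openai/gpt-4o"):
    payload = build_chat_request(model, "Hello")
```

Because the payload is identical across providers, nothing in the calling code couples you to a single AI company.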