As long as Apple and Google put reasonable AI capabilities on device, then software engineers will use those capabilities when it makes sense (the article gives lots of good examples of capabilities that make sense to run locally). As the author notes, it's cheaper and more reliable to run these things locally.
That also doesn't preclude LLM services from being massively successful; they'll just have to justify the pricing and complexity that comes with their adoption, just like any other product.
> they'll just have to justify the pricing
like by selling it at a loss to build dependencies and then jacking the price up year after year by whatever amount is just below the cost of removing the dependency
In an ideal world they will. In reality most will use online AI, because it's the path of least resistance and more familiar.
> That also doesn't preclude LLM services from being massively successful; they'll just have to justify the pricing and complexity that comes with their adoption, just like any other product.
What is completely different from every other product is how much they're spending, and how much they're obligating themselves to spend going forward. I think there's a very good chance the existing providers come out of this miles underwater. Even if the business isn't the everything-to-everybody they're banking on it being, they still owe all that money back to the people they borrowed it from, and those lenders will be a lot less likely to float them more cash to get back to a normal operating mode if they burned the last ocean of cash promising the universe and wound up with "oh yeah, that's pretty useful sometimes."