Hacker News

darkwater · yesterday at 4:30 PM

In the not-too-distant future (five years?), small LLMs will be good enough to serve as generic models for most tasks. And if you have a dedicated ASIC small enough to fit in an iPhone, you get a truly local AI device, with the bonus that there is something genuinely new to sell with every generation (i.e. access to an even more powerful model).


Replies

wmf · yesterday at 6:58 PM

The Taalas approach is much more expensive than the NPU that phones already have.

throwthrowuknow · yesterday at 4:59 PM

It doesn't need to go in the phone if it only takes a few milliseconds to respond and is cheap.
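The trade-off in this comment can be sketched with a toy calculation. Every number below is an assumed, illustrative figure (not a measurement of any real phone NPU or of Taalas hardware): if a remote ASIC generates tokens fast enough, one network round trip can still beat slower on-device generation for anything longer than a few tokens.

```python
# Back-of-envelope latency comparison: remote inference ASIC vs. on-device NPU.
# All constants are illustrative assumptions, not benchmarks.

NETWORK_RTT_MS = 30.0      # assumed mobile round trip to a nearby datacenter
REMOTE_MS_PER_TOKEN = 1.0  # assumed per-token latency of a dedicated ASIC
LOCAL_MS_PER_TOKEN = 25.0  # assumed per-token latency of a phone NPU

def response_latency_ms(tokens: int, ms_per_token: float, rtt_ms: float = 0.0) -> float:
    """Total time to generate `tokens` tokens, plus one network round trip."""
    return rtt_ms + tokens * ms_per_token

tokens = 200  # a short reply
remote = response_latency_ms(tokens, REMOTE_MS_PER_TOKEN, NETWORK_RTT_MS)
local = response_latency_ms(tokens, LOCAL_MS_PER_TOKEN)
print(f"remote ASIC: {remote:.0f} ms, on-device NPU: {local:.0f} ms")
```

Under these assumptions the remote path finishes in roughly a quarter of a second versus several seconds locally; the network cost is a one-time constant while the per-token rate dominates.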
