Why are you thinking about phones specifically? Most heavy users are on laptops and workstations. On smartphones there might be a few more innovations necessary (low latency AI computing on the edge?)
Many laptops and workstations also fell for the NPU meme, which in retrospect was a mistake compared to reworking your GPU architecture. Those NPUs are all dark silicon now, just like these Taalas chips will be in 12-24 months.
Dedicated inference ASICs are a dead end. You can't reprogram them, you can't finetune them, and they won't keep any resale value. Outside of cruise missiles, it's hard to imagine where such a disposable technology would be desirable.