Hacker News

LarsDu88 today at 5:08 PM

I think an interesting thing about recent AI developments is that it's all happening right as we hit the diminishing-returns side of another "exponential that's actually a sigmoid": Moore's law.
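The "exponential that's actually a sigmoid" point can be seen numerically: early on, a logistic curve is indistinguishable from an exponential, then it flattens out. A minimal sketch (the curve and step sizes here are illustrative, not any actual Moore's-law fit):

```python
import math

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

# Early in the curve, each unit step multiplies the value by ~e,
# exactly like a pure exponential...
early = sigmoid(-9) / sigmoid(-10)

# ...but late in the curve the same step barely moves it:
# the "exponential" was a sigmoid all along.
late = sigmoid(10) / sigmoid(9)

print(round(early, 3))  # ≈ 2.718, i.e. e
print(round(late, 3))   # ≈ 1.0
```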

The naive expectation is that AI will slow down because Moore's law is coming to an end, but if you really think about the models and how they are currently implemented in silicon, they are still inefficient as hell.

At some point someone will build a tensor-processing chip that replaces all the digital matmuls with analogue log-amp matmuls, or some breakthrough in memristors will start breaking down the barrier between memory and compute.
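For reference, the log-amp idea rests on the identity log(ab) = log(a) + log(b): an analogue logarithmic amplifier turns each multiply into a cheap addition, leaving only an exponentiation and a sum. A minimal numerical sketch of that identity applied to a matmul (positive-valued matrices assumed, since logs need positive inputs; this illustrates the math, not any real chip):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (3, 4))  # positive-only, as a log-domain scheme requires
B = rng.uniform(0.1, 1.0, (4, 2))

logA, logB = np.log(A), np.log(B)

# Each product A[i,k] * B[k,j] becomes an addition in the log domain;
# exponentiate back and sum over k to recover the matmul.
products = np.exp(logA[:, :, None] + logB[None, :, :])  # shape (3, 4, 2)
C_log = products.sum(axis=1)

assert np.allclose(C_log, A @ B)
```

The appeal for hardware is that analogue addition of voltages or currents is far cheaper than digital multiplication; the catch is the positivity constraint and analogue noise.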

With the right level of research funding in hardware, the ceiling for AI can be very high.


Replies

paulpauper today at 9:07 PM

Moore's law is being bypassed with volume -- more datacenters.

throwaway27448 today at 5:18 PM

Even at orders of magnitude greater speed, we've still hit diminishing returns in quality of output. We simply haven't found anything like superhuman reasoning ability, just (potentially) superhuman reasoning speed.

cyanydeez today at 5:11 PM

They already did put a model into the silicon, and it's crazy fast: https://chatjimmy.ai/

I'm pretty sure there's a three-year design goal starting this year that'll do that to any of the Qwen, DeepSeek, etc. models. There's a lot you could do with sped-up models of this quality.

It might even be bad enough that the real bubble is how little we need giant data centers, when 80-90% of use cases could be served by a model baked into a silicon chip rather than, as you say, bloated SOTA.
