Hacker News

10xDev · yesterday at 11:37 AM · 3 replies

The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window, or stored in a file to read the next time it needs to access it.
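The frozen-weights-plus-external-memory pattern described above can be sketched roughly as follows. This is an illustrative assumption, not any vendor's actual API: the file name and helper functions (`remember`, `recall_as_context`) are hypothetical, and in practice the recalled text would be prepended to a prompt sent to the unchanged model.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("model_memory.json")  # hypothetical storage location

def remember(key: str, value: str) -> None:
    """Append a fact to the persistent memory file (the model's weights never change)."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def recall_as_context() -> str:
    """Render stored facts as text to place in the model's context window."""
    if not MEMORY_FILE.exists():
        return ""
    memory = json.loads(MEMORY_FILE.read_text())
    return "\n".join(f"- {k}: {v}" for k, v in memory.items())

remember("user_timezone", "UTC+2")
print(recall_as_context())  # → "- user_timezone: UTC+2"
```

The design choice here is the one the comment argues for: all "learning" lives in an inspectable file, so the model itself stays byte-for-byte identical between sessions.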


Replies

kergonath · yesterday at 11:58 AM

> The fact that models aren't continually updating seems more like a feature.

I think this is true to some extent: we like our tools to be predictable. But we've already made one jump by going from deterministic programs to stochastic models. I am sure that the moment a self-evolving AI shows up that clears the "useful enough" threshold, we'll make that jump as well.

lxgr · yesterday at 9:16 PM

Persistent memory through text in the context window is a hack/workaround.

And generally:

> I want to know the model is exactly the same as it was the last time I used it.

What exactly does that gain you, when the overall behavior is still stochastic?

But still, if it's important to you, you can get the same behavior by taking a model snapshot once we crack continuous learning.

jnd-cz · yesterday at 2:16 PM

Unless you use your own local models, you don't even know when OpenAI or Anthropic have tweaked the model, whether slightly or substantially. One week it's version x, the next week it's version y. It's just like your operating system continuously evolving, with everything from small patches to specific apps up to a whole new kernel version and OS release.
