
robviren · yesterday at 3:44 AM

I find it fascinating to give LLMs huge stacks of reflective context. It's incredible how good they are at handling huge amounts of CSV-like data. I imagine they would be good at trimming their own context down.

I did some experiments exposing the raw latent states of a small 1B Gemma model, captured via hooks, to a large model as it processed data. I'm curious whether the large model could nudge the smaller model's latents to get the outputs it wants. I desperately want to get thinking out of tokens and into latent space; it's something I've been chasing for a while.
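
In case it helps anyone try this: a minimal sketch of what the hook setup can look like, assuming PyTorch and Hugging Face transformers. The checkpoint name and the model.model.layers path are assumptions based on the usual transformers layout, not details from the original experiment:

    # Sketch: capture per-layer hidden states ("latents") from a small Gemma
    # model using PyTorch forward hooks. Checkpoint name and layer path are
    # assumptions; any small causal LM with the same layout would work.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-3-1b-it"  # assumed checkpoint
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    model.eval()

    captured = {}  # layer index -> hidden state tensor

    def make_hook(idx):
        def hook(module, inputs, output):
            # Decoder layers return a tuple; the first element is the hidden state.
            hidden = output[0] if isinstance(output, tuple) else output
            captured[idx] = hidden.detach()
        return hook

    # Register a forward hook on every decoder layer.
    handles = [layer.register_forward_hook(make_hook(i))
               for i, layer in enumerate(model.model.layers)]

    with torch.no_grad():
        ids = tok("The latent state of this sentence is", return_tensors="pt")
        model(**ids)

    for h in handles:
        h.remove()

    # captured[i] now holds the [batch, seq, hidden] activations of layer i,
    # which could be serialized and fed to a larger model as context.
    print({i: t.shape for i, t in captured.items()})

Steering the small model (rather than just observing it) would mean a forward pre-hook that overwrites those tensors instead of copying them out, but the read path above is the easy half.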


Replies

planckscnst · yesterday at 6:16 AM

Yes - I think there is untapped potential in figuring out how to understand and use the latent space. I'm still at the language layer. I occasionally stumble across something that seems to tap into something deeper, and I'm getting better at finding those. But direct observability and actuation of those lower layers is an area that I think is going to be very fruitful if we can figure it out.