Hacker News

EastLondonCoder · last Friday at 12:35 PM

I’ve been using GPT-4o and now 5.2 pretty much daily, mostly for creative and technical work. What helped me get more out of it was to stop thinking of it as a chatbot or knowledge engine, and instead try to model how it actually works on a structural level.

The closest parallel I’ve found is Peter Gärdenfors’ work on conceptual spaces, where meaning isn’t symbolic but geometric. Fedorenko’s research on predictive sequencing in the brain fits too. In both cases, the idea is that language follows a trajectory through a shaped mental space, and that’s basically what GPT is doing. It doesn’t know anything, but it generates plausible paths through a statistical terrain built from our own language use.
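
If it helps to make the geometric picture concrete, here's a minimal sketch using sentence embeddings as a stand-in for whatever internal representation GPT actually uses. The library, model name, and word list are just my own illustrative choices, not anything from Gärdenfors or OpenAI:

    # "Meaning as geometry": related concepts land near each other in an
    # embedding space, so semantic similarity is literally a distance.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")
    words = ["violin", "cello", "trumpet", "banana"]
    vectors = model.encode(words, convert_to_tensor=True)

    # Cosine similarity to "violin": the other instruments sit closer
    # than the fruit, purely as a matter of position in the space.
    sims = util.cos_sim(vectors[0], vectors)
    for word, sim in zip(words, sims[0]):
        print(f"{word:>8}  {sim.item():.3f}")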

So when it “hallucinates”, that’s not a bug so much as a result of the system not being grounded. It’s doing what it was designed to do: complete the next step in a pattern. Sometimes that’s wildly useful. Sometimes it’s nonsense. The trick is knowing which is which.
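
One way to see the "next step in a pattern" idea directly is to look at the probability distribution a model assigns to its next token. This is a minimal sketch, assuming the Hugging Face transformers library and the small public GPT-2 checkpoint as a stand-in for the big hosted models; the prompt is an arbitrary example chosen to show ungrounded completion:

    # The model's only job: score every possible next token, then pick
    # a plausible one. Grounding never enters into it.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("The first person to walk on Mars was", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)

    # Top candidates: all fluent continuations, none grounded in any
    # fact, because no such fact exists yet.
    top = torch.topk(probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id.item())!r:>10}  {prob.item():.3f}")

The model puts probability mass on a name either way; whether the continuation is true is simply not part of the objective.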

What’s weird is that once you internalise this, you can work with it as a kind of improvisational system. If you stay in the loop, challenge it, steer it, it feels more like a collaborator than a tool.

That’s how I use it anyway. Not as a source of truth, but as a way of moving through ideas faster.


Replies

BrtByte · last Friday at 3:49 PM

Once you drop the idea that it's a knowledge oracle and start treating it as a system that navigates a probability landscape, a lot of the confusion just evaporates.

ostacke · last Friday at 12:57 PM

Interesting point about conceptual spaces, but how does that affect how you work with LLMs in practice?
