Hacker News

hamstergene yesterday at 7:21 PM

Because the most important parts of the expertise are coming from their internal "world model" and are inseparable from it.

An average unaware person believes that anything can be put in words, that once the words are said they mean to the reader what the sayer meant, and that the only difficulty could come from not knowing the words or from mistaking ambiguities. The request to take a dev and "communicate" their expertise to another rests on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.

Factual knowledge transfers well via words; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot be transferred that way. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that yields, surprisingly often, surprisingly correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise for themselves.

Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project to provide the opportunity to learn what needs to be learnt. It is not an act of transfer.


Replies

hibikir today at 3:52 AM

A non-trivial part of the big difference between the juniors who seem talented and "get it" and those who don't is precisely their ability to form accurate-enough world models quickly. You can tell who is grasping the "physics" of software and applying them, and who is just writing down recipes without trying to understand the nature of any of the steps.

It's especially noticeable when teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
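To make the translation concrete, here is a minimal sketch (my illustration, not the commenter's; the names `sumVar`, `sumState`, `bind`, and `modify` are all hypothetical): the same summation written first with a mutable variable and then in a hand-rolled state-threading style, which is the core idea behind the State monad.

```typescript
// A state "action": a function from a state to [result, new state].
type State<S, A> = (s: S) => [A, S];

// Monadic bind: run the first action, feed its result to the next one,
// threading the state through both.
const bind = <S, A, B>(m: State<S, A>, f: (a: A) => State<S, B>): State<S, B> =>
  (s) => {
    const [a, s2] = m(s);
    return f(a)(s2);
  };

// Lift a plain state update into an action.
const modify = <S>(f: (s: S) => S): State<S, void> =>
  (s) => [undefined, f(s)];

// The "world of vars": a mutable accumulator.
function sumVar(xs: number[]): number {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}

// The same computation with the variable threaded as state instead.
function sumState(xs: number[]): number {
  let action: State<number, void> = (s) => [undefined, s];
  for (const x of xs) {
    const prev = action;
    action = bind(prev, () => modify((s: number) => s + x)); // total += x
  }
  return action(0)[1]; // run with initial state 0, keep the final state
}
```

The per-element step is the same in both versions; only the plumbing that carries `total` around has changed, which is the sense in which the bones of the computation stay put.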

gooseyard yesterday at 10:27 PM

By complete coincidence, yesterday I came across this link to an article Peter Naur wrote in 1985 (https://pages.cs.wisc.edu/~remzi/Naur.pdf) which I haven't been able to stop thinking about.

I've been doing this for coming up on thirty years now, mostly at one large company, and I spend a significant number of hours every week fielding questions from people who are newer at it and having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that the documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.

The challenge then is to find a way to translate your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader similar to your own. In other words, you want to install your theory in the mind of another person.

A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was gained in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether that's writing documents, holding classes, etc.

This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think Naur's conception of what is actually going on makes much clearer the role that senior engineers can play in the long-term functioning of software companies, if it's something they enjoy doing.

LooseMarmoset today at 12:36 AM

I think that this is actually a good thing. If everyone had the same internal world model, we would have very little innovation.

I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.

I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.

Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.

frgturpwd today at 8:42 AM

Yeah, you can't get it out in "one session of conversation", but you definitely can under a different... context.

"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."

spondyl yesterday at 9:18 PM

> An average unaware person believes that anything can be put in words, that once the words are said they mean to the reader what the sayer meant, and that the only difficulty could come from not knowing the words or from mistaking ambiguities.

"Transmissionism" is a term I've seen to describe this

https://andymatuschak.org/books/

gfody yesterday at 7:50 PM

this is why I only communicate in poetry

complexity is

not what you believe it is

please try listening

k__ today at 8:07 AM

I'd say, on average, it's 50% what you say and 50% communication issues.

Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice work almost mechanically. However, if someone can't tell you which examples you should be exposed to, you'll learn crap.

forlorn_mammoth yesterday at 8:46 PM

Good thing LLMs solve this problem by assuming everything can be put into words and then convincing the world this is true.

mschulkind yesterday at 11:38 PM

This is surprisingly close to a personal theory I've been working on. I've been describing the use of AI to people as engaging with the world model in their head, their organization, or their software.

I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.

crabbone today at 9:42 AM

Another part of the equation is practice.

Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this whole ethics thing out (took courses at university, read some books...)

I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person, while secular societies often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similar activities that have only a descriptive side, with no practice whatsoever.

I believe this is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.

To oversimplify: a novice programmer who listened to every story told by a senior, memorized and internalized them all, but still can't touch-type will be worse at the everyday tasks of the occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, more, less obvious skills that need practice, where meta-knowledge simply can't serve as a substitute. For example, there are cues we learn to pick up when reading product documentation that tell us whether the product will work as advertised, whether the manufacturer will be honest and fair with us, and whether the company will soon go out of business or try a bait-and-switch.

When children learn to do addition, it's not enough to describe the method to them (start counting from the first summand, count up once for each unit of the second summand; the last count is the result); they must actually go through dozens of examples before they can reliably put the method to use. The same property carries over to a lot of other activities, even though we like to think of ourselves as being able to perform a task as soon as we understand the mechanism.
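As a toy sketch of that gap between description and practice (mine, not from the comment; `addByCounting` is a made-up name), the counting method can be stated in a few lines of code, yet someone who has merely read them still has to drill the steps to use them fluently:

```typescript
// The counting method, made literal: start from the first summand
// and count up one at a time, once per unit of the second summand.
function addByCounting(a: number, b: number): number {
  let count = a;
  for (let step = 0; step < b; step++) {
    count += 1; // each increment is one "count" the child performs
  }
  return count;
}
```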

dogcomplex yesterday at 10:06 PM

Correct. One just has to realize that the cost of communication (and the context/memory lost along the way to train that understanding) is often just far higher than anyone has patience for. To fully understand the expert, they must become the expert (or at least get a hell of a lot closer than they were).

This is also why average people with little time to commit find it hard to grasp the importance and depth of AI. Exploring those depths is a full-on university education.

accidentallfact today at 10:27 AM

I'm going to get downvoted to hell for this, but you described the exact reason why education is a waste of time.

ChrisMarshallNY today at 1:33 AM

Just wanted to say thanks for this.

Great thread.

coip yesterday at 10:51 PM

“Cursive knowledge”, as an old boss told me. It was incredibly ironic when he leaned into my misunderstanding.

themafia today at 5:17 AM

> AI can blow you out of the water at knowing more facts

Yea, but, I have a search engine that contains all the original uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.

> and they need to have the right project that provides the opportunity to learn what needs to be learnt.

It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.

zsoltkacsandi yesterday at 7:30 PM

That’s very well put.

danieltanfh95 today at 1:36 AM

Yep, as I was exploring in https://danieltan.weblog.lol/2026/05/dunning-kruger-and-the-... , the expert pays a "communication tax" to dumb concepts down to something the listener can understand. There is a gap between domain understanding and what is actually conveyed, and it applies to human-LLM interactions as well.

jongjong yesterday at 11:45 PM

Great points. Words allow one to communicate an approximation of part of what one knows.

Agree about expertise being inseparable from the 'world model'. When someone tells us something, they're assuming that we know a certain amount of background knowledge but, in reality, we never have exactly the missing pieces that the speaker is assuming we have because our world model is different. It can lead to distortions and misunderstandings.

Even if someone repeats back to us variants of what we've told them at a later time, it doesn't mean they've internalized the exact same knowledge. The interpretation can differ in subtle and surprising ways, and you only discover the discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation and there is a lot of self-censorship, so people tend to maintain very different world models even though the surface-level ideas they communicate appear similar.

Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.

whattheheckheck yesterday at 7:48 PM

Well, here's an engineering problem: figure out how to mentor 10x the number of juniors.

cpursley today at 8:34 AM

This sounds like a whole lot of copium from devs who don't want to bother with the effort of just writing stuff down, i.e. good documentation practices...

Actually, maybe even worse (not directed at parent) - I think some "seniors" have a stick so far up their err keyboard, and think they are so wise beyond words that they refuse to share their "all knowing expertise" with anyone else as a form of gatekeeping or perhaps fear of being "found out" (that they are not actually keyboard "Gods").

Really though, just write shit down, even if the first draft isn't great. Write it down and check it into the codebase.
