Hacker News

Nobody knows how the whole system works

254 points by azhenley | today at 5:28 AM | 168 comments

Comments

rorylaitila today at 2:11 PM

There are many layers to this, but there is one style of programming that concerns me: where you neither understand the layer above you (why the product exists and what the goal of the system is) nor the layer below (how to actually implement the behavior). In the past, many developers barely understood the business case, but at least they understood how to translate it into code, and could put backpressure on the business. Now, however, it's apparently not even necessary to know how the code works!

The argument seems to be, we should float on a thin lubricant of "that's someone else's concern" (either the AI or the PMs) gliding blissfully from one ticket to another. Neither grasping our goal nor our outcome. If the tests are green and the buttons submit, mission accomplished!

Using Claude, I can feel my situational awareness slipping from my grasp. It's increasingly clear that this style of development pushes you to stop looking at any of the code at all. My English instructions leave no residual growth in my own understanding. I learn nothing to send back up the chain, and I know nothing of what's below. Why should I exist?

show 9 replies
planb today at 3:05 PM

This article is about people using abstractions without knowing how they work. This is fine. This is how progress is made.

But someone designed the abstraction (e.g. the Wifi driver, the processor, the transistor), and they made sure it works and provides an interface to the layers above.

Now you could say a piece of software completely written by a coding agent is just another abstraction, but the article does not really make that point, so I don't see what message it tries to convey. "I don't understand my wifi driver, so I don't need to understand my code" does not sound like a valid argument.

show 2 replies
matheus-rr today at 2:40 PM

The dependency tree is where this bites hardest in practice. A typical Node.js project pulls in 800+ transitive dependencies, each with their own release cadence and breaking change policies. Nobody on your team understands how most of them work internally, and that's fine - until one of them ships a breaking change, deprecates an API, or hits end-of-life.

The anon291 comment about interface stability is exactly right. The reason you don't need to understand CPU microarchitecture is that x86 instructions from 1990 still work. Your React component library from 2023 might not survive the next major version. The "nobody knows how the whole system works" problem is manageable when the interfaces are stable and well-documented. It becomes genuinely dangerous when the interfaces themselves are churning.

What I've noticed is that teams don't even track which of their dependencies are approaching EOL or have known vulnerabilities at the version they're pinned to. The knowledge gap isn't just "how does this work" - it's "is this thing I depend on still actively maintained, and what changed in the last 3 releases that I skipped?" That's the operational version of this problem that bites people every week.
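
A minimal sketch of that kind of check, assuming a package.json in the current directory and the public npm registry (the flagging logic is illustrative; it only catches major-version drift, not EOL status or vulnerabilities):

  import json
  import urllib.request

  with open("package.json") as f:
      deps = json.load(f).get("dependencies", {})

  for name, pinned in deps.items():
      # The npm registry exposes the latest published version per package.
      with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
          latest = json.load(resp)["dist-tags"]["latest"]
      # Strip range operators like ^ and ~ before comparing major versions.
      pinned_major = pinned.lstrip("^~>=<").split(".")[0]
      if pinned_major != latest.split(".")[0]:
          print(f"{name}: pinned {pinned}, latest {latest} (major version behind)")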

show 1 reply
sgarland today at 3:45 PM

> “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? [Paraphrasing]: interrupts, 802.11ax modulation scheme, QAM, memory models, garbage collection, field effect transistors...

To a reasonable degree, yes, I can. I am also probably an outlier, and the product of various careers, with a small dose of autism sprinkled in. My first career was as a Submarine Nuclear Electronics Technician / Reactor Operator in the U.S. Navy. As part of that training curriculum, I was taught electronics theory, troubleshooting, and repair, which begins with "these are electrons" and ends with "you can now troubleshoot a VMEbus [0] Motorola 68000-based system down to the component level." I also later went back to teach at that school, and rewrote the 68000 training curriculum to use the Intel 386 (progress, eh?).

Additionally, all submariners are required to undergo an oral board before being qualified, and analogous questions like that are extremely common, e.g. "I am a drop of seawater. How do I turn the light on in your rack?" To answer that question, you end up drawing (from memory) an enormous amount of systems and connecting them together, replete with the correct valve numbers and electrical buses, as well as explaining how all of them work, and going down various rabbit holes as the board members see fit, like the throttling characteristics of a gate valve (sub-optimal). If it's written down somewhere, or can be derived, it's fair game. And like TFA's discussion about Brendan Gregg's practice of finding someone's knowledge limit, the board members will not stop until they find something you don't know - at which point you are required to find it out, and get back to them.

When I got into tech, I applied this same mindset. If I don't know something, I find out. I read docs, I read man pages, I test assumptions, I tinker, I experiment. This has served me well over the years, with seemingly random knowledge surfacing during an incident, or when troubleshooting. I usually don't remember all of it, but I remember enough to find the source docs again and refresh my memory.

0: https://en.wikipedia.org/wiki/VMEbus
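
To make the quoted question concrete, here is a toy Python walk through just the top three layers (DNS, then TCP, then HTTP/1.1 written by hand); everything below them - IP routing, 802.11 modulation, interrupts - is still abstracted away by the OS:

  import socket

  host = "example.com"

  # DNS layer: resolve the name to an IP address.
  ip = socket.gethostbyname(host)
  print(f"DNS: {host} -> {ip}")

  # TCP layer: open a byte stream to port 80.
  with socket.create_connection((ip, 80), timeout=5) as sock:
      # HTTP layer: a minimal HTTP/1.1 request.
      request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
      sock.sendall(request.encode("ascii"))
      status_line = sock.recv(4096).split(b"\r\n", 1)[0]
      print(f"HTTP: {status_line.decode()}")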

show 2 replies
virgilp today at 6:49 AM

That's not how things work in practice.

I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry and physics involved in cooking. BUT, when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop or from delivery - that's a whole different level of ignorance, one that's much more dangerous.

Yes, it may be fine & completely non-concerning if agricultural corporations produce your wheat and your meat; but if the corporation starts producing standardized cooked food for everyone, is it really the same - is it a good evolution, or not? That's the debate here.

show 3 replies
wduquette today at 6:04 PM

It's certainly the case that I don't always know how the layer below works, i.e., how the compiled code executes in detail. But I have a mental model that's good enough that I can use the compiler, and I trust that the compiler authors know what they are doing and that the result is well-tested. Over forty years and a slew of different languages I've found that to be an excellent bet.

But I understand how my code works. There's a huge difference between not understanding the layer below and not understanding the layer that I am responsible for.

show 1 reply
bjt today at 7:47 AM

The claimed connections here fall apart for me pretty quickly.

CPU instructions, caches, memory access, etc. are debated, tested, hardened, and documented to a degree that's orders of magnitude greater than the LLM-generated code we're deploying these days. Those fundamental computing abstractions aren't nearly as leaky, nor nearly as likely to need refactoring tomorrow.

mojuba today at 9:02 AM

> AI will make this situation worse.

Being an AI skeptic more than not, I don't think the article's conclusion is true.

What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask the AI how the telephone works, or what happens when you enter a URL in the browser, it can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, that's already better than a human who has no clue how the telephone works or where to even begin if said human wanted to understand it.

Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have a scope of knowledge so vast that it makes them somewhat superior, albeit at the price of being literally quite expensive and power hungry. But those are technical details.

LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.

LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.

show 1 reply
PandaStyle today at 7:53 AM

Perhaps a dose of pragmatism is needed here?

I am no CS major, nor do I fully understand the inner workings of a computer beyond "we tricked a rock into thinking by shocking it."

I'd love to understand it better, and I hope that through my journey of working with computers I'll learn more about the underlying concepts: registers, buses, memory, assembly, etc.

Practically, however, I write scripts that solve real-world problems, be that automating the coffee machine or managing infrastructure at scale.

I'm not waiting to pick up a book on x86 assembly before I write some Python, however. (I wish it were that easy.)

To the greybeards that do have a grasp of these concepts though? It's your responsibility to share that wealth of knowledge. It's a bitter ask, I know.

I'll hold up my end of the bargain by doing the same when I get to your position and everywhere in between.

show 4 replies
analog31 today at 2:37 PM

Granted, I'm not a software developer, so the things I work on tend to be simpler. But the people I know who are recognized for "knowing how the whole thing works" are likely to have earned that distinction not necessarily by actually knowing how it works, but through:

1. The ability and interest to investigate things and find out how they work, when needed or desired. They are interested in how things work. They are probably competent in things that are "glue" in their disciplines, such as math and physics in my case.

2. The ability to improvise an answer when needed, by interpolating across gaps in knowledge, well enough to get past whatever problem is being solved. And to decide when something doesn't need to be understood.

show 1 reply
latexr today at 2:54 PM

> This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.

That doesn’t make it OK. This is like being stuck in a room whose pillars are starting to deteriorate, then someone comes along with a sledgehammer and starts hitting them and your reaction is to shrug and say “ah, well, the situation is bad and will only get worse, but the roof hasn’t fallen on our heads yet so let’s do nothing”.

If the situation is untenable, the right course of action is to try to correct it, not shrug it off.

mamp today at 6:43 AM

Strange article. The problem isn't that no single person knows how everything works; it's that AI coding could mean there is no one who knows how a system works at all.

show 7 replies
wtetzner today at 2:37 PM

I think a lot of people have a fear of AI coding because they're worried that we will move from a world where nobody understands how the whole system works, to a world where nobody knows how any of it works.

show 2 replies
cbdevidal today at 1:53 PM

This also applies to other things. No one person knows how to make a pencil.

Three minute video by Milton Friedman: https://youtu.be/67tHtpac5ws?si=nFOLok7o87b8UXxY

show 3 replies
youarentrightjr today at 6:41 AM

> Nobody knows how the whole system works

True.

But in all systems up to now, for each part of the system, somebody knew how it worked.

That paradigm is slowly eroding. Maybe that's ok, maybe not, hard to say.

show 1 reply
tjchear today at 8:00 AM

I take a fairly optimistic view of the adoption of AI assistants in our line of work. We begin to work and reason at a higher level and let the agents worry about the lower-level details. Know where else this happens? Any human organization that existed, exists, or will exist. Hierarchies form because no one person can do everything and hold all the details in their mind, especially as the complexity of what they intend to accomplish goes up.

One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.

show 1 reply
camgunz today at 11:10 AM

Get enough people in the room and they can describe "the system". Everything OP lists (QAM, QPSK, WPA whatever) can be read about and learned. Literally no one understands generative models, and there isn't a way for us to learn about their workings. These things are entirely new beasts.

chasd00 today at 8:11 PM

You're perfectly free to read, understand, even edit the code created by these coding agents. I must have made that point in a dozen threads just like this one. Do people think that because an agent was used, the code is inaccessible to them? When I use these tools I'm constantly reviewing and updating what they output, and I feel like I completely understand every line they create, just like I understand any other code I read.

whytaka today at 7:28 AM

But people are expected to understand the part of the system they are responsible for at the level of abstraction they are being paid to operate.

This new arrangement would be perfectly fine if they weren't responsible when it breaks.

show 1 reply
gmuslera today at 11:11 AM

It is not about having infinite width and depth of knowledge. It's about abstracting at the right level, so that the relevant components are visible and you can assume correctness outside the focus of what you are solving.

Systems include people, who make their own decisions that affect how those systems work, and we don't go down to biology and chemistry to understand how they make choices. But that doesn't mean people's decisions should be fully ignored in our analysis, just that there is a right abstraction level for them.

And sometimes a side or abstracted component deserves to be seen or understood in more detail, because some of its subcomponents or its fine behavior makes a difference for what we are solving. Can we do that?

overgard today at 8:46 PM

Leaky abstractions have always been a problem. Sometimes people like to use them as an example of "see, you didn't understand the assembly, so why do you care about X?" The logic seems to be: almost all your abstractions are leaky anyway, so why care that you don't understand what's happening?

A few comments on that. First off, the best programmers I've worked with recognized when their abstractions were leaky, and made efforts to understand the thing that was being abstracted. That's a huge part of what made them good! I have worked with programmers that looked at the disassembly, and cared about it. Not everyone needs to do that, but acting like it's a completely pointless exercise does not track with reality.

The other thing I've noticed personally is that my biggest growth as a programmer has almost always come from moving down the stack and understanding things at a lower level, not moving up the stack. Even though I rarely use it, learning assembler was VERY important for my development as a programmer: it helped me understand decisions made in the design of C, for instance. I also learned VHDL to program FPGAs and took an embedded systems course that talked about building logic out of NAND gates. I had to write a game for an FPGA in C that had to use a wonky VGA driver that treated an 800x600 screen as a series of tiles, because there wasn't nearly enough RAM to store that framebuffer. None of this is something I use daily, and some of it I may never use again, but it shaped how I think and work with computers. In my experience, the guys who only focus on the highest levels of abstraction because the rest of the stuff "doesn't matter" easily get themselves stuck in corners they can't get out of.

mrkeen today at 8:15 AM

  Adam Jacob
  It’s not slop. It’s not forgetting first principles. It’s a shift in how the craft work, and it’s already happened. 
This post just doubled down without presenting any kind of argument.

  Bruce Perens
  Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware.
Now take the median dev, compress his lack of knowledge into a lossy model, and rent that out as everyone's new source of truth.

show 1 reply
MobiusHorizons today at 5:04 PM

There will always be many gaps in people's knowledge. You start with what you need to understand, and typically dive deeper only when it is necessary. Where it starts to be a problem, in my mind, is when people have no curiosity about what's going on underneath, or even worse, start to get superstitious about avoiding holes in the abstraction without the willingness to dig a little and find out why.

CrzyLngPwd today at 3:06 PM

Oh, so many times over the decades I've had to explain to a dev why iterating over many things and performing a heavy task like a DB query for each one will result in bad things happening... all because they don't really comprehend how things work.
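
For anyone who hasn't been bitten yet, a minimal sketch of the pattern (SQLite in memory; the table and sizes are made up for illustration):

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
  conn.executemany("INSERT INTO users (name) VALUES (?)",
                   [(f"user{i}",) for i in range(500)])
  ids = list(range(1, 501))

  # Bad: one round trip to the database per id (the classic N+1 pattern).
  names = [conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
           for i in ids]

  # Better: one query for the whole batch.
  placeholders = ",".join("?" * len(ids))
  rows = conn.execute(f"SELECT name FROM users WHERE id IN ({placeholders})",
                      ids).fetchall()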

tosti today at 8:25 AM

Not just tech.

Does anyone on the planet actually know all of the subtleties and idiosyncrasies of the entire tax code? Perhaps the one inhabitant of Sealand, or the Sentinelese, but no one in any Western society.

_kuno today at 3:54 PM

"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead

vineethy today at 3:16 PM

There’s plenty of people that know the fundamentals of the system. It’s a mistake to think that understanding specific technical details about an implementation is necessary to understand the system. It would make more sense to ask questions about whether someone could conceivably build the system from scratch if they have to. There’s plenty of people that have worked in academic fabs that have also written verilog and operating systems and messed with radios.

markbao today at 4:41 PM

There’s a difference between abstracting away the network layer and not understanding the business logic. What we are talking about with AI slop is not understanding the business logic. That gets really close to just throwing stuff at the wall and seeing what works instead of a systematic, reliable way to develop things that have predictable results.

It’s like if you are building a production line. You need to use a certain type of steel because it has certain heat properties. You don’t need to know exactly how they make that type of steel. But you need to know to use that steel. AI slop is basically just using whatever steel.

At every layer of abstraction in complexity, the experts at that layer need to have a deep understanding of their layer of complexity. The whole point is that you can rely on certain contracts made by lower layers to build yours.

So no, just slopping your way through the application layer isn't on theme with "we have never known how the whole system works". It ignores that you still have a responsibility to understand the layer you're operating at, which is the business logic layer. If you don't understand that, you can't build reliable software, because you aren't using the system we have in place to predictably and deterministically specify outputs. Which is code.

dizhn today at 8:36 AM

Let me make it worse. Much worse. :)

https://youtu.be/36myc8wQhLo (USENIX ATC '21/OSDI '21 Joint Keynote Address-It's Time for Operating Systems to Rediscover Hardware)

shevy-java today at 8:32 AM

Adam Jacob's quote is this:

"It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."

It actually really is slop. He may wish to ignore it but that does not change anything. AI comes with slop - that is undeniable. You only need to look at the content generated via AI.

He may wish to focus merely on "AI for use in software engineering", but even there he is wrong, since AI makes mistakes too and not everything it creates is great. People often have no clue how that AI reaches any decision, so they also lose being able to reason about the code or code changes. I think people have a hard time trying to sell AI as "only good things, the craft will become better". It seems everyone is on the AI hype train - eventually it'll either crash or slow down massively.

jpadkins today at 7:10 PM

Even if you know how the compiler and OS constructs work, you might not know how the hardware circuits work. Even if you know how circuits work, you might not know how the power generation or cooling works. Even if you know how the power generation works, you don't know how extracting natural gas works or how solar panels are made. Etc., etc.

My takeaway is that modern system complexity can only be achieved via advanced specialization and trade. No one human brain can master all of the complexity needed for the wonders of modern tech. So we need to figure out how to cooperate if we want to continue to advance technology.

My views on the topic were influenced by Kling's book (it's a light read): https://www.libertarianism.org/books/specialization-trade

esafak today at 2:03 PM

It's called specialization. Not knowing everything is how we got this far.

show 1 reply
youknownothing today at 6:43 PM

I see many people comparing the production of code through AI with compilers: just another layer of abstraction. They argue that, in the same way that creating high-level languages that were compiled to assembler meant that most people didn't need to know assembler any more, then specifying specs and letting AI produce the high-level language will mean that most people won't need to know the high-level language any more.

However, there is a fundamental flaw in this analogy: compilers are deterministic; AI is not. Compile the same high-level code twice and you get exactly the same output. Generate high-level code from the same specs through AI twice and you get two different outputs (hopefully with equivalent behaviour).

If you don't understand that deterministic vs. non-deterministic is a fundamental and potentially dangerous change in the way we produce work, then you definitely fail at first principles.
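
A toy check of that property, assuming gcc is on PATH (with a deterministic toolchain the hashes match; reproducible-builds work exists precisely to make this hold end to end):

  import hashlib
  import subprocess

  # Write a trivial program and compile it twice.
  with open("main.c", "w") as f:
      f.write("int main(void) { return 0; }\n")

  digests = []
  for out in ("a1.out", "a2.out"):
      subprocess.run(["gcc", "main.c", "-o", out], check=True)
      with open(out, "rb") as f:
          digests.append(hashlib.sha256(f.read()).hexdigest())

  # Deterministic: same input, byte-identical output. An LLM given the
  # same spec twice offers no such guarantee.
  print(digests[0] == digests[1])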

psychoslave today at 8:15 AM

To be fair, I don't know how a living human individual works, let alone how they actually work together in society. I suspect I'm not alone in this.

So there is nothing new under the sun: often the practices come first, and only then can some theory emerge, from which point it can be leveraged to go further than present practice, and so on. Sometimes practice and theory are more entangled, created together on the go, obviously.

mhog_hn today at 8:20 AM

It is the same with the global financial system

show 1 reply
themafia today at 8:51 PM

> But does anybody really understand all of the levels?

Off the top of my head? Most of them. Did you need me to understand some level in particular? I can dedicate time to that if you like. My experience and education will make that a very simple task.

The better question is... is there any _advantage_ to understanding "all the levels"? If not, then what outcome did you actually expect? A lot of this work is done in exchange for money, not out of personal pride or a desire for craftsmanship.

You can try to be the "Wizard of Oz" if you want. The problem is anyone can do that job. It's not particularly interesting, is it?

theywillnvrknw today at 5:30 PM

Let me figure out how exactly the human body works before using it.

css_apologist today at 4:39 PM

Yes, but the person who understands a lot of the system is invaluable.

erelong today at 4:53 PM

Reminds me of the short essay "I, Pencil".

The problem is education, and maybe ironically AI can assist in improving that

I've read a lot about programming and it all feels pretty disorganized; the post about programmers being ignorant about how compilers work doesn't sound surprising (go to a bunch of educational programming resources and see if they cover any of that)

It sounds like we need more comprehensive and detailed lists

For example, with objections to "vibe coding", couldn't we just make a list of people's concerns and then work at improving AI's outputs to reflect them? (Things like security, designs that minimize tech debt, outputting for readability if someone does need to manually review the code in the future, etc.)

Incidentally this also reminds me of political or religious stances against technology, like the Amish take for example, as the kind of ignorance of and dependence on processes out of our control discussed seem to be inherent qualities of technological systems as they grow and become more complex.

cadamsdotcom today at 5:18 PM

Huh?

The whole point of society is that you don’t need to know how the whole thing works. You just use it.

How does the water system maintain pressure so water actually comes out when you turn on the tap? That’s entirely the wrong question. You should be asking why you never needed to think about that until now, because that answer is way more mind-expanding and fascinating. Humans invented entire economic systems just so you don’t need to know everything, so you can wash your hands and go back to your work doing your thing in the giant machine. Maybe your job is to make software that tap-water engineers use everyday. Is it a crisis if they don’t understand everything about what you do? Not bloody likely - their heads are full of water engineering knowledge already.

It is not the end of the world to not know everything - it’s actually a miracle of modern society!

snyp today at 3:11 PM

Script kiddies have always existed and always will.

zhisme today at 10:33 AM

What a well-written article. That's actually a problem. Time will come and hit us the same way it did the aqueducts: lost technology that no one knows how it worked in detail. Maybe that's just how engineering evolution works?

amelius today at 8:57 AM

Wikipedia knows how it all works, and that's good enough in case we need to reboot civilization.

spenrose today at 3:31 PM

“Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.”

show 1 reply
fedeb95 today at 8:37 AM

Why does the author imply not knowing everything is a bad thing? If you have clear protocols and interfaces, not knowing everything enables you to make bigger innovations. If everything is a complex mess, then no.

show 1 reply
kartoshechka today at 9:24 AM

Engineers pay for abstractions with more powerful hardware, but can optimize at will (hopefully). Will AI be able to afford more human hours to churn through piles of unfamiliar code?

sciencejerk today at 8:46 AM

We keep trading knowledge of the natural, physical world for temporary, rapidly-changing knowledge of abstractions and software tools which we do not control (now LLM cloud tools).

The lack of comprehensive, practical, multi-disciplinary knowledge creates a DEEP DEPENDENCY on the few multinational companies and countries that UNDERSTAND things and can BUILD things. If you don't understand it, if you can't build it, they OWN you.

anon291 today at 8:27 AM

I don't like this thing where we dislike "magic".

The issue with frameworks is not the magic. We feel like it's magic because the interfaces are not stable. If the interfaces were stable, we'd consider them just another real component of building whatever we're building.

You don't need to know anything about the hardware to properly use a CPU ISA.

The difference is that the CPU ISA is documented, well tested, and stable. As an industry, we can build systems that offer stability and are formally verified. We just choose not to.
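
A small illustration of relying on a documented, stable interface without knowing the implementation - here libc's time(2) via ctypes, assuming a Unix-like system:

  import ctypes
  import ctypes.util

  # Load the C library through its stable, documented ABI.
  libc = ctypes.CDLL(ctypes.util.find_library("c"))
  libc.time.restype = ctypes.c_long
  libc.time.argtypes = [ctypes.c_void_p]

  # time(2) has kept this contract for decades; the machinery underneath
  # (vDSO, syscalls, hardware clocks) is free to change without breaking us.
  print(libc.time(None))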

kgwxd today at 6:32 PM

Who cares? Nobody is concerned about that. They're concerned no one will be able to fix stuff when it goes wrong, or there will be no one to blame for really bad problems. Especially when the problem is repeating at 50 petaflops.

landpt today at 2:30 PM

The pre-2023 abstractions that power the Internet and have made many people rich are the sweet spot.

You have to understand some of the system, and saying that if no one understands the whole system anyway we can give up all understanding is a fallacy.

Even for a programming language criticized for its permissive spec, like C, you can write a formally verified compiler: CompCert. Good luck doing that for your agentic workflow with natural-language input.

Citing a few manic posts from influencers does not change that.
