Hacker News

Nobody knows how the whole system works

275 points by azhenley yesterday at 5:28 AM | 178 comments

Comments

spenrose yesterday at 3:31 PM

“Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.”

landpt yesterday at 2:30 PM

The pre-2023 abstractions that power the Internet and have made many people rich are the sweet spot.

You have to understand some of the system, and the claim that because no one understands the whole system anyway we can give up all understanding is a fallacy.

Even for a programming language criticized for a permissive spec, like C, you can write a formally verified compiler: CompCert. Good luck doing that for your agentic workflow with natural language input.

Citing a few manic posts from influencers does not change that.

foxes yesterday at 2:10 PM

Isn't ceding all power to AIs run by tech companies kinda the opposite - if we have to have AI everywhere? Now no one knows how anything works (instead of everyone knowing a tiny bit and all working together), and also everyone is just dependent on the people with all the compute.

mychael yesterday at 3:35 PM

It's strange to believe that Twitter/X has fallen. Virtually every major character in software, AI and tech is active on X. The people who are actually building the tools that we discuss every day post on X.

LinkedIn is weeks/months behind topics that originate from X. It suggests you might be living in a bubble if you believe X has fallen.

kgwxd yesterday at 6:32 PM

Who cares? Nobody is concerned about that. They're concerned no one will be able to fix stuff when it goes wrong, or that there will be no one to blame for really bad problems. Especially when the problem is repeating at 50 petaflops.

dandanua yesterday at 5:35 PM

Somebody knows how a part of a complex system works. We can't say this for complex systems created with AI. This is a road into the abyss. The article is making it worse by downplaying the issue.

nish__ yesterday at 5:30 PM

I do.

cess11 yesterday at 9:40 AM

Yeah, it's not a problem that a particular person does not know it all, but if no one knows any of it except as a black box kind of thing, that is a rather large risk unless the system is a toy.

Edit: In a sense, "AI" software development is postmodern: a move away from reasoned software development, in which known axioms and rules are applied, toward software that is arbitrary and 'given'.

The future 'code ninja' might be a deconstructionist, a spectre of Derrida.

knorker yesterday at 4:20 PM

I would say that I understand all the levels down to (but not including) what it means for an electron to repel another particle of negative charge.

But what is not possible is to understand all these levels at the same time. And that has many implications.

We humans have limits on working memory, and if I need to swap in L1 cache logic, then I can't think about TCP congestion windows, CWDM, multiple inheritance, and QoS at the same time. But I wonder what superpowers AI can bring, not because it's necessarily smarter, but because we can increase the working memory across abstraction layers.

paulddraper yesterday at 3:32 PM

Understand one layer above (“why”) and one layer below (“how”).

Then you know “what” to build.

ForHackernews yesterday at 3:19 PM

I think there's a difference between "No one understands all levels of the system all the way down, at some point we all draw a line and treat it as a black-box abstraction" vs. "At the level of abstraction I'm working with, I choose not to engage with this AI-generated complexity."

Consider the distinction between "I don't know how the automatic transmission in my car works" and "I never bothered to learn the meanings of the street signs in my jurisdiction."

Atlas667 yesterday at 3:15 PM

This is a non-discussion.

You have to know enough about underlying and higher level systems to do YOUR job well. And AI cannot fully replace human review.

anthk yesterday at 2:14 PM

9front's manuals will teach you the basics, the actual basics of CS (the plan9 intro too, if you know how to adapt it). These are at /sys/doc. Begin with rc(1) and keep upping the levels. You can try 9front safely in a virtual machine. There are instructions to download and set it up at https://9front.org .

Write servers/clients with rc(1) and the tools at /bin/aux, such as aux/listen. There already are IRC clients and some other tools. Then, do 9front's C book from Nemo.
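
Not rc, but as a rough sketch of the pattern aux/listen implements (accept a connection, hand it off to a per-connection service), here's a minimal Go analogue; the port and the echo behaviour are made up for illustration:

    // Toy analogue of the aux/listen pattern: accept connections
    // and run a small service routine per connection, the way
    // aux/listen spawns a script for each one.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net"
    )

    func serve(conn net.Conn) {
        defer conn.Close()
        fmt.Fprintln(conn, "hello from the toy listener")
        s := bufio.NewScanner(conn)
        for s.Scan() {
            fmt.Fprintln(conn, s.Text()) // echo each line back
        }
    }

    func main() {
        ln, err := net.Listen("tcp", ":5555") // arbitrary port
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go serve(conn) // one "service script" per connection
        }
    }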

On floats, try them at a 'low level', with Forth. Get Muxleq (https://github.com/howerj/mux) and compile it:

          cc -O2 -ffast-math -o muxleq muxleq.c
          
Edit muxleq.fth, set the constants in the file like this:

      1 constant opt.multi      ( Add in large "pause" primitive )
      1 constant opt.editor     ( Add in Text Editor )
      1 constant opt.info       ( Add info printing function )
      0 constant opt.generate-c ( Generate C code )
      1 constant opt.better-see ( Replace 'see' with better version )
      1 constant opt.control    ( Add in more control structures )
      0 constant opt.allocate   ( Add in "allocate"/"free" )
      1 constant opt.float      ( Add in floating point code )
      0 constant opt.glossary   ( Add in "glossary" word )
      1 constant opt.optimize   ( Enable extra optimization )
      1 constant opt.divmod     ( Use "opDivMod" primitive )
      0 constant opt.self       ( self-interpreter [NOT WORKING] )
Recompile your image:

       ./muxleq muxleq.dec < muxleq.fth > new.dec
new.dec will be your main Forth image. Run it:

       ./muxleq new.dec
Get the book from the author and look at how the floating point code is implemented in software. Learn Forth with the Starting Forth book (adapted for ANS Forth), and do Thinking Forth after Starting Forth. Finally, back to 9front: there's 'cspbook.pdf' too, from Hoare, on concurrent programming and threads. That will be incredibly useful in the near future. If you are a Go programmer, well, you are at home with CSP.

Also, compare CSP to the task switching in the concurrent Forth. It's great to compare/debug code in a tiny Forth on Subleq/Muxleq because if your code gets relatively fast there, it will fly under GForth, and the constraints will force you to be a much better programmer.
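
For the CSP side of that comparison, a minimal Go sketch (the names and values are arbitrary): two sequential processes that interact only through channel communication, which is roughly the picture a cooperative Forth task switcher gives you too.

    // Minimal CSP sketch: no shared state, only rendezvous
    // over an unbuffered channel.
    package main

    import "fmt"

    func producer(out chan<- int) {
        for i := 1; i <= 5; i++ {
            out <- i // blocks until the consumer receives
        }
        close(out)
    }

    func main() {
        ch := make(chan int) // unbuffered: send and receive synchronize
        go producer(ch)
        for v := range ch {
            fmt.Println("got", v)
        }
    }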

CPUs? Caches? RAM latency? Muxleq/Subleq behaves nearly the same everywhere, depending on your simulation speed. For learning, it's there. On real world systems, glibc, the Go runtime, etc. will take care of that, producing a similar outcome everywhere. If not, most of the people out there will be aware of the stuff from SSE2 up to NEON under ARM.

Hint: there already are code transpilers from Intel dedicated instructions to ARM ones and vice versa.

>How garbage collection works inside of the JVM?

No, but I can figure it out a little, given the Zenlisp one as a slight approximation. Or... you know, Forth, by hand. And Go, which seems easier and doesn't need a dog slow VM trying to replicate what Inferno did in the 90's with far less resources.
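
Nothing like HotSpot's generational collectors, but the part you can figure out from a small interpreter is the core reachability idea. A toy mark-and-sweep sketch in Go, with made-up types:

    // Toy mark-and-sweep over a hand-built object graph. Real JVM
    // collectors are generational and far more involved; this only
    // shows the reachability idea.
    package main

    import "fmt"

    type obj struct {
        name   string
        refs   []*obj
        marked bool
    }

    func mark(o *obj) {
        if o == nil || o.marked {
            return
        }
        o.marked = true
        for _, r := range o.refs {
            mark(r) // everything reachable from a root survives
        }
    }

    func sweep(heap []*obj) []*obj {
        live := heap[:0]
        for _, o := range heap {
            if o.marked {
                o.marked = false // reset for the next cycle
                live = append(live, o)
            } else {
                fmt.Println("collecting", o.name)
            }
        }
        return live
    }

    func main() {
        a := &obj{name: "a"}
        root := &obj{name: "root", refs: []*obj{a}}
        garbage := &obj{name: "garbage"}
        heap := []*obj{a, root, garbage}
        mark(root) // a is reachable through root; garbage is not
        heap = sweep(heap)
        fmt.Println("live objects:", len(heap))
    }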

bsder yesterday at 8:21 AM

Sure, we have complex systems where we don't know how everything works (car, computer, cellphone, etc.). However, we do expect those systems to behave deterministically in their interface to us. And when they don't, we consider them broken.

For example, why is the HP-12C still the dominant business calculator? Because using other calculators for certain financial calculations gave non-deterministically wrong results. The HP-12C may not have even been strictly "correct", but it was deterministic in the ways it wasn't.

Financial people didn't know or care about guard digits or numerical instability. They very much did care that their financial calculations were consistent and predictable.
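
To make that instability concrete, a quick Go sketch with made-up numbers: the same three values summed in a different order give two different totals, which is exactly the kind of inconsistency a calculator with fixed, deterministic algorithms avoids.

    // Summation order changes the result in floating point:
    // the same values, two different totals.
    package main

    import "fmt"

    func main() {
        big, small := 1e16, 1.0
        a := (small + small) + big // small values accumulate first
        b := (big + small) + small // each small value rounds away
        fmt.Println(a == b, a, b)  // false 1.0000000000000002e+16 1e+16
    }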

The question is: Who will build the HP-12C of AI?

ltbarcly3 yesterday at 5:09 PM

I mean, the quotes in this article aren't even disagreeing, except on vague value judgements with no practical consequences.

Yes, you can make better and more nearly perfect solutions with a deep understanding of every consequence of every design decision. You can also make some real world situation thousands of times better without a deep understanding of things. These two statements don't disagree at all.

The MIPS image rendering example is perfect here. Notice he didn't say "there was some obscure attempt to load images on MIPS and nobody used it because it was so slow, so they used the readily available fast one instead". There was some apparently widely used routine to load images that was popular enough that it got the attention of one of the few people who deeply understood how the system worked, and they fixed it up.

PHP is an awful trash language, and like half the internet was built on it, and lots of people had a lot of fun and got a lot more work done because people wrote a lot of websites in PHP. Sure, PHP is still trash, but it's better to have trash than to wait around for someone to 'do it right', when maybe nobody ever gets around to it.

Worse is better. https://en.wikipedia.org/wiki/Worse_is_better

