Hacker News

MisterTea · last Thursday at 1:15 PM

> It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts,

Anyone remember the promise of ATM networking in the '90s? It was telecom-grade networking built on virtual circuits that would carry voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority, because the network already handled the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet either of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.

The funny part is that this had the unintended consequence of forcing a reinvention of the wheel once you reach the point where you need telecom-sized infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards were developed to paper over these deficiencies. That's what TSN is: a reinvention of ATM's determinism. In addition, we now have OTN, yet another protocol layered over the various other protocols to mux everything down one big fat pipe to the other end, which lets Ethernet (and IP, ATM, etc.) ride deterministically between data centers.


Replies

pjc50 · last Thursday at 1:20 PM

> Ethernet had to adapt to deterministic real-time needs

Without getting too far into the telco detail, I think the lesson was that hard realtime is both much harder to achieve and not actually needed. People will happily chat over nondeterministic Zoom and Discord.

It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.

kstrauser · yesterday at 10:15 PM

I was there for ATM, and I'm so freaking glad it lost. It's a prime example of "a camel is a horse designed by committee". A 53-byte cell with a 48-byte payload? Of course! What an excellent idea! We definitely want ~10% overhead on a ludicrously small packet, just so it has tolerable voice latency if you scale it down to run on a 64 kbit/s DS0, never mind that literally everything in the industry was scaling up to fatter pipes.

ATM was nifty if you had a requirement of establishing voice-style, i.e. billable, connections. No thanks. It was an interesting technology but hopelessly hobbled by the desire to emulate a voice call that fit into a standard invoice line.
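The cell-tax complaint above is easy to quantify. A minimal sketch (AAL5 encapsulation: an 8-byte trailer, then padding out to a 48-byte cell boundary, 53 bytes per cell on the wire):

```python
import math

ATM_CELL = 53      # bytes on the wire per cell
ATM_PAYLOAD = 48   # usable bytes per cell
AAL5_TRAILER = 8   # AAL5 appends an 8-byte trailer, then pads to a cell boundary

def atm_efficiency(pdu_bytes: int) -> float:
    """Fraction of wire bytes that carry the original PDU over AAL5."""
    cells = math.ceil((pdu_bytes + AAL5_TRAILER) / ATM_PAYLOAD)
    return pdu_bytes / (cells * ATM_CELL)

# A 40-byte TCP ACK fits in one cell but burns all 53 wire bytes (~75% efficient);
# a 1500-byte Ethernet-sized PDU needs 32 cells (1696 wire bytes, ~88% efficient).
for size in (40, 576, 1500):
    print(f"{size:>5}-byte PDU -> {atm_efficiency(size):.1%} efficient")
```

So even ignoring signaling, roughly one byte in nine on an ATM link was cell header or padding.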

EvanAnderson · last Thursday at 1:52 PM

ATM was superior in the context of a bill-by-the-byte, telco-style network where oversubscribed links could be carefully planned. The "impedance mismatch" between IP's unreliable datagram delivery and ATM's guaranteed cell delivery created situations where ATM switches could effectively need unlimited buffer RAM to honor their delivery guarantees, even when the cells carried IP datagrams that could have been discarded with no ill consequences.

There's likely an element of the "layering TCP on TCP" problem going on, too.

The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/

pimlottc · today at 4:14 AM

My college went all-in on ATM-over-fiber and wired all the dorm rooms with it. It was a PITA. Of course no computers came with ATM support, and the cards cost $400+ each, so the school had hundreds of cards that it would "lease" out to students each year. There would be a huge "install depot" at the start of the year where students brought in their (desktop) computers and volunteers would open them up, install the cards, install the drivers, and configure them for our network.

For Linux heads, it was doubly annoying, as ATM was not directly supported in the kernel. You had to download a separate patch to compile the necessary modules, then install and run three separate system daemons, all with the correct arguments for our network, just to get a working network device. And of course you had to download all the necessary packages with another computer, since you couldn’t get online yet. This was the early 2000s, so WiFi was not really common yet.

Even once you got online, one of the daemons would randomly crash every so often and you'd have to restart it to get back online. It was such a pain.

p_l · last Thursday at 1:41 PM

Pretty sure TSN is unrelated to ATM determinism and comes from a completely separate area (replacing custom fieldbuses where timing and contention matter more than bandwidth). Some of ATM's complexity came from wanting to deliver the same quality of experience that plesiochronous networks provided for voice (that's how it got the weird cell size).

Once those requirements were dropped (partly because people just started to accept weird echo), the replacement became MPLS and whatever you can send IP over, where "Ethernet" sometimes shows up as packaging around the IP frame but has little relation to Ethernet otherwise.

nofriend · today at 2:14 AM

Was it actually superior, though? The usual treatment is that packet switching works better at the scale of the internet. With voice, hogging a whole line works, but for the internet it makes more sense to slow everybody down when congestion occurs rather than prevent some people from connecting at all. I get why the telecoms would have you waste your bandwidth reserving a connection you don't need, and I get why they would sell that as a superior solution with some nonsense about reliability, but I don't see it providing much benefit to the user.

rayiner · yesterday at 10:39 PM

> Instead of carelessly flinging packets into the ether like an savage, you had a deterministic network of pipes

I love this. Ethernet is such shit. What do you mean the only way to handle a high-speed-to-lower-speed link transition is to just drop a bunch of packets? Or to send PAUSE frames, which work so poorly that everyone disables flow control.
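The speed-mismatch behavior being complained about here can be shown with a toy simulation (all numbers made up): a switch port receiving faster than it can drain, with a finite buffer and no back-pressure, simply sheds most of the load.

```python
from collections import deque

def run_drop_tail(arrivals_per_tick, drains_per_tick, buffer_frames, ticks):
    """Toy drop-tail queue: arrivals beyond the buffer are silently discarded."""
    queue = deque()
    delivered = dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(queue) < buffer_frames:
                queue.append(1)
            else:
                dropped += 1  # no flow control: the frame just vanishes
        for _ in range(drains_per_tick):
            if queue:
                queue.popleft()
                delivered += 1
    return delivered, dropped

# A 10-to-1 speed step with a 32-frame buffer: once the buffer fills,
# the switch drops ~9 of every 10 incoming frames.
delivered, dropped = run_drop_tail(
    arrivals_per_tick=10, drains_per_tick=1, buffer_frames=32, ticks=100)
```

TCP's congestion control treats exactly those drops as its signal to slow down, which is why Ethernet gets away with it in practice.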

themafia · today at 12:59 AM

Anyone remember the incredible disrepute of the phone company in the 80s?

We just wanted our own stuff. We did not want to coordinate with a proprietary vendor to network or be charged by the byte to do so.

cyberax · yesterday at 10:37 PM

And for a while, telco engineers tried to retrofit the Internet to their purposes.

I worked on a network that used RSVP ( https://en.wikipedia.org/wiki/Resource_Reservation_Protocol ) to emulate the old circuit-switched topology. It was kinda amazing to see how it could carve guaranteed-bandwidth paths through the network fabric.

Of course, it also never really worked with dynamic routing and brought in tons of complexity with stuck states. In our network, it was eventually removed entirely in favor of 1 Gbit links with VLANs for priority/normal traffic.
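The reservation idea RSVP implements can be sketched as per-link admission control (the class and link names here are hypothetical, not RSVP's actual API): a flow is admitted only if every hop on its path has capacity left, and every admitted flow leaves per-hop state that must later be torn down, which is where the stuck-state problem comes from.

```python
class ToyReservations:
    """Toy RSVP-style admission control over per-link residual bandwidth (Mbit/s)."""

    def __init__(self, link_capacity):
        self.free = dict(link_capacity)  # link name -> unreserved bandwidth

    def reserve(self, path, mbps):
        # Admit only if every hop can carry the flow; otherwise change nothing.
        if any(self.free[link] < mbps for link in path):
            return False
        for link in path:
            self.free[link] -= mbps  # per-hop state every router must track
        return True

net = ToyReservations({"a-b": 100, "b-c": 100})
assert net.reserve(["a-b", "b-c"], 60)        # first flow fits end to end
assert not net.reserve(["a-b", "b-c"], 60)    # second would oversubscribe both hops
```

Real RSVP additionally soft-refreshes this state with periodic messages; when refreshes or teardowns are lost, the residual bandwidth stays wrongly reserved, i.e. the stuck states mentioned above.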

fmajid · last Thursday at 6:12 PM

I started my career at France Telecom's R&D lab in Caen, Normandy. They had their own home-grown X.400 email client, and even though they could have set up an SMTP server for free, they deliberately chose to point MX records at a paid SMTP-to-X.400 gateway out of OSI ideology.

It was complete garbage.

Another lab of theirs made a Winsock that used ATM SVCs instead of TCP, and proudly put out a brochure extolling the achievement: "Web protocol without having to use TCP". Because clearly it was TCP hindering adoption of the Web /s

The Bellhead vs. Nethead split was a real thing back then. To paraphrase an old saying about IBM: telcos think that if they piss on something, it improves the flavor.

One of the jobs I applied to out of college was to lead Schengen's central police database (think stolen car reports, arrest warrants, etc.), which would federate national databases. For some unfathomable reason, they chose X.400 as the messaging bus for that replication, and endured massive delays and cost overruns because of it. I guess I dodged a bullet by not going there.