Hacker News

perennialmind — yesterday at 9:39 PM

IPv4 was designed with extension headers; it boggles my mind that simply using those headers to extend the address was never seriously pursued. It was proposed: https://www.rfc-editor.org/rfc/rfc1365.html

It still would have been a ton of work, but we could have just had what IPv6 claimed to be: IPv4 with bigger addresses. Except after the upgrade, there'd be no parallel system. And all of DJB's points apply: https://cr.yp.to/djbdns/ipv6mess.html


Replies

Dagger2 — today at 3:10 AM

I said "whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6", and that's what you did here. And as predicted, it was 6to4 you reinvented.

v4 extension headers are well known to get your packets dropped on the Internet, so they're a non-starter, but there's another extension mechanism you can use: you can set the "next protocol" field to a special value, then put the extended address at the start of the payload, followed by the original payload. This is functionally identical to using extension headers, but using a mechanism that doesn't get your packets dropped.
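The "next protocol" trick described above can be sketched as a simple framing scheme. This is a hypothetical illustration, not the actual 6to4 wire format: the protocol number, the 16-byte address width, and the helper names are assumptions for the sketch (6to4 itself carries whole IPv6 packets under IP protocol 41).

```python
import struct

# Hypothetical extended-addressing scheme: the IPv4 Protocol field is set to
# a special value, and the start of the payload carries the wider source and
# destination addresses plus the real upper-layer protocol number, followed
# by the original payload. (Illustrative only; not the RFC 3056 format.)

EXT_ADDR_PROTO = 41  # assumed value signalling "extended addresses follow"

def wrap(src16: bytes, dst16: bytes, inner_proto: int, payload: bytes) -> bytes:
    """Prepend 16-byte extended addresses and the real protocol number."""
    assert len(src16) == 16 and len(dst16) == 16
    return src16 + dst16 + struct.pack("!B", inner_proto) + payload

def unwrap(data: bytes):
    """Recover the extended addresses, inner protocol, and original payload."""
    src16, dst16 = data[:16], data[16:32]
    (inner_proto,) = struct.unpack("!B", data[32:33])
    return src16, dst16, inner_proto, data[33:]
```

To middleboxes this looks like an ordinary IPv4 packet with an unfamiliar protocol number, which is why it survives where extension headers get dropped; only upgraded endpoints need to understand the extra framing.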

Far from not being seriously considered, this approach was adopted in v6 as RFC 3056.

> Except after the upgrade, there'd be no parallel system.

No. You get a parallel system because v6 addresses are too big to work with v4. Even if you used extension headers, v6 addresses would still be too big to work with v4. Whatever you do, v6 addresses are too big to work with v4. You WILL get a parallel system, and there's no way around this other than not making the addresses bigger.

api — yesterday at 10:22 PM

Here’s my understanding.

The people involved in core Internet protocol design were used to the net being a largely walled garden of governments, corporations, universities, and a small number of BBSes and niche ISPs.

Major protocol upgrades had happened before, not just for the core protocol but all kinds of other then-core services.

It had been a while but not that long, I think less than 20 years, and last time it was pretty easy. They assumed they could design something better and phase it in and all the members of the Internet community would just do the right thing.

That’s probably what made them feel they could push a more radical upgrade.

Unfortunately, they started this right as the massive tsunami of Internet commercialization hit. Since V6 was too new, everyone went with V4. Now all of a sudden you had thousands of times more nodes, sites, and personnel, all of them steeped in IPv4 and rushing to ship on top of it. You also lost the small-town atmosphere of the early net, where admins were a club and could coordinate things.

Had V6 launched five years earlier, V4 would probably be dead.

V6 usage will probably keep creeping up, but as it stands we will likely be dual-stack forever. Once the installed user base and sunk cost are this high, the design is fixed and can never be changed without a heavy-handed measure like a government mandate.
