Hacker News

cedilla · yesterday at 5:51 PM

You've got it exactly the wrong way around. And that with such great confidence!

There was always confusion about whether a kilobyte was 1000 or 1024 bytes. Early diskettes always used 1000; only when the 8-bit home computer era started was the 1024 convention firmly established.

Before that it made no sense to talk about kilo as 1024. Earlier computers measured space in records and words, and I guess you can see how in 1960, no one would use kilo to mean 1024 for a 13-bit computer with 40-byte records. A kiloword was, naturally, 1000 words, so why would a kilobyte be 1024?

1024 being near-ubiquitous was only the case in the 90s or so, except for drive manufacturing and signal processing. Binary prefixes didn't invent the confusion; they were a partial solution. As you point out, while it's possible to clearly indicate binary prefixes, we have no unambiguous notation for decimal bytes.
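To make that asymmetry concrete, here is a minimal sketch (the converter function is hypothetical, written only to illustrate the notation gap):

```python
def to_bytes(value, unit):
    """Hypothetical unit converter: KiB is unambiguous, KB is not."""
    if unit == "KiB":
        return value * 1024  # IEC binary prefix: exactly one meaning
    if unit == "KB":
        # SI says 1000, decades of usage say 1024 -- the reader must guess.
        raise ValueError("ambiguous: KB may mean 1000 or 1024 bytes")
    raise ValueError("unknown unit: " + unit)

print(to_bytes(64, "KiB"))  # 65536 -- well defined
```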


Replies

Sophira · yesterday at 6:00 PM

> Early diskettes always used 1000

Even worse, the 3.5" HD floppy disk format used a confusing combination of the two. Its true capacity (when formatted as FAT12) is 1,474,560 bytes. Divide that by 1024 and you get 1440KB; divide those 1440KB by 1000 and you get the oft-quoted (and often printed on the disk itself) "1.44MB", which is inaccurate no matter how you look at it.
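A quick worked check of that mixed arithmetic, assuming the standard 1.44MB geometry (80 tracks x 2 sides x 18 sectors x 512 bytes):

```python
capacity = 80 * 2 * 18 * 512   # formatted capacity of a 3.5" HD floppy

print(capacity)                # 1474560 bytes
print(capacity / 1024)         # 1440.0   -- "1440KB" (binary step)
print(capacity / 1024 / 1000)  # 1.44     -- "1.44MB" (then a decimal step!)
print(capacity / 1_000_000)    # 1.47456  -- true decimal megabytes
print(capacity / 1024 / 1024)  # 1.40625  -- true binary mebibytes
```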

theamk · yesterday at 6:54 PM

It's way older than the 1990s! In computing, "K" has meant 1024 since at least the 1970s.

Example: in 1972, the DEC PDP-11/40 handbook [0] said on its first page: "16-bit word (two 8-bit bytes), direct addressing of 32K 16-bit words or 64K 8-bit bytes (K = 1024)". Same with Intel: in 1977 [1], they proudly said "Static 1K RAMs" on the first page. (The arithmetic checks out; see the sketch after the links.)

[0] https://pdos.csail.mit.edu/6.828/2005/readings/pdp11-40.pdf

[1] https://deramp.com/downloads/mfe_archive/050-Component%20Spe...
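
A minimal check of the handbook's numbers, taking K = 1024 as it states:

```python
K = 1024                                    # the handbook's own definition
address_bits = 16
bytes_addressable = 2 ** address_bits       # 65536
words_addressable = bytes_addressable // 2  # one 16-bit word = two 8-bit bytes

print(bytes_addressable // K)  # 64 -> "64K 8-bit bytes"
print(words_addressable // K)  # 32 -> "32K 16-bit words"
```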

angst_ridden · yesterday at 6:08 PM

It was earlier than the 90s, and came with popular 8-bit CPUs in the 80s. The Z-80 microprocessor could address 64KB (which was 65,536 bytes) on its 16-bit address bus.

Similarly, the 4104 chip was a "4K x 1-bit" RAM chip and stored 4096 bits. You'd see this in the whole 41xx series, and beyond.
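
The same binary-K arithmetic covers both parts (a sketch, with K = 1024 assumed throughout):

```python
K = 1024       # the binary "K" these parts were specified with

print(64 * K)  # 65536 -- bytes reachable via a 16-bit address bus (2**16)
print(4 * K)   # 4096  -- bits stored by a "4K x 1-bit" RAM chip
```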

snozolli · yesterday at 7:18 PM

> only when the 8-bit home computer era started was the 1024 convention firmly established.

That's the microcomputer era that has defined the vast majority of our relationship with computers.

IMO, having lived through this era, the only people pushing 1,000-byte kilobytes were storage manufacturers, because it allows them to bump their numbers up (illustrated after the link below).

https://www.latimes.com/archives/la-xpm-2007-nov-03-fi-seaga...
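
A sketch of that incentive with an illustrative drive size (the figure is made up, not taken from the linked article):

```python
capacity = 500 * 10**9   # a drive marketed as "500 GB" (decimal prefixes)

print(capacity / 10**9)  # 500.0   -- decimal GB, the number on the box
print(capacity / 2**30)  # ~465.66 -- binary GiB, the smaller number the OS reports
```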

zephen · yesterday at 7:00 PM

> 1024 being near-ubiquitous was only the case in the 90s or so

More like the late 60s. In fact, in the 70s and 80s, I remember the storage vendors being excoriated for "lying" by following the SI standard.

There were two proposals to fix things in the late 60s, by Donald Morrison and Donald Knuth. Neither was accepted.

Another article suggesting we just roll over and accept the decimal versions is here:

https://cacm.acm.org/opinion/si-and-binary-prefixes-clearing...

This article helpfully explains that decimal KB has been "standard" since the very late 90s.

But when such an august personality as Donald Knuth declares the proposal DOA, I have no heartburn using binary KB.

https://www-cs-faculty.stanford.edu/~knuth/news99.html