Sounds like you lived through this, but for the younger generation...
I think the way to compare this with a modern machine is that the early machines had no memory management or protection, meaning that any program could access any byte of memory, or any I/O address. Whether that was a good idea was entirely up to the programmer.
There were BIOS and OS calls for interacting with display memory that were supposed to make code more portable across machines. Devs almost immediately started writing directly to hard-coded address regions, which pinned those addresses down. Use of "unofficial" addresses and entry points made it phenomenally difficult to update the hardware or BIOS. This was true on the Apple ][, but also on PCs. For instance, it's what created the infamous 640k memory limit.
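To make that concrete, here's a minimal sketch of the kind of "skip the BIOS" screen write that pinned those addresses down. It assumes a 16-bit real-mode DOS compiler such as Borland's Turbo C, whose `<dos.h>` provides the MK_FP macro for building a far pointer from a segment:offset pair; segment 0xB800 was the color text buffer on CGA and its successors.

```c
/* Sketch of an era-typical direct video write, assuming a 16-bit
   real-mode DOS compiler (e.g., Turbo C). MK_FP from <dos.h>
   builds a far pointer from a segment:offset pair. */
#include <dos.h>

int main(void)
{
    /* Color text buffer at segment 0xB800; each screen cell is
       two bytes: the character, then an attribute byte. */
    unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0x0000);

    video[0] = 'A';   /* character in the top-left cell */
    video[1] = 0x1E;  /* attribute: yellow on blue      */
    return 0;
}
```

Fast, and much snappier than going through the BIOS -- but every program that did this hard-wired 0xB800 into the platform, so moving or reorganizing that buffer would have broken all of them.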
I had an MS-DOS machine, but its memory mapping was not identical to the IBM PC's, so it was not "PC compatible." Apps that used the official MS-DOS calls worked just fine. Thankfully, two of those apps were WordPerfect and Turbo Pascal. I didn't need much else.
It was the wild west. Today, you try POKEing around where you don't belong, and you get a protection fault.
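The modern contrast is easy to demonstrate: the same blind write, compiled as an ordinary user-space program on any protected-memory OS, trips the MMU instead of hitting the screen. (Strictly speaking the store is undefined behavior in C; in practice it arrives as a segmentation/protection fault.)

```c
/* A minimal sketch of the modern counterpart to a blind POKE:
   writing to an address the OS never mapped for this process. */
#include <stdio.h>

int main(void)
{
    /* The old CGA text-buffer address is now just an unmapped
       virtual address inside this process. */
    volatile unsigned char *p = (volatile unsigned char *)0xB8000;

    *p = 'A';                   /* MMU fault -> SIGSEGV; process dies */
    printf("never reached\n");  /* we never get here */
    return 0;
}
```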
I mostly agree with all this -- I remember the glee with which people would discover and report "undocumented" BIOS or DOS interrupt calls, and the feeling that Microsoft were holding back on documenting those calls for selfish reasons -- but I can't see how they caused the 640k limit. That limit was built into the 1 MB address space of the real-mode 8086 and its successors, plus IBM's decision to reserve the top 384 KB of that space for video and BIOS.
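For anyone who wants the arithmetic: a real-mode address is segment * 16 + offset, which yields a 20-bit (1 MB) physical space, and IBM's memory map reserved everything from segment 0xA000 up for video RAM, adapter ROMs, and the BIOS. A small sketch that works out those two numbers:

```c
/* Sketch of real-mode 8086 address arithmetic:
   physical address = segment * 16 + offset, a 20-bit (1 MB) space. */
#include <stdio.h>

static unsigned long phys(unsigned seg, unsigned ofs)
{
    return ((unsigned long)seg << 4) + ofs;  /* seg * 16 + ofs */
}

int main(void)
{
    unsigned long top      = phys(0xF000, 0xFFFF);  /* last byte of the 1 MB space   */
    unsigned long reserved = phys(0xA000, 0x0000);  /* video/BIOS region starts here */

    printf("address space: %lu KB\n", (top + 1) / 1024);      /* 1024 KB */
    printf("conventional memory: %lu KB\n", reserved / 1024); /*  640 KB */
    return 0;
}
```

So the segmented architecture caps the space at 1 MB; the 640k figure specifically is IBM's partition of that megabyte, not anything an undocumented interrupt could change.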