HACKER Q&A
📣 ido

Why did it take until 2001 for mainstream OSs to get secure/stable?


Until Windows XP and Mac OS X, mainstream OSs (Windows 3.x/9x, classic Mac OS, maybe even the Amiga at a stretch) didn’t have memory protection and were insecure and unstable: crashes and freezes happened multiple times per day, and multitasking was flaky (I remember playing an MP3 would make my computer stutter in other tasks in Win98, even on a then-new Pentium 4).

Why did it take so long for mass-market/mainstream systems to get “modern” when BSD/Linux managed to do it in the mid-’90s and commercial Unix did it in the ’70s? Microsoft did release NT in the ’90s, but didn’t market it to consumers until XP in 2001.


  👤 cable2600 Accepted Answer ✓
Windows 2000 Pro is NT-based, and it came out before XP. It was also more stable than XP.

The hardware had to catch up to the OS in stability. 486-class and older CPUs had a heat sink but no fan; some CPUs didn't even have a heat sink. They ran hot when left on 24/7, causing lockups and freezes.

I have Windows 95 in a virtual machine, and it runs faster, and without the crashes and freezes, on my Intel i5 system.

Don't forget OS/2 2.0: it was stable and nearly crash-proof. It was a joint IBM and Microsoft product, and Microsoft's version of OS/2 became NT.


👤 themodelplumber
> I remember playing an MP3 would make my computer stutter in other tasks in Win98, even on a then-new Pentium 4

Really? I don't remember that combination being that bad. I was using low-latency Creative drivers and Reason on a low-end eMachines desktop around that time, with a full instrument rack including 5+ samplers, 3-5 synths, and 6-8 effects units, making music for CD-ROM titles.

At that time I didn't experience crashes or freezes even once a week...though later on, my roommate with Windows Me and some girl dancing on his taskbar was a different story :-)

Platforms were really key back then, too. '90s Linux was really rare outside of tech circles. You could get a basic OS, _probably_ a desktop (bonus if you knew your monitor's modelines, fingers crossed), and some desktop choices, but the desktop and server software scene was mostly out of the question for Windows power users, and even OS/2 power users. Not only was choice a real issue, but documentation and things like dependency hell were serious problems...

So if you had some desktop software you liked--say you were a Delphi person who could make anything--the platform really owned you. But the platform also had basic affordances: driver rollbacks for crashes, antivirus software to scan your stuff, and multitasking really depended on what you needed to multitask. There were options.


👤 daviddever23box
I think it’s too general an assertion to draw any conclusions from, to be honest: there’s about the same distance from Windows 95 (or System 7.5) to the introduction of Mac OS X as there is from OS X to iOS.

Take Intel and Microsoft Windows out of the equation, and it becomes rather clear that security and stability arise from decent software development with good, hardware-sympathetic tools on a performance-per-watt-competent platform.

And I give credit to AMD for essentially keeping Windows PCs alive as a platform: at each OS inflection point, they’ve provided, far more so than Intel, the extra performance-per-watt that papers over Windows’ shortcomings to provide a better multimedia experience.


👤 jb1991
Define "stable." Millions were using computers with great success long before 2001. Windows 95 and Windows 98 both worked, and you got things done fine with them. Are today's OSs "stable" by your definition? It's not uncommon to have to reboot a 2022 desktop to fix stability issues, and in fact the Mac is arguably getting less stable by the year.

👤 rurban
At our university we chose FreeBSD in the early '90s, because it was exactly that: secure and stable. Novell was also secure and stable, but it was not free. We used both.

Linux at that time ran on a few of our servers, but it was certainly not that secure. VMS was also OK, but not that popular. IRIX and friends were too expensive.


👤 eesmith
Memory protection requires hardware support that the early microcomputers didn't have. On the early machines you could read/write to anywhere in memory.

On Intel hardware, for example, the 286 supported protected mode, but it wasn't until the 386 in 1985 that it was useful. https://en.wikipedia.org/wiki/Protected_mode#The_286 .

You can see that with OS/2. The early versions supported the 286, which meant only one DOS program could run at a time. OS/2 2.0 in 1992 used the 386's enhanced protected mode (including virtual 8086 mode), which allowed multiple DOS programs to run at once.

You'll note that Linux started on a 386, which unlike earlier processors had an MMU which supported paging.

Even on the 386 there were issues, as Wikipedia's OS/2 entry points out:

> OS/2 always allowed DOS programs the possibility of masking real hardware interrupts, so any DOS program could deadlock the machine in this way ... Later, release 3.0 leveraged the enhancements of newer Intel 80486 and Intel Pentium processors—the Virtual Interrupt Flag (VIF), which was part of the Virtual Mode Extensions (VME)—to solve this problem.

A lot of programs were written with single-user DOS in mind, and didn't follow practices like https://en.wikipedia.org/wiki/DOS_Protected_Mode_Interface that made it possible to run in protected mode. "Mainstream" DOS users won't switch to another OS if important programs don't run on that OS.

With that in mind, let's examine what "commercial Unix" means.

Xenix was commercial Unix from the 1980s, available for microprocessors. As its Wikipedia page points out:

> Microsoft said the difficulty in porting to the various 8086 and Z8000-based machines had been the lack of a standardized memory management unit and protection facilities. Hardware manufacturers compensated by designing their own hardware, but the ensuing complexity made it "extremely difficult if not impossible for the very small manufacturer to develop a computer capable of supporting a system such as XENIX from scratch," and "the XENIX kernel must be custom-tailored to each new hardware environment."

As I recall, you could get Xenix for machines without memory protection hardware, but an errant program could bring down the whole system.

Apple also distributed a Unix - A/UX - starting in 1988. It too required special hardware: "select models of 68k-based Macintosh with an FPU and a paged memory management unit", quoting its Wikipedia entry. Continuing: "Compared to contemporary workstations from other Unix vendors, however, the Macintosh hardware lacks features such as demand paging. The first two versions of A/UX consequently suffer from poor performance, ... blaming not the software but the incomplete Unix optimization found in Apple's hardware"

(1988 is also when NeXT came out, using similar hardware: a Motorola 68030 CPU and 68882 floating-point coprocessor.)

My interpretation, therefore, is that 1) early microcomputers didn't have the expensive features of their minicomputer/mainframe cousins, 2) but they were cheap, which is why the "mainstream" used them, 3) leading to an installed base of software which expected a single-process, single-user, no-memory-protection environment, 4) that couldn't be virtualized well until the 1990s, 5) and people didn't find those features useful enough to switch to another OS/platform that had them, like Xenix or A/UX.