HACKER Q&A
📣 karmakaze

When did computers get 'fast enough' for you?


Outside of gaming, machine learning, and the like, computers have been fast enough for quite a while that extra speed doesn't make a difference to me. I was trying to recall any pivotal moments, but I have to go pretty far back. I'm also excluding slowdowns we imposed on ourselves through layers of abstraction, simply because we could afford to.

The last time I remember computers being very slow was waiting for C++ compiles in the late '90s/early '00s. C++ compiles can still be slow, of course, but I haven't had a reason to use C++ since, thanks to languages like Java and Go.

We always want faster server software, since many users may be hitting it at once, but even then an SQL database, indexes, and thoughtfully written queries generally get the job done. I remember a time when vertically scaling/federating databases on bare metal was almost not enough, but add sharding and there's very little you can't handle unless you're FB/Twitter.
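As a concrete illustration of how far "an index plus a thoughtful query" goes, here's a minimal sqlite3 sketch (the table and column names are made up for the example): the query plan flips from a full table scan to a B-tree search the moment the index exists.

```python
import sqlite3

# In-memory DB with a hypothetical "users" table; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, tier TEXT)")
conn.executemany(
    "INSERT INTO users (email, tier) VALUES (?, ?)",
    [(f"user{i}@example.com", "pro" if i % 10 == 0 else "free") for i in range(10_000)],
)

query = "SELECT tier FROM users WHERE email = ?"

# Without an index, this predicate forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN " + query, ("user42@example.com",)
).fetchall()

conn.execute("CREATE INDEX idx_users_email ON users (email)")

# With the index, SQLite does a logarithmic B-tree search instead.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN " + query, ("user42@example.com",)
).fetchall()

print(plan_before[0][3])  # e.g. a "SCAN" over users
print(plan_after[0][3])   # e.g. a "SEARCH" using idx_users_email
```

On 10,000 rows the difference is invisible; on a table with hundreds of millions of rows it is the difference between milliseconds and minutes, which is most of what "thoughtfully written queries" means in practice.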

[The slowest things I've run into in the last decade were Spring Boot startup, and a React front-end where running the whole asset pipeline (TypeScript, CSS, images, polyfills, what-have-you) took over 30s to reload a page.]


  👤 falcolas Accepted Answer ✓
Computers - a long time ago. The software that runs on them? That swings back and forth. More specifically, I feel like we're in a "slow" part of the pendulum swing, and I keep hoping it will reverse directions.

I mean, my $400 crappy laptop can run Factorio flawlessly (because the devs give a fuck about performance), but gets bogged down while scrolling on a webpage. It can run gvim (or Sublime Text or Notepad++) with dozens of plugins as fast as I could wish for, but if I pull up a "modern" IDE, suddenly my computer "can't even". A Logitech mouse using default drivers is smooth as silk, but the moment I set up branded drivers, the computer gives up for several minutes.

It, frankly, makes me pretty sad.


👤 brimble
They're still not, due to software bloat.

Technical and human-imposed restrictions mean iOS devices tend to be the closest to feeling "fast enough". Desktops are hampered by little always-on utilities being written in fucking Electron and eating a GB+ each while also burning lots of processor cycles while not doing anything. Or feature-weak productivity suites living in the browser and taking 20x the resources of a more-featureful native equivalent from years ago (or well-written, native, modern alternatives). Or bad low-level building blocks leading to e.g. UI freezing, stuttering, or jankiness. Or vendor malware (Windows) doing god knows what.

I've only seen a couple of operating systems that seem to give what I'd call enough priority to keeping the UI snappy, so the user feels like they're really in control and not just begging the OS for time in competition with their own software: iOS and BeOS... I struggle to think of another. QNX's Photon was pretty good, but I think that was a side-effect of design decisions made for other reasons, though I guess that still counts. Those would probably struggle to remain so while running five different JS + HTML layout engines at the same time, like a modern OS is commonly subjected to. Human-imposed restrictions on iOS are what keep it from suffering that fate.

However I don't think the bloat has yet overcome the benefits of SSDs, in particular, so things are still overall better than they were in, say, the 90s or early '00s.


👤 nicoburns
Until recently I thought my 2015 MacBook Pro was "fast enough", and I think arguably it still is for casual usage. But I've just gotten one of the new 2022 MacBook Pros, and for work usage it's making an absolutely huge difference. Things that I had to wait for are now instant, my editor's language support is much more responsive, and my computer no longer lags when video calling and screen sharing.

The slowest tasks I had before were probably live-reloading a React app, which has gone from "takes a few seconds" to "pretty much instant", and compiling a fresh build of an iOS app, which has gone from ~20 mins, with my computer lagging slightly during other tasks, to ~3 mins, with other tasks performing perfectly fine while it builds.

It is almost 10x faster (2x faster in single core, 4.5x more cores), so perhaps it shouldn't have been surprising.


👤 QuadmasterXLII
For everyday usage, it's pretty simple and can be traced back to a moment in time: computers without SSDs are not 'fast enough', computers with SSDs are.

For programming, the two applications where computers aren't quite fast enough are compiling massive C++ projects and getting Julia to first plot. I recently leapt from having ~6 cores to having ~32 cores, and this has helped with the C++ but not really with Julia; I think that's a programming issue.
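That split fits Amdahl's law: extra cores only speed up the fraction of a job that actually runs in parallel, which is why they help a C++ build (mostly independent translation units) but not Julia's largely serial time-to-first-plot. A rough sketch; the 0.95 and 0.20 fractions below are illustrative guesses, not measurements:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of a task parallelizes (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A big C++ build: assume ~95% of the work is independent compilation units.
# Julia time-to-first-plot: assume ~20% parallelizable, the rest serial latency.
print(round(amdahl_speedup(0.95, 32), 1))  # large payoff from 32 cores
print(round(amdahl_speedup(0.20, 32), 2))  # barely moves
```

Under these assumed fractions, going from 6 to 32 cores is worth roughly 12x on the build but only ~1.2x on the serial-dominated workload, matching the experience described.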


👤 Retr0id
There are two types of tasks that run on my computer. Tasks I told it to do, and tasks someone else told it to do (with my begrudging acceptance).

The former category has always felt fast to me. Word processing, programming/scripting, number crunching, compiling code, viewing documents, etc.

The latter category is what feels slow even to this day, on high-end hardware.

- OS/software updates that take hours (mostly on Windows/macOS).

- Janky animations that drop frames (Discord "stickers").

- Webpages overloaded with useless JS.

- Horrendously slow backend webservers (technically nothing to do with my computer, but it affects my UX nonetheless).


👤 cpach
Hm… I would say in 2013 when I bought a laptop with an SSD.

👤 taylodl
Computers became fast enough for me with the advent of the 25MHz 486. I no longer had to write drivers in assembly and could write them in C instead! It was like magic! Writing C code, C code!, and it being a driver! I was blown away. My productivity soared! (Though my knowledge of x86 assembly has atrophied somewhat).

The next bump came with the advent of the 2GHz quad-core i7. Then you no longer felt compelled to work with compiled languages (for purposes of this discussion I'm considering Java and C# to be compiled languages). There were many tasks and applications where interpreted languages were "good enough" and allowed you to get solutions into the hands of users faster. Building serious business applications in interpreted languages was something that never seemed possible when writing drivers in assembly!

Finally, the most recent bump has been treating AWS as a runtime platform. Not only can I create solutions in Python, a language I absolutely detested back in the 90's, but hell, I don't have any servers to manage any longer! Things scale under load and HA is easy to achieve. I no longer even think of AWS as being "the cloud", it's a compute platform - and a very powerful compute platform at that!

I've come a long way from writing assembly for x86 processors with a few MHz of compute power and RAM measured in kilobytes!


👤 trey-jones
I'm old enough to remember typing (and also clicking? it's getting fuzzy) in Windows 95 and having to wait for the computer to catch up. Amazingly, actions generally completed eventually, but I wanted to do things faster than the computer could.

I used a 2013 Macbook Pro (4 core, 16GB RAM) for a really long time and had very few complaints, but I then graduated myself to a desktop machine with 12 cores and 64GB of RAM, and I can finally say that this is fast enough. There is almost never anything to wait for, and if there is, it probably isn't the fault of the hardware.

The biggest bottleneck for computers over the last 30 years, I think, has really been network bandwidth. Again, I'm old enough to think that 768k is probably enough for just about anything, and if you really need to transfer a large chunk of bytes, just go to sleep while the computers work. But even network bandwidth is pretty cheap these days: 1Gbps residential connections, 10Gbps LAN. We're good to go.

The problem is that these innovations mean that we can get away with building less efficient software. Ultimately this means that no matter how many computing resources are available, I think we're likely to find a way to use them all, and more.


👤 thatjoeoverthr
Pentium III. One of my favorite computers was a dual Coppermine at 1GHz. I know my computer now can deal with much heavier multimedia workloads, but the experience of sitting at it and using it is the same at best, and sometimes far worse. Recently I started a new job where I need to use Microsoft Teams, and it brings the computer to its knees. None of this advancement matters. It offers me nothing.

👤 mg
If I had a sufficiently fast computer, I would love to "solve chess".

Crunch through all possible games to figure out if ...

- White always wins

- Black always wins

- Every game ends in a draw

... when the players have enough computing power and all uncertainty is removed from the game.

But there probably won't be a fast enough computer in the foreseeable future.
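A back-of-envelope calculation shows why not: even counting only Shannon's classic estimate of ~10^43 legal positions (ignoring the vastly larger game tree), and granting absurdly optimistic hardware, the numbers don't work out. The figures below are rough published estimates, not exact values:

```python
# Why brute-force "solving chess" is out of reach.
LEGAL_POSITIONS = 1e43   # Shannon-era estimate of distinct legal positions
NODES_PER_SEC = 1e9      # optimistic: one position evaluated per nanosecond
MACHINES = 1e6           # a million such machines running in parallel

seconds = LEGAL_POSITIONS / (NODES_PER_SEC * MACHINES)
SECONDS_PER_YEAR = 3.15e7
years = seconds / SECONDS_PER_YEAR

print(f"{years:.1e} years")  # on the order of 1e20 years
```

For comparison, the universe is around 1.4e10 years old, so even this lower bound overshoots it by ten orders of magnitude; any real solver would also have to store results, which hits the same wall in memory.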


👤 e4e78a06
> because of languages like Java or Go

Sounds like you haven't spun up a large project recently :) Anything Maven-based has terrible build times, and Go isn't much better once you start bringing in a ton of dependencies and codegen frameworks for things like protobuf and dependency injection. Java builds can easily take 30 minutes, and Go at least 3-4 minutes.

Don't even get me started on macOS Docker IO performance. Every time something gets faster, we fill it up with more inefficiency.


👤 dspillett
Somewhere around late 2016 or early 2017, at which point most machines I used had SSDs for system drives and a cheap SoC was running video from the media array to my TV (a Pi3 at the time, now a Pi4 is doing the job as the 3 struggled with x265 at 1080p). I'd just put together a new main PC with a GTX1080(6Gb). The biggest upgrade in that build was the 1080, which is hardly taxed ATM as I've stopped playing much by way of fancy games (due to a mix of having picked up other hobbies and the rest of life getting in the way).

I have upgraded my main desktop recently, late 2020 IIRC, because I started doing a bit of video recording on it. The old CPU was fast enough but the new one does in less than an hour what would have been an overnight task for the old. The old kit was handed down to improve the home server, mainly to give it more RAM rather than for speed (the old motherboard in there wouldn't take more than 16G, at least not officially).

The other “fast enough” question is home Internet. For me that was probably 2011, when FTTC became available with 10Mbit upstream (more is useful, but 10Mbit is fast enough). For downstream I'd say even earlier, with 17Mbit down (the 1.4Mbit up I had then was much more limiting): anything that didn't happen quickly at that rate wasn't urgent enough to leave running while I did something else. Of course what I do has grown, but the connection has grown too, so it has stayed in the “fast enough” range. I'd like faster either way (currently up to ~76 down/17 up, though right now ~50/10 after multiple disconnects, I think due to the recent storms; my last section of line is strung down the street from a pole), but that would be luxury, not something I need at all.


👤 antisthenes
Computers felt pretty fast for me around the 2010-2014 era.

That's around the time when quad-cores became cheap/ubiquitous, SSDs were a huge performance improvement, and Windows was in its good era (W7/W8.1), not yet the dumbed-down but bloated tablet OS we have now.

Also, our ISP was doing multiple free speed upgrades and we went from 25/25 to 100/100 while paying the same amount.


👤 pwason
It depends on the OS/application. My accelerated A500 at 14MHz is perfect for playing VIRUS; my accelerated A3000 at 25MHz makes the game almost impossible to play. The A500 is too sluggish for PageStream and PageRender 3D. Both these machines are running SCSI HDDs, so even sequential disk access is a tad slow (ha) compared to what's common these days. But for a lot of things, even running on 35-year-old hardware, the classic AmigaOS runs just fine compared to Windows 10 on an i7 or Xeon CPU, etc. And you really can't beat Autoconfig and Datatypes, and all the other brilliant efficiencies of the OS. How much of today's CPU/RAM/NIC/SSD performance is sucked up by in-your-face advertising and behind-your-back telemetry?

👤 wink
That depends a little.

For every day use - since the first SSD I've not really had problems.

For games - I'm not usually one to chase fresh AAA titles, so either my 2008ish Core 2 Duo or my 2012 i5 was enough. The first lasted 4 years, and then I only upgraded again in 2019, after 7 years, while still using the machine mostly for games.

Work is a whole different thing though. From 2013-2017 I had an i7 (laptop), which was kinda fine as I was "only" doing Java, Python, ops stuff, etc., so a huge SSD and a lot of RAM were more important. At the end of 2017 I started doing mostly C++ and they gave me an i5 laptop; I hated every minute of it because of the compile times. Now I'm not doing so much C++, so an i5 is fine again.


👤 lapsis_beeftech
I am still waiting on ≈45 minute software builds a few times a week and am constantly annoyed by slow-loading web sites. I don't expect computers to get fast enough "to not make a difference" for daily tasks in my lifetime.

👤 jaredcwhite
For me, there was a clear transition when SSDs came on the scene. Prior to that, when using spinning disks (particularly in laptops or laptop-like scenarios such as iMacs), you could always tell when the computer would start to thrash and things were getting bogged down. Your productivity is halted until the computer gets its act together. Now with blazing-fast SSDs, that _rarely_ happens. It was one of the early draws of tablets—no matter what you were doing at any single time, the OS as a whole remained fully responsive. Now we generally have that experience across all computers and computer-like devices. It's awesome.

👤 porcoda
When SSDs became the norm, performance stopped being an issue for most of my work. I can still push a machine to its limits with some of my work, but that’s not a surprise: modeling and simulation generally can saturate any system you use.

The only times I feel performance issues these days are in the self-inflicted category thanks to developers who adopted the write-once/run-everywhere/consume-all-the-resources mentality. I’m pretty convinced if those developers vanished and we had native apps for most things, performance would be a non-issue for most computing users who aren’t doing heavy gaming or resource intensive analysis tasks.


👤 syntheweave
It's been stop and start. Computers are fast whenever I do things that haven't changed in a while, where I know the workflow and can expect stability. They're slow when I'm confronted with more things to learn and it's buggy and uses new tech stacks generously.

The feeling of fastness isn't just the literal latency of pushing buttons and seeing things appear, but also how well the task has actually been optimized: sometimes "sending an image" can feel slow because the task has been made more complex, for example, because I don't know where the image was saved.


👤 chomp
Computers became "fast enough" in the Pentium 4 days for me, which is when I was finally able to properly multitask. I couldn't do much simultaneously on my Pentium 3, as I recall (e.g. couldn't browse the web and listen to mp3s at the same time without occasional distortion). Installing Gentoo took hours.

I built a P4 box with 15k RPM Seagate Cheetahs at the time, and on Windows XP everything loaded near instantly. Things feel way more bloated nowadays, though everything still loads near instantly, so I think we're roughly even with what we had once threaded processing came out.


👤 MrMember
Maybe the early 2010s? SSDs were ubiquitous and cheap enough for at least the OS and the most commonly used applications and games. Around that time CPU performance plateaued for a while. There were improvements but nothing like going from single core to multi-core, or the raw speed improvements from earlier generations. The processor I bought in 2011 easily lasted me until around 2018 without issue or any real desire to upgrade (and that was for gaming, if all I was doing was general purpose computing I'd still be using it).

👤 bluGill
1999, when I built a dual Pentium Pro machine and overclocked the CPUs to 200MHz (the CPU was available at that speed, but I found a deal on lower-spec parts). It did everything I asked for many years, but it was getting old and the siren call of laptops got to me. I'm now on a Pinebook Pro, and the speed is plenty for most everything. (The screen, trackpad, and keyboard leave something to be desired, but the CPU is fine.)

Now at work I often rebuild millions of lines of C++, and there I'm still screaming for more, more, more.


👤 bee_rider
Toshiba NB-305, a netbook with a fairly early Atom processor, which normally would have been too slow to use, really, but with an SSD it was perfect.

The processor was pretty weak, but that led me to be careful with my makefiles, and to write scripts so that LaTeX could be compiled asynchronously after writing in vim.

Nowadays my builds are slower because I have too much processing power and got lazy about working out dependencies. Everything is a little annoying because I don't bother hiding the limitations.
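The careful-makefile discipline described above boils down to make's staleness rule: rebuild a target only when it is missing or older than any of its sources. A minimal sketch in Python (the function name is my own, not from the comment):

```python
import os

def needs_rebuild(target: str, sources: list[str]) -> bool:
    """make-style staleness check: rebuild if the target is missing
    or any source has a newer modification time."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)
```

An asynchronous LaTeX loop like the one described could poll this check and invoke the compiler only when it returns True, which is exactly the dependency bookkeeping that "getting lazy" skips.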


👤 mnw21cam
Hasn't happened yet. A regular job at work involves kicking off a process that spreads itself across 100 nodes and takes a week, munching through tens of terabytes of data. Even at home I'm processing 30GB-sized datasets, which understandably takes a couple of hours. And editing an hour-long video means a lengthy stage of thumb-twiddling.

I think if computers were faster, we would simply invent new tasks that take a long time with them.


👤 jeffreyrogers
They're not. I have a 2017 MacBook Air, and Gmail frequently takes many seconds to display an email when I click to open it. I used to use Mozilla Thunderbird, and emails loaded basically instantly. I encounter minor annoyances like this almost every time I use a computer. Mostly I don't even consciously notice them anymore, since these things happen so frequently, but if they magically disappeared I'm sure the change would feel incredible.

👤 lousken
Probably 10 years ago, with the first SSDs. However, it feels like software developers think they can afford to write even worse code than ever before because it's still 'fast enough'. Or, on the other hand, if it's really slow they blame the hardware and not themselves, which drives me nuts.

On multiple occasions I had to force them to fix some completely abysmal performance regressions in our software.


👤 givemeethekeys
13" 2013 MacBook Pro with 16GB of RAM. Fast enough. All the slowdowns were software re-implementing features that had been perfected, with a fraction of the resources, more than ten years ago.

The amazing thing about services such as Google Stadia and GeForce Now, combined with high-speed internet, is that I will never need a gaming machine again.

And, I can now use virtual environments for high horsepower development environments for cloud work.


👤 kennu
I was doing Yocto Linux development with an older computer, spending ages building OS images. Recently I upgraded to a Ryzen 5950X and now builds complete in a few minutes. They could complete in a few seconds or milliseconds, so it's still not "fast enough" for me. I think I can always come up with something useful to do with an even faster computer.

👤 silisili
Believe it or not, I don't really use computers outside of work - no gaming, video editing, etc...so that impacts my answer.

The NUC8i5 was the first time I felt like machines weren't really getting faster year after year. I bought a 4x4 last year just because I assumed things had changed, and I don't really notice any difference in day-to-day usage.


👤 The_rationalist
Spring startup time is instantaneous with native: https://github.com/spring-projects-experimental/spring-nativ... As for web frontend tooling, the author of Vue.js solved this class of issues with Vite.js.

👤 idoh
A couple of months ago I took on a project that involved working with Coda and Figma, with a company on the Google productivity stack (Gmail, Meet, GCal), and the workload (mainly RAM requirements) really crushed my computer. I had to get a new one, maxed out the RAM, and now I'm fine.

So I’d add web-based or electron-based apps to your “unless” list.


👤 pengo
For me, SSDs pushed me over the line. I've been running Linux on Dell's highest spec XPS13 for five years and recently updated to the latest model. The only thing either the new or previous laptop labours at is video transcoding, but even that takes a fraction of what it took previously. I'm happy.

👤 Jenkins2000
Every new computer I've bought was fast enough but becomes completely unusable over time.

The one notable exception was my MacBook Pro 2019 which drove me insane because the stupid fans were always running.

Recently, my MacBook Pro 2021 M1 Max with a lot of RAM is running perfectly and I've never heard the fan on it.


👤 copperx
Around 2012, when a Retina screen, a fast SSD, and an 8-core CPU were something you got in a base-model laptop.

👤 polotics
Computers have always been fast enough; software has almost always been too slow, and recently more and more so.

👤 runjake
Around Haswell (4th gen Intel), but I avoid using bloated software, which is extremely common, so YMMV.

👤 MattPalmer1086
Like many here, computers for daily use got fast enough for me with SSDs. Processors haven't been the bottleneck for a long time.

My current laptop I bought in 2016 and it's fine. I only replaced that because the screen on the old one died.

So I guess for at least a decade, nearly.


👤 TheOtherHobbes
Still not. I could do with a couple of orders of magnitude for better audio DSP.

No-compromise 3D video rendering would be interesting too. Not game quality - which is very good, in limited ways - but full truly photorealistic render quality at game frame rates.


👤 369548684892826
The Intel Core i5-3320M in the ThinkPad X230 is a CPU from 2012, and I reckon for me that could be the turning point where even laptop CPUs were about good enough. Desktop CPUs have been good enough for a bit longer than that, maybe since the mid-2000s?

👤 rsecora
Around 1991.

It was the first time the computer was not slowing my workflow: AutoCAD and Turbo Pascal running on an 80286 with a floating-point coprocessor and a 50MB HDD.

Previous hardware/software stacks in 8086/Tandy/Osborne/PCjR... were not fast enough.


👤 BeFlatXIII
With the notable exceptions of advanced use cases, such as web browsing and the I/O speed of the SD card, the Raspberry Pi 4 is fast enough for most common computing tasks (assuming you prefer vim over IntelliJ/PyCharm).

👤 JoeAltmaier
I used to work on an 8086 networked computer with multitasking. It booted in 12 seconds, to the shell.

But it took an hour to compile 9MB of source code (our operating system).

So it's all relative to the task.


👤 ipaddr
While processing and download times have gone down so much, responsiveness has been getting worse with every animation "improvement".

The last time things felt slow was loading a tape on a Commodore 64.


👤 iso1631
I have a 2016 desktop with an E5-1620 CPU, 16GB of RAM, and a Quadro K2200. I never feel like it's slow.

Browsers are the main thing which seem to gobble up resources, specifically memory.


👤 markus_zhang
Computers have been fast enough for me since I stopped wanting to play AAA FPS games, probably around 2008. That was also the last time I actually built a computer.

👤 tokamak-teapot
Lots of times.

1. When I used an Acorn Archimedes.

2. When I had an AMD K6-2, had a Matrox Mystique and ran tkdesk

3. When I had a Pentium III ‘Coppermine’ and ran KDE

4. When I had a dual Athlon and a Western Digital Raptor

5. When I first got an SSD

6. M1 Air


👤 ergonaught
My Vic 20 was plenty fast for me. It needed more RAM.

👤 AndrewDucker
When I can play Elden Ring in 4k with all graphics options set to High.

Next year, of course, there'll be a new game with even higher settings.


👤 AnimalMuppet
I do embedded systems. The 68040 was fast enough (1990). The 68060 was very much fast enough (1994).

For desktops, probably 2000.


👤 williamscales
Still not fast enough. When I can compile the Linux kernel in less than the time it takes me to blink let’s talk.

👤 smoldesu
My personal laptop is a 13-year-old X201. It's "fast enough" for 99% of my uses that aren't gaming.

👤 leke
I bought my boy a basic gamer rig in 2012. I'm now using it myself because I can do everything I need to do on it.

👤 sliken
I wanted ECC, so in 2015 I got a Xeon E3-1230 v5 (4 cores/8 threads, cheaper than the i7 of the time), 32GB of ECC RAM, a GTX 1070, and an SSD. It's been reliable and fast. I think I paid $1,500 for it (not including two 27" 2560x1440 monitors). I did upgrade from a small SATA SSD to a larger M.2 NVMe SSD, which seemed to boost some things nicely, especially working with large directories of audio, video, and photos.

Had a few friends buy $2k-$3k laptops around then, all of which have been replaced years ago.

Building software (unless it's a big Rust project) is quick enough to not care. Occasional gaming (random steam games, warzone 2100, even valheim) seems fine full screen. Even watching 4k downscaled to 2560x1440 seems fine.

Sure, 7 years later I'm considering an upgrade, but I'm not sure I'll notice. A zillion terminals and Chrome tabs don't seem to slow it down at all. I'm looking at PCIe 5, DDR5 (twice as many channels), and a mid-range Alder Lake, or the Zen 4 parts coming later this year. I'd like ECC, so I'm leaning towards an Alder Lake-flavored Xeon (when they come out), or a Zen 4 desktop or laptop. Or maybe an SFF (typically laptop parts in a small desktop case with an integrated GPU). The new "Zen 3+ and RDNA2" chips sound like plenty for the next 8 years.


👤 Ancapistani
It depends on the use case, really.

Outside of gaming, I've not felt the need to upgrade for raw performance reasons since ~2012. That was the year I bought my first MacBook Pro. That's not to say that "Mac" was the turning point - that MBP was just the first laptop I've owned where the build quality was sufficient that it wasn't the limiting factor for device longevity. That particular laptop lasted me until late 2016, when my youngest daughter spilled a full glass of milk onto the keyboard. The one I bought to replace it was thinner and lighter, but only enough "faster" to be barely noticeable.

Most of my personal and professional computer use relates to development. There's a point where what I'm working on grows complex enough that _no_ laptop is going to be sufficient, and I switch to using some sort of hosted runtime. A powerful desktop would push that point a bit further out, but again, not enough to be worth the added expense and inconvenience.

I use CLI tools 90% of the time or more; the only real use I have for a GUI is a web browser. As a result, I've used some pretty limited environments productively and without real issues - I used a Raspberry Pi 4 (4GB) at home for a few months between "real computers" a couple of years ago, and went without a laptop for almost a year before that. I used an iPad with a Bluetooth keyboard and a remote development environment (a DigitalOcean droplet running Arch Linux, using Blink + Mosh), and Safari was fine for checking incremental progress.

Lately I've become more active with my photography and have begun doing some videography as well with various drones. The M1 Macs are substantially better for that kind of work, but I don't see myself falling back into a frequent upgrade cycle.

TL;DR: My upgrade cycle doubled from ~12-18 months to 3+ years in ~2012 for general computing.


👤 s1artibartfast
For me, speed peaked around 2005.

Today I have 32GB of RAM and can barely run Chrome, Acrobat, and Word.


👤 projektfu
I was pretty impressed by a Quadra 900.

👤 whitepoplar
Last year, when I got a MacBook Air M1.

👤 amriksohata
2002 after the release of Windows XP

👤 rambojazz
I'm still waiting

👤 sys_64738
Sandy Bridge, in 2011.

👤 sleepingadmin
I think it was right about when SSDs were coming out. I even kind of proved it.

I built this machine with the latest and greatest hardware in late 2010:

- AMD Phenom II X6 1090T
- 16GB RAM
- 128GB SSD
- AMD HD 5900 GPUs in CrossFire

Various upgrades over the years, but 12 years later it was still more or less within the specs of today. I got my money's worth for sure.