Why is this? At what point will the hunger of apps for RAM be satiated? With 32GB? 64GB?
With all the advances in tech, why is our hardware still struggling to keep up with the demands of the apps we use?
"nobody does that" times 20 years of RAM getting ever bigger, ever cheaper; and you get where we are today.
Once upon a time, Emacs was considered a hog: "Eight Megabytes And Constantly Swapping". We had to invent whole new layers of indirection and waste to come up with the current VS Code-type efforts.
"the hunger of apps for RAM" will never be satisfied, just like there will never be no books for the printing presses to print: We'll find some use for keeping the machines thrashing no matter how silly.
There is no free lunch; everything has a price. If you want fast, very efficient software, you will have to optimize it for the hardware, which will cost development time. It also may or may not be maintainable.
Or you can use an all-in-one solution that will handle everything in all (well, most) situations. It will be relatively fast to develop with, but the result will be far from efficient. It will be "good enough" for current hardware.
As long as hardware is cheaper than developer time, this trend most probably will not stop.
(On a personal note, this is my favorite thing about embedded software: developers are forced to use hardware efficiently, which can result in surprisingly fast code - at least surprising when compared to regular bloatware)
In the simple case a value like "foo" requires no copying during parsing - it can be returned as a slice of the input with zero copies. But escaped values like "foo\"bar\"" cause issues: the actual string you want is foo"bar", without the escape slashes. OK, so for this you need to copy the unescaped string into a scratch buffer somewhere. How big is this buffer? Is the JSON parsing API you're using flexible enough to let you re-use buffers between calls to avoid extra allocations?
Cool. Now how do you express “this parsed struct field may reference the input bytes or it may reference some scratch space” in a way that’s memory safe? Does your chosen language even support this? What if your internal string representation is platform specific?
Copying uses potentially double the memory but whatever, we are hampered by calling conventions and legacy code and other libraries and opaque unknown callers in different languages and different operating systems and different CPUs. And on top of this mountain of shit you stand and yell "I just want to parse some JSON!" Screw it. Just always re-allocate and make copies.
Now imagine this, times a thousand, at every layer of the stack, in every process and every library.
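For what it's worth, Rust's Cow type is one way to express the "borrowed or owned" field problem above. A minimal sketch (not tied to any particular JSON library, with a deliberately naive unescape):

    use std::borrow::Cow;

    // Hypothetical, deliberately naive unescape: borrow the input when there is
    // nothing to do, allocate only when an escape forces a rewrite.
    fn unescape(raw: &str) -> Cow<'_, str> {
        if !raw.contains('\\') {
            // Zero-copy path: the value is just a slice of the original input.
            return Cow::Borrowed(raw);
        }
        // Copy path: build the unescaped string in a fresh buffer.
        let mut out = String::with_capacity(raw.len());
        let mut chars = raw.chars();
        while let Some(c) = chars.next() {
            if c == '\\' {
                // Drop the backslash and keep whatever follows it.
                if let Some(next) = chars.next() {
                    out.push(next);
                }
            } else {
                out.push(c);
            }
        }
        Cow::Owned(out)
    }

    // A parsed field can hold either form; the caller doesn't care which.
    struct Field<'a> {
        value: Cow<'a, str>,
    }

    fn main() {
        let plain = Field { value: unescape("foo") };              // borrows the input
        let escaped = Field { value: unescape(r#"foo\"bar\""#) };  // owns a copy
        assert_eq!(plain.value, "foo");
        assert_eq!(escaped.value, "foo\"bar\"");
    }

The happy path hands back a slice of the original input; only values containing escapes pay for an allocation, which is essentially what borrow-aware deserializers aim for.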
I use fvwm and it is far from fancy, but it gets the job done and is quite efficient.
Proprietary software companies discovered early on that eye candy sells, and many people in Linux land follow what Apple and Windows developers do because most people want a similarly fancy look.
In many cases PCs and laptops are more of an entertainment device than a data-processing device, which means browsers are getting lots of functionality for that use case, and that is not cheap.
I had hoped the move of the masses to cell phones and tablets would stop this trend, but it seems only to have accelerated it.
Now a "rant"
On the Linux side, Wayland is a big concern for me because I may be forced into using software that has a lot of functionality I do not need, stealing RAM from processing/analyzing large sets of data that could use gigs of RAM.
Right now my distro of choice still easily allows me to avoid that bloat, but the Wayland move may force it to adopt something Red Hat created that, to me, is way too complex and overkill.
Luckily OpenBSD and NetBSD (I'm on a BSD now) are trying to avoid that mess, but Wayland may force changes I believe neither of these systems wants, adding the bloat they have so far been able to avoid. Already dbus is needed for Firefox/Chrome... what else will they be forced to port?
Certainly other apps could do better. But I've made peace with the fact that browsers and IDEs, for instance, should use as much as I have available because the whole point is for them to be fast and to be fast they need to cache as much data as possible.
Plus we're just victims of our own technological success. RAM is cheap. It's the same with all the old classic cars from the 60s: gas was cheap, engine tech was advancing, and environmental regulations were nonexistent, so you got giant 7L V8 engines getting like 10 mpg even on premium, leaded gasoline. It wasn't until pollution regulation, gas taxes to pay for infrastructure, and now climate change (in some people's minds) that efficiency became a priority.
Likewise, with SSDs being so fast these days, the strategy of just bringing everything you need "into RAM" and then letting the paging system handle the loading/unloading as you actually need it is a reasonable strategy.
To your original point, honestly, it kind of seems like the "hunger for RAM" has been satisfied! Mainstream computers have been coming with 8/16 GB of RAM for a decade now and there doesn't seem to be much upward pressure on that. More RAM can be useful for special use cases like intensive graphics/media tasks, running VMs, etc., but it doesn't really make a difference day-to-day, where the RAM is just acting as a layer of caching and no normal application has a big enough working set to cause an issue.
I guess our desires to have plenty of RAM available are secondary to our desires to have 100 tabs open at once, a quarter of which have a ton of sophisticated JS behavior, and never lose application state in any of them. Oh well.
You could always look at this the other way: I did pay for that 32GB of RAM, so why shouldn't it be actively used for things I find at least sort of useful, instead of sitting forever unused in support of some abstract notion that being frugal with RAM is good, or reserved for some other possible task that does need lots of RAM?
The value added by using less RAM in a single application that you develop is close to zero, so people don't do it; if all the other apps on that computer use hundreds of MB to several GB of RAM, your reduction from 100 MB to 5 MB is insignificant for the overall system. When everyone thinks the same, you just buy another 16 GB of RAM and everyone's messy app will fit. Developers know RAM is cheap, but don't calculate the result of cheap multiplied by millions of machines (not cheap at all).
The only two times developers think about RAM consumption are when they design an application and set a target, and then when they go to testing, exceed the target, and have to reduce usage enough to pass. If the design review lets them set a relatively high target, they don't have to care at all.
I'm on a 4 GB system, so RAM fills up quickly. Since I'm also still using a HDD, it's pretty important for me to prevent RAM from filling, since swapping makes the whole desktop environment lag.
When I see the RAM indicator column get close to 75%, that's my warning to do things like close applications I currently don't need or press the above-mentioned button, since Firefox is open pretty much the whole time.
There are programs out there that are RAM-efficient. Every single one was written by someone that DOES care and ISN'T lazy.
The other big thing is I never have more than 4-5 tabs open--anything beyond that, I just bookmark the sites and come back later. Having hundreds of tabs open is just silly, and a true anti-pattern for research and thinking.
When I joined virtualization startup Bromium shortly after it was founded in 2011 (the product is now sold by HP as HP Sure Click), we were planning to put 100s of VMs on a single laptop, for security purposes, and were counting on, and hoping(*) for, the amount of RAM available to us on customer devices increasing exponentially over time, as had been the norm until then.
In 2011, a typical low-end laptop came with 4GiB of RAM, and a high-end one with 8GiB. I was at the company for five years, and those specs stayed exactly the same. Now in 2023, almost 12 years later, the minimum spec for a laptop has grown to 8GiB of RAM, whereas Moore's law (density doubling roughly every two years) would have predicted somewhere around 256GiB (4 GiB x 2^(12/2) = 256 GiB).
So where is the feast we were promised?
*) We actually made that happen, using lots of cool software tricks, but that is a story for another day.
Same reason nobody has empty storage rooms in their homes, no matter the size of the house.
Things expand to fill up their surroundings. It’s like a law of nature.
Of course files might be compressed on disk and ideally not compressed in memory, and of course I might want to connect to internet services and download data to store in memory that never goes to disk.
So I'll still want more RAM at some point even past the hypothetical world where I have as many terabytes of RAM as I do of storage.
For the last 10 years or more, whenever someone is building a computer and asks for my advice, I look their build over and say "I would double the RAM." If they follow the advice and ask for any more advice on the build, I look it over and say "I would double the RAM."
It's because it is so much faster than disk.
I've never seen the RAM usage exceed 25%, even in the most demanding uses such as VR sim racing or developing full-stack code with a bunch of IDEs and VMs running.
I believe 32GB will be the next standard, as 16GB is today. I don't believe the jump to 64GB will happen anytime in the next 10 years; 32GB will be enough to hold us over through the next computing wave.
There, that's the answer.
OK, before getting downvotes, let me explain. The current programming "best practice" is to write programs with manageable, easy-to-understand code, not necessarily code that uses as little memory as possible. That is not a bad approach per se: you spend resources where needed and focus on optimization when it is really the problem. But with today's RAM, especially in development environments, that's rarely an issue. If you have 50% of something unused, you try to use it (similar to space: no matter how much space you have, you will try to fill it just so it doesn't go to waste). The main issue, in my opinion, is that developers often forget that their program is not the only one on a user's machine.
Premature optimization is evil. No optimization at all is even 'eviler'.
Weird,
8/16 GB of RAM has been a common configuration for at least the last 10? 15? years.
RAM advances very, very slowly compared to, e.g., disks, GPUs and CPUs.
There was Optane memory, which offered way higher capacities, but it's been cancelled as far as I've heard.
Personally, the only time I'm constrained by memory is when running a Windows VM in the background.
But if I didn't have 20 apps running in the background (web browsers/games/IDEs/chats/etc) then it wouldn't be a problem, I think.
Our computers are doing more and more tasks, and we want answers faster and without using internet bandwidth, so there's a lot of caching and keeping stuff in memory.
One reason is software bloat. Apps and frameworks keep getting larger, so they need more memory. But that's only a small portion of the explanation, and when talking about 16GB+ of RAM, probably almost irrelevant.
The other reason is that RAM is there to be used, particularly for caching. The problem here, however, is that every app is its own island in how it does caching, so multiple apps consuming RAM for caching will end up doing the wrong thing. See https://jmmv.dev/2021/08/using-all-memory-as-a-cache.html for a detailed description of this issue.
2) writing streaming code is harder
You could imagine someone doing things like creating a global variable with a queue in it and then putting log lines into that queue, but never taking items out. Because each log line is referenced by that data structure, and the data structure is referenced from the global scope, that memory will never be released.
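A minimal sketch of that pattern in Rust (the names here are made up): a process-global queue that only ever grows, because everything pushed into it stays reachable for the life of the program:

    use std::sync::{Mutex, OnceLock};

    // Hypothetical global log buffer, reachable for the whole program lifetime.
    static LOG_LINES: OnceLock<Mutex<Vec<String>>> = OnceLock::new();

    fn log(line: &str) {
        // Lines are pushed but never drained, so this Vec (and the memory
        // behind it) can only grow until the process exits.
        LOG_LINES
            .get_or_init(|| Mutex::new(Vec::new()))
            .lock()
            .unwrap()
            .push(line.to_string());
    }

    fn main() {
        for i in 0..1_000_000 {
            log(&format!("request {i} handled"));
        }
        // The "leak": all million strings are still live at this point.
        let held = LOG_LINES.get().unwrap().lock().unwrap().len();
        println!("{held} lines still retained");
    }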
In big tech co infrastructure, almost all server software will have memory limits set on the process. The process will either try to allocate more memory and fail or it will be restarted by a process that is watching it.
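As a rough illustration (assuming a Unix system and the libc crate; in practice the limit usually comes from outside the process, e.g. cgroups or a supervisor), a process can cap its own address space so runaway allocations fail instead of eating the whole machine:

    // Sketch: cap this process's address space so allocations past the limit
    // fail instead of growing without bound.
    fn cap_address_space(bytes: u64) {
        let limit = libc::rlimit {
            rlim_cur: bytes, // soft limit: allocations beyond this start failing
            rlim_max: bytes, // hard limit: cannot be raised again without privileges
        };
        let rc = unsafe { libc::setrlimit(libc::RLIMIT_AS, &limit) };
        assert_eq!(rc, 0, "setrlimit failed");
    }

    fn main() {
        cap_address_space(512 * 1024 * 1024); // e.g. 512 MiB for this process
        // From here on, an allocation that would push the process past the cap
        // fails, and Rust's default allocator aborts the process, which is
        // roughly the "fail or get restarted by a watcher" behaviour described above.
    }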
The other thing to consider is caching. When there is data on the hard drive that your computer needs, it will ask for it. If you ask for the first block of a file, your computer predicts you will probably ask for the rest of the file and optimistically starts loading it into memory before it is asked for. As long as your computer isn't busy with something higher priority, it will likely try to do things now that you might want to do later in order to appear faster.
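The kernel does that readahead on its own, but applications can ask for it explicitly too. A minimal sketch, assuming Linux/POSIX and the libc crate, of hinting that a whole file will be wanted soon:

    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    // Tell the kernel we expect to read this whole file soon, so it can start
    // pulling it into the page cache before we actually ask for the bytes.
    fn hint_readahead(file: &File) {
        unsafe {
            // offset = 0, len = 0 means "the whole file".
            libc::posix_fadvise(file.as_raw_fd(), 0, 0, libc::POSIX_FADV_WILLNEED);
        }
    }

    fn main() -> std::io::Result<()> {
        let file = File::open("/var/log/syslog")?; // hypothetical large file
        hint_readahead(&file);
        // ... do other work; by the time we read, much of the file may already
        // be sitting in RAM.
        Ok(())
    }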
Web browsers have been known to also optimistically pre-fetch files, although I'm not sure what the current state of optimistic web requests is: https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefe...
Your browser might keep all kinds of extra data so it can serve your request quickly (from memory) rather than slowly (from some server halfway across the world).
Another thing to consider is quality of data. In the past it would have taken forever to download a simple picture, so images were highly compressed and blocky/pixelated. Now we can stream things in 4K. As RAM increases, so too does the fidelity of our media.
So to summarize, without doing any research at all, I expect the answers are:
Memory leaks (programs bloating over time)
Caching mechanisms (exchanging space used for speed)
Higher quality media (exchanging space used for higher quality)
Imagine getting an OS message: "Sorry, even though your computer has 12 more GB of RAM available, this program has used its max. Allow more?" People would ask for this message to be disabled and for "Yes, allow more" to be the default.
Maybe that's even a bit too pessimistic. Cloud computing already gives companies an incentive to focus on efficiency for cost savings, and there is a growing trend of writing simpler, more performant software to replace existing tools, though not specifically in the general-purpose computing space.
“Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.”
There's a comparable race between hardware engineers striving to build faster and more capacious systems and programmers blithely consuming all the CPU and memory they can find. So far, the programmers are winning that one.
Firefox got to a point where it was consuming 24 GIGS OF RAM, yes, 24, on my 32GB machine. And you'd say, oh, but that's still okay, your PC should function properly. Nope: once it starts leaking, FF starts hogging your PC; even at like 8GB it's already hogging the machine.
I don't really know what's going on, but FF mods on Reddit and such are in denial and gaslight you about it. It's been this way on several operating systems (Windows 10 and now 11) and with or without add-ons (not that I use any special add-ons, really; just basically uBlock, although now I've started using a clickbait remover, SponsorBlock and a paywall bypass).
For the same number of tabs open, Edge and Brave, for example, use like 3GB or less of RAM, even after months.
However - that's still a whole hell of a lot of RAM! I remember the 640KB days.
The interesting part is, there was a big step up from DOS and text mode to Windows and everything being GUI, and that made some major changes in software and how people use computers. But -- there haven't really been many changes since then.
Open a word processor and type a letter or something now. Same letter you could've typed in 1995 or 1998. It'll take 100x the RAM (or more), for the exact same result. Same with spreadsheets. And the vast majority of other stuff any normal person does on a computer.
A sidenote is that there are a few things like streaming video, which weren't really doable back then. But most everything else hasn't really changed much to the user. Click to make this text bold, or align right - that's the same as it was in 1995.
What has changed is that now all software must have at least 75 layers of indirection and more dependencies than you could shake a tree at. Mostly just to do the exact same things that equivalent software did a whole quarter century ago. Without all that nonsense.
It's all the frameworks, patterns, and architecture astronauts. We developers are continually fed the idea that making things as abstract, generic, and be-all-things-to-all-people as possible is the proper way to develop, and that reusing code (no matter how loosely it matches what you need to do) and gluing it together is the way to build things. Instead of just writing code that does exactly what you need and nothing more.
Nowhere is this bloat more obvious than the web. A typical web page today takes around 1000x as many resources as an equivalent page that communicated the exact same information would've 10 years ago.
It's still mostly just text, some images, and forms. Same as it was then. But nowadays it's just not done properly unless you add at least two frameworks with several megabytes of javascript dependencies. And also a lot of tracking scripts and popups and various other overrides of the default behavior.
The real problem though, is not the RAM usage, or bandwidth. We've got the hardware for that now. The real problem is how hard it is to understand and debug all that extra cruft.
We're just wrapping layers upon layers upon layers, none of which are comprehensible or even correct (each layer patching over the previous or adjusting the generic results to fit the specific needs), instead of just doing the specific thing right to start with.
Our hardware is struggling because, as a whole, we developers are pursuing the stupidest route possible: the route that is (supposedly) the most efficient for development, despite being the least efficient in actual use.
However, that tower of abstractions and layers costs us a lot. Not just in RAM usage, bandwidth, and disk space.
I'd guess we're to the point that the average developer spends significantly less than 1/3rd of their time actually developing and at least 2/3rds of their time dealing with all the cruft. So it's not even efficient on that front.
I don't know what it will take for everyone to understand just how bad the current trajectory is. Or what percentage of developers would actually have any clue how to build things efficiently. We're at least two generations into this approach.
I think it’s more user behavior than application implementation.
Ymmv. Good luck.