Languages
* C should have had standard types comparable to Rust's Vec and String. Pascal did. Null-terminated strings were a terrible idea, and are still a curse. Plus a sane set of string functions, not "strcat" and its posse. Didn't have to be part of a general object system. Should have happened early.
* Slices. Easy to implement in a compiler, and they eliminate most of the need for pointer arithmetic. Again, should have happened early.
Those two alone would have eliminated tens of thousands of pointer bugs and decades of security holes.
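A rough sketch of what those two could have looked like in C (the type names and helpers here are hypothetical, not part of any real standard library):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical length-prefixed string: length and capacity travel
   with the data, so no NUL terminator and no strlen() walks. */
typedef struct {
    char  *data;
    size_t len;
    size_t cap;
} str_t;

/* Hypothetical slice: a pointer plus a length, replacing most
   hand-rolled pointer arithmetic at call sites. */
typedef struct {
    const char *ptr;
    size_t      len;
} slice_t;

/* Append b onto a, growing as needed -- the job strcat does unsafely. */
static int str_append(str_t *a, slice_t b) {
    if (a->len + b.len > a->cap) {
        size_t cap = (a->len + b.len) * 2;
        char  *p   = realloc(a->data, cap);
        if (!p) return -1;
        a->data = p;
        a->cap  = cap;
    }
    memcpy(a->data + a->len, b.ptr, b.len);
    a->len += b.len;
    return 0;
}

/* Take a sub-view without copying; a compiler-supported slice
   would bounds-check start and n here. */
static slice_t str_slice(const str_t *s, size_t start, size_t n) {
    slice_t v = { s->data + start, n };
    return v;
}
```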
Operating systems
* UNIX should have had better interprocess communication. It took decades to get that in, and it's still not that good in Linux. QNX had that figured out in the 1980s. There were some distributed UNIX variants that had it. System V had something.
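For a taste of what message-oriented IPC looks like in C, here is a minimal sketch using POSIX message queues, which mainline Linux only gained in the 2.6 kernel era; this illustrates the idea, not QNX's native synchronous message-passing API:

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Minimal message-queue round trip: each mq_send/mq_receive moves one
   whole message, not a byte stream. Link with -lrt on Linux. */
int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 256 };
    mqd_t q = mq_open("/demo", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    mq_send(q, msg, strlen(msg), 0);                 /* one whole message */

    char buf[256];                                   /* >= mq_msgsize */
    ssize_t n = mq_receive(q, buf, sizeof buf, NULL);
    printf("got %zd bytes: %.*s\n", n, (int)n, buf);

    mq_close(q);
    mq_unlink("/demo");
    return 0;
}
```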
Networking
* Alongside TCP, there should have been a reliable message-oriented protocol. Send a message, of any length, it gets delivered reliably. Not a stream, so you don't have issues around "is there more coming?". There are RFCs for such protocols, but they never went anywhere.
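SCTP (RFC 4960) is one of those protocols: reliable delivery with message boundaries preserved, so each send arrives as one unit. A minimal send-side sketch using the ordinary sockets API, assuming kernel SCTP support and the lksctp-tools headers:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/sctp.h>   /* sctp_sendmsg(); from lksctp-tools */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* SOCK_SEQPACKET: reliable and message-oriented -- no "is there
       more coming?" framing problem on the receiving side. */
    int s = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
    if (s < 0) return 1;

    struct sockaddr_in peer = {0};
    peer.sin_family      = AF_INET;
    peer.sin_port        = htons(5000);
    peer.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    const char *msg = "one whole message, delivered as a unit";
    sctp_sendmsg(s, msg, strlen(msg), (struct sockaddr *)&peer, sizeof peer,
                 0, 0, 0, 0, 0);   /* ppid, flags, stream, ttl, context */
    close(s);
    return 0;
}
```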
Databases
* Barely existed. This resulted in many hokey workarounds.
With all of those, we could have had CRUD apps by the late 1980s. That covers a whole range of use cases. There were client/server systems, but each had its own system of client/server interconnection. With all of the above, it would have been much easier to write basic server applications.
When it comes to 'software engineering', such as it is, we have regressed. We are incredibly wasteful in the name of productivity. Some of that trade-off makes sense, but a lot doesn't, and the returns are diminishing. The microservices trend, which has spread like wildfire, is a good example. It makes sense in some cases, but the additional resource consumption is astronomical, both in compute and in human effort, the very thing it is supposed to save. And the culture has shifted so far that if you say a particular system is better served by a monolith, you are seen as an alien at best and an unskilled hack at worst.
Sure, your team can package a microservice in a container, using any language, expose an API, ship it to Kubernetes, and publish your API spec without having to talk to anyone. Now you have a whole other team managing your K8s and AWS environments, a whole group managing the different database systems that are isolated per service, yet another dealing with whatever messaging mechanism you probably added once direct API calls became an issue, and a full team of SecOps people deploying the latest and greatest tools from CNCF to fix the problems the architecture created. A lot of that would, in other times, have been solved with a function call. Worse yet, all the runtime environments are duplicated, so even the simplest service now demands multiple gigabytes to run. Add HA requirements and you have an enormous overhead, to the point that deploying multiple copies of the monolith would have been far simpler – and much cheaper.
In the 80s, we had Lisp Machines. Nothing has come close ever since. Maybe if we had stuck with S-expressions, we wouldn't have to reinvent data and markup formats every 5 years (XML, JSON, YAML, etc). Even without them, the Lisp condition system is amazing.
I wrote about it at https://shkspr.mobi/blog/2020/11/what-would-happen-if-comput...
Look at early games on the Sega Megadrive / Genesis compared to the ones released towards the end of its run.
As we learn more about computer science, we can push systems further than their creators envisaged.
Here is a pretty brochure for the LM-2 (a relabelled MIT CADR): http://www.bitsavers.org/pdf/symbolics/brochures/LM-2.pdf
But would you have paid 80k USD for one (http://www.bitsavers.org/pdf/symbolics/LM-2/LM-2_Price_List_...) -- and that is without monitor, disk, ...?
A classic example is https://www.vogons.org/viewtopic.php?t=89435, which shows what can be done “today” with old hardware, which was thought impossible for all the years before.
That said, the discussion on "better built-in languages" feels like it is aiming at something I don't understand. Older computers booting into a programming environment was something that worked really well for what it was aiming to do. You could argue that BASIC was a terrible language, but there were other options.
I think the current Raspberry Pi setup is quite a good example of what you're describing? Mathematica is preloaded with a free license and is near magical compared to the programs most of us are writing.
Given this, I do think we have a lot of glue and duct tape fastening our digital world together.
But some of the low-level stuff done today is probably just as good as what would have been done in the 80s, if not better, given easier access to information now and the higher bar of complexity just to participate in low-level development.
I imagine a dev from the 80s (like my professor from college) would be dismayed at the amount of abstraction some languages have.
Spring boot for Java? So much ‘magic’. He’d say “just write it in PHP” lol
If you accept data points from the 90s, the work that Kaze Emanuar is doing on Nintendo 64 engineering improvements is a good indication that it's possible: improved compilers, more modern algorithms, rethinking rasterization and how everything happens on-device.
There's a lot of "what if we had infinite time to optimize for this stack" for sure. His comparison of various approaches to fast inverse square root on N64 hardware is really impressive.
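For reference, the classic bit-trick version of fast inverse square root (the one made famous by Quake III) looks like this in C; it shows the general technique being benchmarked, not Kaze's N64-specific code:

```c
#include <stdint.h>
#include <string.h>

/* Classic fast inverse square root: reinterpret the float's bits, apply a
   magic-constant first guess, then refine with one Newton-Raphson step. */
static float fast_rsqrt(float x) {
    float    half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);       /* read the float's bit pattern */
    i = 0x5f3759df - (i >> 1);      /* magic constant: rough 1/sqrt(x) */
    float y;
    memcpy(&y, &i, sizeof y);
    return y * (1.5f - half * y * y);  /* one Newton-Raphson iteration */
}
```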
So when you see e.g. demos of SymbOS[2] running on an MSX2 (which, being from 1985, barely qualifies as "early 80s") with a 1MB RAM expansion, it's not really the sort of thing that people would have had at home in the early 80s.
1: The PC was offered in configurations as low as 16k; models with less than 64k were fairly quickly discontinued.
If you could run a small bank or insurance company on a beefy mainframe back in the 80s, then why can't I do the same today with faster hardware? Sure, there's more work for the computer to do, but it's also much, much faster and has almost infinitely more storage space.
How much hardware would you need to run something like HN, if you wrote the code with the same care and understanding of the underlying machine architecture as an 80s software developer? And HN is probably already a good example of how much you can do with very little software.
I did not get to program on 32 bit minis until maybe around 1988 or so. By then, minis were just starting to be replaced by PCs.
PCs kind of stopped companies from developing better minis. Where I was, by the late 80s, the high-end mini from the vendor I worked for was a borderline mainframe, maybe even a mainframe. But sales were falling at that point.
On the hardware side, the graphics were very low-res with each pixel being 3 scan lines high. Simply adding RAM (there was an upgrade available to go from 16K to 32K) to the base machine should have allowed higher resolution graphics by not re-using pixel data so much or at all (simple hardware changes). Higher horizontal resolution should have been possible but with more complex hardware changes.
I'm looking forward to the end of scaling. I'm hoping that leads to more efficient software at all levels over time.
The reason we have DOS is that CP/M could run on those systems too. A Unix-like OS wouldn't even come remotely close to running on early 80s microcomputers.
And much has been said about the problems of UNIX's design, but the reason it was designed the way it was is entirely down to the hardware limitations of its era too.
Unfortunately, once a standard has been set, it's very hard to change. It's why there are still DOS remnants in modern NT-based Windows, why Linux and macOS still have TTYs, and why C-based languages persist even though decades of CS have given us far better principles to build from.
Knowledge access is also something that has improved significantly. Programmers back then may not have had all the information that could have improved their work.
During the 1980s, Infocom developers wrote games in ZIL, which was compiled to the Z-machine.
In 1993 Graham Nelson released Inform, which compiled to the Z-machine. Quoting https://www.filfre.net/2019/11/new-tricks-for-an-old-z-machi... :
> This event is considered by most of us who have seriously thought about the history of text adventures in the post-Infocom era to mark the first stirrings of the Interactive Fiction Renaissance, and with them the beginning of an interactive-fiction community that remains as artistically vibrant as ever today. ...
> It wasn’t until some years after Inform had kick-started the Interactive Fiction Renaissance that enough ZIL code was recovered to give a reasonable basis for comparison. The answer, we now know, is that Inform resembles ZIL not at all in terms of syntax. Indeed, the two make for a fascinating case study in how different minds, working on the same problem and equipped with pretty much the same set of tools for doing so, can arrive at radically different solutions. ...
> Having been designed when the gospel of object-oriented programming was still in its infancy, ZIL, while remarkable for embracing object-oriented principles to the extent it does, utilizes them in a slightly sketchy way, via pointers to functions which have to be defined elsewhere in the code. ...
> The Inform standard library, by contrast, was full-featured, rigorous, and exacting by the time the language reached maturity — in many ways a more impressive achievement than the actual programming language which undergirded it. ...
> Because it was coded so much more efficiently than Infocom’s ad-hoc efforts, this standard library allowed an Inform game to pack notably more content into a given number of kilobytes.
[1] https://en.wikipedia.org/wiki/Action!_(programming_language)
Both are resource efficient but probably have fascinating differences in what they're like to use.
Leaving aside the evolution of language designs: we've gone through several "revolutions" since that time (object-oriented, etc.). Say all you like about the subtle power of LISP; the fact that there's so much Python code out there doing stuff, and that people can bend it to their own needs, says a lot for modern progress.
You might conceivably be able to run something like Go, without concurrency (thus eliminating the runtime), starting around the early 1990s on high-end computers, and that would have been a really, really nice system compared to most other things at the time. I pick Go because of its relative simplicity. A variant of D or something would probably also work well.
A practical more-functional language could probably also have been provided, but some careful work would be necessary. Sum types have some details that we can easily just throw power at nowadays but that would have taken a bit more thought back then to keep performance up; you might have had to trade down on a few details. Linked lists were not as relatively catastrophic back then as they are on modern systems, but they still involved an extra pointer on everything if nothing else; a language more strongly focused on arrays might be able to help.
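For what it's worth, the cheap compilation target for sum types existed even then: a tag plus an unboxed union, with no heap allocation or extra pointers. A minimal C sketch of what a compiler could lower them to; the hard part was the language-level support (exhaustive matching), not the runtime cost:

```c
#include <stdio.h>

/* A sum type lowered the cheap way: a tag plus an unboxed union.
   This layout was perfectly feasible on 1980s hardware. */
typedef struct {
    enum { SHAPE_CIRCLE, SHAPE_RECT } tag;
    union {
        struct { double radius; } circle;
        struct { double w, h; }   rect;
    } u;
} shape_t;

static double area(shape_t s) {
    switch (s.tag) {   /* a real compiler would check exhaustiveness */
    case SHAPE_CIRCLE: return 3.141592653589793 * s.u.circle.radius * s.u.circle.radius;
    case SHAPE_RECT:   return s.u.rect.w * s.u.rect.h;
    }
    return 0.0;
}

int main(void) {
    shape_t c = { .tag = SHAPE_CIRCLE, .u.circle = { 2.0 } };
    printf("area = %f\n", area(c));
    return 0;
}
```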
But it's important not to underestimate the resource poverty of the time. Rust is out of the question. People complain about how slow it is today as it is. The executables modern Rust produces may have run well (with appropriate bittedness changes and such) but the compiler is hopelessly complicated for the 1980s. Nothing even remotely resembling it would have worked. Most modern languages have moved on and are too complicated to fit on machines of that era in any recognizable state.
Modern dynamic scripting languages only began coming online in the late 1990s. Such examples as existed in that space before then were incredibly slow; people pine for the capabilities of the Lisp machines, but nobody is sitting here in 2024 pining for their performance characteristics.
I've kind of rewritten your question to be about the 1990s. I think that's because honestly, in the 1980s, no, there probably isn't a huge advantage we could give them. Pecking around the edges, sure. Avoiding some things that only hindsight could show were traps, sure. Commodore could have used some assistance with building a platform that could expand into the future better; for all the advantages the Amiga had over the IBM world, IBMs ultimately expanded into the computers we run today more smoothly than the Amiga could have. There are some tweaks to graphics hardware we could suggest, especially in light of how they were actually used versus how the designers thought they would be. But it's not like we'd be going back there with amazing genius techniques for running on their systems that they couldn't have conceived of.