HACKER Q&A
📣 coolvision

How do you think software development will look in 20 years?


Will programming still look like sitting in front of VS Code (with a Chrome tab open to a Stack Overflow explanation of that pesky compiler error)? I hope not. I like this quote from "Coders at Work":

"So I think one direction Erlang might take, or I would like it to take, is this component direction. I haven’t done it yet, but I’d like to make some graphic front ends that make components and I’d like to make software by just connecting them together. Dataflow programming is very declarative. There’s no notion of sequential state. There’s no program counter flipping through this thing. It just is. It’s a declarative model and it’s very easy to understand. And I miss that in most programming languages."

Do you have any interesting books / articles that discuss this topic?


  👤 david927 Accepted Answer ✓
This is one of my favorite topics! I'll take it a step further and say: God help us if we're still looking at text in 20 years. (Note: I'm not a fan of, nor advocating, graphical models.)

The component model has been around since object orientation achieved popularity in the 1980s. There's a fundamental conflict: when you encapsulate complexity, you hide specifics, and when you hide specifics, you reduce flexibility. The easier a component is to use, the less likely it is to match what you want to do. I'm not saying it's an insurmountable problem, just that it has been a barrier for the last 40 years.
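A toy illustration of that tension (my own contrived example, not anyone's real API):

    # The "easy" component hides its specifics...
    def slugify(title):
        return title.lower().replace(" ", "-")

    # ...which is great until you need a different separator or want to
    # keep the case. The hidden choices now fight you, so you expose
    # them, trading away the simplicity that made the component easy:
    def slugify_flexible(title, sep="-", lowercase=True):
        s = title.lower() if lowercase else title
        return s.replace(" ", sep)

    print(slugify("Coders At Work"))                    # coders-at-work
    print(slugify_flexible("Coders At Work", sep="_"))  # coders_at_work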

The quote is right when it comes to Declarative Programming. That is where we'll end up. In 20 years, programming won't be less complex, but it will be 1000 times clearer and more manageable. And yes, nary a line of text code to be seen outside of mathematical expressions.


👤 dave_sid
I think we’ll be on Java 20 at least and IntelliJ will have some nice new shortcuts.

React will get even more over-engineered, until no developer left on the planet understands any existing code. The web will then be in a permanently frozen state from 2040 onwards, with no code changes possible for fear of introducing a breaking change.

Rust will first be used in a production app in 2028 and people will think it’s okay.

People will continually switch back and forth between functional programming and object oriented programming as their favourite paradigm for years to come. Design patterns will become popular again, but then become unpopular. Then popular.

DHH will claim that programming is dead, that TDD is alive again, that computers are dead, and that Basecamp is having a sleep but not quite dead. Developers will be divided on this.

Someone will use the abstract factory design pattern in 2026 but then refactor it back the next day after some negative feedback on the PR.


👤 Sevii
Programming will look extremely similar. Compare programming in 2000 with programming today: what changed? We used C in 2000, and now we use Go.

The problem with visual programming is that it doesn't scale past the size of your display. We have been capable of dataflow programming with a GUI for over a decade; it just isn't a great way to build complex systems.

I expect there will be more code, and more of it will be written in languages like Go. The biggest constraint on programming is comprehending big codebases, and languages like Go help with that.

We need an actual advance in programming to avoid a future that looks like today except with 10x as much code to understand before you can do anything.


👤 p1esk
In 2040, computer programming as we do it today will be completely automated. There will be two main groups of software developers: those who develop the machine learning models that generate computer programs, and those who translate business/product requirements into a design specification and then translate the specification into a plain-English description, which is fed into a machine learning model.

Entering the plain-English product/feature description into an ML model will be an interactive process: the model will ask for clarifications whenever something is not clear enough to implement. At that point, writing Python code will be just as arcane and tedious as writing assembly code is today. ML models will generate better Python (or Rust, or whatever) code than most human programmers, just as today a C compiler generates better assembly code than most human programmers. Eventually ML models will be trusted enough to produce machine code directly, making human-readable programming languages obsolete.
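A crude sketch of what that interactive loop might look like (every name here is invented; the "model" is a stub standing in for a future ML system):

    class StubModel:
        # Stand-in for a hypothetical code-generating ML model.
        def __init__(self):
            self.asked = False

        def attempt(self, spec):
            # Pretend the first pass is ambiguous, then emit code.
            if not self.asked:
                self.asked = True
                return ("ask", "Should duplicates be removed? ")
            return ("code", "def dedupe(xs): return list(dict.fromkeys(xs))")

    def generate_program(spec, model):
        while True:
            kind, payload = model.attempt(spec)
            if kind == "ask":
                # The model asks; a human answers; the spec grows.
                spec += "\nClarification: " + input(payload)
            else:
                return payload  # today Python, eventually machine code

    print(generate_program("Keep unique items in order.", StubModel()))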

I predict we will see the first systems like this within 5 years, but it will take much longer (up to 20 years) for them to become reliable enough to be adopted on a mass scale. This will be similar to the adoption of self-driving cars.

P.S. This view assumes no AGI is developed by 2040.


👤 marshmellman
JavaScript will be the modern COBOL, with lots of critical legacy software needing support. Meanwhile, contemporary code won’t be so different from, say, Rust.

Languages will be cross-compatible (perhaps over something like WASM), and the number of software dependencies will grow exponentially.
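You can get a taste of the cross-language part today. A minimal sketch using the wasmtime Python bindings (pip install wasmtime; the API may have shifted since I last looked), with a hand-written module standing in for one compiled from Rust or C:

    from wasmtime import Store, Module, Instance

    store = Store()
    # A tiny hand-written module; in practice this would be the
    # compiled output of Rust, C, Go, etc.
    module = Module(store.engine, """
      (module
        (func (export "add") (param i32 i32) (result i32)
          local.get 0
          local.get 1
          i32.add))
    """)
    instance = Instance(store, module, [])
    add = instance.exports(store)["add"]
    print(add(store, 2, 3))  # 5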

There will be vastly more software than can be effectively maintained. Dependency hell will be given a new meaning. As a result, the industry will suffer from severe security incidents that shake the public’s faith in software.

Meanwhile, global state actors will pollute the hardware supply chain with hardware that contains critical and hard-to-detect vulnerabilities. This will shake the public’s faith in hardware.

In response to these trends, there will be a push for programmer-hackers who can solve business problems using minimal dependencies and hardware. Low-level code will be king in this new era, and to remain competitive, developers will need deep, integrated knowledge of hardware design and electrical engineering. Salaries will remain very high :)


👤 yummypaint
Computing will be more heterogeneous, involving a mix of datacenters, SoCs, CPUs beyond x86, and FPGAs. There will be even more emphasis on decoupling code from the hardware it runs on. A shift toward functional programming could be a good fit for this. There will also be more virtualization in general, and hopefully better interoperability. As the cost of developing custom ASICs continues to drop, there will be more malicious hardware to contend with, rather than just malicious software running on commodity hardware.

👤 alexfromapex
Function-as-a-service in Rust, with the code editor itself running as a function on a server that saves to distributed network storage. Almost everything will be distributed because of privacy, so most devs will have a personal cloud server at home to run their code on.

👤 obayesshelton
Still won't have enough people

Still have security issues

Still have big beasts dominating

I won't be in it any more haha


👤 quickthrower2
Everyone will be gluing containers together: you'll have containers running in JS in your browser, so if you want a feature you'll use Docker Web. This will run on WebAssembly, which will run everything on the web and off it: mobile apps and desktop apps. But there will be competing WebAssembly standards, so you will need to code for all of them.

👤 butnotforgotten
Hopefully by then, computers can read off our wants directly from our heads.

👤 tpoacher
If experience tells me anything, it will be identical but with names changed.

👤 aristofun
I hope devops will finally be automated and will stop looking like a mess

👤 giantg2
It will look great to me because I should hopefully be out of it by then.

👤 logicslave
It will be all config files

👤 subjectsigma
My biggest hope is that we will have better tools for introspection and generation of code. Others in the comments have already talked about machine learning, and yeah, I get it, machine learning is powerful and it would be cool if computers could write code for us. But I don't think that's going to happen. Rather, I hope it will look something like this:

* You start writing a piece of code. Halfway through the first function, your IDE realizes that a library already exists with a similar interface, and asks if you want to download it.

* You agree to download it, and after checking that it meets your needs, start integrating it into your program. You do this by building a high-level model of what the code is supposed to do and then plugging it into the existing boilerplate. I.e., the concept of 'files' or 'sockets' or 'memory' isn't necessary; you just define the algorithms on high-level, mathematical data structures, and all the messy details are figured out for you afterwards. If the computer isn't 100% sure it knows what you're talking about, it asks.

* After some tweaking, you commit the code and try to push to your VCS. The code is automatically linted and refactored for you to match the style guide and appropriate design patterns, but the system refuses to let you push, because after several rounds of completely automated testing (static analysis, fuzzing, etc.) it found a small bug. You download the results in the form of an image that lets you step through the entire environment at any level of abstraction, instruction by instruction if you want (but nobody ever does that), with all the state and data at any given moment during testing available to you. You can scan forward and backward through winding stack traces instantaneously, as easily as dragging a slider. You can 'fork' the image by stopping at a certain point and modifying the internal data structures; this is, again, translatable to any level of abstraction. This makes it trivial to identify the problem and fix your code.

* You finally commit/push and go grab a well-deserved beer. Your code gets deployed somewhere - you don't know and you don't really care, the system picks an appropriate set of cloud servers and handles the hot-swapping for you. A week or so later, you get an email saying that your new feature didn't survive the mandatory A/B testing - the system automatically but cautiously presented it to a small set of users, who discovered yet another bug. After the bug was tripped by an intelligent constraint solver, the change you made was instantly rolled back and the alert sent out.

* Again you download a system image and rewind to see what happened. Inside the guts of your program, you find a bunch of user data, with private details intelligently obfuscated. Finding the problem, you submit your code again, and now everything is fine. After a few weeks, the code is rolled out to everyone and your version becomes the known-good version, merged alongside your coworkers' changes. Your last step is to write a description, in the form of constraints or some other high-level DSL, of what you found, and to submit it to the analysis tools in your pipeline so they can discover similar problems in the future (a rough present-day analogue is sketched below).
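The closest thing we have today to that last step is probably property-based testing. A minimal sketch with Hypothesis (pip install hypothesis); the dedupe function and its properties are just an example I made up:

    from hypothesis import given, strategies as st

    def dedupe(xs):
        # Keep the first occurrence of each element, preserving order.
        return list(dict.fromkeys(xs))

    # The "constraints" describing correct behaviour; the tool then
    # hunts for counterexamples automatically, a bit like the
    # pipeline imagined above.
    @given(st.lists(st.integers()))
    def test_dedupe(xs):
        out = dedupe(xs)
        assert len(out) == len(set(out))          # no duplicates survive
        assert out == [x for i, x in enumerate(xs)
                       if x not in xs[:i]]        # first-seen order kept

    test_dedupe()  # silently runs a batch of generated cases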

TL;DR Writing code should be around 95% reading or thinking, and 5% writing. All boilerplate is handled for you, bit-fiddling or manual performance optimizations are rare occurrences. State should be much easier to manipulate and the computer records all intermediate states during execution. IDEs should be armed to the teeth with analysis tools that catch problems.

Will this happen? Probably not. But it would be nice.