>Also, I think today we’re kind of overburdened by choice. I mean, I just had Fortran. I don’t think we even had shell scripts. We just had batch files so you could run things, a compiler, and Fortran. And assembler possibly, if you really needed it. So there wasn’t this agony of choice. Being a young programmer today must be awful—you can choose 20 different programming languages, dozens of frameworks and operating systems, and you’re paralyzed by choice. There was no paralysis of choice then. You just start doing it because the decision as to which language and things is just made—there’s no thinking about what you should do, you just go and do it.
For context, this book is copyrighted 2009, so this interview is more than a decade old, and I'm sure many things have changed since then.
* No OS to worry about (the machine probably had a very basic BIOS to handle peripherals, but that was it)
* No permissions model
* A tiny API to interact with whatever graphics/sound facilities were available
* No need to worry about what resources simultaneously-running programs could be using (there weren't any)
* Actually, no need to worry about concurrency at all (you just couldn't do it)
* No need to worry about what language to use (either the one blinking at you when you turned on the machine, or assembly language for the processor)
* No need to worry about how to restructure computations to use a compute shader or SIMD
_You_ were the owner of every resource in the machine and it all danced to your tune. And, best of all, you could come to a complete understanding of how every part of those machines worked, in just a few weeks of practice. Who today knows the intricacies of their laptops to the same extent?
However, everything feels vastly more complicated. My friends and I would put together little toy websites with PHP or Rails in a span of weeks and everyone thought they were awesome. Now I see young people spending months to get the basics up and running in their React front ends just to be able to think independently of hand-holding tutorials for the most basic operations.
Even business software felt simpler. The scope was smaller and you didn’t have to set up complicated cloud services architectures to accomplish everything.
I won’t say the old ways were better, because the modern tools do have their place. However, it’s easy to look back with rose-tinted glasses on the vastly simpler business requirements and lower expectations that allowed us to get away with really simple things.
I enjoy working with teams on complex projects using modern tools and frameworks, but I admit I do have a lot of nostalgia for the days past when a single programmer could understand and handle entire systems by themselves because the scope and requirements were just so much simpler.
We have 1,000 more solutions to 1,000 more problems. We have extensive documentation on all the new things. Documentation is mostly focused on nouns, sometimes on verbs. The "what" and the "how" are easy to find.
What we don't have is clarity on how they fit together. The overwhelming majority of work done in software is fitting the pieces together in a way that works. The "why" and the "when" are really difficult to pin down.
The biggest overhead is trying to conceptualize the systems that make up the foundations of software: operating system libraries, compiler toolchains, shell environments, dependencies, etc.
Because there is so much mysticism involved, people who have spent years or decades treating operating systems, package managers, development environments, etc. as playgrounds to explore have an advantage that is difficult to articulate, let alone teach.
Anyone learning software today is presented with a lot of exciting opportunities to explore programming itself. There are handy web-based editors where you can write programs that do input and output all in the browser itself. No need to learn about shells or packages or git...
But those things we factored out of the learning experience are probably the most meaningful subjects to learn about if you want to actually create something. It's really tricky to find a path from vague concepts to working projects.
Today, tools are incredibly better: compilers, debuggers, profilers. I'll take something from JetBrains or Visual Studio any day over what I had available in the 1990s. There were some gems back then, but today, tools are uniformly good.
What has gotten difficult is the complexity of the systems we build. Say I'm building a simple web app in JS with a Go backend and I want users to have some kind of authentication. I have to deal with something like OAuth2, and therefore CORS, and auth flows; to iterate on it, I have some code open in GoLand, some code open in VS Code and my browser, and as a third component, I have something like Auth0 or Cognito in a third window. It's all nasty.
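To give a flavor of just the CORS piece: below is a minimal sketch in TypeScript using a hypothetical Express server (the commenter's backend is Go, so treat this purely as an illustration of the plumbing involved; the frontend origin is made up).

```ts
import express from "express";

const app = express();

// The SPA and the API live on different origins, so the browser enforces
// CORS. Get any of these headers wrong and requests silently fail.
app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", "https://app.example.com"); // hypothetical frontend origin
  res.setHeader("Access-Control-Allow-Headers", "Authorization, Content-Type");
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
  if (req.method === "OPTIONS") {
    res.sendStatus(204); // answer the preflight and stop here
    return;
  }
  next();
});

app.get("/api/me", (_req, res) => {
  // A real app would validate the bearer token issued by Auth0/Cognito here.
  res.json({ ok: true });
});

app.listen(3000);
```

And that's before any of the actual OAuth2 token exchange happens.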
If I'm writing a desktop application, I have to deal with code signing; I can't just give it to a friend to try. It's doubly annoying if it's for a cell phone. If I need to touch 3D hardware, I now have to deal with different APIs on different platforms.
It's all tedious, and it's an awful lot of work to get to a decent starting point. All these example apps in the wild are always missing a lot of the hard, tedious stuff.
Edit:
All that being said, today, I can spin up a scalable server in some VM's in the cloud and have something available to users in a week. In the 1990s, if there was a server component, I'd be looking for colo facilities, installing racks, having to set up software, provision network connections, and it would take me ten times as long to the first prototype. I'd have to write more from scratch myself. Much as some things today are more tedious, on net, I'm more productive, but part of that is more than 25 years of experience.
When I first started programming all of my tools had a simple workflow:
* Write a single text file
* Run a single command to build (cc thing.c)
* Run the resulting file as a standalone command
People learning to program are often new in general. They're figuring out their text editors. Figuring out how to run programs. Figuring out so many basic things seasoned developers take for granted.
I became quite fluent in C, writing many, many useful programs with just a single text file. By the time I had a need to learn about linking multiple files in large projects I was already fluent and comfortable with the language basics.
Contrast this with modern environments: I need to learn whole sets of tools for managing development environments (venv, bundle, cargo, etc etc etc). These development harnesses all change rapidly and I am constantly googling various sets of commands and starter configs to get things running. These are all things that a seasoned developer will be constantly dealing with on a complex project, but it seems like little effort has been put into creating basic defaults to simplify things for beginners.
Programming is programming, and the language doesn't make a ton of difference to me, though I do have strong preferences.
But that is the -easy- part. The hard part is everything else. Jenkins? GitLab runners? GitHub Actions? There are at least five that people regularly use, and all have their own way of doing things and their own syntax.
To Docker or not to Docker. Kubernetes? ECS? RPM? How about just lambda functions?
What about your config management? Ansible? Chef? Puppet? Salt?
It's the worst part of switching jobs, to me. Especially when you feel like you have to invest time learning a system that's falling (or has already fallen) out of favor.
I used to love the idea of owning the whole pipeline. Now I just want to sit in a hole writing solid code, and let someone else handle all the CI and systems parts.
Also, even if you understand and agree with Werner Vogels' mantra "everything fails all the time", it's incredibly challenging to make a truly robust distributed system. There's just so much happening so rapidly, low-probability problems become consistent failures as you scale, and the wrong recovery approach can have non-obvious second-order effects leading to bigger problems.
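A minimal sketch of one standard mitigation, retry with exponential backoff and full jitter (the function name and constants are illustrative): without the jitter, thousands of clients retrying in lockstep can turn a brief blip into exactly the kind of second-order failure described above.

```ts
// Retry an async operation with exponential backoff and full jitter.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // give up after the last attempt
      const cap = 100 * 2 ** attempt;        // exponential ceiling in ms
      const delay = Math.random() * cap;     // full jitter spreads out retries
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: wrap any flaky call, e.g. a fetch against a sometimes-failing service.
withRetry(() =>
  fetch("https://api.example.com/health").then((r) => {
    if (!r.ok) throw new Error(`status ${r.status}`);
    return r.json();
  })
).then(console.log, console.error);
```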
1. Concurrency. Multiple cores are a completely normal thing now, so having to think about how different threads may interact went from a theoretical concern to a very practical one.
2. Dependencies. Back then you could just turn on the computer and start coding. Today many things have large amounts of dependencies that need installing, compiling or setting up. Weird problems can happen. I have an issue where VS Code just refuses to autocomplete in the test section of my project. Why? I have no clue, and VS Code is a giant of a thing. It's quite easy to spend days or even weeks trying to set things up and work out issues with things that aren't even the thing you were trying to write.
3. Teamwork. Modern computers allow for large programs, which require teams to develop. A lot of the work in building modern successful software is in organization, record keeping, documentation and working with other people.
4. Security. Pretty much everything interacts with outside untrusted inputs, and so it's far more important than before to treat every input correctly. Anything from image loaders to parsers to APIs may be exploited.
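Picking up on point 4: a minimal sketch of what "treating every input correctly" means for one untrusted JSON payload. The field names and limits are hypothetical.

```ts
// Validate an untrusted JSON body before anything downstream touches it.
interface UploadRequest {
  filename: string;
  size: number;
}

function parseUploadRequest(raw: string): UploadRequest {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error("body is not valid JSON");
  }
  if (typeof data !== "object" || data === null) {
    throw new Error("expected a JSON object");
  }
  const { filename, size } = data as Record<string, unknown>;
  // Reject path traversal and separators outright rather than sanitizing.
  if (typeof filename !== "string" || filename.includes("..") || /[/\\]/.test(filename)) {
    throw new Error("unsafe filename");
  }
  // Bound numeric fields: NaN, Infinity, and negative sizes are attacks or bugs.
  if (typeof size !== "number" || !Number.isFinite(size) || size < 0 || size > 10_000_000) {
    throw new Error("size out of range");
  }
  return { filename, size };
}
```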
Distributing native apps has gotten harder in some ways, with code signing required in order to share binaries without scary pop-ups or the OS blocking them outright.
In the past, when we had no Internet, or when the resources for studying were scarce, we would intently stare at the screen, trying to understand what the program does. You'd have to disassemble and debug the program and even try to explain it to your cat, dog or a rubber duck. These days programmers are often not even familiar with the term "rubber duck debugging". We would study and read actual RFCs; programming wasn't a "fish for that stuff on Stack Overflow or Google" kind of thing.
We used to let our brains wander, thinking about past and present. That allowed us to generate awesome, great ideas.
Today we have tons of tools and techniques that seem to be vastly better than the tools we had in the sixties, eighties, and nineties. But if you compare that with how much computational power, memory and storage capacity have expanded since then, and compare with how we utilize that power, somehow it feels like our applications are getting worse, not better.
It's nearly impossible today to perform any programming without the Internet. And that's why programming today is much more difficult than in the past. There's too much information. Too many ways to succeed, and even more to fail.
For the most part, those programs written with those tools still work.
Today everything is updated on the fly and subject to random breakage. Git is better; everything else has gone downhill.
Fast forward to today and the most painful parts of my day are dealing with shitty tools. git’s incomprehensible interface. Jenkins CI/build system that’s barely more than a log of every damn line the compiler outputs, but split up in a way that somehow makes it even harder to figure out what went wrong when something does go wrong. JFrog’s Artifactory that looks like it’s having seizures when you search for the thing that Jenkins built. And then when you find it, it lists the path, but you can’t click on the path to download it. There’s a separate button in a different place for that. These tools feel like they did a user study and whenever something was easy for the user, they threw that out and figured out a way to make it harder. Interacting with this shit is infuriating, especially when you’re on a deadline to get something out the door. I feel like I’m taking crazy pills when I bring up these problems and other people just shrug. As if that’s the way it’s always been and it can’t be changed.
2. And the pace of development: not because one cannot learn fast enough, but because what one has learned will quickly be made obsolete by "developer fashion" (favourite front-end JS frameworks, anyone?). This happens because communities are important, e.g. for answering dev questions and fixing bugs, so one cannot adopt a "dead" framework. Each new language suffers from lacking essential libraries and components (and, for managers, talent for hire), so it is risky to adopt one that has not yet accrued critical mass, yet it is also risky to miss a trend.
3. Computer security for many systems is critically important yet very difficult, and attackers are everywhere.
On the upside, machines used to be more heterogeneous, while today there are just a few survivors: Windows, Mac OS X, Linux, and in the mobile space Android and iOS. Because many apps are Web apps, cross-platform has become easy, although the user experience of a Web app is in no way comparable to that of a native app. WASM is likely to address some of that and, I hope, will remove much of the ugliness of abusing the HTML+HTTP paradigm, intended for technical documentation, for distributed applications.
Years ago, a curious individual could solve some basic problem: maybe a better file system, or a better kernel, or a new scripting language. Your program might take off, and you could eventually find an entire industry built around the problem you tackled.
Today, that's unlikely to happen. Gone are the days of people hacking in their rooms creating a solution to some problem. Most of the remaining "low-hanging" fruit is in very niche areas, and you will have to do some extensive study just to understand the problem well enough to approach it.
In practice, a lot of these choices are already made for us. When you join a new project, the space of choices is limited. But when a choice is to be made, what I find difficult is that you need to reach a consensus with your colleagues. When you're introverted, it's taxing. There's always some person who needs extra convincing.
Code review wasn't as pervasive back then, and it can be tiring too. Sometimes you need to explain again and again why you made a decision (it was in a design document, it was discussed in a meeting, and then the question is brought up again in the code review).
But the worst part for me is the accumulation of abstraction layers and dependencies. I'm working on a project with tons of internal dependencies that are loosely specified and documented, and all of them introduce some unreliability. The whole edifice is fragile, and yet it is expected to work 24/7. This causes a lot of stress.
30 years ago, nearly all projects I worked on were waterfall and my development team was far more relaxed. Spending days tinkering and thinking of a good solution was valued over sprints and rapid commits. Of course, we got far less done back then, but it was also less stressful (in my experience, at least).
Typing this one magic word brought up an IDE, including an editor with highlighting, an interactive help system, samples, an in-editor REPL, and a single-key shortcut to run the program. I can't remember if it also came with a debugger and a way to create stand-alone executables, or if that came later.
It had built-in commands for drawing, input and sound, all well documented. And the UI was straightforward and intuitive.
This doesn't really exist anymore.
20 years ago, if I needed to convert a JSON file to CSV (that's a very bad example because 20 years ago JSON was just born and nobody used it, but no other example as simple as that one comes to mind, sorry), that would have taken me days of coding just for that single utility function. And probably lots of headaches with regular expressions or syntactic/semantic parsing.
Today, I type "npm install json2csv" and I'm done.
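For the record, that one-liner in use looks something like this: a minimal sketch assuming the classic json2csv API (the field names and file paths are illustrative).

```ts
import { readFileSync, writeFileSync } from "fs";
import { parse } from "json2csv"; // npm install json2csv

// Read an array of records and flatten it to CSV in a few lines.
const records = JSON.parse(readFileSync("input.json", "utf8"));
const csv = parse(records, { fields: ["id", "name", "email"] });
writeFileSync("output.csv", csv);
```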
The only thing that might be harder for people starting today is that the level of abstraction is so high that we almost don't need to understand the underlying computer science to be developers, so on the rare occasions when you do need to understand what's going on behind the scenes, that can be complicated. Other than that, it's all advantages.
Progress and history always go this way, things always get easier and cheaper to build. I don't see why programming would be different.
When I read that section, I wasn't really convinced. It's not that programming isn't quite different from the way it was back then, but the difficulty we face is that there is so much more software around. Some bad, some good. Many more useful tools.
For example, since I started programming the following innovations have significantly added to tools that we have to apply to programming:
* The C language. Much better than writing high-performance code in assembler.
* SQL. The power of SQL is extraordinary.
* Lots of disk space. Now we don't have to store our programs on punched cards.
* Extraordinary increase in memory.
* Networking, beyond dial-up connectivity.
* Other highly useful languages, such as Python
* Google, probably the top-used resource by programmers
* IDEs
The idea that we are hampered by lots of choice is a bit illusory. Usually our programming environment is set not by our own individual choice, but by the environment we plug into: by what resources we are going to access, and by other team decisions.
Maybe about 20 years ago, if you had a website that allowed users to post a comment and upload a small jpeg, it was considered crazy bonkers cool.
Today, you probably need some advanced 3d UI that communicates with your phone and millions of users in real-time with geotracking and 100 other features to get a pizza to your door in under 5 minutes to barely raise an eyebrow about the technology.
Devices have become more complex and interconnected. Expectations for software are also higher, while there’s more resources poured into finding and exploiting vulnerabilities.
Keeping on top of all of that, isolating your memory access and permissions, etc… is a lot harder today than it was even a few years ago.
In every other way I think programming is easier now. Languages are more ergonomic, even the ones that are decades old. Libraries are more easily available and there are tons more resources today than ever before for learning.
In summary, apparent simplicity that causes actual complexity is a big problem now.
And those things, frameworks and platforms, are the biggest technical burdens for programmers today. The ability of the web to create what we used to call "interactive apps" (but are today just apps) has led to the desire and expectation that all web content will take on this level of polish and appearance. While that's possible, it's also arduous. In today's world, you must also learn the frameworks, tooling, and CI/CD processes that lead to your work making it onto someone else's screen. That's a whole lot harder than what we used to do -- like when publishing meant copying a floppy and putting it in an envelope, for example.
The ability of platforms to change their specs and rules all the time (and their insistence on doing so) is another new programmer burden. In the old days, one could write to a piece of hardware or OS and expect that code to run for a long time. Not so anymore. Now the hardware is virtualized and many layers of middleware and SaaS are required to make your code do anything. All of those are moving targets and will change out from under you, without your desire or permission, while you try to continue to deliver new code and service your old code.
Finally, and this is the final nail in the coffin for some more experienced programmers, the misunderstanding of the concept of "agile" or "XP" and how it became scrum -- a series of micromanagement theatrics and paperwork pushing -- has made programming a lot more difficult and a lot less fun. One of the best parts of software development was the unpredictability and experimentation that led to innovative results and small, incremental improvements. Close contact with customers was also a hallmark of early software development. Today's top-down, "How long will it take you to write and debug a feature that doesn't exist?" management mentality cannot and does not lead to quality software, as everyone can tell. It does lead to programmer burnout, quiet quitting, and a lot of wasted time in dev shops.
Today, every single business application is somehow supposed to have its own design language and can't use any standard UI library.
Also, deeply ironically, I think certain web applications are now heavier than Swing applications (obviously not IDEs, but ordinary applications).
And I assume most people are considering web applications rather than native. If so, it’s like a full circle from xterms…
I think the difference is maybe one of perception. We can do more so the baseline expectations of users/customers are higher. I also think that there has been a higher growth of people specializing in one area rather than every programmer sort of being a full stack generalist by default. So for a front end specialist, databases and server side may seem like a black box and more complicated than ever. Or someone specializing in kernel programming might think front end is more complicated than ever.
Generally you can still do things the way they were done in the past if you want to. New tools might make things easier and learning them might seem complex, but you don't have to.
Every choice we make as developers today not only has to be vetted by the team, it has to fight against all of the other options and the opinions created by people who are paid to convince others to use their solution.
I’ve had to push back against both my peers and managers who suggest technology that’s been marketed to them. And by push back, I mean spending days researching these new technologies to see if they’re just an old technology with new wrappings, a VC backed promise that’s still a few years from being usable, or doesn’t solve the problem at all.
I won’t go into too much detail, as it will just turn into a vent. And the current industry direction in general doesn’t make it easy (containers, industry leading black-box “solutions”, system admin and security).
In my immediate experience, cowboys who just "do", no matter the risk, informed or not, are really thriving right now. Not sure if this is a post-pandemic, lockdown-era, results-driven thing...
But if you follow good or great practice and are a responsible, policy- and process-following individual, a constructive and critical thinker, you're in for a hard time. Smaller firms seem to handle this better, in my past experience.
There's also a group of individuals in between these two who seem to be struggling.
Concurrency and multi-threading. To get good performance you used to care about a single thread on a single CPU core that you had exclusive access to. Now you have to utilize multiple cores, care about context switches and in even more extreme cases handle NUMA memory architecture. It's hard and the fact it's slightly less hard with go is one of the reasons of the language popularity.
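For contrast with the single-core days, a minimal sketch of fanning a CPU-bound job out across cores. It uses Node's worker_threads (rather than Go) to stay in one language with the rest of the examples here, and assumes CommonJS output so __filename points at this script; the workload and numbers are illustrative.

```ts
// Spread a CPU-bound sum across all cores. Single-file pattern: the same
// script acts as the coordinator and as each worker.
import { Worker, isMainThread, parentPort, workerData } from "worker_threads";
import { cpus } from "os";

function sumRange(start: number, end: number): number {
  let total = 0;
  for (let i = start; i < end; i++) total += i; // the CPU-bound "work"
  return total;
}

if (isMainThread) {
  const N = 100_000_000;
  const cores = cpus().length;
  const chunk = Math.ceil(N / cores);
  const jobs: Promise<number>[] = [];
  for (let i = 0; i < cores; i++) {
    jobs.push(
      new Promise((resolve, reject) => {
        const w = new Worker(__filename, {
          workerData: { start: i * chunk, end: Math.min((i + 1) * chunk, N) },
        });
        w.on("message", resolve); // each worker sends back its partial sum
        w.on("error", reject);
      })
    );
  }
  Promise.all(jobs).then((parts) =>
    console.log("total:", parts.reduce((a, b) => a + b, 0))
  );
} else {
  const { start, end } = workerData;
  parentPort!.postMessage(sumRange(start, end));
}
```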
My tongue is in my cheek, but I often wonder.
Look at an Android project: there are maybe 12 different kinds of files! For a 'Hello World'. You have manifests, Gradle (which is yet another programming language), snippets in Kotlin and Java, an entirely different XML 'language' for view definitions, massive APIs, and massive complexity and 'weight' in the emulators.
It's hard for people to focus on the problem space when we are overwhelmed with layers of tooling and abstractions.
The idea of developing and running this application with a single person would have been laughable.
Today, because tools are soooo accessible, I can, and do, routinely spin up my own VM, apply Puppet, drop my MySQL and Django containers onto that VM, and pull HTTPS certificates, in addition to doing the front- and back-end software development.
Life would be waaay simpler if I could just write server side web application code and wait around for database developers, sys admins, and front-end folks to do their thing. Imagine not having to learn a testing harness because there are actually people testing the software!
I started in the mainframe/COBOL days. There, the programs could be sophisticated, but the systems documentation was excellent. It was all just a little complex, and you really had to understand algorithms and such.
Then came client/server. That brought transaction monitors (like Tuxedo) and took away the benevolent dictator model (IBM) that gave you transactions practically for free. New stuff to learn, pitfalls to avoid.
Then came J2EE. A ton of complexity and false starts on fledgling technology. XML processing, etc. Finally REST came along and made things somewhat understandable -- but now you finally had to manage transactions yourself.
And now we're in the Kubernetes age. Once again, tons of infrastructure to learn, but a strong framework. (So somewhat like the mainframe days.)
It's all a big circle. You have to constantly learn. I really think it is harder to get on board now than it was in the past.
Also, if you start today, you've got to learn some part of the pyramid your abstractions stand on, and that's a growing amount of work as time passes.
Now, write a distributed microservice. Many, many programs. Many databases and queues. How do I debug that? How do I monitor that? We get scalability, minus L3 cache access. Now add deployment orchestration to win zero-downtime deployment.
Now, getting a decent toolchain takes some (usually small, but nonzero) effort, and there is a flood of conflicting information on every decision, including that first step.
Information overload and analysis paralysis bite hard.
(If you can get past that initial hurdle, things are immensely better than any time else in history in most ways, though the trap of getting overwhelmed by info and options is persistent.)
What passed for project management before XP and Agile wasn't great. But at least it didn't impose ridiculous administrative burden onto devs.
Gods, we used to complain about doing weekly status reports. Now it sounds like heaven.
That solution always had some scripting capabilities, which were a subset of JS with some software-specific extensions. Nothing really fancy, a few useful things were missing but overall there always was a way to reach your goal.
As I'm not doing development daily, this "low-tech" approach was nice: one file that would be copied to your production system and linked there, and that's it. For debugging you had the integrated logging interface, a web-based thing, not too fancy.
One or two major releases ago, they switched to a TypeScript/NodeJS-based scripting system.
While I can now "glue parts together" by using npm, I also have to transpile my code after every change and have to run an extra step to "package" everything for deploying it. Creating a new script requires a specific console command which will then prepare the file & folder structure.
Debugging can, in theory, still be done through the web-based interface, but the recommendation is to set up a launch.json file, so VS Code can directly connect to the software on a pre-defined port and you can set breakpoints and step over your code while it's being run. I'm sure that's cool for hardcore programmers, but maaaaaaaan, ain't nobody got time for that if you've got business stuff to run.
Somewhat anecdotal, but I spent 4 hours on Friday trying to extract numbers from a txt file and write them to another file. Reading the file, running a regex: that's no big deal.
But creating a temporary file is a HUGE pain with the new system. In the old system it was something like: var fh = job.createNewFile('test.txt', 'UTF-8'); and you could then work with your file handle.
Now I'm fiddling around with the third npm package for creating temporary files, and it seems like the problem is not me using the packages wrong, but how those packages try to create a file in a Windows environment, which fails.
To be honest: I'm still guessing that's the problem and need to ask some smarter people, but things like that "just worked" before.
Oh, and now everything is async, which really gives me headaches; or you have to create a function which can then be called with an await, so everything else waits for it...
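For what it's worth, here is a minimal sketch of that Friday task using only Node built-ins (fs/promises, os, path) instead of a third-party temp-file package. Whether this sidesteps the Windows problem is my assumption, but it does show the async ceremony the new system demands; filenames are illustrative.

```ts
import { mkdtemp, writeFile } from "fs/promises";
import { tmpdir } from "os";
import { join } from "path";

async function extractNumbers(text: string): Promise<string> {
  const numbers = text.match(/\d+/g) ?? [];          // the easy part: the regex
  const dir = await mkdtemp(join(tmpdir(), "job-")); // unique temp directory, no npm package
  const out = join(dir, "test.txt");
  await writeFile(out, numbers.join("\n"), "utf8");
  return out;
}

// Every caller now has to await, all the way up the call chain.
extractNumbers("a1 b22 c333").then((path) => console.log("wrote", path));
```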
Swap? What's that? Network stuff? Huh? I have wifi.
They have a laptop, a good one, and that's just it.
And while security should always have been a thing, the chance that you're writing an application accessible on the internet is much higher today than 10 or 20 years ago.
Besides that, there are much better tools available, easier to use than ever before. Building a whole platform is possible with a small team.
Building a highly scalable, self-healing, zero-downtime system is basically gifted to you when you follow modern practices like good Java frameworks (magic) and k8s.
It used to be: transfer your source files via FTP. Now it's setting up Docker and Kubernetes and I don't know what.
Ofc the latter is better for teams, but now you must learn a whole stack just to deploy code.
Right there inside the same document, rather than learning about Node, APIs, a templating engine, etc. It kind of just worked and was very simple. Of course, for professional apps this caused a ton of problems and now using PHP that way is seen as something from the dark ages.
At those companies fixing a bug that would be relatively straightforward with a monolithic architecture is a humongous pain in the ass.
Technically, programming is not more difficult than it used to be. But in practice it is, due to how expensive and complicated it is to know what is going on, and how little the average manager cares about it.
You used to know a few basic APIs (your OS, your stdlib…). You'd probably spend more time implementing basic utilities. Now, however, you have to manage a complex software supply chain of dependencies of varying quality and security risk.
My first language was GW-BASIC on an IBM compatible. Turn on the machine, type GWBASIC and hit enter. You get the IDE, the editor and everything else in a single shot. Zero barrier. You could write decent programs in it, and it gave you that ongoing sense of achievement. Nowadays, you have a long series of things to do (clone a template, set up your editor/environment, download this and that, etc.) just to get started.
Learning programming today is difficult because of context. Some kids don't even know what a file system is. The idea of an interpreter evaluating a text file doesn't click because what even is a text "file"?
Being an intermediate application developer I would argue is easier than before and possibly the best place to be in your career. Web Development with React and Angular turns front end development into a more complex excel-like experience. It's still programming, obviously, but the tools take a lot of the complexity out of it and a lack of experience has you blissfully ignorant of anti-patterns and a lack of testing.
Back end web developers have the best life (technologically speaking). Complete control of the runtime, any language they want to use and normally pretty straightforward implementations. Security and testing are often overlooked but that's okay.
Senior application developers hold so much context that they are often frustrated by the design decisions of the tools they use. "Why can't we all write web applications with Rust, except using JavaScript modules because Rust modules are terrible, but only once Web Assembly has access to the DOM - actually the web sucks and we should write native applications. I can't wait to retire."
Devops is both harder and easier. It's easier to build a scalable reliable system, but is harder to get started because, while a simple cloud VM is still available, you feel dirty if you haven't provisioned everything using some form of orchestration.
Desktop application developers are in the worst place possible. Microsoft doesn't even know what GUI API it wants to use. No one uses Apple's native Desktop API, Linux is.... anyway (don't flame, I love GTK4). The best option is Electron until everyone is using a single platform.
Mobile development is hard af; there are almost no engineers in the field, and you need a PhD just to install Android Studio and pick the right Android SDK.
All in all, I love it
We have way better resources for learning but also much higher expectations. You need to be able to write good code without much thought so that you can maintain a higher level of context while programming. Sometimes you get to write just a nice little isolated bit of code, but usually there are a lot of moving pieces. (no matter how functional and immutable we try to make it.)
A lot of people with large followings espouse opinions and practices when they hardly spend time shipping product. This makes it hard, even for someone senior and experienced, to know what they should pay attention to. Sometimes it feels like you should pay attention to things you fundamentally know don't make sense, and a lot of energy goes into unwinding mainstream rhetoric.
I cite this as difficult because, while most of the other hard parts of software are enjoyable, this one is just an energy suck.
Ensuring that your software will still function in 20 years. Previously you had shrink-wrapped software packages on physical media with explicit releases and versions. Now everything depends on cloud services and APIs which could change or disappear at any moment.
Imagine being a firefighter and your boss tells you that you can't use a water hose to put out the fire; instead you must use either a portable fan or a flamethrower. That's what programming feels like these days.
It's not a bad thing. It just means that knowing one language and some algorithms - formerly the exact skillset of a typical CS grad - isn't really "enough" to do anything. Instead it's whole stacks of software that an organization has staked their own project on. At an old company, they built most of the stack, but nobody who built it is around anymore. At a new company, it's open source, but they don't have anyone to spare to maintain it. Eventually there is some threshold that gets crossed where the stack is the problem, and that drives an attempt at solving things better.
But it's no longer the case that most orgs have to build most of their own software; it's nearly always glue and one piece of special sauce.
Users had much lower expectations.
Programming was slow and deliberate. Programming today is much more stressful, I think.
The likelihood that the next thing you do sends you scurrying around Stack Overflow, even after 20 years on the job, is quite high now.
Advances in automation and tooling make it easier than ever to develop complex projects with lean teams but it can't be done if your org opts to inherit a brittle dependency chain every time "I don't want to reinvent the wheel" surfaces.
It's so much worse today when it comes to things like paralysis of choice and configuration.
I've been working on a new project for Boot.dev students and the goal is to get them a simple but professional dev environment on their machine. It's a very hard problem.
As always, the core issue is complexity. We expect our programs to do much more than before, and that requires additional complexity.
But we already know how to deal with complexity: we add a layer of abstraction. The problem, in my view, is that current abstraction layers are either too low-level (e.g., React) or tackle only part of the problem (e.g., AWS Lambda).
With GridWhale, I'm working on creating a layer of abstraction that appears as a single, unified machine that you have full control over, but is actually running on a distributed, scalable micro-services architecture.
I've got a long way to go, but I think this is the right direction.
I sure wouldn’t want to write the backend of a crud app in Fortran, nor the front end. Or do anything with Fortran besides scientific computing (Fortran = Formula Translator…it was built with a limited use case in mind!)
Companies that adopt newer frameworks rather than writing everything in C++ are more efficient, but only if they control how much variation in tooling there is within a discipline.
So today, you do have to specialize in a discipline a bit more (front end, backend, data) but each discipline has a sensible set of tools IMO. A developer can and should get some exposure to a secondary discipline to be well rounded and “T-shaped”, but should also appreciate the value of specialization.
The reliance on Design Patterns (Gang of Four) went from nice to use, to mostly required.
So many developers now go right into coding without learning other skills. The result is software meant to automate or assist a process that the coders don't really know how to do manually, and it shows. These coders know one language and one framework, and may be very good at those, but they don't actually know how a computer works or why what they are doing works.
Fast-growing businesses require more workforce than the market can provide. This results in more inexperienced developers writing code, and in frustration for the experienced developers (if they have to work with the former).
Too many blog posts of dubious quality (we now have too much information that needs verifying, as opposed to too little information that was hard to find, as years ago).
Once upon a time, "UI" was "print the output to a file".
Then it was some interconnected CICS screens.
Then it was some interconnected web pages.
Now it's some interconnected web pages that also display properly on mobile devices, that display in the user's chosen language (and numeric format), that displays properly no matter the user's screen size, that hopefully allows blind users to use a screen reader, that respects Europe's privacy laws, that has good security...
It's easier now because we have better tools. It's harder now because the baseline expectations are so much higher.
In the 90s you could count on one hand the number of large scale systems that could support thousands of users concurrently.
Any half-decent mobile app these days could easily get to a few hundred thousand concurrent users.
Always start at the bottom and work up. You’ll avoid much of the frustration of confusion, at the expense of a slow start. Most importantly, when you’re done you’ll be useful at every level.
The business processes covered by systems have grown much more complex. Organizations have grown larger because software is doing more (and doing it more safely and easily).
The modern programmer thus faces larger organizations, larger and more many-sided discussions, and a far more complex social, political and business-related landscape than 20 years ago.
Meanwhile, the technical problems have gotten much easier, in general.
Now with dynamic languages, interpreters, docker, kubernetes, AWS and layers of dev tools and frameworks it can be harder to know what your code is actually doing. But those abstractions can also give you superpowers.
Yes, like you said: the massive amount of code and options, often each with their own trade-offs and advantages.
The hardest thing is sometimes not chasing the newest and latest thing to gain a bit of performance, or to improve an F-score, etc.
Everything else has gotten better: the machines are almost-inconceivably faster and larger (in capacity, logical size; in physical size they are of course ever smaller!), the compilers are smarter than ever, the languages more ergonomic and safer, etc.
The only downside, the great undertow, is burgeoning complexity. A "thundering herd" attack on the human mind.
Sure, microservices are faster to elevate and more reusable, but the systems have become a bird's nest of dependencies. A lot of the time our troubleshooting relies on the other teams that own those services. Coordinating and communicating takes significant overhead and inevitably leads to occasional issues.
There's also the Rick Cook quote:
> Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
Also society's pace. Things moved more seasonally.. nowadays there's
But getting to the point where you feel like you're accomplishing something, where what you've produced feels like it measures up to the standards you've been taught to expect: that barrier's gone up so much faster.
For example, there’s still a lot of python 2 code snippets that won’t work now that python 3 is the default interpreter.
And all that has to work, kind of. It's a bit crazy.
That comes with a lot of problems. You end up dealing with user accounts, authentication, request retries, what to do if the server is down, etc. Plus all the security and abuse issues.
Testing, architectures, patterns, principles, software life cycle, building abstractions, system modeling etc.
- really worry about disk space or memory size
- check if something would work on IE
- worry about version control
- manually scale something
But I understand that in the context of large teams.
Coding itself is the exact same thing as 40 years ago: input, processing, output.
Today code is written for humans.
Mostly by people who aren't great communicators.
Weirdly, I think the biggest difficulty is that we are much better served. A huge amount of what we do is well paved: React, bundlers (which are fairly isomorphic to each other: webpack/rollup/esbuild/parcel/snowpack/vite), systemd, Kubernetes, gRPC/protobuf, npm/node, even io_uring or eBPF... the list of well-entrenched technologies is long. There are better set ways to do things, well served, than there used to be.
The difficulties show up in a number of fashions. First, it precludes innovation and is highly stasist, when there is a humming-along underbelly that maintains life. In The Matrix, the Elders of Zion admitted it was just highly automated machines that kept life alive, and in some ways our ascent upwards has decoupled us similarly; we're rarely liable to go back and reconsider the value of trying other ways. We're kind of "stuck" with a working set of system layers that we don't innovate on or make much progress on. Our system layer is pretty old and mature. Everything is ingrown and interlocked, and depends on the other things.
When we do try to break out, rarely is it through precisely targeted reconsiderations: often the offshoot efforts come from iconoclasts, smashing the scene and building something wildly different or aggressively retro. Iconoclasts seek pre-modern times, rather than new post-modern alterations or rejiggerings.
Another difficulty with having vastly more assumed is that there are fewer adventurers in the world, less find-out-the-truth/roll-up-your-sleeves/dig-in/read-through-the-source mentality and experience (and more looking only on the surface for easy "solutions"). Being generally well served means we rarely go off the beaten path. So our wilderness survival skills and resourcefulness have atrophied, and newcomers are less likely to have developed the deep hunting skills that used to be both simpler to acquire (because our systems back then were simpler, had less code) and more essential. A lot of these modern works aren't even that hard to dig into. But there are shockingly few guides for how to run gdb or a debugger on systemd, few guides to debugging kube's api-server or its operators, few people who can talk to implementing gRPC.
I don't think we're at crisis levels at all, but I think the industrial stratification and expectations of being well served will start to haunt us more and more across decades: we'll lose appreciation and comprehension (like the Matrix problem), and we'll fail to make real progress.
GraphQL is an interesting case study to me. It rejected almost all common web practices and went back to dumb SOAP-like all-purpose endpoints. The advantage of just getting the data you ask for was good, as was not having to think about assembling entities and having a schema system. But so many of these things are things the actual web can and should be good at. We spent a long time having schema systems battle each other, but that higher-level usable web just kept failing to get built out, so total disruption made sense. We still don't have a lot of good replacements for GraphQL, still haven't made strong gains; but it feels like GraphQL is somewhat fading, that we're less intimidated by making calls and pulling data than we used to be.
A lot of the abstraction then was simply mind boggling, and while the patterns all still exist today a lot of them have been simplified by language features and put on a diet.
Sure we have some frameworks and such to make this easier but it really puts a damper on experimentation and raises the barrier of entry significantly.
I miss the days where you could hate your users and the code didn't have to, like, have an SLA.
/s
The effect is guessing. Everybody guesses on whether a candidate can potentially do the job. Some of those new hiring start-ups might be slightly better at guessing, but it's really still just job boards or head hunters with a large margin of error.
The way other industries solve this problem is to establish baselines of accepted practice. If you exceed the baseline you may or may not be employable, but you do at least exceed the minimal technical qualifications to practice. This is true of professions like teacher, truck driver, lawyer, doctor, nurse, accountant, real estate agent, forklift operator, and really just about everything else. Unfortunately, most software employers spend all their candidate-selection effort attempting to determine minimally acceptable technical competence instead of more important things, such as soft skills, and even then it's often just guessing.
Other industries apply this solution in one of two ways: education plus a required internship that may result in a license or education plus a license followed by an agent/broker relationship. Education may refer to a university education or a specific technical school depending upon the industry and/or profession.
To mitigate hiring risks, since everybody is just guessing anyway, employers turn to things like tools and frameworks to ease requirements around education and/or training. This is problematic, as it frequently results in vendor lock-in, leaky-abstraction problems, and catastrophic maintenance dead-ends when a dependency or tool reaches end of life. Even so, potentially having to rewrite the entire product from scratch every few years is generally assumed to be less costly than waiting for talent in candidate selection, since there is no agreed-upon definition of talent and hiring is largely just guessing anyway.
All of this makes programming both easier and more difficult, depending upon which side of a bell curve your capabilities reside. Reliance upon tools to solve a very human competence problem is designed to broaden bell curves which allows more people to participate but also eliminates outliers. This means if you, as a developer, lack the experience and confidence to write original software then you might perceive that programming today is much easier. If, on the other hand, you have no problem writing original software without a bunch of tools and dependencies you may find software employment dreadfully slow and far more challenging than it should be for even the most simple and elementary of tasks.
Every single ticket I get, I immediately see in my mind's eye: a couple of tables, and a couple of queries. And sometimes, if it's something really weird, 5-10 lines of shell script.
But I'm "not allowed" to use those tools, and instead I have to use an ORM, an object-oriented data model, various JSON translation operations, and grotesque frameworks, and in the end spend three weeks of very frustrating, wasteful work trying to coerce these humongous tools into achieving the simple feature that would have been three days of easy, pleasant work if I was "allowed" to use a database.
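For comparison, a minimal sketch of the "couple of tables, couple of queries" version, assuming a Postgres database and the pg client; the table and column names are hypothetical.

```ts
import { Pool } from "pg"; // npm install pg

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// The whole "feature": one parameterized query against two joined tables.
export async function ordersForCustomer(customerId: number) {
  const { rows } = await pool.query(
    `SELECT o.id, o.placed_at, SUM(li.qty * li.unit_price) AS total
       FROM orders o
       JOIN line_items li ON li.order_id = o.id
      WHERE o.customer_id = $1
      GROUP BY o.id, o.placed_at
      ORDER BY o.placed_at DESC`,
    [customerId]
  );
  return rows;
}
```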
I worked in a bank in 2006 and we used VB6 to create CRUD based apps in no time, which were super stable, fast and responsive. Just a simple MVC stack, query the database, get recordsets, render a view. This was a really simple task back then.
Now it takes ages and so, so much work and effort to create an equivalent web-based app, which is always slow and buggy, and the work is mainly incredibly frustrating: trying to reverse-engineer and figure out some inane "smart" logic in a framework that tries (and fails) to automate a task that was already very simple and quick, and didn't need automating at all.