From browsing internet forums and listening to friends, it seems that a popular approach to bettering oneself as a software engineer is to "learn the latest technologies". However, with so many new technologies to choose from, how do I know which ones will benefit me long term? This reminds me of the Red Queen hypothesis [0]: learning new stuff just to keep up.
I suppose the stereotypical example is the web development space with its many frameworks (React, Next.js, Svelte, Remix). I notice this as well with programming languages such as Go, Rust, Julia, Dart, and Kotlin.
On the flip side, I wonder if mastering the fundamentals, the things that will not change in the next 50 years, is a wiser approach. This is inspired by the ideas of Jeff Bezos and Warren Buffett [1, 2]. If this is true in technology, then I wonder: what are those things that will not change in the next 50 years?
I also wonder how the Lindy Effect [3] applies to technology. Would it be more worthwhile to strive for a high level of proficiency in decades-old languages such as Java and C++, or focus instead on promising languages such as Go and Rust? Reading Dan McKinley's article on Choose Boring Technology [4] nudges me in the direction of focusing on more mature technologies.
[0] https://en.wikipedia.org/wiki/Red_Queen_hypothesis
[1] https://www.goodreads.com/quotes/966699-i-very-frequently-get-the-question-what-s-going-to-change
[2] https://fs.blog/staying-the-same/
[3] https://en.wikipedia.org/wiki/Lindy_effect
[4] https://mcfunley.com/choose-boring-technology
- Humans oscillating between greed and fear. Has happened for the last few millennia :-)
- Showing up on time, being reliable, and doing your best leading to good outcomes
I usually base my decisions around these.
There's no conflict between learning new things and "fundamentals" (however we define them). It's useful to know both. You start with the more general stuff that applies everywhere (this is a lot of what you study at university - how a computer works, how code executes, how to design a program, data structures and algorithms etc.) and then over time you pick up details like language details, specific APIs, frameworks etc.
Languages don't matter that much. Learn a few, be aware of what's out there and get comfortable with picking up new ones as you need them. It's really not a big deal.
Specific technologies that won't change:
- Unix. It's been in use since 1969 and runs on almost every device. It isn't going anywhere.
- In particular, become fluent with its ideas and the CLI. That group at Bell Labs was, hands down, the best programmers doing the best programming.
- LISP. It's been in use since 1958 and influences almost every modern language. It's like learning the Greek classics because of their influence.
- HTTP. The basics of the internet haven't changed and work pretty well.
- Text. Text is the best way to convey information. Nothing beats it. Not pictures. Not audio. Not video. Learn text manipulation.
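To make that last bullet concrete, here's a minimal sketch (in Python, purely illustrative) of the kind of text manipulation the Unix CLI popularized: the classic `tr | sort | uniq -c | sort -rn` word-count pipeline, rebuilt with nothing but the standard library.

```python
import collections
import re

def word_frequencies(text: str, top: int = 3):
    """The most common words in `text`: the classic shell word-count
    pipeline (tr | sort | uniq -c | sort -rn) in miniature."""
    words = re.findall(r"[a-z']+", text.lower())
    return collections.Counter(words).most_common(top)
```

The point isn't the snippet itself; it's that once you can decompose a problem into "normalize, split, count, sort", the same recipe works in a shell one-liner, a Python script, or whatever language is fashionable in 2070.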
"You cannot possibly trust two classes of people: educators who are better at explaining than understanding because they’re selectively better, and science journalists, who are better at communicating than understanding. Then, you end up with things like scientism." [1]
The pandemic and the climate crisis have also shown us the huge importance of thoughtful communication. Having both a theoretical and years of practical background in journalism, I do wonder, though, how much of this emphasis is slowly becoming hype; a source of well-paid bullshit jobs. There are companies and institutions with deliberately honest communication out there, but oftentimes "good communication" leans towards masterful, smooth hiding of wrongdoings or of stuff you don't actually grasp. It's way too easy to (accidentally) become better at explaining than understanding in this field. In this light, it's probably always good to have those more straightforward "hard skills people" around you as well.
All in all, I think it's an excellent time to read Dostoevsky and think about the volatility of human temperament. We're often trying to hide our rough edges with (soft) communication, but who knows how well, or for how long, this actually works.
I would also definitely bet on Dostoevsky's fiction being relevant in 50 years, too.
1: https://medium.com/conversations-with-tyler/bryan-caplan-nas...
Learn how to reason about problems from first principles. Understand how backward planning works and how to apply it to a project. Cultivate your communication skills and build a habit of constant learning.
Learn about critical thinking and logical fallacies. Build heuristics you can use to evaluate ideas with incomplete information.
These skills will serve you well long after any technical knowledge you have expires.
New technology often replaces old technology, but new theory (in math/CS) generally extends older theory, and problem-solving techniques are often timeless. So if you're looking to learn things that will be relevant in the long-term, your answer is likely algorithms, data structures, and the like. They're not going to be rendered irrelevant by the passage of time; they're going to become more fundamental as we build more advanced theories on top of them.
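As a concrete instance of that durability, here's binary search, sketched below in Python; the idea has been described in the literature since the 1940s and still underpins standard-library routines and database indexes today. (The sketch is illustrative, not any particular library's implementation.)

```python
def binary_search(items, target):
    """Return the index of `target` in the sorted list `items`, or -1.
    O(log n): halve the search range on every comparison."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1
```

Frameworks wrapping this idea have come and gone for decades; the invariant-based reasoning needed to get `lo`/`hi` right is exactly the kind of skill that doesn't expire.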
If the kind of stuff you're into is more like Rust, React, or whatever, then the stuff useful 50+ years from now probably won't be those particular technologies, but the ideas behind them. So instead of learning Rust, you'd want to learn about topics like abstract interpretation and formal verification. (Or to put it another way: try to figure out how to make a Rust compiler, rather than just learning how to use it.) If you're into event-based technologies or parallel programming, go learn about the various models of computation (there are lots) and their strengths and weaknesses. There's a lot of prior literature and research in areas you might not expect, and you'll be surprised how often research from half a century ago suddenly becomes relevant again (as it did with parallel computing). If you like PyTorch, go learn about optimization and machine learning (and I'm not referring to neural network architectures here). There are grad courses on all of this stuff if you go searching around. You'll probably have to put more effort into them than into learning React or Go, but that's life.
It lets us convey information in the most effective and durable way, with very high signal-to-noise ratio. (NB: I'm not dissing on other communication mediums; I see them as complementary to plain text.)
1942 - First mass-produced fully-electric analog computer
1951 - First mass-produced commercial computer (using vacuum tubes)
1963 - First mass-produced integrated circuit mainframe computer
1974 - First mass-produced personal computer
1984 - First GUI operating system with widespread adoption
1995 - AOL, CompuServe, and Prodigy bring the internet to the masses
2007 - First smartphone with widespread adoption
~2017 - I'm actually having trouble coming up with something. Maybe cryptocurrency (specifically Ethereum) gaining widespread adoption, but it's too early to judge its significance. Same with VR.
The job market and what it meant to be a programmer changed enormously over each of these intervals.
You might not even be in tech anymore in 11 years. Maybe you'll be retired after a lucky windfall. Or dead.
Of course, mathematics will continue to develop in unimaginable directions, but the “base” notions in math will stay the same. I am talking here about notions of relation, function, group, vector space, tensor, category, measure, integral... Even if some of those are known for a long time, formal definitions that we use today are pretty new. And it seems to me that those will stay fundamental for a long time.
(Seems to me that other sciences, like physics and biology, don't have that "stability". In those fields, it wouldn't be too surprising if we discovered some fact that changes a lot of what we know about them.)
Also, as these core notions of math will stay the same, math notation will stay the same too. And that is a good sign that we will continue to use TeX for a loooong time.
This will make you stand out from your peers and ensure a good career path for years to come.
At my day job I work as a Java+Angular developer on enterprise software, but if there's a chance to do a small freelance gig on the side, or a personal project, I'll take it as an opportunity to try out the "hip" stuff just to learn what's good about it and whether I like it (most recently Nuxt.js).
Also, the older, more stable tech is still getting cool updates too, but at a much slower rate, and very often those updates are adopted from the newer languages/frameworks. So if I've played with the newer stuff, I already know what an update is about and have a much easier time getting used to it.
What certainly won't change is the way humans work. That means that the way they use tools also won't change much (modulo maybe some newly established conventions).
As a direct result, the way Don Norman simplifies the process in figure 2.7[1] of "The Design of Everyday Things" will stay relevant. If you intend to build tools for humans -- even if those tools are digital -- it will be a good idea to keep the principles of feedback and feedforward in mind. If you want to learn timeless skills, it might be a good idea to learn more about your audience (i.e. users) instead of new and shiny technologies.
[1]: https://books.google.ch/books?id=I1o4DgAAQBAJ&pg=PT76&dq=des...
I coach all my Gen Z hires that the skill is not in coding but in communication. For example, the act of coding is a dual communication: once to the computer, where "skill" means transmitting the message simply, unambiguously, and without repetition; and once again, repeatedly, to the community of readers and maintainers who look at your code, where the skill is to communicate clearly and without misunderstanding, without needing a shared cultural, organizational, or temporal context. Being able to think deeply about what it means to communicate is a true long-term life skill.
Having seen computing grow from before the IBM PC, my primary lament is that new developers rarely have a view of how everything works (however simply) from the very bottom to the top, from basic semiconductor electronics to getting something onto an App Store. Most pick a layer and live there, largely unaware of what's happening below or above, and so don't know why things work, and more importantly why they don't.
Learn how to explain and document why you chose an architecture for a problem. It's a habit that'll set you apart from your peers.
I don’t foresee AI becoming advanced enough to be able to plan out and build most residential homes in the next 30 or so years. Maybe after that it’ll be good enough to design basic / common homes. Same goes for the hardware and control systems needed for fine motor control for those jobs.
I also don’t see AI replacing most software engineers or STEM based jobs until general AI becomes a thing either. I can see testing probably being the first thing to go.
Fundamental physics will always be fundamental ;) and you can build up to most things from first principles.
The book talks about what is important in communication in the workplace, like when, why, and how to say no, for example.
We’ll do less with more, and then have meetings about it.
If you’re able to do things end to end and actually understand how things work people will think you’re a witch.
Here's a counterintuitive idea you won't hear often: reading on its own is futile. You'll eventually forget almost all of what you read or even learn if it's not reinforced. In fact, knowledge that isn't being actively recalled or applied can decay shockingly quickly.
The only way to combat this is to have a system. A simple and fairly informal system might be Post-Its and notes in a book that you review sporadically, whenever you happen to remember to do it. A more sophisticated system might leverage spaced repetition, practice problems, and tools like e.g. Anki and Obsidian/Roam Research.
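A minimal sketch of what the "more sophisticated" end of such a system could look like: a toy version of the SM-2-style interval update that tools like Anki build on. The function name and the constants here are illustrative assumptions, not Anki's actual implementation.

```python
def next_interval(interval_days: float, ease: float, quality: int):
    """Toy SM-2-style spaced-repetition update.

    quality: 0-5 self-rating of how well you recalled the card.
    Returns (new_interval_days, new_ease)."""
    if quality < 3:
        # Failed recall: review again tomorrow and make the card "harder".
        return 1.0, max(1.3, ease - 0.2)
    # Successful recall: nudge the ease and grow the interval by it.
    ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    return interval_days * ease, ease

# A card you keep nailing gets reviewed less and less often.
interval, ease = 1.0, 2.5
for _ in range(4):
    interval, ease = next_interval(interval, ease, quality=5)
```

The exact formula matters far less than the feedback loop it encodes: things you recall easily get pushed further out, and things you fumble come back tomorrow.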
As an example, let's say you didn't want to forget how to differentiate (most) functions. This might sound ridiculous from where you are now, but keep in mind this is about thinking longer term. So how would you achieve this? Well, you might have some flash cards with concepts and some graphics, and then some more with practice problems. How many problems would you need to do a year to recall differentiation properly? Maybe 50? Add flash cards.
Now differentiation is a poor example insofar as you might never find practical utility for it across the entirety of a software engineering career and you don't necessarily even need to remember how to do it manually (e.g. just use Wolfram Alpha). It can suffice to spend 5-10 minutes a year making sure you recall the most critical ideas.
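For the differentiation example specifically, even the "don't memorize it, just check it" workflow is tiny; here's a sketch using a central-difference approximation, which is enough to verify a flash-card answer numerically without reaching for Wolfram Alpha. (The helper name and tolerance are my own choices for illustration.)

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Checking the flash-card claim d/dx x^3 = 3x^2 at x = 2: expect about 12.
approx = derivative(lambda x: x ** 3, 2.0)
```

This is the broader point in miniature: remember the high-level idea (what a derivative is, how to sanity-check one) and let a five-line tool carry the mechanical part.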
Broaden out and think about what could be useful for you. Mental models, data structures/algorithms, systems design, etc. Consider what would be needed to retain that knowledge and then extend your system to make it happen. Focus on ideas that are disproportionately useful (or could be) but that you encounter infrequently (so e.g. perhaps you might include 'distributed systems consensus' and not 'git fundamentals'). For most concepts it suffices to have a good high level understanding and assume that you can deep dive and relearn the rest, should the need arise.
You'll be amazed at the surface area you can cover spending even 5 hours a week doing this. It's one of the easiest ways to remain an effective and flexible thinker.
Unless functional programming takes over the world, in which case a lot of us are screwed.
This will never go out of style. All these new frameworks are built on JavaScript or similar technologies, and knowing the underlying technology will always pay dividends.
Learning a framework-du-jour will only make you proficient in that framework.
COBOL: No matter what, where, or how, the airlines and banks will still be running on COBOL.