It might seem stodgy to pick Java alongside a relational database for a brand-new application over the latest JS hotness plus NoSQL. But we know Java will still be around in 20 years, we know the language and architecture can do what we plan and can expand and scale as the product grows, and we know what deployment, integration, maintenance, etc. look like.
What young developers don't see is the long, long, long line of failed technologies that were hot, then faded into obscurity. It's very hard to bet on winners.
The thing that many younger devs don't (yet) get is that tech works in cycles. Today's hotness is yesterday's tried-and-true. As an example, Java (J2SE) came out in 1998, and my guess - based on open positions on e.g. Stack Overflow[1] - is that you'll still be able to write Java for decades.
So for an old guy to adopt new tech:
1. I'd want obvious benefits that outweigh the costs of the new approach. As an example, I'm not convinced FE frameworks are the best approach for CRUD apps. If you're building Trello and need crazy interactivity, sure. But you incur a bunch of cost vs. say Django / Rails / Laravel with intercooler or Stimulus sprinkled into interactive pages (there's a small sketch after this list). And what's the huge cost for? Skipping a few page refreshes?
2. I'd want the tech to be mature enough that I feel good it's going to be around and is therefore worth investing in. As you get older, your time becomes more valuable because you tend to take on other obligations (kids, a spouse). Hacking on new tech late at night or all weekend is less palatable, or less possible, than it used to be.
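To make the CRUD point in (1) concrete, here's a minimal sketch of the plain server-rendered style, translated to Java/Spring MVC since that's the stack this thread keeps coming back to (the parent names Django/Rails/Laravel; the /tasks controller below is invented for the example):

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;

    // A plain server-rendered CRUD page: one GET that renders a template,
    // one POST that follows the classic POST-redirect-GET pattern.
    @Controller
    @RequestMapping("/tasks")
    public class TaskController {

        // In-memory store to keep the sketch self-contained; a real app would
        // inject a repository backed by the relational database instead.
        private final List<String> tasks = new CopyOnWriteArrayList<>();

        @GetMapping
        public String list(Model model) {
            model.addAttribute("tasks", tasks);
            return "tasks/list"; // rendered server-side, e.g. by a Thymeleaf template
        }

        @PostMapping
        public String create(@RequestParam String title) {
            tasks.add(title);
            return "redirect:/tasks"; // the whole "cost": one page refresh
        }
    }

Anything on a page that genuinely needs to update without a refresh can then get a small sprinkle of JS on just that page, which is the intercooler/Stimulus approach described above.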
Curious to hear what others think though.
1- As of writing there are 425 jobs that mention Java (https://stackoverflow.com/jobs?q=java). Surprisingly there are only 341 that mention React (https://stackoverflow.com/jobs?q=react).
Proponents argue that this means fewer bugs and less time spent transferring information to another developer, but the overall cost to learn and implement these tools exceeds what they save. They also tend to be less flexible, which drives the cost up further.
Older devs are much less likely to make mistakes, whether it's Java or SQL, so the new technologies designed to reduce mistakes by tripling the code size are less appealing.
It's different from, say, C vs. assembly, or Java vs. C, where there were substantial gains.
(1) Expertise/investment in existing tech platforms: why get into something where you'll be one of many with one year of experience when you're already on a platform where you've got 20?
(2) Having seen hyped products fizzle out quickly. Why get involved in something whose industrial demand may never materialize when you're already invested in an established platform that, even if it were abandoned for all new projects tomorrow, would still have demand for expertise driven by legacy software for longer than the remainder of your working career?
(3) Fatigue with the whole cycle of learning the basics of yet another stack rather than deepening knowledge of established stacks, the application domain, engineering/analytical/process mastery that isn't tech-stack specific, etc.
I'm a 40+ dev who loves learning new things, including tech stacks, but I absolutely don't view "learning new tech stacks" (especially really new ones rather than established but new-to-me ones) as anywhere close to the most career-useful use of time. It's a hobby that sometimes has nice career side-benefits.
OTOH, for a new dev some of this isn't just missing but positively inverted: a new tech stack is an opportunity. If it takes off, you'll still have to compete with people with more general experience, but you won't have to compete with people who have a decade or more of stack-specific experience on you, whereas with an established stack there is no way to avoid that.
The HN crowd is the other extreme: being mostly young, they have mostly only been exposed to what has been promoted recently.
It certainly seems that way when tools like Electron are popular because they let developers stick with the JavaScript/HTML they already know instead of learning native desktop APIs, or NoSQL stores where they can just insert JSON objects rather than learn SQL. It seems rare to meet devs under 30 who learned tried-and-proven languages like C, C++, or Perl, or who know how to use a debugger outside an IDE.
What I've observed in myself and others is that older people become dramatically wiser about wasted motion, effort, and time. They acquire a greater respect for the value of energy and time, not least because at 40 most things are more taxing physically than at 20. Your brain will tell you flat out: no, I don't want to burn my valuable time on that, I don't need it; I want to do this other, more important thing with my time instead, and I have to choose one or the other. With an experienced overview of life you know that time investments always mean choosing between competing things; as you get older you more fully realize how quickly life rolls by and that you will not get to do everything, so these choices become quite important. By 50 it's generally going to be true that you have more good days behind you than ahead of you; time becomes very important.
I think older devs seek to avoid unnecessarily reinventing wheels whenever they can. They seek to avoid wasting time learning this technology when that technology they're already a master of will do just as well (and faster, since they already know it). And as others have mentioned, they will inevitably have been burned numerous times watching tech come and go, having sunk time and effort into learning a thing that vanishes. You can't afford time hits as trivially at 40 as you (seemingly) could at 20. Burning or wasting a year at 20 doesn't feel nearly as detrimental as doing it at 40.
I think all of us who have been in the industry for a while have worked on "dead end" projects where many things were just unfixable because the core libraries, languages, and frameworks were simply obsolete. These are, by far, the worst projects to work on.
Java is the perfect counterexample. It's kinda verbose and has some annoying shortcomings, but all the tooling is top notch. It's one of the fastest languages out there and has the best debugging/build/monitoring/IDE tools of any language I've used. And it's rock-solid stable; we have plenty of servers that run some hacky old stuff for years straight without a restart.
And the best part about Java is that I can find a library to do anything. I have never once run into a technical roadblock in Java because I needed something unusual.
Most of these attributes are lacking in newer languages. Rust has crappy IDE tools and debuggability, no good ORM, not many supported target architectures, and no run-time code generation. Golang has terrible package management, no generics, and a basic garbage collector that's fast except when it isn't. Node/JS threading support is a joke, the standard library is so small that most projects pull in over 100 megabytes of dependencies, and it's still a slow memory hog compared to Java/Go/Rust/C#.
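For what it's worth, the run-time code generation Java gets for free is the kind of thing ORMs and DI containers lean on. Here's a minimal sketch using java.lang.reflect.Proxy from the standard library (the Greeter interface is invented for the demo):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public class ProxyDemo {
        // Invented interface, just to have something to proxy.
        interface Greeter {
            String greet(String name);
        }

        public static void main(String[] args) {
            // The handler receives every call made on the generated proxy object.
            InvocationHandler handler = (proxy, method, methodArgs) ->
                    method.getName() + "(" + methodArgs[0] + ") intercepted at run time";

            // Proxy.newProxyInstance builds a brand new class at run time that
            // implements Greeter and delegates every call to the handler above.
            Greeter greeter = (Greeter) Proxy.newProxyInstance(
                    ProxyDemo.class.getClassLoader(),
                    new Class<?>[] { Greeter.class },
                    handler);

            System.out.println(greeter.greet("world"));
            // prints: greet(world) intercepted at run time
        }
    }

Frameworks generate classes like this (or full bytecode via libraries such as CGLIB or Byte Buddy) behind the scenes, which is much harder to replicate in an ahead-of-time-compiled language.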
Java doesn't have anything "terrible" about it compared to the glaring shortcomings in these newer languages. There have just been enough decades of use that everything has been smoothed over until it's at least decent.
I bounced between Perl, PHP, and C# in my early days and I'm sticking with Java now. It's the easiest to work with. I'll pick up a new language when it's as good as Java, so maybe Go or Rust in another 5 years.
Wary of getting onto the new–new hotness bandwagon just because it's the new–new thing, and weary of being pushed into the new–new thing because it's the new–new thing rather than necessarily the best solution for the problem at hand.
Increasing my productivity is irrelevant if it decreases usability or performance of whatever it is I’m building. If it isn’t serviceable, or maintainable, I won’t consider it. If it’s a proprietary solution that depends on the company surviving, I won’t consider it.
For me it's a tradeoff of risks. Will the benefit of rewriting everything, again, in this new language and framework pay off in reduced operational or service costs or improved performance? If it won't, then investing money and time chasing the new–new thing just seems like a waste.
You just have to take things on a case by case basis, and especially understand that you have to think about your whole team, and also recruiting new people.
On the one hand, the hype cycle is annoying. First Node and NoSQL were supposed to solve all problems; now it's GraphQL, or Go, or whatever. All the while, perfectly reasonable tech like SQL or REST is barely considered anymore, even in cases where it's more suitable.
OTOH, some of the "older" people are the same when it comes to newer technologies. Things can't stay the same forever, not every change is a temporary fad (the internet wasn't, nor were mobile phones), and the same people who now refuse to look at things like Kotlin or Scala should realise that Java was new at some point too, and plenty of languages that came before it have now all but died out.
In short, people should come up with actual arguments for/against some tech instead of going purely by novelty (whether as a positive or a negative aspect).
The more money spent hyping a thing, the sooner it vanishes. About the only exception is Java, which was hyped from here to Saturn. Everyone jumped on it, at the time, to get out from under Microsoft and other lock-ins.
Proprietary lock-in is the worst thing ever.
When you understand that, you will be an expert.
Younger people see existing solutions as OLD and gravitate towards HOT new solutions. This creates change for change's sake, at the cost of efficiency and usability.
Consumers do not like change unless it brings new functionality or improves usability.
As an older programmer and ops person, I see the trends and cringe at the loss of efficiency. Software running on resources that dwarf what we had in the 90s delivers worse perceived performance.
But they are probably more resistant to the current hype train, and they have experienced the various pitfalls of hype-driven marketing: "fake it till you make it".
E.g., with the current safety hype around Rust, they probably know more about the three fundamental safeties Rust proclaims to support (note: it does not), or that Java proclaimed memory safety for a long time and only last year really started to remove its memory-unsafe API.
The maturity argument is relevant to managers only, not technicians. Older technicians love improved approaches to building and maintaining software, be they more abstract, more elegant, shorter, or just better. Older technicians love the new CI/deploy pipelines, simpler OO or FP paradigms, better garbage collectors, better type systems, and better concurrency approaches.
And maturity does not say that much. POSIX is pretty mature, I would say, but has completely broken string (Unicode) and concurrency support. Blocking IO doesn't help either. GCC and glibc are still a horrible mess. Linux/Windows/FreeBSD too.
When I started out, I was all trained up in cutting-edge tech and walked into a job that did not use it. I learned why.
1) Bleeding-edge tech is just that: something will bleed. 2) Skill debt. 3) Standards.
Those are the main three.
1) Bleeding edge - all software has bugs, and anything new needs time to mature. When you're running a business and want your nth-nine uptime, stability, and robustness - something that just works and does the job at hand - that's exactly what the more mature, tried-and-trusted option offers. Sure, the new thing may be that rarity that just works from day one, but the odds are terrible; most often it will not.
2) New skills will be needed, and paradigm shifts don't just happen: they're costly, and transitions are gradual, almost glacial. You also have legacy stuff that needs support. I've worked at companies doing paradigm shifts; they ended up leaving the existing employees stuck supporting the old system and brought new staff in to play with the new toys. The business knowledge in the old team was ignored, many people left because the new team was, well, arrogant, and in the end the old system lost traction while the new system never worked as well as the old one, thanks to lots of bleeding-edge gotchas you would never expect - like many bugs. But that was the days of 4GLs in the late '80s. We still have COBOL today for many reasons: it just works, does the job at hand, and has proven to do that for a long time. "Trust is earned, never given away" - a Klingon proverb that is true for IT in the business world.
3) Standards - this ties in with the previous two but is equally important. You need standards, and anything new takes time to curate them, in both coding and operation; those nuances take time to perfect and matter if you want to start out right with any tech the business depends on. While operations and some best practices will be common to all, each business has its own nuances, and that plays out in its standards - naming conventions, for instance. I have also seen odd standards that harked back to a legacy code/system, kept to make things consistent across those systems - mostly naming standards, data formats, many things that all add up.
But yes, it's nice to play with new bleeding-edge toys - for many businesses, though, that is not what they want. They prefer toys that have seen every possible accident happen, have all the safety revisions in place, and are now stable without updates every other day.
But for some businesses, being able to run with the cutting edge is their business, or it's the case of a new project in which the risk is more easily offset by the gains - and everybody has their own segue...
It's almost as if being older is more like playing chess, being younger is more like playing checkers?
I don't know where I fit in this, but I don't really care about a new language or tech if it doesn't help me reach my goals. For example, I'm perfectly fine doing most things in Python unless a different language gives me enough of an advantage to reach my goals. A few years ago I'd hammer things out in C, or use every other side project to try to learn some new language. I'd be happy now to do something in Go, for example, provided I can guarantee someone other than myself will maintain it and I need really good concurrency and network code.
I wrote highly complex Rich Internet Applications long before most people, but I kept away from JavaScript for the last few years, mostly because I heard other people complaining about the instability.
React work has come my way and I am having fun with it. Right now it seems to me that the tools are where they should be.
I am all for developer convenience, but I am also for performance, and the fact is that the subjective performance of computers today isn't consistently better than that of computers 40 years ago. The "Internet of Things" won't really happen unless we get cheap chips with high real-world performance and an absolute minimum of bloat.