But why do companies keep trying to push new languages to mass adoption?
Isn't it like searching for a holy grail that, in the end, only makes our lives as programmers more and more stressful?
Instead of delivering value and focusing on the task and the product, we spend half our time just learning yet another new super-duper technology, which looks just like the previous one plus some sugar.
How many languages has this happened with, and how often? Go, Rust, Kotlin, Swift, Julia, and TypeScript form a generation of newer programming languages, and they're between roughly 7 (Swift) and 12 (Go) years old. Each of them offers something new in comparison to its predecessors. Almost every other commonly used language is from the 80s or 90s.
There are other, more niche languages but they fall under your point about research, experimentation, doing stuff for fun etc. Just because some language pops up in blog posts or on HN every once in a while, that doesn't mean that it has wide enough adoption.
Is any of this really asking that much? I don't think that learning a new language once every few years should be all that challenging, especially for an experienced developer.
Big companies / orgs have made new languages like Swift, Kotlin, TypeScript, Elm, Rust, Zig, and Elixir because their predecessors (Objective-C, Java, JavaScript, also JavaScript, C++, C, Erlang) have big flaws.
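To make "big flaws" concrete, here's a minimal sketch (my own illustration, not from the thread) of the kind of bug TypeScript catches at compile time that plain JavaScript only surfaces at runtime:

```typescript
// The shape of data the function expects.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name.toUpperCase()}!`;
}

greet({ id: 1, name: "Ada" }); // OK

// greet({ id: 2 });
// TypeScript rejects this call at compile time:
//   Property 'name' is missing in type '{ id: number; }'.
// Plain JavaScript accepts the equivalent call and throws at runtime:
//   TypeError: Cannot read properties of undefined (reading 'toUpperCase')
```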
Everyone still creates DSLs because they are more effective at solving certain problems than general-purpose languages.
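As one hedged illustration of that point (the names and API here are invented for the example), an embedded DSL in TypeScript can make query-building code read closer to the problem domain than ad-hoc string concatenation would:

```typescript
// A tiny hypothetical query-builder DSL. The fluent chain mirrors
// the structure of the query itself.
class Query {
  private parts: string[];

  constructor(table: string) {
    this.parts = [`SELECT * FROM ${table}`];
  }

  where(condition: string): this {
    this.parts.push(`WHERE ${condition}`);
    return this;
  }

  orderBy(column: string): this {
    this.parts.push(`ORDER BY ${column}`);
    return this;
  }

  toSQL(): string {
    return this.parts.join(" ");
  }
}

const sql = new Query("users").where("age > 18").orderBy("name").toSQL();
console.log(sql); // SELECT * FROM users WHERE age > 18 ORDER BY name
```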
If you don’t feel the pain of those problems enough to understand why the improvements matter, then you can happily ignore the new language hype.
I can’t comment about the JS ecosystem, but every new language (~half dozen) I’ve learned has given me new perspectives on solving problems, and sometimes the language itself serves as a better tool I reach for frequently. This may be somewhat biased by the fact that the problems I’m most interested in are typically algorithmic/mathematical in nature, and YMMV for different classes of problems.
My guess for the general case is that they're created to work around quirks in other languages. But it seems to me that overcoming those quirks is more cost-effective than creating an entirely new language with new quirks that only you know about.
Developing a new language is a significant investment, though, so it's natural for companies that do so to want to commercialize it and get it used by as many people as possible.
> Instead of delivering value and focusing on the task and the product, we spend half our time just learning yet another new super-duper technology.
You shouldn't, in the vast majority of cases. You should choose the language that is best suited for the task at hand and use that. Chasing after the new & shiny is tempting (it's fun!) but it rarely actually makes engineering or economic sense.
People used to program everyday business apps in arduous things like assembly language, C++, etc.
But the search has staggered in various directions and taken missteps half the time due to historical accidents and corporate games. Look at the current primitive and fragmented state of GPU languages for a glaring example.
If you have too many languages, it's hard to reuse code and fewer people know how things work.
If you have only a few languages, things can get boring and innovation can suffer.
> we spend half our time just learning yet another new super-duper technology, which looks just like the previous one plus some sugar.
If you don't enjoy it or it doesn't provide any value... Then just don't do it.
The best engineers I've seen use tools they know very well, and rarely adopt new fashionable technologies.
IIRC, library designers in Rust have at least demonstrated some effort to review naming conventions from other languages and make informed choices, either adopting the common ones or improving on them.