- fools who can't do simple math or read the fine print ("I have no idea how much 20 instances at $2 per hour, pumping data at 10 Mbit/second, add up to in a month - and OMG, that data isn't free when I already pay for the instance?! Besides, they give me a whopping $100,000 credit - that will last an eternity!") - rough math after this list
- corporate tricksters ("if we don't invest in our own hardware and buy AWS instead, our next-quarter bottom line will look GREAT and I get a nice bonus, and by the time the truth comes out, I'll have jumped ship to the next shop where I pull the same trick")
- people with gaps in basic logic and a total lack of foresight ("I can't afford to buy all the hardware for my small pet startup, so I'll make do with just $200 a month on AWS - not realising that this only works for as long as my startup is unsuccessful and has no users, and once that's no longer the case, I'll be vendor-locked into AWS-based tech solutions with petabytes of data, at $0.05 per GB to download, locked up there, and will bleed money for years").
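To put rough numbers on that first quote, here's a back-of-the-envelope sketch in JavaScript using only the figures quoted in this list (the $0.05/GB egress price comes from the last bullet; real cloud pricing varies by region and tier, so treat this as illustrative, not a quote):

    // 20 instances at $2/hour, each pushing 10 Mbit/s, egress billed at $0.05/GB.
    const instances = 20;
    const hourlyRate = 2;            // $ per instance-hour
    const hoursPerMonth = 730;
    const mbitPerSecond = 10;        // sustained egress per instance

    const computeCost = instances * hourlyRate * hoursPerMonth;              // $29,200
    const gbPerInstance = (mbitPerSecond / 8) * 3600 * hoursPerMonth / 1024; // ~3,200 GB/month each
    const egressCost = instances * gbPerInstance * 0.05;                     // ~$3,200
    console.log(Math.round(computeCost + egressCost)); // ~$32,400/month, so the $100k credit lasts ~3 months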
They should be avoided at all costs except for development purposes, and if you don't know how to or can't afford to do something without clouds, you just don't know how to do it or can't afford it.
In Europe, none of my clients use clouds. They have dedicated setups with reputable providers that work a lot better than cloud-based ones and cost pennies. I also realise that my custom software development business doesn't really work with EU clients: I barely make a profit with them and they can be a real pain. Which probably suggests that the educational level in Europe is a lot higher.
Refactoring can always be done on a running (no-downtime) system at no extra cost (time or money) compared to a rewrite or a downtime-requiring approach.
You can always deliver user value, and "paying down technical debt" can and should be done as part of regular work (corollary of the above: at no extra cost).
We'll never do away with physical keyboards for inputting text (yet I only have one mechanical keyboard I don't even use regularly :).
"AI" is the dotcom bubble (notice how every big company HAS to get in on it, no matter how ridiculous their application is?)... Further, it will simply allow those who apply their power unto others to do so in an even more egregious or deeply-reaching way.
Advertising should be illegal.
Proprietary software is basically always a trap (if it's not harmful or coercive at first, it eventually will be, well after you're locked in).
The web has been ruined by turning it into an operating system (also see "advertising should be illegal"). 99% of the time I just want very lightly-styled text, and some images. I don't need (or want) animated, drop-shadowed buttons.
Graphical OS user experience was basically "solved" 30 years ago and there hasn't been much of anything novel since -- in fact, in terms of usability, most newer OSes are far worse to use than, say, Macintosh System 7 (assuming you like a GUI for your OS). The always-online forced updates of modern OSes exacerbate their crappiness -- constant change and thus cognitive load, disrespectfully changing how things work despite how much effort you spent familiarizing yourself with them.
A lot of good things about the way we wrote websites and native applications back in the early 2000s were babies that got thrown out with the bathwater. That's why we can't seem to do what we could do back then anymore -- at least not without requiring 4x as many people, 3x as much time, and 20x more computing power.
(Maybe more than a few people on HN will agree with this, now that I think of it...)
Why? If a programmer builds something only for themselves, or for a few of their peers, it really doesn't matter. Do as you like. But be aware that a one-off / prototype != the final product.
The commonly held view is that programmers are a small % of the population, thus their skills are rare (valuable), thus if a programmer's time can be saved by wasting some (user) CPU cycles, RAM, etc. (scripting languages, I'm looking at you!), so be it. Optimize only if necessary.
BUT! Ideally, the programming is only done once. If software is successful, it will be used by many users, again & again over a long time.
The time / RAM / storage wasted over the many runs of such software (not to mention bugs), by many users, outweighs any savings in programmer time.
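A quick illustration with made-up but plausible numbers (these are my assumptions, not measurements): suppose skipping optimization saves the developer a week, but costs each user two extra seconds per run.

    const devHoursSaved = 40;        // one developer-week saved by not optimizing
    const extraSecondsPerRun = 2;    // per-run cost paid by every user instead
    const users = 100_000;
    const runsPerUserPerDay = 5;
    const daysPerYear = 365;

    const userHoursWasted =
      (extraSecondsPerRun * users * runsPerUserPerDay * daysPerYear) / 3600;
    console.log(Math.round(userHoursWasted));                  // ~101,389 hours of user time per year
    console.log(Math.round(userHoursWasted / devHoursSaved));  // ~2,535x the developer time saved, once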
In short: fine, kick out a prototype or something duct-taped from inefficient components.
But if it catches on: optimize / re-design / simplify / debug / verify the heck out of it, to the point where no CPU cycle or byte can be taken out without losing the core functionality.
The existing software landscape is too much duct tape: compute-expensive but never-used features, inefficient, RAM-gobbling, bug-ridden crap that should never have been released.
And the fact that the developer has a beefy machine doesn't mean users do.
WASM is as close as we've been since Multics. Genode is my backup plan, should someone manage to force POSIX file access in order to "improve" or "streamline" WASM.
Mandatory code reviews on every merge are a net negative. Too many people waste time on nits and YAGNI "improvements". Actually improving the code in a structural way is too hard, and most reviewers won't spend the effort. It would be better to dedicate time and resources to code audit and improvement on a regular cadence, e.g. pre-release.
A/B testing is pure cargo cult science; it has nowhere near the rigor to actually determine anything about human behavior (look at the replication crisis in real psychology, where they use 10x as much rigor!). You might as well use a magic eight-ball.
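To give a sense of what rigor would actually require, here's Lehr's standard rule-of-thumb sample-size approximation in JavaScript (the baseline and lift values are illustrative assumptions of mine, not from the comment):

    // Lehr's approximation: n per arm ≈ 16 * p * (1 - p) / delta^2
    // for roughly 80% power at alpha = 0.05 (two-sided).
    function samplePerArm(baselineRate, absoluteLift) {
      return Math.ceil((16 * baselineRate * (1 - baselineRate)) / absoluteLift ** 2);
    }

    console.log(samplePerArm(0.05, 0.005)); // ~30,400 users per arm to detect a 0.5pp lift on a 5% baseline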
I'll bite: Using const for all variables in JavaScript is moronic. It's a trend that should have been killed with prejudice in the crib. If you want type safety, use another language. Use let for variables and const for actual constants. Words have meanings. The const statement wasn't created so developers could litter their code with it to show how cool they are.
Imagine you're learning JavaScript at a young age: "Here are three ways to declare variables - var, let and const - but you should only use the most confusing one, which doesn't actually make verbal sense for general use, nor do what you'd think it would do based on its description. Use it anyway. Because reasons."
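A small sketch of the "doesn't do what you'd think based on its description" part: const prevents rebinding, not mutation, so it isn't the type safety or immutability people imagine.

    const settings = { theme: "dark" };
    settings.theme = "light";   // perfectly legal: const freezes the binding, not the value
    // settings = {};           // TypeError: Assignment to constant variable.

    let count = 0;
    count += 1;                 // `let` says "this will change", which is what most variables do

    const MAX_RETRIES = 3;      // an actual constant, where the keyword matches the intent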
I understand it, I run it, and I don't have to deal with a 3rd party's changes, performance problems, or downtime. Also, fewer bugs related to data consistency.
(Modern web stacks are 99% bloat and impediment to human progress.)
This is especially relevant with touch screens, which I wish weren’t so omnipresent. I really don’t want to use a touch screen in a car, for example…
1. Personal names are not generic strings, any more than Dates are numbers or Money is a floating point value. Name, as in a person's given and family name, or whatever, should be a type in every standard library, with behaviors appropriate for the semantics of a Name, language, and country. Yes, there are lots of conflicting international differences. We've managed to handle languages and calendars of significant complexity; we can do so sufficiently well for the things we use to identify ourselves.
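Purely hypothetical sketch of what such a type could look like in JavaScript (no standard library ships this today; the class, its fields, and the locale handling are all my assumptions). The point is that formatting and comparison rules live in one place instead of every app concatenating first + " " + last:

    class PersonName {
      constructor({ given = [], family = [], locale = "und" } = {}) {
        this.given = given;     // zero or more given names
        this.family = family;   // zero or more family names (may be empty -- mononyms exist)
        this.locale = locale;   // drives ordering, e.g. family-name-first locales
      }

      display() {
        const familyFirst = ["ja", "hu", "zh", "ko"].includes(this.locale);
        const parts = familyFirst
          ? [...this.family, ...this.given]
          : [...this.given, ...this.family];
        return parts.join(" ");
      }

      sortKey() {
        // Locale-aware lowercasing, not raw code-point order.
        return [...this.family, ...this.given].join(" ").toLocaleLowerCase(this.locale);
      }
    }

    new PersonName({ given: ["Taro"], family: ["Yamada"], locale: "ja" }).display(); // "Yamada Taro"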
2. "AI" is a marketing term for a kind of automation technology able to use pattern matching to reproduce plausible new versions of the data used to train the model's algorithms. It couches this automation as especially powerful or magical as a way to draw a smokescreen around the real problems and limitations of the technology.
I further believe that someone will figure out an algorithm that runs on classical computers to match the speed of quantum annealing, and thus quantum annealing machines, while currently useful, have a limited shelf life.
Shor's algorithm, which runs on Quantum Computers, requires the use of many repeated cycles of small rotations of qubits in the complex plane of the Bloch sphere. These are analog operations that accumulate phase error. It's not possible to use error correction with these operations, as those techniques necessarily sacrifice information in the complex component plane, leaving the real component "corrected".
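For context (my addition, not the commenter's wording): in the standard circuit for the quantum Fourier transform at the end of Shor's algorithm, the rotations in question are controlled phase gates of the form

    R_k = \begin{pmatrix} 1 & 0 \\ 0 & e^{2\pi i / 2^k} \end{pmatrix}, \qquad k = 1, \dots, m

so on an m-qubit register the finest rotation angle is 2\pi / 2^m, exponentially small in the register size, which is why tiny analog phase errors per gate are the concern here.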
Days I learn things about other people are always welcome!
OK, quick list:
* Class Inheritance is an anti-pattern.
* Python is an anti-pattern.
* The 80/20 of functional programming: use pure functions and acknowledge that everything is a list or an operation on a list (see the sketch after this list).
* Javascript/Typescript will continue to win.
* Erlang/Elixir/BEAM is the better timeline. But unfortunately we're not on it.
* Microservices are a luxury and most people should just vertically scale their monolith
* GraphQL is better than REST when you have multiple clients, codegen or any AI/RAG needs
* Frontend Frameworks are not changing as fast as a backend developer wants you to believe
* Including a FE framework in your application at the beginning is a smart move for anything consumer-facing.
* The Haskell community is their own worst enemy.
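A minimal sketch of that 80/20 point (my example, not from the list above): no shared state, no mutation, just pure functions composed over a list with filter/map/reduce.

    const orders = [
      { id: 1, total: 40, status: "paid" },
      { id: 2, total: 25, status: "refunded" },
      { id: 3, total: 60, status: "paid" },
    ];

    const isPaid = (order) => order.status === "paid"; // pure: same input, same output
    const toTotal = (order) => order.total;
    const sum = (xs) => xs.reduce((acc, x) => acc + x, 0);

    const revenue = sum(orders.filter(isPaid).map(toTotal)); // 100, and `orders` is untouched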
Obviously this is shunned in every company.
It makes more sense for Nvidia to put GPUs in data centers with good cooling, shared between multiple gamers and idle workloads (AI, etc.), instead of having them sit in expensive but unused home desktops most of the day
Nvidia is the only one who has a real shot at this because they're the only ones who can directly allocate GPUs to cloud gaming (unless what's left of AMD wants to get in on the action too). And they're the only ones that Steam specifically partners with for its Cloud Play beta: https://partner.steamgames.com/doc/features/cloudgaming
Stadia failed not because of the technology, but because Google mismanaged it and never understood PC gaming cultures. Nvidia and Steam do, and it's a much, much better product for it.
It eliminates memory bandwidth issues.
It's a nice, elegant alternative architecture. I'm quite surprised nobody actually tried it earlier.
[1] https://github.com/mikewarot/Bitgrid
* I give 50/50 odds I'm a crackpot on this one... I'd really like to know for sure which way that coin toss comes out in the end
We should have kept the machines blind and retained social control between people and families.
Once we allowed machine code to be intelligible, we poured control of services, governments and speech into machines.
Truly free speech in a CPU became synonymous with the concept of a virus. And real free speech has followed: programmed rules on electronic devices are socially silencing people.
The concept of a virus outlawed one of the best uses computers could have provided: maintaining speech globally.
Now we are racing to the bottom, to put the rules of society into a machine. Then lock society out of the machine, with a few holding all the keys to the computing kingdom.
Slavery to machines, as enjoyable as it is, will eventually become nonsensical.
As power becomes centralized, all the new moves will be made by those outside the system.
So I think that CPUs should be running unintelligible code, and LLMs are a baby step towards that.
Using whitespace as scope definition is a war crime.
(same goes for YAML)
If a new technology threatens to eliminate hundreds of thousands of jobs, and the benefits are marginal or largely in favor of the capital class, then we should probably not pursue that technology.
2. Most developers 10x over-complicate solutions and spend 20x too much time implementing them.
3. Don't ever deprecate APIs! An API is a contract that potentially thousands of systems or developers rely on. So find ways to move forward without breaking things. You are a pro, right? So act like one. Linus gets it. Most don't.
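One way "moving forward without breaking things" can look in practice (a hypothetical sketch, all names made up): new capability goes into a new entry point, and the old signature keeps working forever as a thin shim.

    // New, more general entry point...
    function fetchUserV2(id, { includeProfile = false } = {}) {
      return { id, profile: includeProfile ? {} : undefined };
    }

    // ...while the original keeps its contract, delegating to the new one.
    function fetchUser(id) {
      return fetchUserV2(id); // old callers never notice anything changed
    }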
Do not take on tech debt. If you absolutely need to, pay it off ASAP.
- Job titles in tech are out of control. Searching for a job requires me to use like 3-7 searches because everyone is making up titles as they go, and it's just insane. We call ourselves engineers, and it's time we take another page from the book of engineering for job titles. Engineering titles are very pointed, and thus job function is limited by the title. Example: Cloud Infrastructure Engineer [I] would be a generalist cloud role at the jr/entry level, with eventual knowledge of all clouds and their services as you reach [III] status (or [V], depending on company size). Their only function is infrastructure for cloud services. Today, such a title would also require devops knowledge, CI/CD, various services for monitoring, logging, kubernetes, etc. These should obviously be jobs for multiple people, but we just keep letting things get stacked onto a single title. I've even seen ridiculous titles that are very obviously 3 jobs in one, especially when you look at their responsibilities/qualifications. We really need to get a grasp on this. Likely nothing will get fixed without the wholesale unionization of people in tech, unfortunately.
- React, Next, etc. are steaming piles of code. I was going to build a React frontend for a project that was size-constrained. The out-of-the-box build, adding only React and some bare-minimum packages, resulted in well over 4 MB of JS, AND I DIDN'T EVEN HAVE A WORKING ANYTHING YET. Utterly disgusting. Add to that that any dev who does front/backend work seems to worship their framework of choice to the point of using it in places it has absolutely no reason to be. I've seen these developers trying to write shell scripts in JS/TS, using React. The amount of stuff I've seen written in JS that had no right to be is way too high.