There are a few trends that are tiptoeing on the edges here:
- tools for thought / creative tools (Muse, Nodes.io, Mem, etc)
- end-user programming (people trying everything from visual PL's to simpler / more constrained programming interfaces)
- local-first software (https://www.inkandswitch.com/local-first/)
- web3 - I'm biased and a bit skeptical of the fervor, but it matches some of the ethos / philosophy
- more customizable / hackable personal computing hardware (https://frame.work/)
- spatial + tactile computing so the body can participate in the thinking process (instead of only the 'head'): https://dynamicland.org/
- explorables, games / interactive viz to quickly transfer rich context: https://explorabl.es/
- video games, often indie ones, are exploring deep ideas: see Zachtronics games, Jonathan Blow's games, some of Annapurna Interactive funded games, etc. So many!
- open data: better end-user data tools so anyone can understand the systems around them
What would once be a metal machine onto which you load executable code has become a thing that you can run virtually. Containers, serverless lambdas, dynamic language runtimes. Everything becomes faster, more fungible (a zeitgeist word but I mean it in the old way) and easier to work with.
We’ve known for a long time that software will eat the world — the comings and goings of everyday life, once they are expressed as code, become infinitely easier to patch and improve using only a keyboard and display.
What’s new and ongoing is that software is eating itself. Another comment here expresses this as code becoming a data structure but that’s too pure for me. The practical improvements we’ll see don’t need to go that far and won’t be as zealous but we’ll reap the same rewards.
I’m personally looking forward to the next generations of infrastructure as code, where all the useful parts are factored out of the current monolithic ecosystems and become as composable as actual code is. It feels like, to draw an analogy with version control systems, we are at that 2005 boundary of CVS to SVN, and git is still around the corner.
Everything listed here is stuff at the high end of the hype cycle. Everyone is already looking at it. The next big thing is going to be something nobody is looking at. If people are already looking, it would have already happened.
Climate change and related issues are driving big changes that large corporates are already having to address regardless of impending legislation.
Energy efficiency, waste management, sustainable computing, carbon markets, regulatory compliance and reporting (ISO50001), smart grids, EV logistics... there are so many areas.
Anything you choose to focus on next year will be immensely valuable and important when demand suddenly picks up before the end of the decade. You can build on your existing technical foundations.
Code is like a hash map or an array: its text representation is merely a human-readable interface to it. I predict the capabilities of IDEs will expand, and to make their state persistent they will begin to store IDE-specific files in your source code directory. Eventually this will morph into the source code itself being stored in a binary format and opened/searched by editors.
The programming language Unison is a pioneer in this domain, but it is too far ahead of its time. The transition to code-as-data will be gradual.
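To make the "text is just an interface" point concrete, here is a minimal Python sketch (all names invented for illustration): it treats the parsed AST, not the text, as the canonical program, so two differently formatted sources map to the same structure, which can then be identified by a content hash, roughly the way Unison addresses definitions.

```python
import ast
import hashlib

def canonical_form(source: str) -> str:
    # Parse the text into an AST and dump it back out:
    # the AST, not the text, is the "real" program.
    return ast.dump(ast.parse(source))

a = "def f(x):  return x+1"
b = "def f(x):\n    return x + 1"   # same program, different text

# Both texts map to the identical data structure...
assert canonical_form(a) == canonical_form(b)

# ...so the program can be addressed by a hash of that structure,
# independent of whitespace or formatting.
digest = hashlib.sha256(canonical_form(a).encode()).hexdigest()
```

An editor built on this idea could render the stored structure in whatever textual (or visual) style each reader prefers.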
If it does, it could become huge. Not because of TypeScript as a native language. Not because it can create binaries. Not because it has an audited standard library or any other good reason.
But because it includes a simple but flexible way to prevent itself from doing things the author didn't want it to.
If we are extremely lucky other platforms will pick this up.
One small confirmation of this, to my mind, is the increase in companies adopting business models that acknowledge it: "We will host it for you for a fee, but we fully support self-hosting too," à la GitLab and Ghost (https://ghost.org/docs/hosting/)
If you're thinking of wearables, I personally don't think there's anything interesting there until someone can work out a decent high-bandwidth way of communicating with your device. Gimme something like subvocalisation that doesn't require me to wear something stupid or talk out loud.
Personal bias talking, but I'd expect to see a modest uptick in Nix[OS] interest
Kubernetes will transition from new and exciting to a mature platform. Expect fewer big new features and more minor improvements
- pots that tell you when they are hot.
- bikes that say their tires are flat.
- weights that track how many reps you did.
- shoes that tell you: stop landing on your heels so hard.
- pillboxes that shove pills down your throat.
- pillboxes that shove politics down your throat.
- chargers that tell you their pronouns are he/him.
- coffee mugs that tell you the weather.
- pants that say you've worn them too long.
- windshields that tell you to stop texting.
- food containers that tell you maybe you shouldn't eat that leftover casserole.
IoT is gonna be the next big thing - get on the hype train, 'cause this one's for real. Don't be left behind still writing SQL.
If we don't choose to go there, physics will teach us the hard way (it has already begun).
Basically, any region or country that can do the same with less hardware and less energy will have a big advantage over the others in the upcoming years.
Serverless. Current clouds are a half-step because they partially abstract away hardware. Serverless is the ultimate cloud because it abstracts away all hardware.
Self-hosted open-source alternatives to public clouds. Public cloud providers use proprietary software to provide and manage cloud services. However, open-source alternatives will eventually appear and will only require a pool of machines (virtual or not) to operate in order to provide a set of cloud services - computation, storage, identity management, load balancing, etc. This will lead to decentralization and proliferation of private self-managed clouds.
2032? Hard to say, but it is quite possible that software and the web will actually look quite similar to what we have now. I found the concepts in this presentation quite intriguing [1]. Some stuff like airplanes haven't changed THAT much in the last 50 years, because they are good enough for their purpose.
[1]: https://idlewords.com/talks/web_design_first_100_years.htm
This is likely to be a business change as much as a technological one, but it’s definitely going to have an effect.
I’m hoping for a small revolution in 3rd party libs and a locking down on the supply chain as a start.
Capability Based Security - I keep hoping this will be the year, as we're going to be in cybersecurity drama land until it gets adopted.
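A toy Python sketch of the object-capability idea behind this hope (class and function names are hypothetical): authority is carried by references you are explicitly handed, not by ambient permissions, so code you call simply cannot do things you didn't grant it.

```python
class ReadCap:
    """Read-only capability for one store: holding the object IS the permission."""
    def __init__(self, store):
        self._store = store

    def get(self, key):
        return self._store[key]

def report(cap):
    # This function can read via the capability it was handed,
    # but holds no reference that would let it write to the store
    # or touch anything else on the system.
    return cap.get("balance")

account = {"balance": 42}
print(report(ReadCap(account)))  # 42
```

Real capability systems (seL4, E, WASI's preopened directories) apply the same shape at the OS or runtime level rather than by library convention.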
Rich text source - Why not include photos, and other stuff in your source code?
Tools that take source code, compile it into an abstract syntax tree, and let you tweak that tree, then spit back out refactored source code, possibly in a different language. Maybe it's possible to go all the way to LLVM and back to Pascal?
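A small sketch of that round trip using Python's standard `ast` module (the rewrite rule is a toy, invented for illustration): parse source into a tree, transform the tree, and unparse it back to refactored source.

```python
import ast

class DropAddZero(ast.NodeTransformer):
    """Toy refactoring: rewrite `a + 0` to just `a`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite children first
        if (isinstance(node.op, ast.Add)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 0):
            return node.left
        return node

src = "total = price + 0"
tree = ast.parse(src)                     # source -> AST
new_tree = DropAddZero().visit(tree)      # tweak the tree
new_tree = ast.fix_missing_locations(new_tree)
print(ast.unparse(new_tree))              # AST -> source: total = price
```

Going all the way down to LLVM IR and back up to a different language is much harder, since high-level intent (types, loops, names) is lost on the way down.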
I would hope that someone comes up with a toolkit that puts a UI on the user's machine directly in touch with server code on the back end, without HTTP or any of that kind of cruft.
I bet that:
autonomous cars will still suck.
CPU power draw will just get lower and lower, but nothing fancy except maybe weird form-factor concepts.
the web (browsers) will still introduce some weird, privacy-unfriendly stuff.
FP-ish languages aren't gonna be mainstream; probably nothing cool enough for purists and good enough for the mainstream will appear.
decreasing energy usage seems to be irrelevant, because people will just use that energy some other way anyway.
quantum computers will still just be a fancy thing in the lab.
__________________
Or I've found one that I hope will be big: improvements in air purifier cost and efficiency! Shit's so expensive, yet so important.
I was impressed by it being a "Framework IDE." It contains hundreds of visual tools and workflows in which actual source code files are a small component you drag and drop into widgets. Each widget is a visual representation of a class, and its source code is irrelevant.
Some of these tools are visual state machines, reactions to events, version control, collaboration. Most of us won’t like that change, as we’d lose a lot of control to opinionated rigid tools.
Maybe Xcode and Android Studio will evolve into that for the next generation of developers.
It's much bigger than games.
There are some frameworks just now hitting their stride that make it 10x easier to build virtual worlds. Unreal Engine 5 in particular has been interesting to me. Even today we are seeing people who work full-time in VR (mostly lower-wage employees in low-cost-of-living countries). But I've also met people who are self-employed in VR, making avatars for people in VRChat and that sort of thing.
It's harder now with covid, but if you have a chance to try VR somewhere, definitely do; it helps if you can experience it first-hand.
* Better GPU APIs on web and native with WebGPU.
* Use of compute shaders to interface with trained NN models or to "replace" CSS.
* Easy reactive programming using immediate-mode UI powered by the GPU instead of the CPU.
* Data-driven design à la ECS, where your ECS lives in the cloud.
Not just for 3D AR/VR but also business/productivity apps in 2D.
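For the ECS part, a minimal sketch of the pattern in Python (all names invented for illustration): entities are bare ids, components are plain data tables keyed by entity, and systems are functions that join those tables. Hosting the tables in the cloud would be a storage detail layered on the same shape.

```python
# Component tables: entity id -> component data.
positions = {}   # entity -> (x, y)
velocities = {}  # entity -> (dx, dy)

def spawn(eid, pos, vel):
    positions[eid] = pos
    velocities[eid] = vel

def movement_system(dt):
    # A system operates on every entity that has BOTH components.
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

spawn(1, (0.0, 0.0), (1.0, 2.0))
movement_system(0.5)
print(positions[1])  # (0.5, 1.0)
```

The appeal for GPUs and the cloud is the same: component tables are flat arrays of data, so systems parallelize and replicate much more naturally than object graphs do.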
Heterogeneous computing is going to be the norm in everything. Since we’ve trained a generation of programmers to be hardware unaware, there is going to be increasing need for people who can understand all the concepts of how to tie all these different devices together to keep this huge group of people productive. I would also lump network devices into this category as the trend to disaggregate accelerates and lower latency access protocols/methods are needed.
Embedded SW is going to be more important as devices get more powerful. If we are truly to see AR devices I think this is definitely the case. The difference between FW/SW will continue to blur. Will also need people who understand HW to best optimize for power consumption.
Power consumption aware computing and profiling. As we hit physical limits of power density in data centers, we will need good ways to profile power consumption of code and power based optimization of code. This is obviously a thing already in mobile devices. But expect power aware computing to become more and more prevalent everywhere over time.
IMO, VR will skyrocket when:
We have a portable headset that is:
Affordable
4k resolution per eye
Has decent mixed reality capabilities (e.g. see the Varjo XR 3 as an example on YouTube)
Has a good omnidirectional treadmill that is affordable (i.e. walking, running and moving should be possible as they are normally). I find the current treadmills I've seen (e.g. KatVR) a good step in that direction, but still too problematic.
Killer apps are (from concrete to abstract):
Having multiple monitors in your room. Now you can travel with 10 monitors in your backpack.
90% real simulations (try Golf+, Eleven Table Tennis and Thrill of the Fight — those games are already there).
Mark Zuckerberg's vision of the Metaverse, which I interpret as being able to mix and match hanging out and playing games together in VR (note: companies other than Meta could work on this too). Reasoning: gaming together is simply a specialized form of hanging out.
Software-izing hardware interfaces. The more stuff we put in VR/AR (especially force-feedback mechanisms), the better we can mimic the feel of hardware interfaces. In certain cases it will feel real enough, and then you can import the entire hardware device into software/VR/AR, and suddenly the production of such a device becomes much cheaper. For example: a realistic electronic keyboard in VR. I don't know how to solve the force feedback in this case, but I do know there is other hardware for which this must be easier.
Source: I bought an Oculus Quest 2 a month ago to play around with VR while constantly asking what it is lacking and what will make it awesome.
Imagine a personal assistant that you could have a full on conversation with. I don't believe this requires full-blown AGI to achieve, but improvements in language and human understanding by ML might get us there.
Think something like the Computer from Star Trek NG.
(2) Once Microsoft works enough of the bugs out, which I'm guessing will take another 2-3 years, I suspect Windows on ARM will be the Next Big Thing, or at least one of the Big Next-ish Things. ARM supplanting Intel will be a general Thing.
Maybe the next big thing is doing more with less. Rust and Zig come to mind.
If all of reality can be piped into Unity (structure of the static environment, motion of sensors, dynamic object poses), you can build Reality Apps the same way you build video games.
You can give people spidey-senses or "the force". Everything will have a UX.
There are lots of reasons why robots aren’t currently picking up trash on every beach in America 24 hours a day, or cooking for you whenever you want.
One of them is the inability to identify objects the way humans do. But it’s getting better. Slowly. There’s no fundamental wall that’s been hit yet.
Think about that for a moment. Let it simmer.
Think about cameras that can identify humans and structures. Roombas that can identify cats and walls before they hit them.
Think about all the work that can suddenly be automated because object representation hits an inflection point.
It’s going to change humanity. The story of our species. It’s going to be a perennial wonder.
But to me it looks like some things to keep on the radar: quantum computers, machine learning and more applications of it, possibly blockchain/smart contracts (I'm very unsure about this one).
It's so obvious that visual media are just going to continue to dominate time spent on screens. It seems odd that lots of the pixels we see today are manually put there by an artist or a sensor.
Not: Web3, blockchain, or cryptocurrency
I mean actual online communications not requiring a server, account, or some Ponzi scheme.
The big investment in your career will be ideas that stand the test of time. Relational Databases, data structures, programming paradigms, math, etc. Even in machine learning, just knowing core statistics and regression is more important than HottestNewArchitecture :TM:.
Learn the ideas underneath these ideas, and you'll really grow and be able to better evaluate / learn new ideas.
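As a nod to how far the fundamentals get you: here is a dependency-free ordinary-least-squares fit in Python, straight from the normal equations, the core of regression regardless of which HottestNewArchitecture wraps it.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b via the normal equations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance(x, y) / variance(x); intercept from the means.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lies exactly on y = 2x + 1
print(a, b)
```

The same two lines of algebra underlie linear layers, trend estimation, and a surprising share of production "ML".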
Privacy and accessibility are extremely difficult challenges. Fake news, disinformation, echo chambers and cultural bubbles are growing problems. We basically need to rebuild plenty of things that are part of our daily life, and this will divert significant resources from the "next big things". More likely, digital society 2.0 is the next big thing that will replace Meta and Google.
In short, everyone will participate in the wholesale markets (on the retail side) through automated bids. You put in how much you want to pay and as electricity becomes more expensive, eventually you're curtailed. With uncertainty in a fully renewable fleet (assuming nukes don't get built in mass), the demand will have to match whatever supply is available. This is starting in baby steps today, but will likely extend in other areas as well. This will take a lot of software at several levels.
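A toy Python sketch of that bidding mechanism (the function and bid shapes are invented for illustration): each consumer states the maximum price they will pay, and loads bidding below the current wholesale price are curtailed until demand matches supply.

```python
def clear_demand(wholesale_price, bids):
    """bids: consumer -> (max_price_per_kwh, load_kw).
    Loads bidding below the wholesale price are curtailed to zero."""
    served = {}
    for consumer, (max_price, load_kw) in bids.items():
        served[consumer] = load_kw if max_price >= wholesale_price else 0.0
    return served

bids = {
    "ev_charger": (0.10, 7.0),  # cheap, flexible load: charge only when power is cheap
    "fridge":     (0.50, 0.2),  # essential load: willing to pay a high price
}
print(clear_demand(0.30, bids))  # {'ev_charger': 0.0, 'fridge': 0.2}
```

The real systems layer forecasting, per-device automation, and settlement on top, but the core is this price-responsive curtailment loop running continuously.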
2.) Not something I'd expect to happen in the next decade, but I think automatic and frequent voting will eventually happen, if security ever improves enough.
As of now it is still a long way off, and there is a relevant xkcd about a developer horrified at exposing voting to modern software development. The "demarchists" in the "Revelation Space" series by Alastair Reynolds do this: they are constantly polled and vote on issues, and government is largely based on that. I can't see that happening in the US as a true direct democracy rather than a representative one, but it could be used as a tool to help those in power make decisions. We have smartphones that could do this today.
3.) I'd really like to move away from Windows/Mac/Unix at some point in the next 50 years for something far more simple.
Tablets don't seem very productive, and the average desktop OS is an absolute cluster f. Honestly, I'd love to have a very simple computer with a fast processor and lots of RAM that is somehow similar in nature to a C64, in that it's fairly simple to just boot right in, give it some commands to move something around the screen, and build a game in half an hour. I want more, though: something similar to Wolfram Mathematica built in, and the internet, but no web browser black hole [you'd have to fix the internet first :) ].

In short, throw out the unmaintainable mess that exists today in favor of a better, more holistic design. Maybe the Smalltalk-80 method, but Forth, Rebol, and Tcl stick out more to me. Maybe something with the power of a modern computer but far greater simplicity in mind. No automatic spyware pushed by Microsoft or incomprehensible bloat.

There was a group looking into this sort of thing that reached out to me on HN once, but I've been unable to find their page. I believe they used sea vessels as a metaphor for computing, where a "dinghy" is what you would ultimately want. I know things are built on the shoulders of giants, but computing seems to have lost its way in some respects. Why is it so hard to do GUI stuff in 2021? There are 1000 frameworks and they're all insanely complicated.
Projects like Zig I think are the first signs that we are getting fed up with what we have.