1. Video Calls
In video calls, encoding and decoding are actually a significant cost, not just networking. Right now the peak is Zoom's 30 video streams onscreen, but with 1000x CPUs you could have hundreds of high-quality streams with advanced face detection and super-resolution upscaling[1]. Advanced computer vision models could analyze each face, build a face mesh of vectors, and then send those vector changes across the wire instead of a video frame. The receiving computers could then reconstruct the face for each frame. This could turn video calling into a completely CPU-bound task.
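As a rough illustration of the vector-changes idea, here is a minimal Python sketch of landmark-delta streaming. The extract_face_mesh function is a hypothetical stand-in for whatever face-landmark model the client runs (dense meshes on the order of a few hundred 3D points are typical); the point is that only quantized per-frame deltas cross the wire.

```python
# Minimal sketch of landmark-delta streaming for a call, assuming a
# hypothetical extract_face_mesh() that returns a dense set of 3D face points.
import numpy as np

NUM_LANDMARKS = 468  # assumption: one dense face mesh per participant

def extract_face_mesh(frame) -> np.ndarray:
    """Hypothetical: run a face-landmark model, return (NUM_LANDMARKS, 3) floats."""
    raise NotImplementedError

class MeshSender:
    def __init__(self):
        self.prev = np.zeros((NUM_LANDMARKS, 3), dtype=np.float32)

    def encode(self, frame) -> bytes:
        mesh = extract_face_mesh(frame)
        delta = mesh - self.prev                     # only the per-frame change
        self.prev = mesh
        # Quantize to 16-bit fixed point: a few KB instead of a full video frame.
        return (delta * 1024).astype(np.int16).tobytes()

class MeshReceiver:
    def __init__(self):
        self.mesh = np.zeros((NUM_LANDMARKS, 3), dtype=np.float32)

    def decode(self, payload: bytes) -> np.ndarray:
        delta = np.frombuffer(payload, dtype=np.int16).reshape(NUM_LANDMARKS, 3)
        self.mesh = self.mesh + delta.astype(np.float32) / 1024
        return self.mesh                             # hand this to the face renderer
```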
2. Incredibly Realistic and Vast Virtual Worlds
Imagine the most advanced, movie-quality CGI being generated for each frame: something like the new Lion King or Avatar-like worlds being created before you through your VR headset. With extremely advanced eye tracking and graphics, VR would hit that next level of realism. AR and VR use cases could explode with incredibly light headsets.
To be imaginative, you could have everything from huge concerts to regular meetings take place in the real world, but be scanned and sent to VR participants in real time. The entire space, including the room and whiteboard or live audience, could be rendered in real time for all VR participants.
Also, large-scale data hoarding becomes far more affordable (I assume the petabyte RAM modules also mean exabyte disk drives). So you can be your own Internet Archive, which is great. Alternatively, you can be your own NSA or Google/Facebook in terms of tracking everyone, which is less great.
Non-smooth and badly conditioned optimization problems scale much better with size, but getting high-precision solutions is hard. These matter for the simulations mentioned elsewhere, and not just for architecture and games but also for automating design, inspections, etc. [2]
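As a concrete illustration of why high-precision solutions are expensive here, the sketch below runs plain subgradient descent on an L1-regularized least-squares problem (non-smooth because of the L1 term); the problem size and step-size rule are arbitrary choices for the example.

```python
# Subgradient descent on  f(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
# Error shrinks roughly like 1/sqrt(k), so each extra digit of accuracy
# costs ~100x more iterations, which is where extra CPU cycles go.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
lam = 0.1

L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
x = np.zeros(50)
for k in range(1, 20_001):
    g = A.T @ (A @ x - b) + lam * np.sign(x)   # a subgradient of f at x
    x -= g / (L * np.sqrt(k))                  # diminishing step size
print(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```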
With GPUs we have shown that parallelism can be just as good as, or even better than, raw speed increases for scaling computation. And even there, speed increases have kept trickling in.
I don't think it's realistic to say that more speed advances are unlikely. We have already been through many different paradigm shifts in computing, from mechanical to nanoscale. There are new paradigms coming up such as memristors and optical computing.
It seems like 1000x will make Stable Diffusion-style video generation feasible.
We will be able to use larger, currently slow AI models in realtime for things like streaming compression or games.
Real global illumination in graphics could become standard.
Much more realistic virtual reality. For example, imagine a realistic forest stream that your avatar is wading through, with accurate real-time simulation of the water and complex cognition models for the birds and squirrels around you.
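A hedged sketch of the water part: the standard cheap trick for real-time water surfaces is a damped height-field wave equation stepped with finite differences, and a 1000x compute budget is what would let grids like this run at much higher resolution per frame. Grid size, wave speed, timestep, and damping below are arbitrary example values.

```python
# Damped height-field wave equation on a 2D grid (periodic borders for brevity).
import numpy as np

N = 256
c, dt, dx, damping = 1.0, 0.05, 1.0, 0.995

h = np.zeros((N, N))                 # current surface height
h_prev = np.zeros((N, N))            # previous step, for the second time derivative
h[N // 2, N // 2] = 1.0              # a single splash in the middle

def step(h, h_prev):
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)         # five-point Laplacian
    h_next = (2 * h - h_prev + (c * dt / dx) ** 2 * lap) * damping
    return h_next, h

for _ in range(1000):                # advance the surface; render h as the water height
    h, h_prev = step(h, h_prev)
```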
I think with this type of speed increase we will see fairly general-purpose AI, since it will allow average programmers to easily and inexpensively experiment with combining many, many different AI models to handle broader sets of tasks and eventually find better paradigms.
It could also allow for an emphasis on iteration in AI, and that could move the focus away from parallel-specific styles of computation back to more programmer-friendly imperative styles, for example if combined with many smaller neural networks to enable program synthesis, testing, and refinement in real time.
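One way to picture that real-time synthesize-test-refine loop, with propose_patch as a purely hypothetical small local model:

```python
# Generate-test-refine loop; propose_patch() is a purely hypothetical small
# local model that rewrites code given the names of failing tests.
from typing import Callable, List

def propose_patch(source: str, failing: List[str]) -> str:
    """Hypothetical: a small neural model that suggests a revised program."""
    raise NotImplementedError

def synthesize(source: str, tests: List[Callable[[str], bool]], budget: int = 1000) -> str:
    for _ in range(budget):                          # 1000x CPUs: a big budget per keystroke
        failing = [t.__name__ for t in tests if not t(source)]
        if not failing:
            return source                            # every test passes
        source = propose_patch(source, failing)      # refine and try again
    return source
```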
Here's a weird one: imagine something like emojis in VR, but in 3D, animated, and customized on the fly for the context of what you are discussing, automatically, by an AI you have given permission to.
Or, hook the AI directly into your neocortex. Hook it into several people's neocortices and then train an AI-driven animated 3D scene-generation system to respond to their collective thoughts and visualizations. You could make serialized communication almost obsolete.
Imagine you're working on an airport: thousands of sheets, all of them PDFs, and hundreds or thousands of people flipping through PDFs and waiting 2-3+ seconds for the screen to refresh. CPUs, baby, we need CPUs.
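The fix is mostly embarrassingly parallel. A sketch, with render_sheet standing in for whatever rasterizer the viewer already uses (hypothetical function):

```python
# Pre-render every sheet in parallel so flipping pages never stalls.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path
from typing import Dict, List, Tuple

def render_sheet(pdf_path: Path, page: int) -> bytes:
    """Hypothetical: rasterize one page and return the image bytes."""
    raise NotImplementedError

def prerender(pdf_paths: List[Path], pages_each: int = 1) -> Dict[Tuple[Path, int], bytes]:
    cache: Dict[Tuple[Path, int], bytes] = {}
    with ProcessPoolExecutor() as pool:              # 1000x cores: render everything up front
        futures = {
            pool.submit(render_sheet, p, i): (p, i)
            for p in pdf_paths
            for i in range(pages_each)
        }
        for fut, key in futures.items():
            cache[key] = fut.result()
    return cache
```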
Probably using AI a lot more, on-device for every single camera.
Also, 1000x parallelism or 1000x single core?
Higher IPC, higher clock, more cores, more cache, more cache levels, more memory bandwidth, faster memory access, faster decode, etc.
One idea I imagine would become possible with a 1000x speedup is real-time software-defined radio capture, analysis, and injection.
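For the analysis half, the core loop is just continuous FFTs over blocks of complex IQ samples. A minimal sketch, assuming an RTL-SDR-style sample rate; the test tone and noise level are made up for the demo.

```python
# Power spectrum of one block of complex IQ samples via an FFT.
import numpy as np

SAMPLE_RATE = 2_400_000          # samples/s, RTL-SDR-ish assumption
FFT_SIZE = 4096

def spectrum_db(iq: np.ndarray) -> np.ndarray:
    """Windowed, shifted power spectrum in dB for FFT_SIZE complex samples."""
    windowed = iq[:FFT_SIZE] * np.hanning(FFT_SIZE)
    spec = np.fft.fftshift(np.fft.fft(windowed))
    return 20 * np.log10(np.abs(spec) + 1e-12)

# Synthetic usage: a tone at +100 kHz plus a little noise shows up as a clear peak.
t = np.arange(FFT_SIZE) / SAMPLE_RATE
rng = np.random.default_rng(0)
iq = np.exp(2j * np.pi * 100_000 * t) + 0.01 * (rng.standard_normal(FFT_SIZE)
                                                + 1j * rng.standard_normal(FFT_SIZE))
print(spectrum_db(iq).argmax())  # bin index of the strongest signal
```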
This would lead to complete chaos until we update our security standards.
With 1000x CPU computing, each computer would have computing power roughly equivalent to a human brain.
So a brain-computer interface or Jarvis-like AI may become possible.
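The arithmetic behind that claim is very rough; both numbers below are contested order-of-magnitude estimates, not facts.

```python
# Order-of-magnitude check only; both figures are rough, debated estimates.
cpu_flops_today = 1e12        # ~1 TFLOP/s for a high-end desktop CPU (assumption)
brain_ops_per_s = 1e15        # a commonly cited (and debated) brain estimate
print((1000 * cpu_flops_today) / brain_ops_per_s)   # -> 1.0, i.e. roughly comparable
```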
The average Node.js manifest file would contain 12,000x more dependencies.
Also, we would see a ton more AI being done on the local CPU, anything from genuine OS improvements to super-realistic cat filters on Teams/Zoom.
And finally, I think people would need to figure out storage and network bottlenecks, because there is only so much you can do with compute before you end up stalling while waiting for more data.
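A quick roofline-style back-of-envelope makes the point; the throughput and bandwidth figures below are assumptions.

```python
# Roofline-style check: flops needed per byte moved to stay compute-bound.
compute_flops = 1e12          # assumed CPU throughput today
bandwidth_bytes = 50e9        # assumed ~50 GB/s memory bandwidth, unchanged

print(compute_flops / bandwidth_bytes)            # ~20 flops/byte today
print(1000 * compute_flops / bandwidth_bytes)     # ~20,000 flops/byte at 1000x compute
```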