What would a new paradigm for CPUs and GPUs look like? Is there some aspect of the legacy system that is not ideal but clearly economically impossible to change? Does the polyfill-style design of CPU microcode have drawbacks?
One analogy is planes. Plane design has remained largely unchanged since the 1960s because of regulations: a very different design means paying to retrain pilots. Of course planes have still improved drastically across the board, but they are contorting themselves to adhere to the pilots' existing paradigm for economic reasons. Without this constraint, a new plane design might look quite different. (This is part of the Boeing 737 MAX crisis: the software was designed to "polyfill" the change in flight behavior caused by contorting the design to fit a better engine onto a similar airframe.)
How do you optimize architectures for event-based "do nothing 99.9% of the time, do A LOT 0.1% of the time" workloads? I don't know. My hunch is that you should prioritize memory latency and pay more attention to worst-case rather than best-case performance.
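To make the hunch concrete: in a bursty workload like this, average-case numbers hide exactly the events you care about. A toy sketch (the latency figures are made up for illustration, not measurements of any real system):

```python
# Simulate a bursty, event-driven workload: 99.9% of requests are cheap,
# 0.1% trigger heavy work. Latencies here are hypothetical, in microseconds.
latencies = [500.0 if i % 1000 == 0 else 1.0 for i in range(100_000)]

latencies.sort()
mean = sum(latencies) / len(latencies)
p999 = latencies[int(0.999 * len(latencies))]  # 99.9th-percentile latency

print(f"mean:  {mean:.3f} us")  # ~1.5 us, dominated by the cheap common case
print(f"p99.9: {p999:.1f} us")  # 500 us, dominated by the rare heavy case
```

The mean says the system is fast; the tail says the 0.1% of events that actually matter are ~300x slower. An architecture tuned to the mean (deep caches, speculation warmed by the common case) is tuned to precisely the part of the workload that doesn't matter here.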
2. Something like a [Lisp machine] that is optimized for functional-style programming with immutable data.
[Ternary computers]: https://www.wikiwand.com/en/Ternary_computer
[Lisp machine]: https://www.wikiwand.com/en/Lisp_machine
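On item 2, the property a Lisp machine could exploit in hardware is that immutable data is never copied on "update", only shared. A minimal sketch in Python (the names `Cons`/`cons` follow Lisp convention; the types and helper are my own illustration, not taken from any real Lisp machine):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen = immutable, like a hardware-enforced cons cell
class Cons:
    car: object                  # head of the list
    cdr: Optional["Cons"]        # rest of the list (None = empty)

def cons(x, xs):
    return Cons(x, xs)

xs = cons(1, cons(2, cons(3, None)))
ys = cons(0, xs)  # "prepending" allocates ONE new cell; the tail is shared

assert ys.cdr is xs  # structural sharing: no copy of xs was made
```

Because cells are immutable, sharing is always safe, which is what lets hardware get aggressive about things like tagged pointers, hardware GC, and hash-consing without worrying about aliasing writes.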
What's probably going to happen is more integration and more DRM.
Unless you mean cockpit design :)