HACKER Q&A
📣 amichail

Is it likely that an AI advance will require new hardware for speedup?


And if not, why not?


  👤 dangitnotagain Accepted Answer ✓
Uncertainty resolves as state into the moment of now.

What may be becomes when it happens.

What is the potential, or probability, of which kind of change?

Ideally, advancements will make for “optimal” efficiency, though their real purpose is useful feature progress, and these bodies rushing forward may well obliviously burn oil or inexcusably deforest the precious remains of our Earth in the pursuit.

The future I hope for is not hover cars or glass roads; it is one where obvious technology disappears into the useful and tastefully enduring.

Enterprise FOMO may be toxic. Gradual progress at a resourceful pace is inevitable anyway.

What do we need?

An improved AI might be one where the user has more involvement in structuring the model’s “super-psyche”: enabling and disabling layered YAML, overlaid by saved/edited sessions. This could continuously re-render everything you’ve ever done and inform you of substantial changes (you still want a report, right?)
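A minimal sketch of what “layered YAML” could mean, assuming each layer is already parsed into a dict — the layer names, keys, and the enable/disable flag are all invented for illustration:

```python
# Hypothetical sketch: a "super-psyche" as a stack of config layers.
# Each layer can be toggled on/off; later enabled layers override earlier
# ones. Layer names and keys are made up for illustration.

def merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay wins on conflicts."""
    out = dict(base)
    for key, val in overlay.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], val)
        else:
            out[key] = val
    return out

def render(layers: list) -> dict:
    """Fold all enabled layers, in order, into one effective config."""
    effective = {}
    for layer in layers:
        if layer.get("enabled", True):
            effective = merge(effective, layer["config"])
    return effective

layers = [
    {"name": "base",    "enabled": True,  "config": {"tone": "neutral", "report": {"on_change": True}}},
    {"name": "session", "enabled": True,  "config": {"tone": "casual"}},
    {"name": "edited",  "enabled": False, "config": {"tone": "formal"}},
]

print(render(layers))  # → {'tone': 'casual', 'report': {'on_change': True}}
```

Re-rendering the whole stack whenever a layer is edited or toggled is what would let the system diff the effective config and produce that change report.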

Or maybe a move away from LLMs, and back to a deductive process for which we have more realistic expectations (recognizing discrete, domain-level success).

Consider an ultimate game server that fulfills requests based on signed policies and provides real-time self-support.
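One way to read “signed policies” — purely a sketch, with an invented HMAC scheme and invented field names — is that each request carries a policy whose signature the server verifies before acting:

```python
import hmac, hashlib, json

SECRET = b"server-side-key"  # hypothetical server-held signing key

def sign_policy(policy: dict) -> str:
    """Issue a signature over a canonically serialized policy."""
    blob = json.dumps(policy, sort_keys=True).encode()
    return hmac.new(SECRET, blob, hashlib.sha256).hexdigest()

def allow(request: dict, policy: dict, signature: str) -> bool:
    """Serve a request only if the policy's signature checks out and
    the requested action is one the policy grants."""
    blob = json.dumps(policy, sort_keys=True).encode()
    expected = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or forged policy
    return request["action"] in policy["allowed_actions"]

policy = {"player": "p1", "allowed_actions": ["move", "chat"]}
sig = sign_policy(policy)
print(allow({"action": "move"}, policy, sig))   # True
print(allow({"action": "admin"}, policy, sig))  # False
```

The point of signing is that the policy can travel with the client, yet the server never has to trust the client’s claims about what it is allowed to do.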

Not AGI, but better than just a chatbot.

So, “who knows?” And, as always, happy to make something up!


👤 sp332
Nvidia already added optimized mixed-precision fused multiply-accumulate units. That’s about as good as you could ask for in specialized hardware for the kind of math current AIs use. The main bottleneck is memory bandwidth. So maybe future chips will trade off fewer or slower compute units for faster RAM.
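A back-of-envelope roofline check makes the memory-bandwidth point concrete. The throughput and bandwidth figures below are round illustrative numbers, not any specific GPU’s spec sheet:

```python
# Roofline sketch: for inference-style matrix-vector work, time spent
# streaming weights from memory dwarfs time spent on FMAs.
# All hardware numbers are assumed round figures for illustration.

flops_peak = 300e12   # 300 TFLOP/s mixed-precision FMA throughput (assumed)
bandwidth  = 2e12     # 2 TB/s memory bandwidth (assumed)

# GEMV: y = W @ x with an N x N fp16 weight matrix.
N = 8192
flops = 2 * N * N        # one multiply + one add per weight
bytes_moved = 2 * N * N  # each fp16 weight (2 bytes) read once

t_compute = flops / flops_peak
t_memory = bytes_moved / bandwidth

print(f"compute-bound time: {t_compute * 1e6:.2f} us")
print(f"memory-bound time:  {t_memory * 1e6:.2f} us")
print("bottleneck:", "memory" if t_memory > t_compute else "compute")
```

With these numbers the memory time is over a hundred times the compute time, which is why trading compute units for bandwidth (or for on-chip memory) can pay off.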

👤 activiation
It already does, doesn't it? If anything, the hardware requirements won't be as bad in the future.