HACKER Q&A
📣 behnamoh

Why do cloud GPUs have low availability?


Runpod had plenty of 4090 GPUs available just about a month ago, but something seems to have changed and now almost all of them are unavailable. When I try to deploy my model, I get "low availability" messages, and sometimes the instance doesn't even boot correctly.

I tried vast.ai as well, but the way they do things is rather primitive and unprofessional: there isn't a straightforward way to get into your instance and install booga or whatever. Their SSH failed every single time, too.

I don't know what other options are available for people who want to do research on LLMs and don't have the budget to purchase a big GPU (even if one were available...).


  👤 bradknowles Accepted Answer ✓
Have you tried AWS? Or other big cloud providers?

👤 edgoode
We have aggregated most cloud GPU providers into a single platform, so it's easier to find the machines you need. You can check live availability and prices at https://shadeform.ai

Disclaimer - I am one of the cofounders.