Curious to learn from people with experience deploying and running AI workloads on non-hyperscaler cloud GPUs (think Voltage Park, Hyperstack, Akash, etc.).
What went into your choice of GPU cloud? What were the criteria? Have you experienced any frustrations/issues with their infra?
I'd be happy to share my own experiences with clouds in the comments too. Looking forward to hearing from you :)
aabdel0181 (Accepted Answer)
I used to use Akash, but sometimes PyTorch would just crash mid-training-run.