HACKER Q&A
📣 manili

Versal HBM Series vs. Nvidia A100 for LLM Training?


Hello all, I am very curious to know how feasible it is to use a cluster of Versal HBM Series (e.g. VHK158 evaluation board) for training an LLM like LLaMA-2 70B in terms of 'Performance/Power/Cost.' Are there any papers regarding the comparison of a cluster of VHK158 evaluation boards and a cluster of, say, A100s?

Thanks.


  👤 brucethemoose2 Accepted Answer ✓
Well, this post is already the second search result for even running LLaMA on Versal, so...

I think MLC-LLM (through TVM) can maybe run inference?