Mostly it will. The one advantage an M2 Max has is with really large LLM inference via llama.cpp, since you have to compromise quality severely to fit a 70B model into 24GB of VRAM.... but for anything else, you want desktop GPUs with as much VRAM as you can get.
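To see why 24GB is tight for 70B, here's a rough back-of-envelope sketch (my own illustration, not from llama.cpp itself): weights-only size at a few quantization widths, ignoring the scale/zero-point overhead of real quant formats like Q4_K_M and the KV cache, both of which add more on top. The 24GB figure is the 3090's VRAM.

```python
# Weights-only memory estimate for an N-billion-parameter model at a
# given quantization width. Real llama.cpp quant formats carry extra
# metadata overhead, and the KV cache adds more, so these are lower bounds.
def vram_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4, 2):
    size = vram_gb(70, bits)
    verdict = "fits" if size <= 24 else "does not fit"
    print(f"70B @ {bits}-bit: ~{size:.1f} GB -> {verdict} in 24 GB")
```

Even 4-bit lands around 35GB, so on a 24GB card you're looking at ~2-bit territory (or offloading layers to system RAM), which is exactly the severe quality compromise mentioned above.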
I just built a 10-liter SFF desktop in a Node 202 with an RTX 3090 for ~2k, and couldn't be more pleased.
That said, Apple Silicon is improving quickly in this area. It ultimately comes down to your specific needs: weigh your software ecosystem, budget, and whether you need portability before picking a platform for your AI projects.
If you have an AI project and want to make it real, feel free to contact us! https://www.ratherlabs.com