Does the Python code you want to run use PyTorch? Is it related to LLMs or Stable Diffusion? Those can also run on the CPU alone. Not very fast, but good enough to play around with.
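For PyTorch specifically, the usual pattern is to pick the device at startup and fall back to the CPU when CUDA isn't available, so the same script works on both. Rough sketch (the tiny `nn.Linear` model is just a placeholder for whatever you're actually running):

```python
import torch
import torch.nn as nn

# Use the GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model just to show the pattern; swap in your real model.
model = nn.Linear(16, 4).to(device)
batch = torch.randn(8, 16, device=device)

with torch.no_grad():
    out = model(batch)

print(out.shape, "ran on", device)
```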
If you're thinking of renting a machine from AWS, I'm currently using a "g4dn.xlarge" instance, which supports CUDA and can run Stable Diffusion comfortably. It's a bit pricey, though, and I had to wait for approval before I could rent it.