For LLMs, it feels like the opposite. There’s a smaller set of frameworks (e.g., llama.cpp, vLLM), each supporting a wide range of models. This makes it relatively straightforward to integrate them into other languages like Go, since you only need to maintain a few idiomatic layers.
To me, it’s a no-brainer that Go or Rust will replace Python for serving LLMs. They’re CPU-intensive, Python is generally slow, and the limited number of LLM runtimes simplifies the transition.
(1) Python’s dominance in AI inference is driving, and will continue to drive, more investment in improving Python for the many things it isn’t great at right now that people want to do along with AI inference. We’ve actually seen a lot of that over the last few years, with physics engines and robotics simulation platforms for Python. Some of these are Python bindings for existing libraries written in other languages, but some are built in Python itself (e.g., via Taichi or Numba, both of which can produce and execute GPU kernels from Python code, and the latter of which can also JIT and parallelize mostly numeric Python code on CPU). This will also include investment in Python’s core and standard library to address pain points.
(2) The increasing importance of AI inference will at the same time drive more investment in AI inference libraries for non-Python platforms.
The relative balance between the progress of those two efforts will be a big factor in how much Python is used in inference going forward, for AI in general and for LLMs in particular.
My guess is that the CPU overhead of Python is not significant compared to running an LLM, but Python has limited facilities for dealing with concurrency. For a while I was into writing asyncio web servers, but I eventually found workloads that would tie them into knots (an image sorter running the wrong way over an ADSL connection: one process is thinking hard for 2 sec, and meanwhile images are not downloading, because the CPU-bound work blocks the event loop). gunicorn and celery and similar things can handle parallelism with multiple processes, but if you have a 1 GB model you will terribly waste memory, since every worker process loads its own copy.
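The knot described above is easy to reproduce: in asyncio, a synchronous CPU-bound step stalls every other coroutine until it finishes, because the event loop can only switch at `await` points. A minimal sketch (the 0.2 s `time.sleep` is a stand-in for the 2 sec of "thinking"; the names are illustrative, not from the original image sorter):

```python
import asyncio
import time

async def heartbeat(ticks):
    # Background task that should tick roughly every 10 ms,
    # standing in for the image downloads.
    for _ in range(5):
        ticks.append(time.monotonic())
        await asyncio.sleep(0.01)

async def main():
    ticks = []
    hb = asyncio.create_task(heartbeat(hb_ticks := ticks))
    await asyncio.sleep(0)   # let the heartbeat record its first tick
    time.sleep(0.2)          # synchronous "thinking": the whole loop is stuck
    await hb
    gaps = [b - a for a, b in zip(ticks, ticks[1:])]
    return max(gaps)         # the worst stall between two ticks

max_gap = asyncio.run(main())
print("event loop stalled:", max_gap > 0.15)
```

The usual escape hatch is `await loop.run_in_executor(None, blocking_fn)`, which pushes the blocking call onto a thread pool, but then you are back to reasoning about threads and the GIL.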
In Java, on the other hand, you can have a 1 GB model shared by all the threads, and there is no drama.
I wrote a chess program in Python that was good enough to beat my tester a few times last month, and I’ve been wanting to take it to a chess club, but my tester tells me it needs to respect time control for that. I’d also like to support a protocol like XBoard or UCI. Either way, it is necessary that the comms thread can interrupt the thinking thread, and that’s dead easy to do in Java and a huge hassle in Python.
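The hassle is that Python threads can’t be forcibly interrupted from outside, so the standard workaround is cooperative: the search loop has to poll a flag at every iteration. A minimal sketch of that pattern (the names `think`, `stop_flag`, and the depth loop are illustrative, not from the actual chess program):

```python
import threading
import time

stop_flag = threading.Event()
best_move = None

def think():
    # Iterative-deepening-style loop: record the best move found so far
    # and check the flag between iterations. Forgetting the check in any
    # long-running inner loop means the "interrupt" is silently ignored.
    global best_move
    depth = 0
    while not stop_flag.is_set():
        depth += 1
        best_move = f"move-at-depth-{depth}"  # stand-in for one ply of search
        time.sleep(0.01)

searcher = threading.Thread(target=think)
searcher.start()

time.sleep(0.1)   # the comms thread decides time is up
stop_flag.set()   # request the stop...
searcher.join()   # ...and wait for the search to actually notice
print("stopped with", best_move)
```

In Java you get the same cooperative pattern with `Thread.interrupt()` plus interruption-aware blocking calls, which is why it feels so much less fiddly there.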
Sure, there are threads in Python, and if I wanted to screw around with alpha software there is the no-GIL Python, but remember this: when you’re doing a project with a high-risk or research component, it’s a bad time to pick tools that require you to learn things. If you are good at Rust or Go, I’d say go with that. But don’t pick up a language because you heard somebody else thinks it’s cool. A lot of people are running big and complex apps on Java; you just don’t hear about it so much.
The only place where Go and Rust take the lead is optimizing the non-AI code that you write. That’s still a valuable advantage, but it’s not going to displace the use case for Python on its own.