One thing I'm curious about here is the operational impact.
In production systems we often see Python services scaling horizontally because of the GIL limitations. If true parallelism becomes common, it might actually reduce the number of containers/services needed for some workloads.
But that also changes failure patterns — concurrency bugs, race conditions, and deadlocks might become more common in systems that were previously "protected" by the GIL.
It will be interesting to see whether observability and incident tooling evolves alongside this shift.
For big things the current way works fine. Having a separate container/deployment for celery, the web server, etc. is nice so you can deploy and scale them separately. Mostly it works fine, but there are of course some drawbacks. For example, wanting a Prometheus scrape endpoint in a worker that can't run a web server in parallel is clunky to work around.
And for smaller projects it's such an annoyance. Having a simple project running and having to muck around to get cron jobs, background/async tasks, etc. working in a nice way is one of the reasons I never reach for Python in these cases. I hope removing the GIL makes it better, but I'm also afraid it will expose a whole can of worms, where lots of apps, tools and frameworks aren't written with this possibility in mind.
> observability tooling for Python evolving
As much as I dislike Java the language, this is somewhere where the difference between CPython and JVM languages (and probably BEAM too) is hugely stark. Want to know if garbage collection or memory allocation is a problem in your long running Python program? I hope you're ready to be disappointed and need to roll a lot of stuff yourself. On the JVM the tooling for all kinds of observability is immensely better. I'm not hopeful that the gap is really going to close.
> If true parallelism becomes common, it might actually reduce the number of containers/services needed for some workloads
Not by much. The cases where you can replace processes with threads and save memory are rather limited.
I would have thought most of those would have been moved to async Python by now.
A lot of that has already been solved by scaling workers to cores, along with techniques like greenlets/eventlets that provide concurrency without true multithreading to make better use of CPU capacity.
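For what it's worth, the stdlib's asyncio illustrates the same idea as greenlets/eventlets (which are third-party): cooperative concurrency on a single thread, where tasks interleave at yield points instead of running on multiple cores. A minimal sketch, with `asyncio.sleep` standing in for real network I/O:

```python
import asyncio

# Cooperative concurrency on one thread: tasks interleave at await
# points, so I/O waits overlap without any true multithreading.
async def fetch(name, delay):
    await asyncio.sleep(delay)  # stand-in for a network call
    return name

async def main():
    # All three "requests" wait concurrently, so the total wall time
    # is roughly one delay, not three, despite a single thread.
    return await asyncio.gather(
        fetch("a", 0.1), fetch("b", 0.1), fetch("c", 0.1)
    )

results = asyncio.run(main())
print(results)
```

This helps with I/O-bound work but does nothing for CPU-bound work, which is exactly the gap free threading targets.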
But Python can fork itself and run multiple processes in one single container. Why would there be a need to run several containers just to run several processes?
There's even the multiprocessing module in the stdlib to achieve this.
This is surely why Facebook was interested in funding this work. It is common to have N workers or containers of Python because you are generally restricted to one CPU core per Python process (you can get a bit higher if you use libs that unlock the GIL for significant work). So the only scaling option is horizontal, because vertical scaling is very limited. The main downside of this is memory usage: you have to load all of your code and libraries N times, and in-process caches become less effective. By being able to vertically scale a Python process much further, you can run fewer processes and save a lot of memory.
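To make that concrete, here's a sketch using threads instead of processes (the worker function and sizes are made up). Under the GIL these CPU-bound tasks run one at a time; on a free-threaded build they can use multiple cores while still sharing one copy of the code, libraries, and in-process caches:

```python
import sys
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n):
    return sum(i * i for i in range(n))

# On a free-threaded (no-GIL) build these four tasks can run on four
# cores at once; with the GIL they are serialized, which is why
# CPU-bound Python has traditionally scaled via processes instead.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_bound, [200_000] * 4))

# Python 3.13+ free-threaded builds expose sys._is_gil_enabled();
# the getattr guard keeps this runnable on older versions.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(gil_enabled, results)
```

Either way the results are correct; the difference is purely whether the work actually runs in parallel.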
Generally speaking, the optimal amount of horizontal scaling is as little as you can get away with. You may want some horizontal scaling for redundancy and geo-distribution, but beyond that, vertically scaling to fewer, larger processes tends to be more efficient and easier to load balance, with a handful of other benefits.