> I have never seen any performance problem being solved by running it on Azure's virtualization
Sorry, I wasn't clear. I am not virtualizing the workspace. I'm using `recc`, which is like `distcc` or `ccache` in that it wraps the compiler invocation. Every developer keeps their workstation; it just routes the actual `clang` or `gcc` calls to a Kubernetes cluster that provides distributed builds and a shared cache.
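A minimal sketch of the wrapper idea, in Python for illustration (this is not `recc`'s actual code, and `remote-exec` is a hypothetical stand-in for the RPC to the cluster): the build system invokes the wrapper instead of the compiler, and the wrapper decides whether the job is safe to farm out.

```python
def is_remote_candidate(argv):
    # Compile-to-object steps (-c) can be farmed out to the cluster;
    # link steps and preprocess-only (-E) runs stay on the workstation.
    return "-c" in argv and "-E" not in argv

def wrap(argv):
    # argv is the real compiler command the build system tried to run,
    # e.g. ["clang", "-c", "foo.c", "-o", "foo.o"].
    if is_remote_candidate(argv):
        # "remote-exec" is a made-up placeholder for the remote launcher.
        return ["remote-exec", "--"] + argv
    return argv
```

Because the wrapper sits at the level of individual compiler invocations, the build system (make, ninja, bazel, etc.) doesn't need to know the cluster exists.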
> Isn't there a less convoluted way of making the best engineers leave?
We have 7000+ compiler jobs in a clean build because it is a big codebase. People are waiting hours for CI.
I'm sure that drives attrition, and bringing it down to minutes will help retain talent.
> Tens of thousands of vCPUs for a single compilation run, or to accommodate 100 developers who try to compile their own changes?
Because it uses remote execution, it will ideally do both. My belief is that one developer launching 6000 compiler jobs because they changed a header will smooth out across 300 developers who mostly do incremental builds. Likewise, it will eliminate redundant recompilation after a git pull, since the cluster also acts as a cache.
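To make the cache claim concrete, here's a sketch of a `ccache`-style content-addressed lookup (my simplification, not `recc`'s real key scheme): the key covers everything that affects the object file, so two developers compiling the same pulled commit with the same flags produce the same key, and the second build is a cache hit instead of a recompile.

```python
import hashlib

def cache_key(compiler, flags, preprocessed_source):
    # Content-addressed key: compiler identity, flags, and the fully
    # preprocessed source (so header changes are captured automatically).
    h = hashlib.sha256()
    h.update(compiler.encode())
    h.update("\0".join(flags).encode())
    h.update(preprocessed_source)
    return h.hexdigest()

# Same commit, same flags -> same key -> cache hit for the second developer.
k1 = cache_key("clang-17", ["-O2", "-std=c++20"], b"int main() { return 0; }")
k2 = cache_key("clang-17", ["-O2", "-std=c++20"], b"int main() { return 0; }")
assert k1 == k2

# Any change to the (preprocessed) source invalidates the entry.
k3 = cache_key("clang-17", ["-O2", "-std=c++20"], b"int main() { return 1; }")
assert k3 != k1
```

This is also why a header edit that fans out to 6000 jobs isn't 6000× the steady-state load: most of those objects were already built by someone and sit in the cache.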
This makes absolutely no sense to me. Are you really recompiling 6000 things every time a dev adds a line somewhere in the codebase? Have you thought about splitting that giant thing into smaller chunks?
Thanks for expanding on it, now it's clearer what you want to achieve. Seeing things like this, it seems Linus was onto something when he banned C++. That sounds like a nasty compilation scheme, but I guess the org has painted itself too far into that corner to get out of it.