Please don't. C packaging in distros is working fine and doesn't need to turn into crap like the other language-specific package managers. If you don't know how to use pkgconf then that's your problem.
When I used to work with C many years ago, it was basically: download the headers and the binary file for your platform from the official website, place them in the header/lib paths, update the linker step in the Makefile, #include where it's needed, then use the library functions. It was a little bit more work than typing "npm install", but not so much as to cause headaches.
I agree entirely. C doesn't need this. That I don't have to deal with such a thing has become a new and surprising advantage of the language for me.
I've contemplated this quite a bit (and I personally maintain a C++ artifact that I deploy to production machines, and I generally prefer not to use containers for it), and I think I disagree.
Distributions have solved a very specific problem quite nicely: they are building what is effectively one application (the distro) with many optional pieces, it has one set of dependencies, and the users update the whole thing when they update. If the distro wants to patch a dependency, it does so. ELF programs that set PT_INTERP to /lib/ld-linux-[arch].so.1 opt in to the distro's set of dependencies. This all works remarkably well, and a lot of tooling has been built around it.
But a lot of users don't work in this model. We build C/C++ programs that have their own set of dependencies. We want to try patching some of them. We want to try omitting some. We want to write programs that are hermetic in the sense that we are guaranteed to notice if we accidentally depend on something that's actually an optional distro package. The results ... are really quite bad, unless the software you are building is built within a distro's build system.
And the existing tooling is terrible. Want to write a program that opts out of the distro's library path? Too bad -- PT_INTERP really, really wants an absolute path, and the one and only interpreter reliably found at an absolute path will not play along. glibc doesn't know how to opt out of the distro's library search path: there is no ELF flag for it, nor an environment variable. It doesn't even really support a mode where PT_INTERP is unset but dlopen still works! So you can't do the C equivalent of Python venvs without a giant mess.
pkgconf does absolutely nothing to help. Sure, I can write a makefile that uses pkgconf to find the distro's libwhatever, and if I'm willing to build from source on each machine (or I'm writing the distro itself), and if libwhatever is an acceptable version, and if the distro doesn't carry a problematic patch to it, then it works. This is completely useless for people like me who want to build something remotely portable. So instead people reach for enormous kludges like Dockerfiles to package the entire distro with the application in a distinctly non-hermetic way.
Compare to solutions that actually do work:
- Nix is somewhat all-encompassing, but it can simultaneously run multiple applications with incompatible sets of dependencies.
- Windows has a distinct set of libraries that sit on the system side of the system-vs-ISV boundary, and Microsoft spent decades doing an admirable job of maintaining that boundary. (Okay, they seem to have forgotten how to maintain anything in 2026, but that's a different story.) You can build a Windows program on one machine and run it somewhere else, and it works.
- Apple bullies everyone into targeting only a small number of OS versions. It works, kind of. But ask people who liked software like Aperture whether it still runs...
- Linux (the syscall interface, not GNU/Linux) outdoes Microsoft in maintaining compatibility. This is part of why Docker works. Note that Docker and all its relatives basically completely throw out the distro model of interdependent packages all with the same source. OCI tries to replace it with a sort-of-tree of OCI layers that are, in theory, independent, but approximately no one actually uses it as such and instead uses Docker's build system and layer support as an incredibly poorly functioning and unreliable cache.
- The BSDs are basically the distro model except with one single distro each that includes the kernel.
I would love functioning C virtual environments. Bring it on, please.
I mean … it clearly isn’t working well if problems like “what is the libssl distribution called in a given Linux distro’s package manager?” and “installing a MySQL driver in four of the five most popular programming languages in the world requires either bundling binary artifacts with language libraries or invoking a compiler toolchain in unspecified, unpredictable, and failure-prone ways” are both incredibly common and incredibly painful for many/most users and developers.
The idea of a protocol for “what artifacts in what languages does $thing depend on and how will it find them?” as discussed in the article would be incredibly powerful…IFF it were adopted widely enough to become a real standard.
> C packaging in distros is working fine
GLIBC_2.38 not found
^ This.
Plus, we already have great C package management. It's called CMake.
Well, if you're fine with using three-year-old versions of those libraries, packaged by severely overworked maintainers who at one point seriously considered blindly converting everything into Flatpaks and shipping those simply because they can't muster enough manpower, sure.
"But you can use 3rd party repositories!" Yeah, and I also can just download the library from its author's site. I mean, if I trust them enough to run their library, why do I need opinionated middle-men?