> Having your whole application with its containers, volumes, and networks all defined together in one easy-to-read YAML file is a way better experience. Deployment is two steps: 1. `git clone foo` 2. `docker compose up -d`. You can see the state of the application containers with `docker compose ps`. You can run multiple compose applications on the same host and manage them separately by putting them in different directories.
I've always felt it the other way around: docker compose files are weird blobs of YAML whose location I have to hunt down, either manually or by parsing the under-specified labels they attach to containers. I can't make them depend on any non-container services[0], they break my firewall rules[1], and I have to use a whole mess of bespoke tooling just to do normal start/stop/restart operations with them instead of using the same commands I use for literally any other service.
> With quadlets, you delegate everything to systemd. You have to break the configuration up into a bunch of tiny unit files and then separately copy them to /etc or a dedicated user's dotfiles.
The nice thing about quadlets is exactly that: they integrate with systemd and, by extension, the rest of the system. I don't have to think about `webapp.container` as a "Docker container"; I can think of it as just `webapp.service`, like any other piece of software I'd install and run. All the related files live in one of the well-specified file locations that follow the same hierarchy as everything else on the system (user -> /etc -> /usr), optionally grouped in folders[2].
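To make that concrete, here's a minimal sketch of what such a quadlet might look like; the image, port, and names are placeholders, not anything from a real deployment:

```ini
# webapp.container — hypothetical example quadlet.
# Install to /etc/containers/systemd/ (system) or
# ~/.config/containers/systemd/ (rootless user).
[Unit]
Description=Example web application

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, this shows up as a plain `webapp.service` that you start, stop, and inspect like any other unit.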
> Good luck SSH'ing into an unfamiliar system and understanding at a glance what it's doing.
Just use the same tools you'd use on any other systemd system: `systemctl list-units`, `systemctl status`, etc. That beats having to hunt down compose files either manually or by parsing the under-specified labels on the containers.
> (Even moreso if you created dedicated users for each application, which I understand is the recommended solution.)
TBH I've rarely seen this advice. Most people I know just run everything as root (which is what I do) or as a `podman` user. But even in that situation it should be pretty easy to figure out what's running, since you know it's all running as one user and is hard-namespaced to rely only on resources available in that account.
> If I'm just holding it wrong and there exists some better tooling to manage podman in prod that I don't know about, I'm happy to hear about it.
Quadlets are just files that create systemd services, so basically any configuration management or deployment tool will manage them fine. Ansible has a dedicated Quadlet role that works pretty well, or just `git clone` + `systemctl start`. This would probably be the recommended way if you're not using k8s/etc.
Alternatively, you can just `git clone` into `/etc/containers/systemd/` and `systemctl start` the container, much like with docker compose. If you're running multiple containers, either refer to them with `Wants=`/etc. in the Quadlet files, create a `.target` file that references them all, or put them all in a `.pod` and start the pod. I think this is the part where most people stumble, though: when you're used to treating containerized software as a separate kind of "thing", it's a little weird to go back to treating it like normal services.
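The `.target` approach might look something like this; `myapp`, `webapp`, and `db` are hypothetical names, and note that the target itself is a plain systemd unit (it goes in `/etc/systemd/system/`, not the quadlet directory):

```ini
# myapp.target — hypothetical grouping unit for two quadlet-generated
# services (webapp.container -> webapp.service, db.container -> db.service).
[Unit]
Description=My application stack
Wants=webapp.service db.service
After=webapp.service db.service

[Install]
WantedBy=multi-user.target
```

Then `systemctl start myapp.target` brings up the whole group, roughly filling the role of `docker compose up -d`.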
I've been writing something to help with deploying quadlets GitOps-style[3] that will hopefully fill the "more than one server but less than kubernetes" deployment gap.
[0] Unless I wrap the compose steps in a systemd unit, at which point now I have two problems.
[1] Caveat: this has probably gotten better overall, but I still run into compose-related firewall issues once or twice a year.
[2] Newer versions of Podman also support `.quadlets` files, which merge all the quadlets into one file.
[3] https://github.com/stryan/materia . There's also https://github.com/orches-team/orches and https://github.com/ubiquitous-factory/quadit
The parent's complaint sounds like they're just unfamiliar in general with managing regular system services using unit files/systemctl.