There is a post about getting overwhelmed by 15 containers and people not wanting to turn the post into a container measuring contest.
But now I am curious: what are your counts? I would guess those of you running k8s would win out by pod scaling.
docker ps -q | wc -l
For those wanting a quick count (the -q flag drops the header row, so the number isn't off by one).
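For the k8s crowd mentioned above, a rough counterpart would count pods across all namespaces (keeping in mind each pod may hold several containers). This is a hedged sketch that falls back to 0 when kubectl isn't installed, so it runs anywhere:

```shell
# Count pods across all namespaces; print 0 if kubectl is missing
# or no cluster is reachable.
count_pods() {
  if command -v kubectl >/dev/null 2>&1; then
    kubectl get pods -A --no-headers 2>/dev/null | wc -l
  else
    echo 0
  fi
}
count_pods
```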
Zero.
About 35 NixOS VMs though, each running either a single service (e.g. Paperless) or a suite (Sonarr and so on plus NZBGet, VPN,…).
There are additionally a couple of client VMs. All of those are distributed over 3 Proxmox hosts accessing the same iSCSI target for VM storage.
SSL and WireGuard are terminated at a physical firewall box running OPNsense, so with very few exceptions the VMs do not handle any complicated network setup.
A lot of those VMs have zero state; those that do have a backup of just that state automated to the NAS (simply via rsync), and from there everything is backed up again through Borg to an external storage box.
In the stateless case, deploying a new VM is a single command; in the stateful case, it's the same command, then wait for it to come up, SSH in (keys are part of the VM images), and run restore-<whatever>.

On an average day, I spend 0 minutes managing the homelab.
Is this in a repo somewhere we can have a look?
I’ll DM you… Not sure I want to link those two accounts publicly 😄
On an average day, I spend 0 minutes managing the homelab.
0 is the goal. Well done!
Edit: Ha! Some masochist down-voted that.
Why VMs instead of containers? Seems like way more processing overhead.
Eh… Not really. QEMU does a really good job with VM virtualization.
I believe I could easily build containers instead of VMs from the nix config, but I actually do like having a full VM: since it’s running a full OS instead of an app, all the usual nix tooling just works on it.
Also: In my day job, I actually have to deal quite a bit with containers (and kubernetes), and I just… don’t like it.
Yeah, just wondered, because containers just hook into the kernel in a way that doesn’t have overhead, whereas a VM has to emulate the entire OS. But hey, I get it, fixing stuff inside the container can be a pain.
How it started: 0
Max: 0
Now: 0
ISO 27002 and provenance validation goes brrrrr
74 across 2 Proxmox nodes in a few LXCs

64 containers in total, 60 running - the remaining 4 are Watchtowers that I run manually whenever I feel like it (and have time to fix things if something should break).
What tool is that screenshot from?
There is a post about getting overwhelmed by 15
I made the comment ‘Just 15’ in jest. It doesn’t matter to me. Run 1, run 100. The comment was just poking the bear as it were. No harm nor foul intended. Sorry if it was received differently.
None, if it’s not in a Debian repo I don’t deploy it on my stable server.
It’s not really about Docker itself, I just don’t think software has matured enough if it’s not packaged properly.
My Kubernetes cluster is sitting happily at 240, and technically those are pods, some of which have up to 3 or 4 containers, so who knows the full number.
35 stacks, 135 images, 71 containers
I am like Oprah yelling “you get a container, you get a container, containers!!!” at my executables.
I create aliases using toolbox so I can run most utils easily and securely.
Toolbox?
Podman toolboxes, which layer a container over your user file system, allowing you to make toolbox-specific changes to the system that only affect that toolbox.
I think it’s originally meant for development of desktop environments and OS features, but you can put most command-line apps in them without much feature breakage.
I always saw them pitched by Fedora as the blessed way to run CLI applications on an immutable host.
That’s why I use them, but they’re missing the on-ramp to getting this working nicely for regular users.
E.g. how do I install neovim with toolbox and get Wayland clipboard working, without doing a bunch of manual work? It’s easy to add to my ostree, but that’s not really the way it should be.
I ended up making a bunch of scripts to manage this, but now I feel like I’m one step away from just using nixos.
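A bunch of those scripts might boil down to something like this hedged sketch: a thin wrapper that routes a command into a toolbox when `toolbox` is installed and runs it directly otherwise (the function name `boxed` and the container name `dev` are made up, not the Fedora tooling's own names):

```shell
# Route a command into a toolbox container if available, else run it directly.
boxed() {
  if command -v toolbox >/dev/null 2>&1; then
    toolbox run -c dev "$@"
  else
    "$@"
  fi
}
boxed echo hello   # runs inside the "dev" toolbox when available, directly otherwise
```

An alias like `alias nvim='boxed nvim'` then makes the containerized tool feel native on the host.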
36, with plans for more
- There are usually one or two of those that are just experimental and might get trashed.
13 running on my little Synology.
Actually more than I expected, I would have guessed closer to 8.
About 62 deployments with 115 “pods”
All of you bragging about 100+ containers, please may I inquire as to what the fuck that’s about? What are you doing with all of those?
100 containers isn’t really a lot. Projects often use 2-3 containers. That’s only something like 30-50 services.
Not bragging. It is what it is. I run a plethora of things and that’s just on the production server. I probably have an additional 10 on the test server.
In my case, most things that I didn’t explicitly make public are running on Tailscale using their own Tailscale containers.
Doing it this way, each one gets its own address and I don’t have to worry about port numbers. I can just type http://cars/ (yes, I know, not secure; not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.
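That sidecar pattern might look something like this hedged compose sketch — the service names, image tags, and auth key are placeholders, not a working config:

```yaml
# Per-service Tailscale sidecar: the app shares the sidecar's network
# namespace, so it appears on the tailnet under its own hostname.
services:
  ts-lubelogger:
    image: tailscale/tailscale:latest
    environment:
      - TS_AUTHKEY=tskey-auth-REPLACE-ME   # placeholder auth key
      - TS_HOSTNAME=cars                   # becomes http://cars/ on the tailnet
    volumes:
      - ./ts-state:/var/lib/tailscale      # persist the node identity
  lubelogger:
    image: ghcr.io/hargata/lubelogger:latest
    network_mode: service:ts-lubelogger    # share the sidecar's network, no published ports
```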
On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud-aio alone has something like 5 or 6 containers it spins up, in addition to the master container. Tends to inflate the container numbers.
Ironic that Nextcloud AIO spins up multiple…
Things and stuff. There is the web front end, the API to the back end, the database, the Redis cache, MQTT message queues.
And that is just for one of my web crawlers.
/S
A little of this, a little of that…I may also have a problem… >_>;
The List
Quickstart
- dockersocket
- ddns-updater
- duckdns
- swag
- omada-controller
- netdata
- vaultwarden
- GluetunVPN
- crowdsec
Databases
- postgresql14
- postgresql16
- postgresql17
- Influxdb
- redis
- Valkey
- mariadb
- nextcloud
- Ntfy
- PostgreSQL_Immich
- postgresql17-postgis
- victoria-metrics
- prometheus
- MySQL
- meilisearch
Database Admin
- pgadmin4
- adminer
- Chronograf
- RedisInsight
- mongo-express
- WhoDB
- dbgate
- ChartDB
- CloudBeaver
Database Exporters
- prometheus-qbittorrent-exporter
- prometheus-immich-exporter
- prometheus-postgres-exporter
- Scraparr
Networking Admin
- heimdall
- Dozzle
- Glances
- it-tools
- OpenSpeedTest-HTML5
- Docker-WebUI
- web-check
- networking-toolbox
Legally Acquired Media Display
- plex
- jellyfin
- tautulli
- Jellystat
- ErsatzTV
- posterr
- jellyplex-watched
- jfa-go
- medialytics
- PlexAniSync
- Ampcast
- freshrss
- Jellyfin-Newsletter
- Movie-Roulette
Education
- binhex-qbittorrentvpn
- flaresolverr
- binhex-prowlarr
- sonarr
- radarr
- jellyseerr
- bazarr
- qbit_manage
- autobrr
- cleanuparr
- unpackerr
- binhex-bitmagnet
- omegabrr
Books
- BookLore
- calibre
- Storyteller
Storage
- LubeLogger
- immich
- Manyfold
- Firefly-III
- Firefly-III-Data-Importer
- OpenProject
- Grocy
Archival Storage
- Forgejo
- docmost
- wikijs
- ArchiveTeam-Warrior
- archivebox
- ipfs-kubo
- kiwix-serve
- Linkwarden
Backups
- Duplicacy
- pgbackweb
- db-backup
- bitwarden-export
- UnraidConfigGuardian
- Thunderbird
- Open-Archiver
- mail-archiver
- luckyBackup
Monitoring
- healthchecks
- UptimeKuma
- smokeping
- beszel-agent
- beszel
Metrics
- Unraid-API
- HDDTemp
- telegraf
- Varken
- nut-influxdb-exporter
- DiskSpeed
- scrutiny
- Grafana
- SpeedFlux
Cameras
- amcrest2mqtt
- frigate
- double-take
- shinobipro
HomeAuto
- wyoming-piper
- wyoming-whisper
- apprise-api
- photon
- Dawarich
- Dawarich-Sidekiq
Specific Tasks
- QDirStat
- alternatrr
- gaps
- binhex-krusader
- wrapperr
Other
- Dockwatch
- Foundry
- RickRoll
- Hypermind
Plus a few more that I redacted.
I look at this list and cry a little bit inside. I can’t imagine having to maintain all of this as a hobby.
Dococd + renovate goes brrr
From a quick glance I can imagine many of those services don’t need much maintenance if any. E.g. RickRoll likely never needs any maintenance beyond the initial setup.
Kube makes it easy to have a lot, as a lot of things you need to deploy on every node just deploy on every node. As odd as it sounds, the number of containers provides redundancy that makes the hobby easy. If a Zimaboard dies or messes up, I just nuke it, and I don’t care what’s on it.
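The "deploy on every node" part maps to a DaemonSet, which runs one pod per node automatically, so nuking and re-adding a node needs no manual redeploys. A hedged sketch (names and image are illustrative):

```yaml
# One pod of this agent runs on every node; new nodes pick it up automatically.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels: {app: node-agent}
  template:
    metadata:
      labels: {app: node-agent}
    spec:
      containers:
        - name: agent
          image: prom/node-exporter:latest   # example per-node workload
```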
About 50 on a k8s cluster, then 12 more on a Proxmox VM running Debian, and about 20-ish on some Hetzner auction servers.
About 80 in total, but lots more at work :)