I’m talking not only about trusting the distribution chain, but about the situation where some services don’t rebuild their images on updated bases if they don’t have a new release.
So, for example, if a particular service’s latest tag is a year old, they keep distributing it with a year-old Alpine base…
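You can check how stale an image actually is from its metadata, e.g. (the image name here is just a placeholder):

    # Shows when the image was built, not when you pulled it.
    docker inspect --format '{{ .Created }}' some-service:latest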
I don’t know enough about code to verify things myself, and I assume this applies to a lot of us here. So I just pray that nothing’s fucked in the distribution chain.
I’m also in this category, but OP is talking about something else.
Like if you use container-x, which has an Alpine base. If it hasn’t had a new release in several years, then you’re running a several-year-old Alpine distro.
I didn’t really realise this was a thing.
Not yet but I plan to. Just haven’t gotten around to setting it all up yet.
No
All the time. There are a lot of CVEs in old premade Docker containers.
Rebuild: no. If the software itself is unmaintained, it gets replaced.
Patch: yes. If the base image contains vulnerabilities that can be fixed with a package update, then that gets applied. The patch size and side effects can be minimized by using copacetic, which can ingest Trivy scan results to identify vulnerabilities.
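The rough workflow, for anyone curious (the image name is just an example; check the Trivy and Copacetic docs for current flags):

    # Scan for OS-level vulnerabilities that have a fix available.
    trivy image --vuln-type os --ignore-unfixed \
      --format json --output report.json nginx:1.21.6

    # copa applies only the package updates needed to close those CVEs
    # and writes out a patched image without a full rebuild.
    copa patch --image nginx:1.21.6 --report report.json --tag 1.21.6-patched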
There are also repos like Chainguard and Docker Hardened Images, which are handy for getting up-to-date images of commonly used tools.
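Swapping to one of those is usually just a pull away (exact image paths vary; browse their registries for what’s actually published):

    # A continuously rebuilt nginx from Chainguard instead of a stale one.
    docker pull cgr.dev/chainguard/nginx:latest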
I don’t think a year-old base is bad. Unless there’s an absolutely devastating CVE in something like the network stack or a particular shared library, any vulnerabilities in it will probably just be privilege escalations that wouldn’t have any effect unless you were allowing people shell access to the container. Obviously, the application itself can have a vulnerability, but that would be the case regardless of base image.
Yes, because I mostly like to have my services built in a Debian container inside my Proxmox environment. If I’m running it in Docker, there’s a good chance it’s temporary/PoC, and in that case I do not rebuild or anything, I run it for whatever purpose it serves and then it either goes away or gets migrated to a handcrafted Debian container.
I have a repo for some home automation where some hardware-specific modules are required. But it’s becoming rarer, since more software handles this at runtime.
Too much work.

I didn’t realise this was a problem.
I’m not too worried about it though.
Each container has such a small attack surface. As in, my reverse proxy traefik exposes port 80 and port 443, and all the others only expose their APIs or web servers to traefik.
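A rough sketch of that layout with plain docker run commands (container names and image tags are placeholders):

    # Internal network shared by traefik and the backends.
    docker network create proxy

    # Only traefik publishes ports on the host.
    docker run -d --name traefik --network proxy \
      -p 80:80 -p 443:443 \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      traefik:v3.1

    # Backend has no -p flags: reachable by traefik on the internal
    # network, but nothing is exposed to the host directly.
    docker run -d --name myapp --network proxy myapp:latest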
If you care about security, you build it yourself. No need to trust some random dude on the internet. After all, it’s just fire and forget: copy whatever “code” is used to build the container you’re after, verify it once, and then just rebuild it periodically to pull patches from more reliable sources. Docker security is a joke, no need to make it worse.
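In practice that’s something like this (the repo URL is a stand-in for whatever project you run):

    # Grab the project's own build recipe and audit it once.
    git clone https://github.com/example/container-x.git
    cd container-x

    # Rebuild periodically: --pull fetches a fresh base image and
    # --no-cache ensures package updates are actually re-applied.
    docker build --pull --no-cache -t container-x:local .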
I’ve never rebuilt a container, but I also don’t have any containers at deprecated status either. I swap to alternatives when a project hits deprecation or abandonware status.
The only deprecated container I currently have is filebrowser. I’ve been seeking alternatives for a while now, but strangely enough there don’t seem to be many web UI file management containers.
As such though, ever since I learned that the project was abandoned (or rather, on life support: the maintainer has said they are doing security patches only, and that while they are doing more on the project currently, that could change), the container remains off; I only activate it when I need to use it.