Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • mesa@piefed.social · ↑18/↓2 · edited · 2 months ago

    All my services run on bare metal because it's easy, and the backups work. It heavily simplifies the work, and I don't have to worry about things like a virtual router, or using more CPU just to keep the container…contained and running. Plus a VERY tiny system can run:

    1. Peertube
    2. GoToSocial + client
    3. RSS
    4. search engine
    5. A number of custom sites
    6. backups
    7. Matrix server/client
    8. and a whole lot more

    Without a single docker container. It's using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It's been 4 years-ish and has been working great. I used to over-complicate everything with docker + docker compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it's not something I care about on my weekends.

    I use docker, kub, etc…etc… all at work. And it's great when you have the resources + coworkers that keep things up to date. But I just want to relax when I get home. And it's not the end of the world if any of them go down.

      • mesa@piefed.social · ↑2 · edited · 2 months ago

        Couple of custom bash scripts for the backups. I've used ansible at work. It's awesome, but my own stuff doesn't require any robustness.

    • Auli@lemmy.ca · ↑3/↓1 · 2 months ago

      Oh so the other 80% of your RAM can sit there and do nothing? My RAM is always around 80% or so as it's caching stuff like it's supposed to.

          • mesa@piefed.social · ↑2 · 2 months ago

            Welp, OP did ask how we set it up. And for a family instance it's good enough. The RAM was extra that came with the comp. I have other things to do than optimize my family home server. There's no latency at all already.

            It spikes when peertube videos are uploaded and transcoded + matrix sometimes. Have a good night!

      • mesa@piefed.social · ↑3 · edited · 2 months ago

        FreshRSS. Sips resources.

        The dd runs when I want; I have a script I tested a while back. The machine won't be on, yeah. It's just a small image with the software.

  • fubarx@lemmy.world · ↑23/↓3 · 2 months ago

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs down to bootloader.

    The only constant is change.

  • Strider@lemmy.world · ↑6 · 2 months ago

    Erm. I’d just say there’s no benefit in adding layers just for the sake of it.

    It’s just different needs. Say I have a machine where I run a dedicated database on, I’d install it just like that because as said there’s no advantage in making it more complicated.

  • Jerry on PieFed@feddit.online · ↑2 · 2 months ago

    Depends on the application for me. For Mastodon, I want to allow 12K character posts, more than 4 poll question choices, and custom themes. Can't do it with Docker containers. For Peertube and Mobilizon, I use Docker containers.

    • kiol@lemmy.world (OP) · ↑2 · 2 months ago

      Why could you not have that Mastodon setup in containers? Sounds normal afaik

      • farcaller@fstab.sh · ↑3 · 2 months ago

        I’ll chime in: simplicity. It’s much easier to keep a few patches that apply to local OS builds: I use Nix, so my Mastodon microVM config just has an extra patch line. If there’s a new Mastodon update, the patch most probably will work for it too.

        Yes, I could build my own Docker container, but you can’t easily build it with a patch (for Mastodon specifically, you need to patch js pre-minification). It’s doable, but it’s quite annoying. And then you need to keep track of upstream and update your Dockerfile with new versions.
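
For comparison, the build-your-own-image route being dismissed here might look roughly like this hypothetical Dockerfile. The base image, tag, and patch file name are placeholders; the point from the comment is that the patch has to land before asset precompilation, and that the tag has to be bumped by hand for every upstream release:

```dockerfile
# Hypothetical Dockerfile for a patched Mastodon build -- a sketch, not
# a complete build (the real bundle/yarn/precompile steps are elided).
FROM ruby:3.2-slim AS build
RUN apt-get update && apt-get install -y git build-essential
# The tag must be updated manually on every upstream release:
RUN git clone --depth 1 --branch v4.2.0 \
        https://github.com/mastodon/mastodon.git /opt/mastodon
WORKDIR /opt/mastodon
COPY local-changes.patch .
# The patch must apply BEFORE assets are precompiled/minified, or the
# shipped JS won't contain it -- the annoyance described above.
RUN git apply local-changes.patch
# RUN bundle install && yarn install && rails assets:precompile ...
```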

  • iegod@lemmy.zip · ↑2/↓5 · 2 months ago

    You sure you mean bare metal here? Bare metal means no OS.

  • atzanteol@sh.itjust.works · ↑106/↓4 · 2 months ago

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.

    Yes, I’ll die on this hill.

    • sylver_dragon@lemmy.world · ↑35 · 2 months ago

      But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • atzanteol@sh.itjust.works · ↑1 · 2 months ago

        Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.

      • sugar_in_your_tea@sh.itjust.works · ↑9 · 2 months ago

        kubernetes

        Kubernetes isn’t just resource isolation, it encourages splitting services across hardware in a cluster. So you’ll get more latency than VMs, but you get to scale the hardware much more easily.

        Those terms do mean something, but they’re a lot simpler than execs claim they are.

        • mesa@piefed.social · ↑2 · edited · 2 months ago

          I love using it at work. It's a great tool to get everything up and running, kinda like ansible. Paired with containerization, it can make applications more “standard” and easy to spin back up.

          That being said, for a home server it feels like overkill. I don't need my resources spread out so far. I don't want to keep updating my kub and container setup with each new iteration. It's just not fun (to me).

      • AtariDump@lemmy.world · ↑5 · edited · 2 months ago

        …oh shit, the RAM is on fire.

        The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

        Burn mothercucker, burn.

        (Thanks phone for the spelling mistakes that I’m leaving).

  • Evotech@lemmy.world · ↑6/↓2 · 2 months ago

    It’s just another system to maintain, another link in the chain that can fail.

    I run all my services on my personal gaming pc.

  • Andres@social.ridetrans.it · ↑3 · 2 months ago

    @kiol I mean, I use both. If something has a Debian package and is well-maintained, I’ll happily use that. For example, prosody is packaged nicely, there’s no need for a container there. I also don’t want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people’s mailboxes. Since I’m still on Debian 12 on my mail server, I remain unaffected and I can let the bugs be shaken out before I upgrade.

    • Andres@social.ridetrans.it · ↑1 · 2 months ago

      @kiol On the other hand, for doing builds (debian packages and random other stuff), I’ll use podman containers. I’ve got a self-built build environment that I trust (debootstrap’d), and it’s pretty simple to create a new build env container for some package, and wipe it when it gets too messy over time and create a new one. And for building larger packages I’ve got ccache, which doesn’t get wiped by each different build; I’ve got multiple chromium build containers w/ ccache, llvm build env, etc
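
A minimal sketch of that throwaway-build-container pattern, assuming podman is installed. The image and package names are placeholders; the named ccache volume is the part that outlives each container:

```shell
#!/bin/sh
# Disposable build container with a persistent compiler cache.
# debian:bookworm and the package set are placeholders for whatever
# build environment you actually trust.
set -eu

fresh_buildenv() {
    name=${1:-buildenv}
    # Named volume: ccache survives when the container is wiped.
    podman volume create ccache >/dev/null 2>&1 || true
    podman run -d --name "$name" \
        -v ccache:/root/.ccache \
        docker.io/library/debian:bookworm sleep infinity
    podman exec "$name" apt-get update
    podman exec "$name" apt-get install -y build-essential ccache
}

# When the env gets messy over time, throw it away and start clean;
# the cache volume keeps rebuilds fast:
#   podman rm -f buildenv && fresh_buildenv buildenv
```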

      • Andres@social.ridetrans.it · ↑2 · 2 months ago

        @kiol And then there’s the stuff that’s not packaged in Debian, like navidrome. I use a container for that for simplicity, and because if it breaks it’s not a big deal - temporary downtime of email is bad, temporary downtime of my streaming flac server means I just re-listen to the stuff that my subsonic clients have cached locally.

        • Andres@social.ridetrans.it · ↑1 · 2 months ago

          @kiol Syncthing? Restic? All packaged nicely in Debian, no need for containers. I do use Ansible (rather than backups) for ensuring if a drive dies, I can reproduce the configuration. That’s still very much a work-in-progress though, as there’s stuff I set up before I started using Ansible…

    • kiol@lemmy.world (OP) · ↑2/↓1 · 2 months ago

      Say more, what did that experience teach you? And, what would you do instead?

  • erock@lemmy.ml · ↑2 · 2 months ago

    Here’s my homelab journey: https://bower.sh/homelab

    Basically, containers and GPUs are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don’t support being split across VMs. At the end of the day it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to Arch running everything with systemd and quadlet.
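
A quadlet is just an INI unit file that systemd turns into a service at daemon-reload. Everything below (service name, image, ports, paths) is a placeholder sketch, not the poster's actual config:

```ini
# ~/.config/containers/systemd/example.container -- placeholder name.
# `systemctl --user daemon-reload` generates example.service from this,
# so the container is managed like any other systemd unit.
[Unit]
Description=Example service via Podman quadlet

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
Volume=%h/example-data:/data:Z
# Host device passthrough (e.g. a GPU render node) is a single line:
# AddDevice=/dev/dri/renderD128

[Service]
Restart=always

[Install]
WantedBy=default.target
```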

  • kiol@lemmy.world (OP) · ↑5 · 2 months ago

    Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?

  • sj_zero@lotide.fbxl.net · ↑6/↓1 · 2 months ago

    I’m using proxmox now with lots of lxc containers. Prior to that, I used bare metal.

    VMs were never really an option for me because the overhead is too high for the low power machines I use – my entire empire of dirt doesn’t have any fans, it’s all fanless PCs. More reliable, less noise, less energy, but less power to throw at things.

    Stuff like docker I didn’t like because it never really felt like I was in control of my own system. I was downloading a thing someone else made and it really wasn’t intended for tinkering or anything. You aren’t supposed to build from source in docker as far as I can tell.

    The nice thing about proxmox’s lxc implementation is I can hop in and change things or fix things as I desire. It’s all very intuitive, and I can still separate things out and run them where I want to, and not have to worry about keeping 15 different services running on the same version of whatever common services are required.

    • boonhet@sopuli.xyz · ↑5 · 2 months ago

      Actually docker is excellent for building from source. Some projects only come with instructions for building in Docker because it’s easier to make sure you have tested versions of tools.

  • hperrin@lemmy.ca · ↑3 · 2 months ago

    There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.

  • missfrizzle@discuss.tchncs.de · ↑16/↓1 · 2 months ago

    pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

    and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

    until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

    /uj not really but that’d be sick as hell.