Hello everyone,
I am about to revamp my selfhosting setup (software-wise), and that got me thinking about how I could help my favourite Lemmy community become more active. Since I am still learning many things and am far from being a sysadmin, I don’t (just) want to tell you my point of view; instead, I thought of a series of posts:
Your favourite piece of selfhosting
The idea is to ask every one of you for your favourite piece of software for a specific use case. But we have to start at the bottom:
Operating systems and/or type 1 hypervisors
You don’t have to be an expert or a professional. You don’t even have to be using it. Tell us your thoughts on one piece of software. Why would you want to try it out? Did you try it out already? What worked great? What didn’t? Where are you stuck right now? What are your next steps? Why do you think it’s the best tool for the job? Is it aimed at beginners or veterans?
I am eager to hear your thoughts and stories in the comments!
And please also give me feedback on this idea in general.
No love for OpenMediaVault? I run it virtualized under Proxmox and I’m quite happy with it: not very fancy, but super stable.
I run about twenty containers on OMV, with four 8 TB drives in a ZFS RAIDZ1 setup (RAIDZ1 being the ZFS take on RAID 5). I love how users can be shared across services; for example, the same user can access SMB shares or connect via OpenVPN.
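For anyone curious, that pool layout is a one-liner to create. A rough sketch (the pool name and device paths are placeholders, not the commenter’s actual setup):

```bash
# Create a 4-drive RAIDZ1 pool (single parity, the ZFS take on RAID 5).
# "tank" and the /dev/sd* paths are placeholders.
sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Check the layout; with 4x 8 TB drives this gives roughly 24 TB usable.
sudo zpool status tank
```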
+1 for OMV. I use it at work all the time to serve Clonezilla images through an SMB share. It’s extremely reliable. The Clonezilla PXE server is a separate VM, but the toolkit is available in the clonezilla package, and I could even integrate the two services if I felt particularly masochistic one day.
My first choice for that role was TrueNAS, but at the time I had to use an old-ass Dell server that only had hardware RAID, and TrueNAS couldn’t use ZFS with it.
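For reference, serving images over SMB takes very little Samba config on plain Debian (OMV would normally manage this through its own web UI). A minimal sketch; the share name and path are made up:

```bash
# Append a read-only share for image files; share name and path are examples.
sudo tee -a /etc/samba/smb.conf <<'EOF'
[clonezilla-images]
   path = /srv/clonezilla
   read only = yes
   guest ok = yes
EOF

# Restart Samba so the new share shows up.
sudo systemctl restart smbd
```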
OS: Unraid
It’s primarily NAS software, with a form of software RAID functionality built in.
I like it mainly because it works well, and the GUI makes it very easy to use and work with. On top of that you can run VMs and Docker containers, so it is very versatile as well.
I use it to host the following services on my network:
- Nextcloud
- Jellyfin
- CUPS
It costs a bit of money up front, but for me it was well worth the investment.
Love Unraid. Been using it for a few years now on an old Dell server. I’m about to turn my current gaming PC into the main server so I can use GPU passthrough and CPU pinning for things like a VM just for LLM/AI work and a VM running EndeavourOS for gaming. I just need to figure out how to keep my old server working somehow, because of all the drive storage I already have set up, which my PC doesn’t have space for without a new case.
For anyone looking to set up Unraid, I highly recommend the SpaceInvaderOne YouTube channel. It helped tremendously when I got started.
+1 for Unraid. Nice OS that lets me easily do what I want.
Debian on the host and everything else in containers
I have a NUC with Linux Mint and host everything in Docker containers. I expose any service I need through Caddy (rough sketch below).
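Caddy keeps that pattern pleasantly short. A minimal sketch, assuming a Jellyfin container on port 8096 (the hostname and port are illustrative, not this commenter’s config):

```bash
# Write a minimal Caddyfile; hostname and upstream port are examples.
cat > Caddyfile <<'EOF'
jellyfin.example.com {
    reverse_proxy localhost:8096
}
EOF

# Run Caddy against it; for public hostnames it obtains TLS certificates automatically.
caddy run --config Caddyfile
```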
TrueNAS CORE, because I’m a BSD guy at heart. With that all but dead, I’m trying to decide between bare FreeBSD and XigmaNAS.
I have an Arch Linux box for things that don’t run on BSD.
Been using Debian for 25 years.
PVE running on a pile of e-waste. Most of the parts are leftovers from my parents’ old PC that couldn’t handle Win10. Proxmox loves it, even the 10 GB of mismatched DDR3 memory. The only full VM is OPNsense (formerly pfSense); everything else runs inside Debian containers. It only struggles when Jellyfin has to transcode something, because I don’t have a spare GPU.
Best type of homelab! Just use what’s there
- Hypervisor: Debian stable + libvirt or PVE if you need clustering/HA
- VMs: Debian stable
- podman if you need containerization inside the VMs (rough sketch below)
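For a taste of that bottom layer, here’s a hedged podman sketch; the image, port, and media path are illustrative examples, not a recommendation:

```bash
# Run a container under podman; image, port, and volume are examples.
podman run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/media:/media:ro \
  docker.io/jellyfin/jellyfin

# podman can emit a systemd unit so the container comes back after a reboot.
podman generate systemd --new --name jellyfin
```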
I have been using Proxmox VE with Docker running on the host (not managed by Proxmox), plus Cockpit to manage NFS shares and Home Assistant OS running in a VM. It’s been pretty rock solid. That was until I updated to version 9 last night; since then it’s been a nightmare getting the Docker socket to be available. I suspect Debian Trixie adds some extra layer of protection, but I haven’t investigated it much. My plan for tomorrow and this week is to migrate everything to Debian 12, as that’s the tried-and-true OS for me: I know it’s quite stable with Cockpit, Docker, and so forth, with KVM for my Home Assistant installation.
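I can’t say what changed in Trixie either, but for anyone hitting the same wall, these are the generic first checks I’d reach for; nothing here is specific to Proxmox 9 or this commenter’s box:

```bash
# Generic Docker socket triage; none of this is specific to Proxmox or Trixie.
systemctl status docker.socket docker.service   # are the units running at all?
ls -l /var/run/docker.sock                      # owner, group, and permissions on the socket
groups "$USER"                                  # is the user in the "docker" group?
sudo usermod -aG docker "$USER"                 # if not, add them (log out and back in after)
```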
One other OS worth a look, if you want to check it out, is XCP-ng, which I played with; Home Assistant on it was blazing fast. But it doesn’t allow NFS shares to be created, and using the existing data on my drives wasn’t possible, so I would’ve had to format them.
I have several Debian stable servers operating in my stack. Almost all of them host a range of VMs in addition to a plethora of containers. Some house large arrays; others focus on application gruntwork. I chose Debian because I know it: I’ve been using it since the early ’00s. It’s 👌.
Linux
Kinda dumb, but I run DietPi on a mini PC. Just nice and simple.
+1. Very easy, very stable.
I also started with DietPi on every device; it works like a charm. But I personally want to try something else to learn a bit more.
Edit: I’m thinking about trying NixOS in the near future.
Maybe crazy, but I’ve been running Flatcar lately. Automatic OS updates are nice, and most of my machines pretty much exclusively run containers anyway.
I’ve been using NixOS on my server. Having all the server’s config in one place gives me peace of mind that the server is running exactly what I tell it to and I can rebuild it from scratch in an afternoon.
I don’t use it on my personal machine, because the lack of an FHS feels like it’d be a problem, but when selfhosting, most things are popular enough to have a NixOS module already.
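For anyone who hasn’t seen it, the “all config in one place” part looks roughly like this; the module options shown are real NixOS options, but the selection is an invented example, not this commenter’s config:

```bash
# /etc/nixos/configuration.nix describes the whole machine declaratively.
# A toy excerpt (real NixOS options, invented selection):
#
#   { config, pkgs, ... }: {
#     services.openssh.enable  = true;
#     services.jellyfin.enable = true;
#   }
#
# One command rebuilds the running system to match the file:
sudo nixos-rebuild switch
```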
I used to really like ESXi, but Broadcom screwed us on that.
Hyper-V sucks to run and manage. It’s also pretty bloated.
Proxmox is pretty awesome if you want full VMs. I’m gonna move everything I have onto it eventually.
For ease of use, if you have a Synology that can run containers, it’s okay.
I also like and tend to use Unraid at my house, but that’s more because of my insane storage requirements and how frequently I upgrade with dissimilar disks. (I’m just shy of 500 TB and my server holds 38 disks.)
(I’m just shy of 500 TB and my server holds 38 disks.)
That means your disks average more than 13 TB each? That’s expensive!
It’s been a long-term build. With Unraid it’s been pretty easy to slowly add disks, one at a time.
I’m moving everything towards 22 TB disks right now. It’s still got a handful of 4 and 5 TB disks in it. I’ve ended up with a pile of smaller disks that I’ve pulled and that just… sit around.
I also picked up a Synology recently that houses 12× 12 TB disks, which count toward that total. I’ve got another couple of Synologys just lying around unused.
I’ve got 30× 4 TB disks, just because second-hand enterprise gear is so cheap. I’ll slowly replace the 4 TB SAS drives with larger-capacity SATA to make use of Unraid’s spin-down functionality. I don’t need the extra speed of SAS, and I wouldn’t mind saving a few watt-hours.
Damn, 38 disks! How do you connect them all? Some kind of server hardware?
Curious because I’m currently using all 6 SATA ports on an old consumer motherboard and not sure how I’ll be able to expand my storage capacity. The best option I’ve seen so far would probably be adding PCIe SATA controller(s), but I can’t imagine having enough PCIe slots to reach 38 disks that way! Wondering if there’s another option I haven’t seen yet.
Yep. It’s a 4U Supermicro chassis with the associated backplanes.
I had some servers left over from work. It’s also set up to take JBOD cards with mini-SAS so I can expand into additional shelves if I need to.
My setup really isn’t much of an entry-level setup. It’s similar to this: https://store.supermicro.com/us_en/4u-superstorage-ssg-641e-e1cr36h.html