Hey guys. I’ve been considering moving to another OS for my home lab. Do you have any suggestions? Especially former Unraid users? Mostly just for the arrs, though I’d like to run a reverse proxy/file hosting as well. Proxmox seems pretty trendy; can I use it for the arrs as well as backups?
Rant/extra info:
I’ve been using Unraid for a couple of years now, and even paid for the Basic registration. I’ve largely used it to run all my arrs in Docker, plus Pi-hole, and had a HASSIO VM running.
I recently tried setting up Nextcloud. During the setup (which, like nearly everything, I followed a video guide for) I ran into a novel error, so I deleted the Nextcloud docker and got it from the official repo instead. Now my Nextcloud share is gone and I can’t create new shares??
Stuff like this happened when I set up guac too: weird errors, plenty of which have little documentation or explanation, and plenty of which I need to SSH in or use Linux commands to fix. Which led me to, “I’m having to learn this stuff anyway, so why not spin up a Linux server and learn properly?”
Should I just rebuild, or give Unraid a bit more time? It’s still young as an OS, right?
I never used Unraid but was thinking about it.
I went with TrueNAS for my NAS and Ubuntu Server for my application server instead. I use Dockge for my Docker web UI and I’m happy with that setup.
I legitimately don’t understand the trendiness of Proxmox given that VMs are overkill compared to containers. If you’re migrating from Unraid you’re likely already using the Docker version of all your arr services, so going and spinning up VMs feels like a step backwards.
You can either use the exact same containers and run them as raw services with systemd, or use something like Docker Compose or dozens of other tools to orchestrate them (rough sketch below). I use k8s but can’t recommend it with a straight face right after calling VMs overkill (very different kinds of overkill, but still).
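For reference, a minimal sketch of what that looks like, assuming the usual linuxserver.io images and made-up host paths (adjust PUID/PGID/TZ and volumes for your box):

```yaml
# Rough compose sketch -- image tags are the linuxserver.io ones,
# host paths under /srv are placeholders, not anything from this thread.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /srv/appdata/sonarr:/config   # app config lives on the host
      - /srv/media:/media             # shared media library
    ports:
      - "8989:8989"
    restart: unless-stopped

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /srv/appdata/radarr:/config
      - /srv/media:/media
    ports:
      - "7878:7878"
    restart: unless-stopped
```

`docker compose up -d` brings the stack up, and the file itself doubles as documentation of your setup.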
Sometimes you need a VM. They’re not overkill, just useful for different things.
Examples:
- Running Windows
- Running OSX
- Passing hardware through to use it isolated from the host (PCIe devices, USB, etc.)
- Linux guests where you need a full kernel and full permissions (for example, to run Docker without the issues caused by being nested inside a container)
VMs don’t really have much more overhead than a container in most use cases either. For example, a minimal Debian VM idles at around 30 MB of RAM.
I was replying specifically in the context of the original question. Unraid already has its services tooling built out on top of containers, so this person is probably already using containerized versions of the arr services. It would be overkill to go build VMs for these services, for exactly the reasons you listed: they don’t need to be Windows or OSX, they don’t need hardware passthrough, and they don’t need a full kernel.
That aside, you absolutely can give containers strong isolation and map hardware directly into them; cgroups and namespaces allow for those use cases. You may not be using Docker anymore at that point, but Docker is more of a crutch for beginners who probably don’t need those things anyway.
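To make the hardware part concrete, here’s a rough sketch of mapping a device straight into a container; Jellyfin and an Intel iGPU at /dev/dri are my assumptions, purely for illustration:

```yaml
# Sketch of device passthrough to a container -- the image and the
# /dev/dri render device are assumptions for the example.
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    devices:
      - /dev/dri:/dev/dri               # hand the GPU render node to the container
    volumes:
      - /srv/appdata/jellyfin:/config   # placeholder host paths
      - /srv/media:/media
    ports:
      - "8096:8096"
    restart: unless-stopped
```

No hypervisor involved; the container talks to the device through the host kernel.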
One real-world example of this is COS and Bottlerocket, which are literally Linux distributions where even core OS components run individually in separate containers via cgroups. COS is the default node image on GKE clusters, and Bottlerocket runs on a lot of EKS clusters.
The benefit of splitting services between VMs is the same as it has always been: I can break one service without breaking ALL of them. Containers are an improvement over native installs, but they don’t solve this problem completely.
I can break one container without breaking all of them? I can run them in isolated container networks and even isolated cgroups if I want to. Docker hides a lot of the core reasons tools like jails, chroot, and eventually LXC were created, but containers absolutely can do the things you’re using VMs for if you’re willing to learn how they work.
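As a sketch of that isolation point (service and network names here are made up for the example), compose will happily put each stack on its own network so breaking or compromising one doesn’t touch the other:

```yaml
# Two independent stacks with isolated networks -- names and the
# MariaDB password are placeholders for illustration only.
services:
  nextcloud:
    image: nextcloud:latest
    networks:
      - nextcloud_net          # only sees its own network

  nextcloud_db:
    image: mariadb:latest
    environment:
      - MARIADB_ROOT_PASSWORD=changeme
    networks:
      - nextcloud_net

  pihole:
    image: pihole/pihole:latest
    networks:
      - pihole_net             # can't reach the Nextcloud stack at all

networks:
  nextcloud_net:
  pihole_net:
```

Same idea as separate VMs for blast radius, just without running a full OS per service.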