HACKER Q&A
📣 scott01

Please recommend how to manage personal servers


Hey guys,

I'm not an infrastructure engineer, nor do I work in web, but I'm pretty comfortable with Linux. I realised I need to spin up a couple of home servers and VPSs to simplify and localise my digital life. I have an RPi and an x86 NAS in my home network, and a VPS in the cloud. They run different hardware and distros, so I have to set them up a bit differently, which is a pain in itself, but what makes matters worse is when I mess something up real bad, or when something else essentially forces me to reinstall.

I tried Ansible and find it hard to use. E.g. at some point I decided to redeploy my server to a different VPS type in the same cloud, but I had to patch my Ansible scripts to do so, even though it was the same Rocky Linux distro (and it failed at some random Docker Compose networking config, IIRC). I guess Ansible scripts aren't reproducible and require constant work to keep them working. But I very much prefer them to just SSH-ing into servers.

That leads to my question: is there anything I can do to write config once and just deploy it more or less reliably? NixOS looks interesting, but learning another programming language just for this feels a bit too much for me. Or maybe there's another way to do this kind of thing that I've overlooked, since I'm in a different industry?


  👤 mynegation Accepted Answer ✓
Docker Compose, and be done with it. Kubernetes and NixOS are great and more powerful than Compose, but the learning curve is longer. Feel free to graduate to k8s or NixOS once you are up and running with Compose. Docker Compose has the most tutorials and YouTube videos, the widest support among projects you want to self-host, and many more people who can readily advise you on troubleshooting. Check out r/selfhosted to get a feel for it.
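A minimal sketch of the Compose workflow (the service name, image, port and mount path below are illustrative placeholders, not from this thread):

```shell
# Sketch: one directory per stack, one docker-compose.yml each.
set -eu
stack=$(mktemp -d)    # stand-in for e.g. ~/stacks/wiki
cd "$stack"

cat > docker-compose.yml <<'EOF'
services:
  wiki:
    image: nginx:alpine                      # stand-in for whatever app you self-host
    ports:
      - "8080:80"
    volumes:
      - ./data:/usr/share/nginx/html        # bind mount keeps state on the host
    restart: unless-stopped
EOF

# On a box with Docker and the compose plugin installed:
# docker compose up -d
```

The per-directory layout is what makes "nuke and redeploy" cheap: the compose file plus the bind-mounted data directory is the whole state of the service.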

You can use portainer if you need a GUI but command line is not that complicated if you are comfortable with CLIs.


👤 uniqueuid
Opinionated take: "a couple of home servers" are probably the wrong solution to your problem. Almost everything you do as a private person works better, faster, more reliably, and with much less time investment if you use a single machine for it.

👤 dijit
When you follow an online guide, dump a copy of the text on the page and a link in a single text file called “readme.txt”.

When you create backups of the state of the machine (even if the backups are just tarballs sent over ssh to the other machines), include a copy of that file.
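A sketch of that habit: the backup tarball always carries the notes file alongside the state (paths and the backup host below are placeholders):

```shell
# Sketch: bundle readme.txt with every state backup, then ship it over SSH.
set -eu
state=$(mktemp -d)    # stand-in for the directories you actually back up
mkdir -p "$state/etc"
echo "followed: https://example.com/some-guide (text copied below)" > "$state/readme.txt"
echo "some config" > "$state/etc/app.conf"

# readme.txt is listed explicitly so it can never be forgotten
tar -czf "$state.tar.gz" -C "$state" readme.txt etc

# Send it to another machine (needs SSH access, so commented out here):
# scp "$state.tar.gz" other-box:/backups/$(hostname)-$(date +%F).tar.gz
```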

Learning another DSL or desired-state config system is going to be a pain, because they lack a lot of things programmers like: breakpoints, good LSPs and, crucially, reproducible environments.

Worse still, the DSLs shift around. I know cfengine, Puppet, Chef, Salt and Ansible, because they keep getting replaced over time by cleaner abstractions, or by whatever tool the community currently looks on more kindly.

Do the simple thing: document what you do to your machines. It's not sexy, but you don't have to unlearn patterns or spend time fixing your environment just to make your docs (which are now code) work automatically.


👤 adaszko
Check out Proxmox + https://tteck.github.io/Proxmox/ + lxc container snapshots on the NAS and set up Proxmox backup server on the Pi. I find such a setup to be "all benefit, no giving up anything", contrary to NixOS.

👤 talldayo
I was going to suggest NixOS. It's a bit of a climb to learn it, but having a modular setup that works with all my devices is absolutely a killer-app. My desktop, laptop, VPS and Raspberry Pi all share the same terminal configuration from the same Git repo.

Waxing poetic about NixOS on HN is a horse well-beaten. Just try it if you've got an extra machine lying around and a few hours to spare. I think it's a great halfway option for people who want complex server composition without buying into Kubernetes.


👤 dsr_
It looks to me like you've been poisoned by the modern scaling approach. You're not going to run N web servers and M load balancers and P application servers, deploying them automatically from a CI system. You're going to run nginx and six single applications behind it and one database, right? So when you adopt ansible or puppet or nix to run a config, you are adding complexity, not simplifying your life. Even docker may be overkill.

The points to consider:

- architecture. You have three boxes. One of them has lots of storage. One of them is cheap. One costs you monthly. I don't know that this is what you actually want. You probably need a main box that can do anything, a backup facility for that, and a proxy to expose services to the outside world.

- a common operating system on all nodes. I like Debian stable. Not everybody does. Being happy with it is more important than being the "best". But you should only have one.

- automatic backup of config and data. Snapshots are nice.

- if you can't have perfect snapshots, you can at least check your config into git. Use etckeeper.

- set up a common approach to running things. Make everything grab TLS certs from Lets Encrypt through nginx. Make a new user for each service. Make a new database user for each service that needs that, make a new PHP worker pool, whatever. Be consistent.

- document your policy and your exceptions. This can be a text file or a wiki or something weird.

- know where you are getting things, how to upgrade them, and how to get announcements of available updates.
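The "config into git" point above can be sketched in a few commands. etckeeper does exactly this for /etc automatically; the same idea is shown here on a scratch directory so it runs anywhere:

```shell
# Sketch: keep a config directory under version control so every change
# is recorded and revertible. The file contents are illustrative.
set -eu
conf=$(mktemp -d)    # stand-in for /etc
echo "server_name example.local;" > "$conf/nginx.conf"
git -C "$conf" init -q
git -C "$conf" add -A
git -C "$conf" -c user.email=ops@local -c user.name=ops \
    commit -qm "initial config snapshot"
# On a real Debian box: apt install etckeeper   (hooks into apt and
# commits /etc changes for you)
```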


👤 neilv
For two publicly-reachable services I personally run, I decided the way that involved the least work and was most likely to be low-drama -- initially, and ongoing -- was to just put them on separate $5/month Linodes running Debian Stable.

My personal wiki has very short notes on how to rebuild each from scratch. (Pretty much: push Linode Web site buttons to make a new Debian Stable instance, get a shell and do this `apt install` command line, and edit config file like so). Data gets pushed/pulled via simple shell scripts run on laptop (usually using SSH and rsync).
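Those push/pull scripts can be as small as two shell functions (host names and paths here are invented; rsync rides over SSH, run from the laptop):

```shell
# Sketch of laptop-side sync scripts for a setup like the one described.
set -eu
push_site() {
    # --archive keeps permissions/times; --delete mirrors local removals
    rsync -az --delete ~/sites/blog/ "web-host:/srv/blog/"
}
pull_data() {
    rsync -az "web-host:/var/lib/app/" ~/backups/app/
}
# push_site   # uncomment on a machine that can actually reach web-host
```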

Separate from those services, my GPU server is a separate box at home, frequently changing at a low level, so blasting it away entirely a few times has made pragmatic sense, and I'm glad it isn't sharing config complexity with any other resources. Setting up the large ML stacks, down to proprietary drivers, is sometimes very experimental at first, and I need to do it manually anyway, before I'm ready to write a Dockerfile or set up passthrough for containers; once the experiment works, there's no reason to do that. Were I making a production setup, or something reproducible by others, I'd do more after the initial experimental setup.

Wrangling much more complex layers atop (e.g., K8s, Docker, Terraform, Ansible, NixOS, etc.) sometimes means more things that can go wrong, and sometimes more time spent learning someone else's bureaucracy. Most of tech work now is learning piles of other people's bureaucracy. That makes sense for businesses that actually need that complexity, and for people who just want to copy&paste cargo-cult command lines and hopefully it works, and for people who want to homelab it for experience (which is perfectly valid). But the way I run my important services and my experimental box seemed to be easier overall.

Of course, for curiosity/resume/masochism purposes, I do have a separate K8s cluster at home, which runs nothing important, and which I can obliterate and change and experiment with at will, without being encumbered by it running services I actually need.


👤 kalib_tweli
The hard part here is idempotency. Ansible is fine for a programmer, because the learning is fun and you just have to spar with your machine to get good.

But for a non-programmer, it's understandable that you don't want to be bothered with the inner workings of your OS and with keeping Ansible scripts idempotent.

And for every piece of software you want to run on your server, the idempotency task grows more difficult.

My honest opinion? Tolerate the learning curve for docker-compose. Each application you need can be managed and tweaked in isolation. Troubleshooting "works on my machine" problems will cost you more time in the long run. You can't anticipate all the weird interactions between your programs and the OS. Being able to nuke the setup and rebuild from scratch is your most valuable tool.

- thin base OS (install just enough to run docker-compose)

- maintain images for each of the apps you need.

- mount the essential volumes of each image to a well-known location on your hard drive to make manual backups easy.
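The volume-mount point above can be sketched like this: all app state lives under one well-known tree, so a manual backup is a single tar command (the base path and app names are illustrative):

```shell
# Sketch: one directory per app's state, all under a common base path.
set -eu
base=$(mktemp -d)    # stand-in for something like /srv/appdata
mkdir -p "$base/gitea" "$base/nextcloud"
echo "demo" > "$base/gitea/app.ini"

# Each compose file would bind-mount its slice, e.g.:
#   volumes:
#     - /srv/appdata/gitea:/data

# Manual backup of everything at once:
tar -czf "$base.tar.gz" -C "$base" .
```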


👤 don-code
Kubernetes. No, seriously.

It's an orchestration tool that's common in the real world, and also notoriously hard to learn and "get right". Downtime due to obvious, important mistakes is common, and it leaves both engineers and lower management wondering if it was a good idea to adopt.

The thing is, in your home environment, you have no (or hopefully significantly lower) uptime requirements. If you break the entire cluster for a few days, because you ran into a network problem or upgraded it wrong, who cares? That's a potentially hundred-thousand-dollar learning opportunity at a large organization, for just the cost of electricity in your home.

For what it's worth, I run Kubernetes both in my day job and in my home lab. I've learned more about networking from running my own cluster on bare metal (HP DL360 boxes) than I have from ten years of managing infrastructure for bigcorps, and it also gives me a safe place to play with functionality that I might want to adopt at work.


👤 treffer
Depends on what you are doing. But you can take the path of app / os images.

My home network is just OpenWrt, and I use make plus a few scripts and the image builder to create images that I flash, configs included.

For the RPi I actually liked cloud-init, but it is too flaky for complicated stuff. Nowadays I would rather dockerize it and use systemd + podman, or a kubelet in standalone mode. Secrets on a mount point. Make the boot partition of the RPi the main config folder; that way you can locally flash a golden image.
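The systemd + podman idea can be sketched as a hand-written unit that runs a container (image and paths below are placeholders; newer podman versions prefer Quadlet files over plain units like this):

```shell
# Sketch: a systemd unit wrapping a podman container, with config read
# from the RPi's boot partition as described above.
set -eu
unit=$(mktemp)    # would live at /etc/systemd/system/demo.service
cat > "$unit" <<'EOF'
[Unit]
Description=Demo container (illustrative)
After=network-online.target

[Service]
ExecStart=/usr/bin/podman run --rm --name demo \
    -v /boot/config:/config:ro docker.io/library/nginx:alpine
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# On the real machine:
# systemctl daemon-reload && systemctl enable --now demo.service
```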

Anything that mutates a server is brittle as the starting point is a moving target. Building images (or fancy tarballs like docker) makes it way more likely that you get consistent results.


👤 Wool2662
I use a single old PC at home. Put Debian on it, install Docker and unattended-upgrades. Create Docker Compose files for all services. Use 'latest' everywhere and run Watchtower to update all images regularly. While I expose a select few services to the internet, I connect to most of them via VPN. On the local network I'm using Pi-hole for local DNS, and since I use a wildcard Let's Encrypt certificate I have SSL for everything, which makes it nice to use.

Haven't had to touch the system more than once a year or so when I got an alert that unattended upgrades couldn't install something.


👤 ripjaygn
Maybe containerize most things since distros change a lot between releases. That way you can keep your distro on the latest version and even switch distros without too much impact.

👤 exe34
I use nixos on my laptop, but never learnt enough to make it my everywhere-OS.

Might I suggest a different route that I took - use the base image from whatever VPS and modify as little of it as possible. Then run everything else in Docker.

That's how I migrated my placeholder website and my gogs install across to a new provider: I copied my data across and ran the original commands to launch docker containers that I used on the first server. These are now happily running on the new server.


👤 allanwind
Ansible has been better than Chef and Puppet for small environments. I looked at cdist, but it wasn't faster for my use case. Also, Ansible executes tasks in order, unlike Chef and Puppet, which helps reduce your state space.

If you incrementally maintain servers then, by definition, your only tested configuration is the one you executed. The way to improve reliability is to start from the same (container) state; then the only maintenance ought to be actual changes (OS upgrades have been it for me). Ansible across 1 server and 2 desktops with no changes takes ~3m17s, and I wish it were way faster. As of now I manage by tagging things and only running a subset of them.

Consider standardizing on a single distro (I use Debian and it's served me extremely well over the years). +1 on centralizing, too, until you have a use case that requires more servers. servers <= containers. Simplify. Kubernetes is complicated.

👤 RGamma
It'll take some time to set up, but NixOS with nixops and maybe disko can do a lot, depending on your use case.

I just use NixOS flakes with a syncthing'ed flake repo across 5 hosts (desktop, laptop, a media device (NUC7), a home server and a VPS). It has its problems, but I'll iron them out eventually.
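The flake-per-fleet setup above boils down to one command per host: with the repo synced everywhere, each machine rebuilds itself from the same source. Repo path and hostname handling below are placeholders; the sketch only prints the command rather than running it, since it needs a NixOS host:

```shell
# Sketch: each host rebuilds from the shared, synced flake repo.
set -eu
host=$(hostname 2>/dev/null || echo myhost)
cmd="nixos-rebuild switch --flake ~/flakes#$host"
echo "on the NixOS host itself: sudo $cmd"
```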

As always start small...


👤 kkfx
NixOS or Guix System is the least archaic way civilized people (Lisp vignette here) have in 2024 to manage their digital life. Learning Nix is a pain, but learning enough to run a PERSONAL infra is not so challenging.

Trying to replicate "the cloud" at home is a nice way to tie your own genitals to some heavy loads and then start jumping.

That said: do not use an RPi or a NAS; assemble a small desktop. It can be a NAS, a router, a server for any kind of service, and it's just common commodity hardware: the best supported in the FLOSS world, the quickest to replace, the cheapest for spare parts. Desktop iron today does not eat that much electricity and has enough power for most common needs. And with NixOS or Guix System you do not need to run a gazillion things just to show a damn hello world, so you can milk your hardware as needed.


👤 sevagh
>I realised I need to spin up a couple of home servers and VPSs to simplify

Presumably you're trying to replace some paid services with local self-hosting? Consider that paying for a service _is_ the simpler option.


👤 persnickety
You can't avoid learning another programming language if you want to describe your setup in such a way that a computer can recreate it.

But you can easily fall into the trap of having a bazillion underspecified informal languages if you try cobbling together bash scripts, dockerfiles, and whatever other thing you need ad-hoc.

Nix is probably a good investment in that light. My personal concern is that it moves rather fast, and some things should run themselves and stay secure without being touched more than once a year.


👤 puppycodes
Take a look at Vagrant! https://www.vagrantup.com/ In my admittedly limited understanding, I believe it offers something closer to Nix-like reproducible (rather than merely repeatable) deployments. Like Nix, I believe you can also hash-verify each VM to be confident you have the same image.

👤 aynyc
I used to have multiple RPis and different physical servers (old PCs). I tried Docker and other things because I thought I was cool. Then I decided to just use one modern PC (actually an off-lease work computer) and run Docker for each part of my server stack. I can't tell you how much easier my life has become when it comes to admin.

👤 theshrike79
My home NAS/server just runs Unraid. It’s drop dead simple and works.

For cloud/VPS stuff I use a bunch of docker-compose files plus configs that do pretty much everything. The underlying OS is usually Debian, because it's what I'm used to and it doesn't break stuff by moving too fast.


👤 Asmod4n
I wouldn't ever go the route of using a VPS for personal stuff, or a cloud provider for that matter.

Find a hosting provider that offers you a shell login and manages close to all the services you need, including backups, security updates and so on.

That should massively simplify your setup.


👤 ratg13
I'd recommend just using cloud-init.

If you're running a server in the cloud it's already available.

It takes no effort to set up yourself, and it's just a basic script that runs once and sets up a server exactly how you want it.
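A sketch of what such a user-data file looks like (the package list and user are illustrative; most providers accept this in a "user data" field when you create the instance):

```shell
# Sketch: write a minimal cloud-init user-data file. cloud-init reads it
# on first boot and applies packages, users and commands.
set -eu
ud=$(mktemp)    # would be pasted into the provider's user-data field
cat > "$ud" <<'EOF'
#cloud-config
package_update: true
packages:
  - nginx
  - unattended-upgrades
users:
  - name: scott
    groups: sudo
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... scott@laptop
runcmd:
  - systemctl enable --now nginx
EOF
```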


👤 hwbunny
cd /

dpkg --get-selections > installed_packages

git init

git add installed_packages /etc /home/*/.* /root /whateverneeded

git commit -m "system init"

on a new system just copy over the .git folder

install packages from installed_packages (dpkg --set-selections < installed_packages && apt-get dselect-upgrade), then git checkout

reboot

that's all :D

or there's Chef or Puppet