HACKER Q&A
📣 Brajeshwar

Show Your Homelab and Home Server Setup for Inspiration


I’ve been tinkering with quite a bit of self-hosting and home-lab stuff, starting with a few Raspberry Pis, a 10+ year-old Mac Mini, and a few laptops-as-servers. I’m willing to make mistakes, learn, and be inspired by what you do. I believe many of us would love your tips, tricks, and all the gotchas in between.


  👤 ewweezdsd Accepted Answer ✓
Used corporate mini-PCs make excellent home servers if you don't need too much storage (they typically support two SSDs) and want low power consumption, low noise, and a small form factor. Right now ones with a 7th- or 8th-gen i5 are a pretty good deal, often around 50-120 USD on eBay, a bit more in the EU. Idle power consumption is about 10W. If you want to play around with virtualization, a higher-end model with an Intel NIC is recommended due to some Linux driver issues (at least with Proxmox).

https://www.servethehome.com/introducing-project-tinyminimic...

Corporate thin clients with a Pentium J5005 or similar also make decent Raspberry Pi replacements, usually just with a worse price-to-performance ratio than a proper mini-PC. Certain models have a PCIe slot, which makes them ideal DIY routers when fitted with a PCIe NIC and pfSense/OPNsense as the OS. If you're coming from a consumer router, such a project can make a lot of sense.

For a NAS, I recommend building your own. Personally I use an ASRock J5040 mini-ITX board in a Node 304 case with Unraid as the OS; it houses 4x 3.5" drives and I'm happy with it so far. For maximum reliability you might want a motherboard that supports ECC memory, though.


👤 q0uaur
just running my old desktop PC with proxmox.

most important thing i ever did for my homelab was starting to mess around with VMs. first with virt-manager to have a gui, then just plain kvm (well, qemu really) on the commandline using scripts to auto-install vms, and now a combination of Proxmox and Ansible to make a vm with just a few keystrokes. the freedom to mess around as much as i want without danger of breaking my system is the best.

next most important was setting up dns and dhcp automation - whenever the ansible playbook makes a vm, it immediately reserves a static ip for that mac and makes a DNS entry for the vm's name pointing to its ip - so i can reach all my vms by name without having to set that up by hand (roughly the sketch below). great for when a test vm becomes permanent and needs to stay accessible. this, combined with ansible automatically creating my user and putting my ssh pubkey into its authorized_keys file, makes everything SO smooth, no friction at all to work on different vms and experiment.
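
a rough sketch of that reservation step, assuming dnsmasq serves both dhcp and dns (the config path, the .lan suffix and the reload method are placeholders, not my actual setup):

    #!/usr/bin/env python3
    """Reserve a static DHCP lease and a DNS name for a new VM (dnsmasq)."""
    import subprocess

    CONF = "/etc/dnsmasq.d/vm-reservations.conf"

    def register_vm(name: str, mac: str, ip: str) -> None:
        with open(CONF, "a") as f:
            # dhcp-host pins the MAC to a fixed IP
            f.write(f"dhcp-host={mac},{ip}\n")
            # host-record makes the VM's name resolve to that IP
            f.write(f"host-record={name}.lan,{ip}\n")
        # restart so dnsmasq re-reads its config (SIGHUP only reloads hosts files)
        subprocess.run(["systemctl", "restart", "dnsmasq"], check=True)

    register_vm("testvm", "52:54:00:12:34:56", "192.168.1.50")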

also, when the ssd proxmox was installed on died, it was super easy to restore all the vms.

next step is probably automating public dns, port forwarding and reverse proxy settings - so far i'm doing that manually, but i just got a new phone and refuse to log into google on it, so i've had to set up a bunch of services, kinda running my own cloud, and they all need port forwards and reverse proxy entries. speaking of reverse proxies... they're a wonderful tool: set up ssl/https ONCE on the proxy, use certbot to manage certs in one place, and you can test any service by just forwarding to the vm without ssl, no hassle (roughly the vhost sketched below). for critical things i'd still recommend setting up ssl on your internal network too, defense in depth and all.
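
the whole pattern is basically this vhost, generated once per service (hostname, backend and cert paths are placeholders; certbot keeps the certs fresh as mentioned):

    #!/usr/bin/env python3
    """Emit an nginx vhost that terminates TLS and forwards plain HTTP to a VM."""

    TEMPLATE = """\
    server {{
        listen 443 ssl;
        server_name {host};

        # certbot-managed certs, all in one place on the proxy
        ssl_certificate     /etc/letsencrypt/live/{host}/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/{host}/privkey.pem;

        location / {{
            # the backend VM speaks plain http; tls ends here
            proxy_pass http://{backend};
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }}
    }}
    """

    print(TEMPLATE.format(host="notes.example.org", backend="192.168.1.60:8080"))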


👤 tetris11
RPi4 running DietPi, with a 1TB USB drive, all strapped to the back of the TV. Syncthing for backing up phone media, Hugo for my blog posts, Home Assistant for my ZigBee devices, Jellyfin for my media stack (which network-mounts my Synology NAS, with auto wake/suspend), a Moonlight client to game in the living room with a PlayStation Sixaxis controller, and a private git server. All reverse-proxied from behind CG-NAT through a single public-IPv4 VM I pay maybe 10€ a year for.

👤 sgarland
It’s been through quite a few iterations, but the current setup is a 35U Sysracks (not filled, I just like the space) with:

  * 3x Dell R620 running K3os on clustered Proxmox
  * 1x X11 Supermicro 2U with ZFS handling the disk array
  * 1x X9 Supermicro 2U, also with ZFS, that is a backup target for the other one (WoL to ingest, then shutdown)
  * UniFi Dream Machine Pro
  * UniFi 24 port PoE 1G switch
  * 2x UniFi UAP-AC-PRO APs (not in the cabinet, obvs)
  * APC UPS with battery extension

The Dells have Samsung PM863 3.84 TB drives for Ceph, which Proxmox handles. They have some ancient 2.5” enterprise pull SSDs for boot.

The Supermicros have some cheap ADATA NVMe SSDs for boot (the X9 has a modded BIOS to make this possible).

In general I like it, but the current in-progress work is rewriting the Ansible playbook that generates QEMU images for Proxmox, and setting up PXE so new images can grab IPs on boot. The latter will enable me to shift off of K3os in favor of Talos. Technically I can do so now, but it requires manual input to set the IP addresses.

Beyond that, I want to get a 10G switch so that a) RBD-backed devices are faster, b) ZFS send/recv is faster, and c) just because.


👤 efxhoy
At home I run:

A Raspberry Pi 4 8GB with a 22TB external HDD and a 1TB external SSD. For, uhhhh, media. Runs the *arr stack and Samba on Raspberry Pi OS (Debian), with everything defined in a docker compose file (roughly sketched below). Also runs Tailscale. Having my entire media library anywhere I have one of my devices is pretty sweet, and setup was stupid easy. No Plex.
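
For the flavor of it, a minimal sketch of such a stack, rendered with Python for reference (the linuxserver.io images are real; the host paths are placeholders and the actual file has more services):

    #!/usr/bin/env python3
    """Render a minimal compose file: Jellyfin plus one *arr app (a sketch)."""
    import yaml  # PyYAML

    stack = {
        "services": {
            "jellyfin": {
                "image": "lscr.io/linuxserver/jellyfin:latest",
                "ports": ["8096:8096"],  # web UI
                "volumes": ["/srv/media:/data/media"],
                "restart": "unless-stopped",
            },
            "sonarr": {
                "image": "lscr.io/linuxserver/sonarr:latest",
                "ports": ["8989:8989"],
                "volumes": ["/srv/media/tv:/tv"],
                "restart": "unless-stopped",
            },
        },
    }

    with open("docker-compose.yml", "w") as f:
        yaml.safe_dump(stack, f, sort_keys=False)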

An i3 NUC for hosting a game server for some friends. I think it runs Ubuntu Server. Runs Valheim, also in docker compose.

A Mitac board with dual GbE and a low-power Intel N3-something CPU. Runs pfSense.

For learning and tinkering I spin up VMs in Hetzner and delete them when I’m done. We use AWS via Terraform at work, and I’m not likely to work somewhere where physical servers are something I’d deal with, so using the interface I’m most likely to be put in front of makes more sense for me. Having everything in Terraform really is lovely; from networks to machines, it’s all in my editor.


👤 astromechza
Personally, I've really enjoyed using a 1U 16-core Atom server as a single-node Kubernetes cluster. I started on Raspberry Pis and Synology NASes too, but consolidated them earlier this year and haven't looked back. It runs a whole bunch of stuff now, and I blogged about the setup at https://bensblog.meierhost.com/20230705-home-lab-infrastruct....

Totally understand that it's a bit more expensive than second-hand hardware and smaller setups, so I'd only suggest going down that path once you know you'll get the utility out of it.

I've used it to replace my reliance on Google Photos/Docs and lean more on self-hosting, though that means that backups, disk mirrors, and runbooks for restoring everything are doubly important!


👤 p0d
I have been running a server in the attic for 20 years, now relocated to the shed (not the same hardware).

I am getting a lot of joy out of VNC Connect at the moment. I have ethernet to my shed, with a Pi firewall between my attic and the garden. VNC Connect means I don't need to faff about with port forwarding etc.

I use rclone to sync my SaaS business backups from the server to the cloud (roughly the job sketched below). That may not sound secure, but I used to be an Infrastructure Lead and I have seen how government agencies left doors open in data centre cages.
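
The sync job itself is a one-liner; roughly this, with the remote name and paths as placeholders (the remote has to be configured beforehand with rclone config):

    #!/usr/bin/env python3
    """Push local backups to a cloud remote with rclone (a sketch)."""
    import subprocess

    subprocess.run(
        [
            "rclone", "sync",
            "/srv/backups",          # local backup directory
            "remote:backup-bucket",  # any configured rclone remote
            "--transfers", "4",
            "--log-file", "/var/log/rclone-backup.log",
        ],
        check=True,
    )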

One piece of advice: don't waste all your holidays on such projects; pace yourself :-)


👤 extragood
I've been running TrueNAS for years on commodity hardware that's been upcycled from my personal workstation after major upgrades.

The current iteration is: a Skylake-era CPU, 16GB DDR4, 4x 14TB WD Reds, 2x 512GB SSDs, and an M.2 boot drive.

Used primarily for shared storage, backups, media serving, and lightweight Docker containers (Home Assistant, Gitea, etc.) in an Ubuntu VM.

It's been super stable (paired with a UPS) and was surprisingly economical for what it is. I've also learned a lot in the process.

I do wish my CPU had a few more cores though.


👤 CommieBobDole
Mine is mostly various cast-off enterprise hardware, living in a semi-finished room in the basement. From bottom to top:

- Mid-'90s 42U IBM server cabinet

- Two 1500VA APC UPSes (non-rackmount)

- 1U Dell R610 (PFSense firewall)

- 1U Dell R620 (ESXi host)

- 2U Dell R720XD (60TB NFS/CIFS storage server)

- 4U rackmount case containing a Windows gaming/workstation PC

- 4U rackmount case containing a Linux workstation (the previous iteration of the gaming/workstation PC)

- Black Box KVM for the two workstations

- TP-Link unmanaged 10GbE switch

- Cisco managed 1GbE switch

- Cable modem

- Two 1U PDUs plugged into the UPSes - a half-assed dual power path for the machines that support it.

It's a pretty nice setup; I have a bunch of system service VMs on the ESXi host (IDS, Splunk, etc) and can spin up a new one in a few minutes whenever I have a new project or want to try something out. The cables from the KVM go through the floor to my office on the main floor so I don't have to listen to fans and can switch with a key combination. And of course I have plenty of storage for whatever - I back up all of my VMs and my workstations to it, I can download pretty much anything I need to, etc.

And since it's been asked before, yes, electricity is oddly inexpensive where I live.


👤 sschueller
https://sschueller.github.io/posts/wiring-a-home-with-fiber/

I run Proxmox for my router (OPNsense) and several servers on it, like GitLab, Mastodon, Matrix chat, Lemmy, and some others.


👤 bigbuppo
Bought three Dell T420s for like $120 each and loaded them up with RAM. They're actually quieter than some NUCs in normal operation. On the downside, they have obnoxiously bright blue LEDs on the back and the little LCD display. A bit of rubylith tape took care of that.

👤 justsomehnguy
My homelab setup is simple:

I have no homelab.

I have a decade-old Synology (DS115j if you're wondering) as a cheap-ass NAS and torrent client; it's still running because I got it for $0 and it's more convenient to torrent from a separate device than from a WiFi-connected notebook.

I have two AD networks, a two-tier PKI, Nextcloud, Netbox, Gitea, fileservers, at least two RDS servers, a Syncthing discovery server and clients, probably WSUS and WHMCS (haven't touched them for years, literally), and something else I've forgotten and am not at the PC to inventory.

Between 5 and 7 servers around the world which can be used as, or already are, proxies.

A Zabbix server and its proxies on the aforementioned servers.

I don't have a homelab.


👤 piotrke
I use an Intel NUC. It is small, quiet, and you can choose some components as you like. For a homelab, be sure to have a CPU that supports virtualization (VT-x) so you can play with VMs (see the quick check below). On top of that, Proxmox.
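
A quick check before committing to a box (vmx is the Intel flag; on AMD it's svm):

    #!/usr/bin/env python3
    """Check /proc/cpuinfo for hardware virtualization flags."""

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break

    # vmx = Intel VT-x, svm = AMD-V
    print("virtualization supported" if {"vmx", "svm"} & flags else "not supported")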

Like someone already mentioned, /r/homelab for more ideas.


👤 OnlyMortal
I’ve got a Raspberry Pi 4 booting from an old SSD I pulled from my MBP 2011.

I run it as a PiVPN tap server, and it has Samba installed.

Using Tunnelblick on my MBP, I can access my LAN from wherever I am. Time Machine works too.

However, I mostly use the VPN to get to UK TV when I’m abroad. Sky Go, Netflix and iPlayer mostly.

I’m planning to use my second Pi 4 as a tap client, with its WiFi as an access point. I’ll take it with me next time.

I’m going to try to get a Sky Q Mini working with the system to get UK TV during an extended holiday in Portugal. Both ends have symmetric 1Gbit connections, so it might work.

Edit: I’ve also had a Pi 4 running N64/PSX/arcade emulators (EmulationStation), overclocked to get a good frame rate, with a PS Bluetooth controller.


👤 marginalia_nu
The Marginalia search engine ran for over a year on an AMD Ryzen 9 3900X with 128GB of non-ECC RAM and a mix of NAS drives and SSDs of various types, ranging from consumer crap through enterprise drives to even an Optane. 16TB mechanical storage, 4TB SSD. All on domestic broadband.

It survived the HN front page and even bigger traffic spikes. It did blip out when Elon Musk tweeted a link to one of my blog posts, but only momentarily.

Now that server's been relegated to run a test environment and to perform various odd jobs.

I do think everyone interested in programming should have some sort of server in the house. Being able to run processing jobs for a few days really does radically expand what you're able to do.


👤 injinj
I've been running frr (Free Range Routing) for networking, using OSPF layer-3 routing between my hosts. This lets dynamic routes propagate everywhere and makes a switched layer-2 network optional, since switches tend to be expensive and obnoxiously loud, and a star topology isn't necessary with a layer-3 network (a minimal per-host sketch below).
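
Per host it comes down to very little. A sketch of pushing the OSPF config through vtysh (the subnet and single backbone area are placeholders; assumes frr with ospfd enabled):

    #!/usr/bin/env python3
    """Push a minimal OSPF config into frr via vtysh (a sketch)."""
    import subprocess

    commands = [
        "configure terminal",
        "router ospf",
        # advertise this host's subnet into the backbone area
        "network 10.0.12.0/24 area 0",
    ]

    args = ["vtysh"]
    for c in commands:
        args += ["-c", c]
    subprocess.run(args, check=True)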

I like the Supermicro Xeon D boards because I can power six of them off a single power supply (the GPU cables can be converted to 4-pin CPU connectors).

I also use systemd-nspawn (with dnf --installroot or debootstrap) or docker to attach instances to the network, where each has its own layer-3 address distributed by frr (roughly the pattern sketched below).
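
The nspawn side looks roughly like this (the Fedora release and machine path are placeholders; run as root):

    #!/usr/bin/env python3
    """Bootstrap a root fs with dnf --installroot, then boot it as a container."""
    import subprocess

    ROOT = "/var/lib/machines/lab1"

    # populate a minimal root filesystem
    subprocess.run(
        ["dnf", "--installroot", ROOT, "--releasever", "40",
         "install", "-y", "systemd", "passwd", "dnf", "iproute"],
        check=True,
    )

    # boot it as a container; frr then distributes a route to its address
    subprocess.run(["systemd-nspawn", "-D", ROOT, "--boot"], check=True)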


👤 brainlessdev
I bought a Biostar J4125NHU motherboard and put it in a small case with two 2.5" HDDs and two SSDs, and I'm using it to download shows and movies from a Usenet provider and stream them with Jellyfin.

The motherboard was a huge pain to work with, and I had to return two (!) units; I made the third one work. The first two would not boot in the same configuration.

I set up the box with NixOS. Here is its flake: https://github.com/fnune/bilbo

The NixOS experience was fun!


👤 replwoacause
Dell Optiplex 7060 Micro i7 running Proxmox on bare metal. Works awesome and lets me experiment very quickly, spinning up VMs for each of my projects.

👤 sphars
I'm repurposing an old laptop (an Asus X53SV with an i7, 8GB RAM, and a 1TB HDD) as a simple home server that I use for Plex/Jellyfin, Navidrome, and some self-hosted webapps, all in Docker. It can't handle much media transcoding, so I do that separately. Obviously it's an old laptop, a decade old at this point, but it handles everything I use it for.

👤 Hamuko
I have a Synology DS918+. Not exactly the best bang for your buck, but the form factor is tiny and a fairly good fit for a small apartment. For access, I have PiVPN set up on a Dell Wyse 3040, because Raspberry Pis were extremely hard to obtain. An Eaton 5S 550i keeps both powered.

Need to get a new router though. I have a Mikrotik hAP ac and it seems like it's hanging on for dear life.


👤 samgranieri
I’ve got five Raspberry Pis running Arch Linux ARM and a Mac mini for Plex video encoding. Plus shared storage.

👤 noaoh
I've got a NAS, a really nice Netgear router, and four mini PCs.

👤 KomoD
My homelab is a single 4U with Proxmox and a crappy GPU I never got to work

The 4U sits on the floor in a "closet", I think you can picture it


👤 archi42
Network: an Aruba 1930 (24x GbE with PoE, 4x SFP+) in the basement; a Mikrotik CSS610 (8x GbE, 2x SFP+) in the home office. Linked with 10GbE fiber, and a ton of VLANs.

Server: a Dell Precision Rack 7910 with a single E5-2690v4 (14C/28T), 64GB DDR4 ECC (4x 16GB), 4x 3.5" HDDs (a hack) with 30TB net storage in RAID1, and 2x small SSDs (OS on RAID1, temporary data/caches without RAID). I put in a 2x GbE + 2x SFP+ NIC; it's connected with 10GbE and 1GbE.

My old system was nice, but upgrading meant I'd have to get an expensive CPU; luckily the Dell fell into my hands. With this one, I can double the core count cheaply (it's a 2S system and the CPU is really cheap) and, iirc, have 192 or 256GB of memory per socket (with the cheap 16GB modules).

The above hardware plus all the PoE stuff (5x UniFi APs, a switch behind the TV, a DECT-VoIP gateway), the DSL modem, and a few ESPs draw 110W. The server itself is in the 80W ballpark (that's 15€/month worth of electricity; see the arithmetic below). This could be reduced with more modern or less powerful hardware. What I did was replace my old, big array of 12 disks with four much bigger disks. Since the OS and caches are on SSD, the disks can spin down a lot. I only insert one PSU, since the second one adds 17W of idle power draw.
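
The 15€/month is simple arithmetic (0.26 €/kWh is roughly my rate; plug in your own):

    # 80 W drawn around the clock, at an assumed 0.26 EUR/kWh
    watts = 80
    kwh_per_month = watts * 24 * 30 / 1000   # 57.6 kWh
    print(f"{kwh_per_month * 0.26:.2f} EUR")  # ~14.98 EUR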

Compared to other home servers, which can be pushed much lower, it's still okay, since it replaces a bunch of services I'd otherwise have to pay for and allows much faster transfers to my workstation (for backups etc.) than the 32MBit/s DSL would. The old server was running out of cores... The only external service I still spend money on is mail.

The server runs bare-metal Arch Linux (it's not cattle) and two VMs (qemu): OPNsense and Home Assistant. Samba serves files from the RAID to the network. Services are partially native, some in podman. There are: Cockpit, Mealie, Foundry, step-ca, scanserv, Vaultwarden, Mosquitto, zigbee2mqtt, Pi-hole, TasmoAdmin, Uptime Kuma, the UniFi manager, Samba, lancache, Heimdall, and Plex, plus two custom services. It also interfaces with the PV system over RS485. A 20m USB cable connects it to the ZigBee stick on the second floor. I'm looking forward to adding InfluxDB/Grafana for long-term monitoring of our heat pump and BEV power consumption.

The OPNsense does DHCP, WireGuard, and local DNS.

The first step was to set up Cockpit, since I like it for configuring IPs and accessing the VMs. Then OPNsense for inter-VLAN routing, firewalling, and the connection to the outside world.

Since I wanted to encrypt traffic even locally, that was the second step: I have a step-ca that serves certs using ACME. My nginx acts as a reverse proxy for most services and gets its certs from the step-ca (roughly as sketched below). The CA is limited to .lan, so it can't be used to intercept other traffic. Also, DHCP puts hosts on .dhcp.lan, so random hosts can't just pick an arbitrary domain name inside the network and get certs for it.
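
Getting nginx its certs from the step-ca is plain ACME with a different directory URL; roughly this (all names are placeholders, the /acme/acme/directory path assumes a provisioner named "acme", and certbot has to trust the step-ca root, e.g. via REQUESTS_CA_BUNDLE):

    #!/usr/bin/env python3
    """Request a cert from an internal step-ca over ACME via certbot (a sketch)."""
    import subprocess

    subprocess.run(
        [
            "certbot", "certonly", "--nginx",
            "--server", "https://ca.lan:9000/acme/acme/directory",
            "-d", "vault.lan",
        ],
        check=True,
    )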

With that done, I looked at services I use or that sounded useful, and spun them up.


👤 nonameiguess
I'm not gonna show anything, but I've done quite a bit. My house is a four-story townhouse with the second floor being a single room where the kitchen, living room, and dining room share an open floorplan. We put most of our stuff there. My wife built shelving into the corner of the den area, wrapping the corner and going up to the 12-foot ceiling. She also added a rail-mounted rolling ladder, so we've got a nice library setup. Beneath that are locking cabinets for craft material, and I keep spare electronics and cabling in them. We have the same model of cabinet mounted underneath the nook for our television, where the cable hookup is, and that's where I keep most of the homelab.

I've got an OPNsense appliance router running FreeBSD; I made no attempt to modify it. I tried to build a Linux router but realized I couldn't make anything that takes up this little space, and the PCIe NICs needed to get enough ethernet ports cost a lot more than buying an appliance. I built the NAS server myself, using an ASRock Rack mini-ITX motherboard with an AMD CPU that has graphics integrated on the chip. It's got a 1TB SSD cache and 8 spinning drives. The machines I use as servers are six Minisforum small-form-factor PCs, similar to NUCs but fanless, cheaper, and with 6-core AMD processors. I don't think they outperform anything, but having more cores makes it easier to pin many VMs. These, plus the NAS server and television, are plugged into two Cisco switches that support 10GbE and two UPSes that typically give me about 30-40 minutes in the event of the frequent power outages we get in Texas.

I've been tempted forever to try building a "real" server, but they're power-hungry and loud and way more than I need. The small-form-factor PCs have done the job, and I can run them in a closed cabinet that looks identical to all my other cabinets. The only modification I needed was installing USB fans in the cabinets themselves to vent the heat, but they don't make any noise I can perceive.

I've got Aruba Instant On WiFi access points, one for each floor of the house. They run five separate networks: one for work devices, one for IoT, one for televisions that aren't hard-wired, one for guests, and one main non-work WiFi network. Everything except the main network is forbidden from sending packets to local IPs. I don't know that there's much benefit beyond that, but it lets me optimize the television network for streaming, set low QoS on the IoT and guest networks, and make the main network WiFi 6. It's also pretty funny to ban porn on the guest network and see which houseguests notice and complain about it.

Being in a townhouse, the only walls I have running floor to ceiling are either shared or external, which means they're very tightly insulated, and running cable between floors was a pretty serious challenge. If you're ever going to do it, I would recommend doing it immediately upon moving in, before you do anything else, before you even move in furniture. I'm pretty serious about cable management and keeping things neat, so I run everything I can through walls and/or floors. No cable is loose except at the last mile.

As for the self-hosted services, I don't use any sort of on-prem management layer like vSphere or Proxmox. It's all Arch Linux with libvirt running on hardware-accelerated KVM. I at least automated the Arch builds by putting provisioning scripts on USB drives with a "cidata" label; since the Arch installer comes with cloud-init, you can do unattended installs by just plugging in two drives at first boot instead of one (the trick is sketched below). Most everything else runs on Kubernetes, with Longhorn as the storage provider. I use Ansible playbooks to install Kubernetes, and the applications are installed and configured with GitOps. The external services are a Git server and Minio on the NAS, acting as a backup target for anything that will back up to S3, as well as a package mirror and image registry so I can provision everything without Internet access. I load-balance the control plane with kube-vip and Ingress with MetalLB, using the L2 advertisement features.
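
The cidata trick, roughly: cloud-init's NoCloud datasource reads user-data and meta-data from any filesystem labeled "cidata". A sketch of building such a seed (the contents are placeholders; the real user-data pulls in the provisioning scripts):

    #!/usr/bin/env python3
    """Build a NoCloud seed image that cloud-init picks up at first boot."""
    import pathlib
    import subprocess

    pathlib.Path("meta-data").write_text("instance-id: arch-lab-01\n")
    pathlib.Path("user-data").write_text(
        "#cloud-config\n"
        "runcmd:\n"
        "  - [/bin/sh, /run/provision.sh]\n"  # placeholder provisioning hook
    )

    # the "cidata" volume label is what makes cloud-init treat this as a datasource
    subprocess.run(
        ["genisoimage", "-output", "seed.iso", "-volid", "cidata",
         "-joliet", "-rock", "user-data", "meta-data"],
        check=True,
    )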

Unbound on my router is configured as the default DNS for the LAN and forwards to NextDNS (roughly the stanza sketched below). I block outbound port 53 to try to ensure everything is actually using it, but there isn't much you can do to block DNS-over-HTTPS without a MITM proxy. Pretty much every known ad, telemetry, and tracking domain is blackholed at both the Unbound and NextDNS layers. It doesn't seem to break too much. The Paramount+ app stopped working, but their actual content is streamable through Prime Video using the same subscription. I should probably just cancel it, but they have SEC and NFL football that I still geek out for, plus I've been rewatching Aeon Flux and Daria and they own the MTV back catalog.
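
The Unbound side is just a forward-zone over TLS; roughly this stanza (the profile ID in the hostname is a placeholder; the anycast IPs are NextDNS's published ones):

    # /etc/unbound/unbound.conf fragment, shown as a Python string for reference
    CONF = """\
    server:
        tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

    forward-zone:
        name: "."
        forward-tls-upstream: yes
        forward-addr: 45.90.28.0@853#abc123.dns.nextdns.io
        forward-addr: 45.90.30.0@853#abc123.dns.nextdns.io
    """
    print(CONF)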


👤 firemelt
try /r/homelab

👤 baobun
Been homelabbing in some capacity for decades; the current iteration is ~5 years in now.

Here are some pointers from me to you, with the assumption that you will end up with some form of personal production workloads, and that extended unplanned downtime, security breaches, and data loss would be stress-inducing experiences you want to avoid.

- For any future hardware you add, get a minimum of two of everything. You never know when or why the extra will prove invaluable.

- Get a managed L2 PoE switch with more ports than you think you need. Used Brocade/Ruckus gear can be found on eBay, for example. Until then, at your current scale you can get away with several smaller, cheaper unmanaged switches. But I'd consider getting more serious gear when you start growing out of 10-12 ethernet ports, if you aren't already sick of cables and wall-warts by that point.

- The STH forum is an amazing resource. Identify and scour the megathreads relevant to you.

- Segregate your networks. Don't run your servers on the same network as your WiFi AP and user clients. Ideally, your servers won't even have a default gateway and will be firewalled to allow only internal traffic, even outgoing. You will set up not only reverse proxies for incoming traffic (Caddy will be the easiest to get started with if it's all the same to you) but also a proxy for outgoing traffic (squid still seems to be the sane default for HTTP? see the sketch after this list). You can still proxy HTTPS via an HTTP CONNECT proxy without having to care about TLS, certs, or MitMing.

- One piece of the above is setting up a "bastion host": the gateway and firewall between your labnet and the world. You want something with a minimum of 2 NICs. My personal experience with USB NICs has not been great. I strongly advise you to consider this use-case for the next piece of hardware you get, alongside the switch. A cheap SBC (get a minimum of 2!) should be fine.

- Wireguard

- Use configuration management and resist the urge to manually configure stuff by SSHing in and editing files. You want to be able to reproduce your setup and keep track of the changes you made. Ansible is popular, but there are many others; pick whatever feels smoothest to you.

- Backups: do them. Learn about the 3-2-1 rule and apply it. You could let one of your RPis with an HDD be a dedicated backup sink.

- Virtualization: I'd say you shouldn't bother with this at all for now, unless learning virtualization is a goal in itself. Hypervisors like Proxmox make a lot of sense if you have one or two huge hosts; you already have a larger number of smaller hosts, so it makes more sense to scale horizontally and use containers (Docker/LXC) to separate workloads within each host. If you get something beefier than the Mac Mini down the line, it can start making sense to look into, though.

- Just Do It. You don't need any of the above to start iterating and prototyping today. Configuration management will make it easier to fearlessly set up and tear down your setup.
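
The egress-proxy sketch promised above: a minimal squid config (port and subnet are placeholders). HTTPS rides through as opaque CONNECT tunnels, so the proxy never touches TLS or certs:

    # minimal squid.conf for an egress proxy on a gateway-less labnet (a sketch)
    CONF = """\
    http_port 3128

    acl labnet src 10.10.0.0/24
    acl SSL_ports port 443
    acl CONNECT method CONNECT

    # only tunnel to standard TLS ports, and only from the lab subnet
    http_access deny CONNECT !SSL_ports
    http_access allow labnet
    http_access deny all
    """
    print(CONF)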


👤 JoeyBananas
Sorry, but you should just swallow your pride and rent a box from your favorite cloud-hosting megacorp if you really want to get the job done.