
This Homelab Setup is My Favorite One Yet
Hosting my own services on a homelab has been an absolute dream. However, I made some mistakes with my first setup, so I decided to rebuild it from scratch. In this article, I'll share how I built what I consider to be the perfect homelab setup.
Reflecting on My Previous Setup
For the past 12 months, I've been running my own homelab, using it to self-host a number of different software and services. My original setup was a highly available 4-node Kubernetes cluster powered by K3s, and whilst I've been really happy with it, I definitely made some mistakes at the beginning - ones that I would change if I were to rebuild the cluster from scratch.
So I decided to do just that, taking what I had learned and building what I think is the perfect homelab.
Planning the Perfect Homelab
To begin, I decided to write down my thoughts about what I wanted my next setup to be:
Core Requirements
- Highly available Kubernetes cluster - Just as before, but this time with three nodes instead of four: three is the minimum for an HA etcd quorum, and dropping a node simplifies the setup while saving power and reducing costs
- 32GB of memory and 2TB of storage per node - This would enable me to run more services on each node
- Networking - Each node needed at least one 2.5 Gbit Ethernet port, which would be plenty for my home network
- Power efficiency - Individual nodes should use less than 20 watts when idle, to keep heat and noise down and to save on energy costs
- Performance - Despite the efficiency requirements, the CPU had to be capable of handling any tasks I threw at it
Hardware Selection
With my hardware requirements defined, I went searching for a viable option and landed on the Beelink EQ12 - a machine that met or exceeded all of my specifications.
Why the Beelink EQ12?
The EQ12 comes with the Intel N100 CPU, which is incredibly low-powered:
- 11 watts when idle
- 23 watts under load
Despite this efficiency, the N100 is still fairly performant, with an iGPU that supports most modern media codecs. As for networking, the EQ12 comes with dual 2.5 Gbit Ethernet ports, which is more than I was looking for.
Hardware Upgrades
By default, the EQ12 comes with only 16GB of RAM and 500GB of storage. Fortunately, both are pretty simple to upgrade. Even though Intel's own specifications say the N100 supports a maximum of 16GB of memory, I was able to install and use 32GB successfully.
Materials List
Here's what I ordered for my three-node setup:
- 3x Beelink EQ12 units - Amazon Link
- 3x 32GB Memory modules - Amazon Link
- 3x 2TB SSD - Amazon Link
- Ubiquiti Switch - Amazon Link
Total cost: approximately $1,400
Note for beginners: If you're just getting started with homelabs, I wouldn't recommend spending this sort of money. Instead, I recommend using an old laptop or any other hardware you may have lying around.
Hardware Installation Process
Installing the upgraded components on the EQ12 is straightforward:
Step-by-Step Installation
- Remove the bottom plate - Remove four screws and pull up the plastic tab
- Remove the SATA enclosure - Remove three additional screws (two are hard to find, but all screws are the same size)
- Disconnect cables carefully - Gently lift the SATA enclosure and detach the four-pin fan header
- Upgrade memory - Remove existing 16GB and replace with 32GB
- Replace SSD - Remove screw, replace the drive, and screw the new one back in
- Reassemble - Reattach fan cable, screw in the enclosure, and replace bottom plate
Repeat this process for all three machines.
Software Installation: Choosing NixOS
For my initial homelab, I had chosen Ubuntu Server, and whilst this worked, it was tedious to go through the setup process for each machine. This time I wanted a more declarative approach, which led me to two options:
- Talos Linux - An immutable minimal distro designed for Kubernetes (interesting for future exploration)
- NixOS - My chosen option, and one that has quickly become a favorite of mine in 2024
Why NixOS?
I decided to go with NixOS because I wanted something familiar for this project, having used it successfully on both of my Framework laptops.
Using NixOS Anywhere
To make the installation process even easier, I used NixOS Anywhere, which enables remote NixOS installation using SSH.
Installation Process
Prerequisites
- Create installer USB - Download the NixOS ISO and flash it to a USB drive using the dd command (see the sketch below)
- Boot from installer - Insert the USB drive and power on each device
- Set up SSH access - Set a password using the passwd command and obtain each machine's IP address with ip addr
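If you haven't used dd before, the flashing step looks roughly like this - a sketch in which the ISO filename and the target device /dev/sdX are placeholders (double-check the device with lsblk first, as dd will overwrite it):

# Flash the NixOS ISO to the USB drive (destructive - verify /dev/sdX!)
sudo dd if=nixos-minimal-x86_64-linux.iso of=/dev/sdX bs=4M status=progress conv=fsync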
NixOS Configuration
I adapted my configuration from one I wrote on stream a couple of months ago. The configuration is available on GitHub.
Key Configuration Changes
If you use this configuration, make sure to change the following (a sketch of where these live follows the list):
- The username "Elliot" in configuration.nix
- The SSH authorized keys, swapping in your own public key
- The hashed password (generate one using the mkpasswd command in the NixOS installer)
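For orientation, here's roughly where those settings sit in configuration.nix. This is an illustrative fragment rather than the repository's exact contents - the username and placeholder values are stand-ins:

# configuration.nix (illustrative fragment)
users.users.elliot = {
  isNormalUser = true;
  extraGroups = [ "wheel" ];
  # Generate with: mkpasswd -m sha-512
  hashedPassword = "$6$replace-me";
  openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA... you@your-machine"
  ];
};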
Handling the K3s Token
One challenge I encountered was securely setting the K3s token. This token is used for authentication when nodes join the cluster and needs to be kept secret.
My approach:
- Generate a secure token using pwgen -s 16
- Temporarily hardcode it for the initial setup (making sure it isn't committed to git)
- After installation, replace it with the token file option pointing to /var/lib/rancher/k3s/server/token
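As a rough sketch (assuming pwgen is available; the path is where the K3s server persists its token after first boot):

# Generate a single 16-character random token
pwgen -s 16 1
# Once the first node is up, the persisted token lives here:
sudo cat /var/lib/rancher/k3s/server/token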
Deployment Command
nixos-anywhere --flake .#h-0 root@<node-ip>
This command builds everything from scratch but caches artifacts for subsequent nodes, making the process much faster.
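The remaining machines use the same invocation with a different flake attribute and target address (h-1 and h-2 are assumptions, extrapolated from the h-0 naming above):

nixos-anywhere --flake .#h-1 root@<node-1-ip>
nixos-anywhere --flake .#h-2 root@<node-2-ip>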
Network Configuration
After successful installation:
- Copy the Kubernetes config to the host machine using scp (the first three steps are sketched below)
- Update the server IP from loopback to homelab-0
- Test the configuration using k get nodes (where k is an alias for kubectl)
- Set fixed IP addresses in the router configuration
- Label nodes with sticky labels for easy identification
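A rough sketch of those first steps (the kubeconfig path is the K3s default; homelab-0 is the hostname from above):

# Copy the kubeconfig from the first node
scp root@homelab-0:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# Point it at the node instead of loopback
sed -i 's/127.0.0.1/homelab-0/' ~/.kube/config
# Verify that all three nodes report Ready
kubectl get nodes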
Setting Up Essential Services
With the cluster running, I moved on to setting up the necessary services. My main goal was to get PiHole running and accessible at pihole.home.
Container Storage Interface: Longhorn
Longhorn provides distributed block storage built from the nodes' local disks, offering fault tolerance and redundancy - one of the reasons I chose 2TB SSDs for each node.
I use Helmfile for declarative Helm chart deployment, creating infrastructure as code.
Helmfile Configuration
repositories:
  - name: longhorn
    url: https://charts.longhorn.io

releases:
  - name: longhorn
    namespace: longhorn-system
    chart: longhorn/longhorn
    version: "1.5.3"
Resolving Dependencies
Initially, Longhorn failed to start because the iscsiadm binary couldn't be found. NixOS doesn't follow the standard Linux filesystem hierarchy, but a simple fix was available on the Longhorn GitHub repository.
Load Balancer: MetalLB
MetalLB provides a load balancer implementation for bare-metal Kubernetes clusters, allowing services to be accessed via IP addresses on the home network.
After installation, I needed to configure an IP pool using Kustomize:
# metallb/pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.192/26
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
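Since 192.168.1.192/26 spans 192.168.1.192-192.168.1.255, the PiHole address 192.168.1.250 used below falls inside this pool. To pin a Service to a specific address from the pool, newer MetalLB releases (v0.13+) read an annotation. A minimal sketch, where the Service name and selector are illustrative:

# Illustrative LoadBalancer Service pinned to an address from the pool
apiVersion: v1
kind: Service
metadata:
  name: pihole-dns
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.1.250
spec:
  type: LoadBalancer
  selector:
    app: pihole
  ports:
    - name: dns
      port: 53
      protocol: UDP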
DNS Server: PiHole
PiHole serves as a DNS server for local network DNS records and network-wide ad blocking. I prefer using local DNS records over remembering IP addresses.
PiHole Configuration
Key configuration in the values file (sketched below):
- Persistent volume enabled
- Load balancer IP set to 192.168.1.250
- Upstream DNS servers pointing to the router IP
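In values-file form, that looks something like this - a sketch assuming the widely used mojo2600/pihole Helm chart, whose key names are used here, with 192.168.1.1 standing in for the router's IP:

# pihole values (illustrative)
persistentVolumeClaim:
  enabled: true
serviceDns:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.250
serviceWeb:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.250
DNS1: 192.168.1.1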
Ingress Controller: Nginx
The Nginx Ingress Controller acts as a reverse proxy for cluster services, enabling access via domain names like pihole.home.
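An Ingress for the PiHole web UI might look like the following sketch - the Service name pihole-web and the namespace are assumptions, while the nginx-internal class matches the External DNS filter in the next section:

# Illustrative Ingress exposing PiHole at pihole.home
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pihole
  namespace: pihole
spec:
  ingressClassName: nginx-internal
  rules:
    - host: pihole.home
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pihole-web
                port:
                  number: 80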
Automated DNS Records: External DNS
External DNS automatically writes DNS records to PiHole based on hostnames found in Ingress resources, eliminating manual DNS record management.
Configuration
# external-dns values
txtOwnerId: "external-dns"
provider: pihole
sources:
  - ingress
ingressClassFilters:
  - nginx-internal
extraArgs:
  - --pihole-server=http://192.168.1.250
Final Result
With all services configured and deployed using helmfile apply, I now have:
- ✅ Three-node highly available Kubernetes cluster
- ✅ Distributed storage with Longhorn
- ✅ Load balancing with MetalLB
- ✅ PiHole accessible at pihole.home
- ✅ Automated DNS record management
- ✅ Power-efficient setup using less than 20 watts per node when idle
Key Takeaways
This rebuild taught me valuable lessons:
- Declarative configuration (NixOS) saves significant time during setup
- Power efficiency doesn't have to compromise performance
- Proper planning prevents costly mistakes
- Automation (External DNS, Helmfile) reduces manual maintenance
What's Next?
I'm ready to start migrating the rest of my services to this new cluster. I plan on covering some of those in more detail in another article, so let me know in the comments if that's something you'd be interested in!
Additional Resources
- GitHub Repository: https://github.com/dreamsofautonomy/homelab
- NixOS Installer: https://nixos.org/download/
- NixOS Anywhere: https://github.com/nix-community/nixos-anywhere
- Discord Community: https://discord.gg/mD8K42rqfS
Note: The hardware links above are Amazon Affiliate Links, which means I get a commission if you decide to make a purchase through them. This comes at no additional cost to you and helps support the channel.