
Setting Up a Production-Ready VPS: It's Actually Easier Than You Think
Recently, I've been working on a brand new micro SaaS and having a lot of fun doing so. One thing I've really appreciated is how easy it is to deploy applications to the cloud, with a huge number of platform-as-a-service options making deployment straightforward.
While these platforms can be pretty great, they're not always perfect. Due to their underlying business model, they're not well-suited for long-running tasks or transferring large amounts of data, which can sometimes result in unexpectedly high bills.
This contrasts with using a VPS (Virtual Private Server), which often provides much more consistent billing while mitigating some of the caveats that come from using serverless platforms. Despite these benefits, however, I've always been rather hesitant to use a raw VPS for deploying production services due to the perceived difficulty of setting up a production-ready environment.
But is that actually the case? To find out, I decided to give myself a challenge: see how difficult it would be to set up a production-ready VPS from scratch. As it turns out, it's actually a lot easier than I thought!
The Challenge: Building a Production-Ready VPS
To go along with this challenge, I built a simple guestbook web app with the goal of deploying it on a VPS. Before deploying, however, I decided to write out a list of requirements to define what "production-ready" meant.
Requirements for Production-Ready Deployment
Core Infrastructure Requirements
- DNS Record - A domain name pointing to the server
- Application Running - The web app up and operational
- Security Hardening - SSH hardening and firewall configuration
- TLS/HTTPS - All HTTP communication over TLS with automatic certificate provisioning and renewal
High Availability & Performance
- Load Balancing - Distribute traffic across multiple instances
- High Availability - Minimize downtime even on a single node
Developer Experience
- Automated Deployments - Push changes that automatically deploy within minutes
- Monitoring - Get notified if the website becomes unavailable
Technical Approach
I set some constraints for this project:
- Use simple tooling without requiring extensive domain expertise
- No Kubernetes (k3s, microk8s)
- No full-featured solutions like Coolify
- No infrastructure as code (Terraform, Pulumi, OpenTofu) - though I may migrate to these in the future
- Focus on setting up without additional layers of abstraction
Getting Started with Hostinger
This article is sponsored by Hostinger, who kindly provided a VPS instance for this project.
For this project, I used a Hostinger KVM 2 instance with:
- 2 vCPUs
- 8 GB memory
- Up to 8TB bandwidth per month
- 100 GB SSD storage
- Only $6.99/month on a 24-month contract
To put this in perspective, if you tried to transfer 8TB of data on Vercel, it would cost over $1,000! The value proposition of a VPS becomes pretty clear when you look at these numbers.
Get your own VPS instance with Hostinger and use coupon code DREAMSOFCODE for an additional discount.
VPS Setup and Initial Configuration
Operating System Selection
I chose Ubuntu 24.04 LTS for its stability and widespread support in the VPS community. While I would have loved to use Arch, Ubuntu's long-term support makes it ideal for production environments.
During setup, I:
- Disabled the malware scanner for a more minimal installation
- Set up a strong root password
- Added my SSH public key for secure access
Adding a Non-Root User
The first thing I do on any new VPS is create a non-root user account, as working as root is generally not advised:
adduser elliot
usermod -aG sudo elliot
This creates a new user and adds them to the sudo group for elevated permissions when needed.
Requirement 1: Domain Name Setup
I purchased the zen.cloud domain from Hostinger for just $1.99 for the first year. After purchase, I configured the DNS records:
- Cleared existing A and CNAME records
- Added a new A record pointing the root domain to my VPS IP
- Waited for DNS propagation (can take a few hours)
# Check your server's IP
ip addr
SSH Hardening for Security
Before proceeding, I implemented several SSH security measures:
Installing Tmux
If you're following along, consider installing tmux to maintain sessions if your SSH connection drops:
sudo apt install tmux
Disabling Password Authentication
First, I copied my SSH public key to the non-root user:
# From local machine
ssh-copy-id elliot@zen.cloud
Then I modified the SSH configuration:
sudo vim /etc/ssh/sshd_config
Key changes made:
PasswordAuthentication no
PermitRootLogin no
UsePAM no
After reloading SSH:
sudo systemctl reload ssh
Getting the Web Application Running
I built a simple guestbook application in Go for this project. You can find the complete code on GitHub.
Initial Approach: Direct Binary
First, I tried the naive approach of building directly on the server:
# Install Go
sudo snap install go --classic
# Build the application
go build
# Set database URL and run
export DATABASE_URL="your_postgres_url"
./guestbook
While this worked, I'm not a fan of compiling applications on production servers.
Containerization with Docker
Instead, I opted for containerization using Docker, which provides:
- Immutable, versioned images
- Better configuration management
- Easier deployment and rollbacks
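For context, a Go app like this one is usually containerized with a multi-stage Dockerfile so the final image contains only the compiled binary. Here's a minimal sketch — the binary name and port are assumptions for illustration, not the project's actual files:

```dockerfile
# Build stage: compile a static binary
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /guestbook .

# Runtime stage: a minimal base image with just the binary
FROM gcr.io/distroless/static-debian12
COPY --from=build /guestbook /guestbook
EXPOSE 8080
ENTRYPOINT ["/guestbook"]
```

The multi-stage split keeps the Go toolchain out of the production image, which makes images smaller and reduces the attack surface.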
Installing Docker
Following the official Docker installation guide:
# Add Docker's official GPG key
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add user to docker group
sudo usermod -aG docker $USER
Docker Compose Setup
The project includes a docker-compose.yml file with both the application and PostgreSQL database. I set up a secure password using Docker secrets:
mkdir db
echo "your_secure_password" > db/password.txt
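If you're following along, you may prefer generating a random password rather than typing one in; a quick sketch using openssl (assuming it's available, which it is on a default Ubuntu install):

```shell
# Create the directory and generate a random 32-byte password
mkdir -p db
openssl rand -base64 32 > db/password.txt

# Restrict read access to the file owner
chmod 600 db/password.txt
```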
Then deployed the stack:
docker compose up -d
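For reference, the compose wiring for a file-based secret generally looks something like this — a sketch rather than the project's exact file, with the service layout assumed:

```yaml
services:
  db:
    image: postgres:16
    environment:
      # postgres reads the password from the mounted secret file
      POSTGRES_PASSWORD_FILE: /run/secrets/db-password
    secrets:
      - db-password

secrets:
  db-password:
    file: db/password.txt
```

The advantage over an environment variable is that the password never appears in `docker inspect` output or the compose file itself.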
Firewall Configuration
I used UFW (Uncomplicated Firewall) to secure the server:
# Default policies
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH (critical - don't skip this!)
sudo ufw allow ssh
# Allow HTTP and HTTPS
sudo ufw allow 80
sudo ufw allow 443
# Enable firewall
sudo ufw enable
Important caveat: Docker can bypass UFW rules by directly modifying iptables. This is a known issue, and the best solution is to use a reverse proxy instead of exposing application ports directly.
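One related mitigation worth knowing: if a container must publish a port for local use only (a database, for example), you can bind it to the loopback interface so Docker never exposes it publicly, regardless of UFW. A sketch, with the port number just an example:

```yaml
services:
  db:
    image: postgres:16
    ports:
      # Reachable from the host only, never from the internet
      - "127.0.0.1:5432:5432"
```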
Reverse Proxy with Traefik
This is where things got really exciting. Instead of using nginx, I chose Traefik - and it was probably one of the two biggest reasons why setting up this production-ready VPS was much easier than expected.
Traefik Configuration
I added Traefik as a service in my docker-compose.yml:
reverse-proxy:
  image: traefik:v3.1
  command:
    - "--api.insecure=true"
    - "--providers.docker=true"
  ports:
    - "80:80"
    - "8080:8080" # Web UI
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
Then added a simple label to my guestbook service:
guestbook:
  # ... other config
  labels:
    - "traefik.http.routers.guestbook.rule=Host(`zen.cloud`)"
That's it! Traefik automatically detected the service and started routing traffic. No complex nginx configuration files needed.
Load Balancing and High Availability
Here's where Traefik really shines. To demonstrate load balancing, I scaled my application to three replicas:
docker compose up --scale guestbook=3 -d
Traefik automatically detected all three instances and began load balancing between them - no additional configuration required! This improves availability because if one instance fails, traffic continues flowing to the healthy instances.
To make this persistent, I added the replicas configuration:
guestbook:
  # ... other config
  deploy:
    replicas: 3
TLS and HTTPS with Automatic Certificates
Traefik's second superpower is automatic TLS certificate generation using Let's Encrypt. I updated the Traefik configuration:
reverse-proxy:
  image: traefik:v3.1
  command:
    - "--providers.docker=true"
    - "--providers.docker.exposedbydefault=false"
    - "--entrypoints.web.address=:80" # needed for the HTTP entrypoint on port 80
    - "--entrypoints.websecure.address=:443"
    - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
    - "--certificatesresolvers.myresolver.acme.email=your-email@example.com"
    - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - letsencrypt:/letsencrypt
And updated the guestbook labels:
guestbook:
  # ... other config
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.guestbook.rule=Host(`zen.cloud`)"
    - "traefik.http.routers.guestbook.entrypoints=websecure"
    - "traefik.http.routers.guestbook.tls.certresolver=myresolver"
After redeploying, Traefik automatically obtained and configured TLS certificates!
HTTP to HTTPS Redirect
To ensure all traffic uses HTTPS, I added redirect rules:
labels:
  # ... existing labels
  - "traefik.http.routers.guestbook-http.rule=Host(`zen.cloud`)"
  - "traefik.http.routers.guestbook-http.entrypoints=web"
  - "traefik.http.routers.guestbook-http.middlewares=redirect-to-https"
  - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
Automated Deployments with Watchtower
For automated deployments, I used Watchtower, which monitors Docker images and automatically updates containers when new versions are available.
Watchtower Configuration
watchtower:
  image: containrrr/watchtower
  command:
    - "--label-enable"
    - "--interval"
    - "30"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
I labeled the guestbook service for monitoring:
guestbook:
  image: ghcr.io/dreamsofcode-io/guestbook:prod
  labels:
    # ... other labels
    - "com.centurylinklabs.watchtower.enable=true"
Rolling Deployments
To avoid downtime during deployments, I enabled rolling restarts:
watchtower:
  # ... other config
  command:
    - "--label-enable"
    - "--interval"
    - "30"
    - "--rolling-restart"
Now when I push a new image with the prod tag, Watchtower detects it and performs a rolling update, restarting instances one by one to maintain availability.
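The push side of this pipeline can be as simple as a CI job that builds the image and pushes it to the registry. Here's a sketch of a GitHub Actions workflow — the image name mirrors the one used above, but the workflow itself is my assumption, not the project's actual configuration:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      # Authenticate against GitHub Container Registry
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # Build the image and push it with the tag Watchtower monitors
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/dreamsofcode-io/guestbook:prod
```

With this in place, merging to main results in a new image on the registry, and Watchtower rolls it out within its polling interval.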
Monitoring with Uptime Robot
For the final requirement, I set up monitoring using Uptime Robot, which has a decent free tier. It periodically checks if the website is available and sends email notifications if it detects downtime.
The setup is straightforward:
- Create an account
- Add your website URL
- Configure notification preferences
For a single-node VPS, this simple uptime monitoring is much more practical than setting up a full observability stack with Prometheus, Grafana, and the ELK stack.
Final Production Deployment
With everything configured, I removed the Traefik web UI for security and deployed the final stack:
docker compose up -d
Conclusion
Setting up a production-ready VPS was much easier than I initially thought. By using tools like Traefik and Watchtower, I was able to quickly set up a robust environment with:
- ✅ DNS pointing to the server
- ✅ Application deployed in Docker containers
- ✅ HTTPS with automatic certificate management
- ✅ Hardened SSH
- ✅ Firewall protection
- ✅ Load balancing across multiple instances
- ✅ Automated deployments with rolling updates
- ✅ Uptime monitoring
While a VPS solution may not be as simple as using a PaaS, it offers more control and potentially lower costs for certain types of applications, especially those with high data transfer needs or long-running processes.
The complete source code for the guestbook application and deployment configuration is available on GitHub.