2025-06-29
A Home Server Proxy in the Cloud, Using HAProxy and Tailscale
Overview
The goal was to stop using my home's IP address as the public IP for my home server, while at the same time:
- continue to host my websites and services on a machine that I physically control;
- terminate TLS on that same machine rather than decrypting traffic on Someone Else's Computer;
- allow the home server to see the actual clients' IP addresses (not just the IP of the machine forwarding traffic) so that I can keep running AI poisoning software [1] and fail2ban.
After a ton of searching and trial & error (a *lot* of this was new to me), I ended up doing the following:
- provision a $5/month VPS;
- connect the VPS to my home server with a Tailscale VPN;
- configure Tailscale authentication using OIDC hosted on Codeberg, rather than a GAFAM service;
- install HAProxy on the VPS;
- configure HAProxy so that TLS terminates on the home server;
- configure HAProxy to forward source IPs to my home server using the 'PROXY Protocol';
- adjust the existing nginx files on my home server to recognize the PROXY Protocol headers added by HAProxy;
- point DNS records to the public IP address of the VPS.
Bonus side-effects: I no longer have to worry about dynamic DNS, and I get to close pretty much all the ports on my router.
If you want to do something similar, and that's enough of an explanation to point you in the right direction, great! If you want a few more specifics, read on.
* * *
Steps I took
Get a VPS
Pretty self-explanatory - I got a cheap, fairly low-spec one with 'unlimited' data transfer, although in practice it's capped by the slow maximum transfer speed. Even though I'm using the lowest-tier VPS available from this provider, I'm pretty sure it's still faster than my home connection, so it's unlikely to be a bottleneck.
On the fresh Debian install, I installed ufw and opened up ports 22, 80, and 443, then set up passwordless SSH [2] login from my server and laptop.
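In rough shell terms it was something like the following (the username and address are placeholders - adjust to your own setup):

    sudo apt install ufw
    sudo ufw allow 22/tcp
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable

    # then, from the home server and the laptop, copy an SSH key over:
    ssh-copy-id user@<vps-public-ip>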
Install Tailscale VPN
This was surprisingly easy to set up. At first I thought I had to make the VPS an 'exit node' or something like that, but I just needed to create a regular 'tailnet' consisting of my home server and the VPS.
To do this, simply install Tailscale on both machines using the official instructions found here:
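(For reference, the install boils down to a couple of commands on each machine - the one-liner below is Tailscale's convenience script, so check their docs for the current version:)

    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up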
OIDC identity with Codeberg
When you run the "sudo tailscale up" command post-install, you'll be prompted to sign up for a Tailscale account. To do so, you need to sign in with an 'identity provider'. I was kind of taken aback when I was presented with a list of Big Tech corporations, and almost bailed on Tailscale entirely right then and there!
Luckily there's another option at the bottom of the list: OIDC. I noticed that Codeberg was among the options. And once I tracked down some instructions for how to set this up, it was pretty easy, thanks to kennyqin's write-up here:
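If I've understood the mechanics correctly, the key piece is that you need to control the domain of your sign-up email address and serve a WebFinger response from it (at /.well-known/webfinger) that points Tailscale at Codeberg as the OIDC issuer. Something roughly like this - example.com and the account are placeholders, and you should double-check the exact requirements against the write-up and Tailscale's custom-OIDC docs:

    {
      "subject": "acct:me@example.com",
      "links": [
        {
          "rel": "http://openid.net/specs/connect/1.0/issuer",
          "href": "https://codeberg.org/"
        }
      ]
    }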
Install & configure HAProxy on the VPS
Because I didn't want a ton of downtime on my active sites & services, I made a quick dummy website on a subdomain so that I could experiment a bit. Once I had that up and running, with a bare-bones nginx config and a TLS cert, and visible to the public web, I pointed the DNS record for the subdomain to the VPS.
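A bare-bones config for a dummy site like that might look something like this (test.example.com and the cert paths are placeholders - I'm assuming a certbot-style Let's Encrypt setup):

    server {
        listen 80;
        listen 443 ssl;
        server_name test.example.com;

        ssl_certificate /etc/letsencrypt/live/test.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/test.example.com/privkey.pem;

        root /var/www/test;
        index index.html;
    }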
I struggled for a while at the next step. See, by default, when the VPS forwards requests to my home server, it doesn't pass on the IP address of the actual client making the request. As far as my home server knows, the request originated at the VPS. In order to keep running fail2ban and AI poisoning software on my home server, I need to pass on the actual source IP address. I ended up trying a lot of things before I figured out how to do this properly.
Turns out it's done by adding 'PROXY Protocol' headers to the requests. Nginx, which I use as a reverse proxy on my home server, can accept/parse PROXY Protocol headers [3] but for some reason doesn't seem to be able to *add* them to packets - which is ultimately why I installed HAProxy on the VPS instead of nginx.
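For the curious, a version 1 PROXY header is just a single human-readable line prepended to the TCP connection before any application data - client IP, destination IP, client port, destination port - for example (addresses made up):

    PROXY TCP4 203.0.113.7 198.51.100.22 51324 443

The receiving end (nginx, in my case) strips that line off and uses it as the 'real' client address.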
Luckily, HAProxy is pretty much a single binary and a single config file, and the syntax for the latter is quite readable. So are the docs. [4] So it took a little reading but things were up and running before too long.
Because it's generally used as a load balancer, most HAProxy tutorials have instructions for some combo of "frontend" blocks to define the listen ports and "backend" blocks to define where (and how) things get routed. But since all I'm doing is sending everything to one place - my home server - I decided it would be easiest to use "listen" blocks, since they do both things in one go.
Also I used a default of "tcp mode" instead of "http mode" since I'm not reading or decrypting the packets on the VPS, just passing them through.
In my config, which is partially shown below, I've commented out a few lines. That's because they're the same as the default values that I configured - but I left them in the file just to show the syntax in case I want to change something in the future.
The only other major thing to note is the use of "send-proxy" at the end of each block: this is how the PROXY Protocol headers get added.
Here are a couple of links that helped me through this process:
And here's the config I ended up with. Be sure to replace the "100.x.x.x" values with your home server's Tailscale IP address.
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    option logasap
    option http-server-close
    balance roundrobin
    timeout client 10000
    timeout connect 5000
    timeout server 10000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen vps_http
    # mode tcp
    bind :80
    # option tcplog
    # balance roundrobin
    server home_server_http 100.x.x.x:80 check send-proxy    # replace IP

listen vps_https
    # mode tcp
    bind :443
    # option tcplog
    # balance roundrobin
    server home_server_https 100.x.x.x:443 check send-proxy   # replace IP

# and so on, for any other services running on different ports
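Once the config is in place, it's worth a syntax check before reloading:

    sudo haproxy -c -f /etc/haproxy/haproxy.cfg
    sudo systemctl reload haproxy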
* * *
Edit nginx configuration(s)
Next task: configure nginx to accept the PROXY Protocol headers. Once again the reddit post linked earlier from u/hellociaagent [5] was very helpful; see also the nginx docs section [6] dealing with this topic.
A couple notes:
- I've always installed nginx from the Debian repos. If you get an error about a missing module, there may be an extra package of nginx modules you need to install (for reference, I have nginx, nginx-core, and nginx-common installed on my home server).
- One other thing to keep in mind: once nginx is set to accept the PROXY Protocol, you can no longer connect to it directly via your home server's local network IP address - traffic has to come through the VPS (or at least arrive with a PROXY header; see the curl trick below).
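(If you do want to test directly from the LAN, newer versions of curl can send a fake PROXY header themselves via the --haproxy-protocol flag, which is handy for checking that nginx is parsing things correctly. The address and hostname below are placeholders:)

    curl --haproxy-protocol -H "Host: example.com" http://192.168.1.50/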
In /etc/nginx/nginx.conf I more or less followed u/hellociaagent's instructions, adding this to the top of the http block:
set_real_ip_from 100.x.x.x;     # HAProxy local (Tailscale) IP
set_real_ip_from x.x.x.x;       # HAProxy external IP
real_ip_header proxy_protocol;  # use proxy_protocol
real_ip_recursive on;
and this just before the access_log statement:
log_format proxy '$proxy_protocol_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
Then inside each of my site configuration files in /etc/nginx/sites-available, I replaced the "listen 80;" and "listen 443 ssl;" statements with
listen 80 proxy_protocol;
proxy_set_header X-Real-IP $proxy_protocol_addr;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
and
listen 443 ssl proxy_protocol;
proxy_set_header X-Real-IP $proxy_protocol_addr;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
and at this point, nginx should be able to receive and pass on the original client IP address :)
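A quick way to confirm it's working: check the config, reload, and watch the access log - the first field should now show the real client address instead of the VPS's Tailscale IP. Something like:

    sudo nginx -t && sudo systemctl reload nginx
    sudo tail -f /var/log/nginx/access.log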
--------------------------------------------------------------------------------------------
(A quick note about reducing downtime)
To reduce downtime as I switched everything over, I actually made two nginx site config files for each website/service, named things like "blog_home" and "blog_vps". One has the old config, the other the new one (these are all saved in "/etc/nginx/sites-available/" and symlinked into "/etc/nginx/sites-enabled/").
Then I made two separate nginx.conf files, named "nginx.conf.home" and "nginx.conf.vps". In each of these I made one small change to the 'Virtual Host Configs' section within the http block: the line reading
include /etc/nginx/sites-enabled/*;
becomes either
include /etc/nginx/sites-enabled/*_home;
or
include /etc/nginx/sites-enabled/*_vps;
This lets me quickly switch back to the direct-to-home-server configuration if I ever need to, simply by replacing "nginx.conf" with the contents of "nginx.conf.home" and reloading nginx.
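In practice the switch is just a couple of commands - assuming both variants live alongside nginx.conf in /etc/nginx/ - something like:

    sudo cp /etc/nginx/nginx.conf.home /etc/nginx/nginx.conf
    sudo nginx -t && sudo systemctl reload nginx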
--------------------------------------------------------------------------------------------
Last step: update DNS records
Now you can point DNS to the external IP address of the VPS with an A record - and if everything's configured correctly you should be good to go!
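(In zone-file terms that's just a plain A record per hostname - example.com and the address below are placeholders:)

    example.com.      3600  IN  A  203.0.113.10
    www.example.com.  3600  IN  A  203.0.113.10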
* * *
Future improvements
- possibly swap Tailscale out for Headscale, the open-source, self-hostable implementation of its coordination server - or just use plain WireGuard? Either would eliminate the need for an OIDC provider, though with far fewer features available. Probably fine, in my case?
- I run both a gemlog and a tootik server over the gemini protocol. Long story short, I'm not 100% sure it's possible to get the PROXY Protocol working properly with these, so I'll be implementing rate limiting on the VPS instead (a rough sketch of what that might look like is after this list). The VPS does already have some built-in protections, mind you.
- Add my pi-hole to the tailnet so I can easily use it as my DNS server from anywhere. I briefly tried to get this working but still need to iron out the kinks.
- maybe move to a VPS provider that's not in Canada? One of the reasons for doing all this work in the first place is that the Canadian government has taken a sudden lurch toward increased surveillance, and is currently pushing a bill that would let police demand certain information from service providers (digital and otherwise!) without first getting a warrant - including asking ISPs for the IP addresses of their subscribers. And like, am I doing anything particularly nefarious or illegal? No! But that doesn't mean I don't have a right to privacy. Anyway, this new setup is likely also better when it comes to capitalist surveillance - so even on a Canadian-based VPS, there are real privacy benefits.
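On the rate-limiting idea: I haven't actually set this up yet, but as a rough sketch - mirroring the 'listen' blocks above, with made-up thresholds and the port/server names as placeholders - HAProxy can track per-IP connection rates with a stick-table and reject anything too chatty:

    listen vps_gemini
        bind :1965
        stick-table type ip size 100k expire 30s store conn_rate(10s)
        tcp-request connection track-sc0 src
        tcp-request connection reject if { sc0_conn_rate gt 20 }
        server home_server_gemini 100.x.x.x:1965 check   # replace IP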