Context
The list of web-based services I self-host is growing. For each service that needs to be reachable from the internet, I register a subdomain and point it to my public IP. My ISP-provided router then forwards incoming HTTP/HTTPS traffic to a single internal machine via NAT. The problem is that my services are spread across multiple machines: I need a routing mechanism to dispatch traffic to the right one.
With a single NAT rule, only one machine is reachable from the internet — Machines B and C are inaccessible.
In this blog post, I will detail the existing mechanisms that can solve this routing issue and explain why I decided to use a Server Name Indication (SNI) proxy. Then I will show how to set it up using NixOS and how to make sure it is correctly secured, drawing on a recent large-scale measurement study by Pletinckx et al.
The options
First, let’s discard some generic options:
- Port-based routing: As said earlier, my router allows defining rules to route traffic from a given port to a specific machine. One option would simply be to associate a port with each service, defining a rule for each of them in my router interface to route the traffic to the correct machine. This is a bad idea for multiple reasons: (1) it is very user-unfriendly, since every URL would need a port number appended to it; (2) it cannot be automated: every time I add a service, I would need to manually add the rule using my ISP’s very slow interface.
- Single reverse proxy: Another alternative would be to host all services on the same machine. But this does not scale when the machines are not powerful. Also, this is a no-fun approach…
So, the focus is on mechanisms that preserve the current multi-host architecture: a single machine receives all incoming traffic and routes it to the machines where the services are running. This can be done through different techniques.
Reverse proxy
A reverse proxy is a component placed in a network to route application traffic. It inspects HTTP requests before forwarding them to the correct backend, making routing decisions based on the HTTP Host header. Beyond routing, a reverse proxy can also perform security checks, load balancing, or response caching. It is very easy to set up on NixOS, for instance using nginx as follows (from a previous post):
services.nginx.virtualHosts."ca.net.gq" = {
  locations."/" = {
    proxyPass = "https://100.64.0.5:6060";
  };
};
With this, every request sent to ca.net.gq is forwarded to https://100.64.0.5:6060. The target address can be any machine in the LAN, not just the current one.
One constraint of this approach is that the Host header lives inside the encrypted TLS payload, so the proxy must terminate TLS to read it. It decrypts the incoming connection, inspects the header, routes the request to the backend, then relays the response back to the client. Optionally, the proxy can re-establish a TLS connection with the backend. This means certificate management must be centralized on the proxy machine: every domain you expose needs its certificate provisioned and renewed there, and each backend machine loses control over its own TLS configuration.
A reverse proxy terminates TLS, decrypts the traffic, reads the HTTP Host header to route to the correct backend, then re-encrypts.
In practice on NixOS, this looks like the following. The proxy machine declares a virtual host per domain, each with enableACME = true so that NixOS handles the Let’s Encrypt certificate lifecycle automatically:
# On the proxy machine
services.nginx = {
  enable = true;
  virtualHosts."service-a.example.com" = {
    enableACME = true;
    forceSSL = true;
    locations."/" = {
      proxyPass = "http://192.168.1.10:8080"; # plain HTTP to Machine A
    };
  };
  virtualHosts."service-b.example.com" = {
    enableACME = true;
    forceSSL = true;
    locations."/" = {
      proxyPass = "http://192.168.1.11:8080"; # plain HTTP to Machine B
    };
  };
};
security.acme = {
  acceptTerms = true;
  defaults.email = "gregor@example.com";
};
The backend machines then only need to listen on plain HTTP (they never see TLS):
# On Machine A
services.nginx = {
  enable = true;
  virtualHosts."service-a.example.com" = {
    locations."/" = {
      proxyPass = "http://localhost:3000";
    };
  };
};
Every domain’s ACME lifecycle happens entirely on the proxy, and the traffic between the proxy and the backend machine is not encrypted.
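If you would rather keep the proxy-to-backend hop encrypted, nginx can re-establish TLS towards the backend, as mentioned above. A minimal sketch, assuming Machine A also serves TLS itself on port 443 (note that by default nginx does not verify the backend certificate on this hop):

# On the proxy machine: re-encrypt towards the backend instead of plain HTTP
services.nginx.virtualHosts."service-a.example.com" = {
  enableACME = true;
  forceSSL = true;
  locations."/" = {
    proxyPass = "https://192.168.1.10:443"; # TLS on the proxy-to-backend hop too
  };
};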
SNI proxy
A Server Name Indication (SNI) proxy acts similarly to a reverse proxy, but its routing is based on the server_name value in the TLS ClientHello message (see the structure of such a message in my blog post on TLS Fingerprinting). This field is unencrypted, meaning we can route the packets to the correct machine without decrypting anything. It also means that each machine can manage its own TLS configuration and certificates rather than having them managed on the proxy. I like the idea of independent machines each managing their own ACME certificates, as well as the possibility of preventing TLS termination on intermediaries. This is the approach chosen here.
An SNI proxy reads only the server_name field from the TLS ClientHello to route traffic — without ever decrypting it. The TLS session remains end-to-end and is terminated on each backend machine.
In detail, the SNI proxy works as follows. A connection arrives from the internet at the SNI proxy, aiming to reach service-a.example.com. The SNI proxy inspects the TLS ClientHello and extracts the server_name field without any decryption. The PROXY protocol is then used to encapsulate the TCP stream (which carries the TLS traffic). Based on the hostname, the TCP stream is forwarded to the appropriate machine, on a port listening for the PROXY protocol. The backend server reads the PROXY header, parses it, recovers the real client IP, and terminates TLS.
Representation of the stream as received by the SNI proxy and the equivalent stream received by the proxied machine.
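To make the encapsulation concrete, here is what the human-readable version 1 variant of the PROXY protocol header looks like: a single text line prepended to the raw TCP stream. The addresses and ports below are illustrative:

PROXY TCP4 203.0.113.7 198.51.100.2 54321 443
<raw TLS bytes follow, starting with the ClientHello>

The fields are the protocol family, the original client IP, the destination IP as seen by the proxy, and the corresponding ports; the line is terminated by CRLF. The backend strips this line, records the client IP, and hands the remaining bytes to its TLS stack.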
Setting up an SNI proxy using nginx and NixOS
The SNI proxy itself
Nginx supports SNI-based routing through its ngx_stream_ssl_preread_module module. The directive of interest for us is ssl_preread on, which instructs nginx to parse the TLS ClientHello and extract the server_name field without decrypting the traffic. The extracted value is exposed in the $ssl_preread_server_name variable, which is then used in a map block to resolve the correct backend address.
Once again, this is heavily abstracted in NixOS. The stream-level configuration is managed via services.nginx.streamConfig. Below is the configuration for the SNI proxy machine:
services.nginx = {
  enable = true;
  streamConfig = ''
    map $ssl_preread_server_name $backend {
      service-a.example.com 192.168.1.10:443;
      service-b.example.com 192.168.1.11:443;
    }
    server {
      listen 443;
      ssl_preread on;
      proxy_pass $backend;
      proxy_protocol on;
    }
  '';
  virtualHosts."service-a.example.com" = {
    locations."/.well-known/acme-challenge/" = {
      proxyPass = "http://192.168.1.10";
    };
  };
  virtualHosts."service-b.example.com" = {
    locations."/.well-known/acme-challenge/" = {
      proxyPass = "http://192.168.1.11";
    };
  };
};
The map block associates each hostname with a backend address. The server block listens on port 443, reads the SNI without decrypting TLS, and forwards the TCP stream to the resolved backend. The proxy_protocol on directive wraps the forwarded stream in a PROXY protocol header, which carries the original client IP so that backend machines can use it.
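One optional refinement: if a ClientHello carries an SNI that matches no entry, $backend resolves to an empty string and nginx simply drops the connection. To make that rejection explicit and self-documenting, the map inside streamConfig can gain a default entry pointing at a dead address. A sketch, not required for the setup to work:

map $ssl_preread_server_name $backend {
  service-a.example.com 192.168.1.10:443;
  service-b.example.com 192.168.1.11:443;
  default               127.0.0.1:1;  # nothing listens here: unknown SNI is refused
}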
Now, we said earlier that backend machines manage their own TLS certificates. Certificate issuance involves a challenge to verify that you indeed own the domain names for which you request a certificate. Typically, this is done using the HTTP-01 challenge, which requires the ACME client to respond at http://<domain>/.well-known/acme-challenge/<token>. Since port 80 traffic from the internet arrives at the SNI proxy first, we must also be able to pass the challenge requests to the correct backend. We do this using the classical reverse proxy presented earlier, limiting the backend’s exposure to HTTP requests by only forwarding the /.well-known/acme-challenge/ path. Then we need to configure our backend machines.
The client machines
The backend machines need a way to accept PROXY protocol-wrapped connections and to manage their own TLS certificates. On port 443, the SNI proxy sends raw TLS preceded by a PROXY protocol header. To make nginx read that header before proceeding with TLS termination, add proxy_protocol to the listen directive. In NixOS, the listen attribute on a virtual host overrides the generated listen lines, so both port 443 and port 80 must be listed explicitly:
services.nginx = {
  enable = true;
  commonHttpConfig = ''
    real_ip_header proxy_protocol;
    set_real_ip_from 192.168.1.1;
  '';
  virtualHosts."service-a.example.com" = {
    enableACME = true;
    forceSSL = true;
    listen = [
      { addr = "192.168.1.10"; port = 443; ssl = true; extraParameters = [ "proxy_protocol" ]; }
      { addr = "192.168.1.10"; port = 80; ssl = false; }
    ];
    locations."/" = {
      proxyPass = "http://localhost:3000";
    };
  };
};
security.acme = {
  acceptTerms = true;
  defaults.email = "you@example.com";
};
First, using commonHttpConfig, we set real_ip_header proxy_protocol. This tells nginx to use the IP carried in the PROXY header as $remote_addr, so access logs and any IP-based rules reflect the original client rather than the proxy. More importantly, the set_real_ip_from directive defines which addresses are trusted, i.e., from which IP addresses to accept PROXY headers. It protects against forged PROXY headers from clients reaching the backend directly.
Next, we tell nginx to expect PROXY protocol headers on incoming connections. This is done with the proxy_protocol parameter of the listen directive, specified here through a custom listen attribute and extraParameters. Finally, enableACME = true enables automatic certificate provisioning and renewal using Let’s Encrypt. Because the HTTP-01 challenge path is forwarded by the SNI proxy as described above, renewals work without any extra configuration.
Avoiding SNI proxy misconfiguration
While deploying a similar SNI proxy on my homelab, I came across an NDSS 2025 paper by Pletinckx et al., A Large-Scale Measurement Study of the PROXY Protocol and its Security Implications. The authors analysed real-world deployments of SNI proxies and identified several common misconfigurations. The following section reviews them and how to avoid them.
Backend accessible without the proxy
This misconfiguration describes the case where the backend infrastructure is not hidden from the public: any client can connect directly to a backend server, circumventing the proxy. In some cases this is fine, because the SNI proxy does not behave as anything more than a proxy (no authentication, no load balancing, no DDoS protection…). In other cases, we want to restrict the origin of the HTTP/HTTPS traffic. In practice, this means dropping any connection not originating from the SNI proxy machine.
In NixOS, this is done via networking.firewall.extraInputRules, which accepts nftables syntax. On the backend machine, drop any connection to ports 80 and 443 that does not originate from the SNI proxy. Here we consider the SNI proxy to have the IP 192.168.1.1.
# On Machine A (backend)
networking.firewall.extraInputRules = ''
  ip saddr != 192.168.1.1 tcp dport { 80, 443 } drop
'';

# If your system still defaults to iptables, enable the nftables backend first
networking.nftables.enable = true;
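For completeness, the SNI proxy machine itself must keep ports 80 and 443 reachable from the internet. Assuming the default NixOS firewall is enabled, this is a single line on the proxy machine:

# On the SNI proxy machine
networking.firewall.allowedTCPPorts = [ 80 443 ];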
Backend does not verify the proxy source
This misconfiguration stems from the backend server not checking whether a connection comes from a trusted, known proxy. We already addressed this through the commonHttpConfig attribute and the set_real_ip_from directive.
Backend assumes the proxy handled authentication
This other misconfiguration arises when services behind the SNI proxy do not implement any authentication mechanism of their own, assuming the proxy already took care of it. This is especially critical when the proxy can be circumvented. Adding an authentication mechanism to every service behind the SNI proxy is an extra layer of security and implements the “zero-trust” philosophy.
Therefore, there is no fix specific to the proxy setup for this misconfiguration, just a reminder to enable authentication on any service you deploy behind your SNI proxy; see the sketch below for one option.
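For a service that ships without authentication of its own, nginx on the backend can add a basic layer in front of it. A minimal sketch using the NixOS basicAuthFile option; the htpasswd file path is illustrative and must be provisioned separately:

# On Machine A: HTTP basic auth in front of an otherwise open service
services.nginx.virtualHosts."service-a.example.com" = {
  # listen / ACME configuration as shown earlier
  basicAuthFile = "/var/lib/secrets/service-a.htpasswd"; # htpasswd-format credentials
  locations."/" = {
    proxyPass = "http://localhost:3000";
  };
};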
Conclusion
This article showed how to deploy an SNI proxy using nginx and NixOS. It is an elegant solution leveraging the ClientHello to route HTTPS traffic across multiple machines without centralizing TLS termination. Unlike a reverse proxy, it never decrypts traffic: it only peeks at the server_name field in the TLS ClientHello to make its routing decision. Each backend machine keeps full ownership of its own certificates and TLS configuration. I’ll acknowledge that it is overkill for my homelab use-case, but on the other hand there is much to learn about routing, the PROXY protocol, and how nginx configurations are articulated in NixOS.
NixOS here again makes the whole setup remarkably concise. The stream module configuration, the ACME challenge forwarding, the PROXY protocol handling on backends, and the firewall rules are each a handful of lines.
References
- Pletinckx et al. A Large-Scale Measurement Study of the PROXY Protocol and its Security Implications. NDSS 2025. https://spletinckx.github.io/papers/ndss25_pletinckx.pdf
- nginx ngx_stream_ssl_preread_module documentation
- nginx PROXY protocol guide
- Let’s Encrypt: Challenge Types
- NixOS option: services.nginx.commonHttpConfig
- NixOS nginx module: extraParameters source
Banner image: Bernard Boutet de Monvel (1935), (Detail of) Diane et Actéon.