Or how to put “x” in every word of your title.

This is how I use my nginx instance as a “dumb pipe” to proxy traffic to different services, with or without SSL termination using one public IP(v4).

The main problems I wish to solve are

  1. multiple SSL endpoints cannot share the same IP, which is partially solved by [SNI]1
  2. there is a nice trick to make things more automated, but it turns out I cannot have nice things yet

First the nice trick, then the equivalent solution.

The following configuration allows for multiple SSL endpoints on the same IP and discriminates the receiving endpoint using the SNI data nginx exposes in the variable $ssl_server_name.

It also terminates SSL connections, as this instance has access to the SSL keys for your services, even when those services are not on the same machine.

http {
        # --- start
        # match the server name to the proxied endpoint
        map $ssl_server_name $backend {
                "foo.example.com" http://127.0.0.1:3000;
                "bar.example.com" http://127.0.0.1:4000;
        }

        # finally set up an HTTP server terminating the SSL connection
        server {
                listen 443 ssl;

                ssl_certificate /etc/letsencrypt/live/$ssl_server_name/fullchain.pem;
                ssl_certificate_key /etc/letsencrypt/live/$ssl_server_name/privkey.pem;

                # .. put all the SSL wizarding options here

                # finally tell nginx where to pass this HTTP request
                # (proxy_pass must live inside a location block)
                location / {
                        proxy_pass $backend;
                }
        }
        # --- end
}

The data flow is not too complicated: when we receive an HTTPS request we can decrypt the encrypted payload using the appropriate key, because we received the server name indication during the ClientHello phase.

The server name ends up in the variable $ssl_server_name, which nginx populates; we match the service endpoint from that, and nginx decrypts the request using the certificate and key files we named after the server.
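One detail worth handling: $ssl_server_name is empty when a client sends no SNI at all, and an unknown name falls through the map. A sketch of a more defensive variant, where the default entry and the address 127.0.0.1:9999 are hypothetical (a local catch-all service that just answers with an error page):

```nginx
# route unknown or missing server names to a catch-all backend
# instead of leaving $backend empty and breaking proxy_pass
map $ssl_server_name $backend {
        "foo.example.com" http://127.0.0.1:3000;
        "bar.example.com" http://127.0.0.1:4000;
        default           http://127.0.0.1:9999;
}
```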

The main selling points of this configuration are

  1. it’s easy
  2. you don’t need multiple IPs for multiple services
  3. you can copy-paste the part between start and end into your sites-available/https-proxy and it should just work

The main shortcomings are

  1. The list of services is in one file; I prefer to add and remove files to modify the setup, so it would be nicer to be able to add a file, link it and be done
  2. You cannot use variables such as $ssl_server_name in the expressions you feed to ssl_certificate or ssl_certificate_key before nginx 1.15.9
  3. I am running nginx 1.14.2

SSL termination

Here we are doing SSL termination: nginx takes over the encryption/decryption phase, so you must trust the network over which your request will be forwarded to the service.

We could also avoid terminating SSL and be a fully transparent HTTPS proxy. The http module cannot quite do that (the stream module further down can); the closest equivalent is the following configuration, which still terminates SSL at nginx but re-encrypts the traffic towards HTTPS backends.

http {
        # --- start
        # match the server name to the proxied endpoint
        map $ssl_server_name $backend {
                "foo.example.com" https://127.0.0.1:3000;
                "bar.example.com" https://127.0.0.1:4000;
        }

        # SSL is still terminated here, then re-encrypted
        # towards the HTTPS backends
        server {
                listen 443 ssl;

                # the ssl_certificate/ssl_certificate_key directives
                # from the previous configuration are still needed here

                # tell nginx where to pass this HTTP request
                location / {
                        proxy_pass $backend;
                }
        }
        # --- end
}

Solution

We can solve problem 1 by substituting these lines

         # match the server name to the proxied endpoint
         map $ssl_server_name $backend {
-                "foo.example.com" https://127.0.0.1:3000;
-                "bar.example.com" https://127.0.0.1:4000;
+                include /etc/nginx/map-enabled/*;
         }

And by creating files in the directory /etc/nginx/map-enabled containing the previous lines

# /etc/nginx/map-enabled/foo
"foo.example.com" https://127.0.0.1:3000;
# /etc/nginx/map-enabled/bar
"bar.example.com" https://127.0.0.1:4000;

The solution to problems 2 and 3 can be as easy as “upgrade nginx” but that would also mean upgrading the whole system. Moreover, if you go this way you will have to make sure the cost of reading the certificates from disk for every new handshake is not a limiting factor.

You can set up better caching, but an SSL DDoS attack, where every new handshake forces you to read the certificate and key from disk, might take your system down (highly unlikely). You have been warned.
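Session resumption takes some of that pressure off, since resumed handshakes skip the full negotiation. A sketch of the relevant directives, with illustrative sizes (they sit next to the other ssl_* options in the server block):

```nginx
# cache TLS sessions so returning clients resume instead of doing
# a full handshake; per the nginx docs one megabyte of shared
# cache holds about 4000 sessions
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;
```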

This is what I came up with; even if it is a little convoluted, it fits my mind better and is experimentally okish.

I have a new configuration file defining a stream block. The stream and http modules are similar, but in stream you have to explicitly turn on reading the SNI variables from the handshake (ssl_preread).

# /etc/nginx/modules-available/https-proxy.conf
stream {
	# turn on SNI mapping to nginx variables
	ssl_preread on;

	# maps a server name to a service name
	map $ssl_preread_server_name $service {
		include /etc/nginx/map-enabled/*;
	}

	# list of servers for a service name
	include /etc/nginx/upstream-enabled/*;

	server {
		listen 443;
		proxy_pass $service;
	}
}
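For this file to be picked up, the stream block has to sit at the top level of the configuration, next to http, and ssl_preread needs stream support in the binary. A sketch of the wiring, assuming a Debian-style layout where dynamic modules live under modules/ and enabled configs are symlinked (adjust paths to your packaging):

```nginx
# /etc/nginx/nginx.conf (top level, outside http {})

# only needed if stream support is built as a dynamic module
load_module modules/ngx_stream_module.so;

# pulls in /etc/nginx/modules-available/https-proxy.conf once linked
include /etc/nginx/modules-enabled/*.conf;
```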

The data flow this time is the same, but we are adding two more concepts. An upstream is a set of servers that will be used for the same service (e.g. high availability, redundancy); this means that instead of mapping a $ssl_preread_server_name value to an address, I map it to a name, e.g. “git.edoput.it” and “projects.edoput.it” can map to the same service name “git”.

# /etc/nginx/map-available/git
"www.git.edoput.it" git;
"git.edoput.it" git;
# /etc/nginx/upstream-available/git
upstream git {
        server git.local.address.lan:443;
}

Then by defining an upstream for my service name I can put in as many servers as I need (most likely 1).
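To make the “as many servers as I need” point concrete, here is a hypothetical upstream with a second, standby machine; the host names are invented:

```nginx
# /etc/nginx/upstream-available/git
# two machines behind the same service name; nginx only uses the
# backup server when the primary is unreachable
upstream git {
        server git-1.local.address.lan:443;
        server git-2.local.address.lan:443 backup;
}
```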

The only tradeoff I am making now is that I am using the stream block just for the HTTPS proxy feature; I am not loading any other server directive in there, so when I need one more I will have to expand this approach a little. If an HTTPS proxy is all you need, you are home my friend!

And what about SSL termination? We removed it from the server directive, as we are now a transparent HTTPS proxy; the servers in the upstream set will have to do the SSL termination!
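A sketch of what that termination could look like on the backend machine (git.local.address.lan), assuming it also runs nginx and the certificate paths follow the letsencrypt layout used earlier; the local service on port 3000 is hypothetical:

```nginx
# on the upstream machine: this is where SSL now terminates
server {
        listen 443 ssl;
        server_name git.edoput.it www.git.edoput.it;

        ssl_certificate     /etc/letsencrypt/live/git.edoput.it/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/git.edoput.it/privkey.pem;

        location / {
                # hypothetical local git web service
                proxy_pass http://127.0.0.1:3000;
        }
}
```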

  1. Server name indication