nginx proxy redux
Or how to put “x” in every word of your title.
This is how I use my nginx instance as a “dumb pipe” to proxy traffic to different services, with or without SSL termination, using one public IP(v4).
The main problems I wish to solve are
- multiple SSL endpoints cannot be on the same IP; this is only partially solved by [SNI]1
- there is a nice trick to make things more automated, but it turns out I cannot have nice things yet
First the nice trick, then the equivalent solution.
The following configuration allows for multiple SSL endpoints on the same IP and discriminates the receiving endpoint by using the SNI data embedded in the variable $ssl_server_name. It also terminates SSL connections, as this instance has access to the SSL keys for your services, even when those services might not be on the same machine.
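A minimal sketch of the idea (the hostnames, backend addresses and certificate paths are placeholders, and the start/end comments only mark the copy-and-paste region mentioned below):

```nginx
# start
map $ssl_server_name $backend {
    default            10.0.0.2:8080;
    git.edoput.it      10.0.0.2:8080;
    projects.edoput.it 10.0.0.3:8080;
}

server {
    listen 443 ssl;

    # certificate and key files are named after the SNI value sent by the
    # client; variables here need nginx >= 1.15.9 (the catch discussed below)
    ssl_certificate     /etc/ssl/certs/$ssl_server_name.crt;
    ssl_certificate_key /etc/ssl/private/$ssl_server_name.key;

    location / {
        proxy_set_header Host $host;   # keep the original host header
        proxy_pass http://$backend;    # forward the decrypted request
    }
}
# end
```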
The data flow is not too complicated: when we receive an HTTPS request we decrypt the payload using the appropriate key, since the client sent the server name indication during the ClientHello phase. nginx populates the variable $ssl_server_name with that server name, so we match the service endpoint from it and ask nginx to decrypt the request using the key we named appropriately.
The main selling points of this configuration are
- it’s easy
- you don’t need multiple IPs for multiple services
- you can copy and paste the part between the start and end markers into your sites-available/https-proxy and it should just work
The main shortcomings are
- the list of services is in one file; I prefer to add and remove files to modify them, so it would be nicer to be able to add a file, link it and be done
- you cannot use variables such as $ssl_server_name in the expressions you feed to ssl_certificate or ssl_certificate_key prior to nginx 1.15.9, and I am running nginx 1.14.2
SSL termination
Here we are doing SSL termination: nginx takes over the encryption/decryption phase, so you must trust the network over which your request is forwarded to the service. We could also be just a transparent HTTPS proxy without doing any SSL termination, as happens in the following configuration.
solution
We can solve problem 1 by substituting the inline map entries with an include and creating some files in the directory /etc/nginx/map-enabled containing the previous lines.
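A sketch of what that looks like, reusing the map from the sketch above; the file name and its contents are just an example:

```nginx
map $ssl_server_name $backend {
    # each service gets its own file that can be linked in and removed freely
    include /etc/nginx/map-enabled/*;
}

# e.g. a hypothetical /etc/nginx/map-enabled/git containing:
#   git.edoput.it      10.0.0.2:8080;
#   projects.edoput.it 10.0.0.2:8080;
```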
The solution to problems 2 and 3 can be as easy as “upgrade nginx”, but that would also mean upgrading the whole system. Moreover, if you go this way you will have to make sure that the cost of reading the certificates from disk for every new request is not a limiting factor.
You can set up better caching, but an SSL DDoS attack, where every new handshake forces a read of the certificate and key from disk, might take your system down (highly unlikely). You have been warned.
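One knob that helps here is TLS session reuse: resumed connections use an abbreviated handshake and skip the certificate exchange entirely. A sketch with arbitrary size and timeout values:

```nginx
# reuse TLS sessions so returning clients do not trigger a full handshake
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 1h;
```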
This is what I came up with; even if it is a little convoluted, it fits better in my mind and is experimentally OK-ish.
I have a new module defining a stream directive. The stream and http modules are similar, but with stream you have to explicitly enable reading variables from the TLS handshake (ssl_preread).
The data flow this time is the same, but we are including two more concepts. An upstream is a set of servers that will be used for the same service (e.g. high availability, redundancy); this means that instead of mapping a $ssl_preread_server_name value to an address, I map it to a name, e.g. “git.edoput.it” and “projects.edoput.it” can both map to the same service name “git”. Then, by defining an upstream for my service name, I can put in as many servers as I need (most likely 1).
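Putting the pieces together, a minimal sketch of the stream block (the backend address is a placeholder):

```nginx
stream {
    # map the SNI value read from the ClientHello to a service name
    map $ssl_preread_server_name $service {
        git.edoput.it      git;
        projects.edoput.it git;
    }

    # one upstream per service name; add more servers for redundancy
    upstream git {
        server 10.0.0.2:443;
    }

    server {
        listen 443;
        ssl_preread on;       # populate $ssl_preread_server_name
        proxy_pass $service;  # forward the raw TLS stream, no termination here
    }
}
```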
The only tradeoff I am making now is that I am using my stream directive just for the HTTPS proxy feature; I am not loading any other server directive in there, so when I need one more I will have to expand this approach a little. If an HTTPS proxy is all you need, you are home, my friend!
And what about SSL termination? We removed it from the server directive, as we can now be a transparent HTTPS proxy; the servers in the upstream set will have to do the SSL termination!
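For completeness, a sketch of one such backend if it is itself running nginx (the certificate paths and the local application port are assumptions):

```nginx
# on the backend behind the "git" upstream: terminate TLS here
server {
    listen 443 ssl;
    server_name git.edoput.it projects.edoput.it;

    ssl_certificate     /etc/ssl/certs/git.edoput.it.crt;
    ssl_certificate_key /etc/ssl/private/git.edoput.it.key;

    location / {
        proxy_pass http://127.0.0.1:3000;  # whatever actually serves the site
    }
}
```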
1. Server Name Indication ↩