I recently worked on using nginx as a reverse proxy for a use case that is not well documented: reverse proxying HTTPS.

The main problem with HTTPS is that it requires an SSL/TLS connection, and hence some information is “hidden” by cryptography.

Server Name Indication

An extension to TLS that solves this problem is Server Name Indication (SNI).

With SNI, the client (Alice) indicates during the handshake which hostname it wants to reach, so we can read it and route the request to the correct host.

This enables us to serve multiple HTTPS websites on the same IP.
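
As a reminder of what SNI buys us in the ordinary case, here is a minimal sketch of two TLS virtual hosts sharing one IP and port in nginx's http module; the hostnames and certificate paths below are only placeholders:

# inside the http block
server {
        listen 443 ssl;
        server_name alice.domain.example;
        # hypothetical certificate paths
        ssl_certificate     /etc/ssl/alice/fullchain.pem;
        ssl_certificate_key /etc/ssl/alice/privkey.pem;
}

server {
        listen 443 ssl;
        server_name bob.domain.example;
        ssl_certificate     /etc/ssl/bob/fullchain.pem;
        ssl_certificate_key /etc/ssl/bob/privkey.pem;
}

nginx picks the right server block by reading the SNI hostname from the handshake, even though the rest of the traffic is encrypted.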

But what happens if your reverse proxy has no content of its own to serve? Or does not hold any SSL certificate?

We have a device, our router, that receives requests from the internet and must route them to the correct host. How is this going to happen?

And how can we serve content over an encrypted connection if we can’t read where to route the requests?

An easy solution would be to store every SSL certificate on the reverse proxy, but this is error-prone and a security concern: a single machine would hold the private keys of every site.

Streaming requests

This is another problem that has already been solved: streaming the requests. We let nginx sit between the internet and our webservers and decide, using SNI, where to send each request.

The internet - nginx - hosts

This enables us to store the SSL certificates on the hosts themselves, which enhances security. Nginx never decrypts an incoming request, because the hostname is sent before the encrypted data.

With the hostname we can decide where to route a request.

A template configuration for the stream directive in nginx looks like this:

stream {
        # an upstream is a kind of id for a pool of resources
        upstream alice_https {
                server "insert internal ip here":443;
        }

        server {
                # here we listen for a specific domain
                # and pass the requests to a resource
                # called alice_https
                server_name "alice.domain.example";
                listen "insert public ip here":443;
                proxy_pass alice_https;
        }

}

But we have many domains to take care of, so this had to be stretched out to work well and be extensible. What happens if we want to add or remove a proxied host? Do we have to repeat the upstream and server directives for every one of them? Do we have to regenerate the configuration every time? How can we prevent nginx from dropping the configuration of one host when we add another?

By using the ngx_stream_ssl_preread module (not built by default, so make sure your nginx includes it) we can read the variable $ssl_preread_server_name and use it to decide where to route a request.

stream {
        # a map binds a domain name to
        # a resource id
        map $ssl_preread_server_name $name {

                # load every file in map.conf.d
                # as map entries
                include /etc/nginx/map.conf.d/*.conf;
        }

        # an upstream defines a resource id
        # load every file in upstream.conf.d as an upstream definition
        include /etc/nginx/upstream.conf.d/*.conf;

        server {
                listen "your public ip here":443;
                ssl_preread on;
                proxy_pass $name;
        }
}

Placing a map file into the /etc/nginx/map.conf.d directory loads it into the configuration; the very same goes for the upstream files.

# map file
# domain name -> resource_id
hostname.domain.example hostname_https;
www.hostname.domain.example hostname_https;

In this file we map a domain to a resource id. As an example, here is the mapping for alice.domain.example.

alice.domain.example alice_https;
www.alice.domain.example alice_https;
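
As a side note, an nginx map block also accepts a default entry. Assuming you want connections whose SNI name matches nothing to go to some catch-all pool (the fallback_https name here is purely hypothetical), one of the included map files could contain:

# catch-all for SNI names that match no other entry
default fallback_https;

Without a default, an unmatched name leaves $name empty and the connection simply fails; fallback_https would of course need its own upstream file as well.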

And here is the upstream file in /etc/nginx/upstream.conf.d:

# upstream file
# define a resource id
upstream hostname_https {
        server "internal ip here":443;
}

Again, for the alice.domain.example example, the resource id alice_https is defined like this.

upstream alice_https {
        server "alice.domain.example internal ip here":443;
}
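
For completeness, here is a minimal sketch of the TLS-terminating side on the alice host itself, assuming that host also runs nginx; the certificate paths and web root are hypothetical:

# inside the http block of the nginx running on the alice host;
# this is where the certificate lives and where traffic is decrypted
server {
        listen 443 ssl;
        server_name alice.domain.example www.alice.domain.example;

        # hypothetical certificate paths
        ssl_certificate     /etc/ssl/alice.domain.example/fullchain.pem;
        ssl_certificate_key /etc/ssl/alice.domain.example/privkey.pem;

        root /var/www/alice;
}

The reverse proxy never sees these keys; it only forwards the still-encrypted stream.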

Conclusion

This somewhat exotic configuration is very flexible when it comes to

  • adding a new hostname, on the same or a different domain
  • moving services from one internal IP to another
  • adding other machines to the same resource pool (see the sketch after this list)
  • enhancing security by placing SSL certificates on different hosts
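
For the third point, growing a pool is just a matter of listing more servers inside the same upstream block; nginx then balances connections across them (round-robin by default). A minimal sketch, with placeholder addresses:

upstream alice_https {
        server "first internal ip here":443;
        server "second internal ip here":443;
}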

Moreover, every one of these tasks can be easily automated, since it consists of placing or removing a file in a directory rather than rewriting the nginx configuration. Very cool indeed.
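
As a sketch of how little is involved, adding a hypothetical new host bob.domain.example would come down to dropping these two files in place and reloading nginx (for example with nginx -s reload):

# /etc/nginx/map.conf.d/bob.conf
bob.domain.example bob_https;
www.bob.domain.example bob_https;

# /etc/nginx/upstream.conf.d/bob.conf
upstream bob_https {
        server "bob internal ip here":443;
}

Removing the host again is just a matter of deleting the same two files and reloading.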