While I was preparing the deployment of a private pet project, I got the impression that my approach had significant room for improvement in the front-facing reverse-proxy department. The project consists of a scalable set of microservices serving several tasks in the backend, tied together with a message bus protocol.

While the backend was perfectly capable of handling its own environment adaptation and even supports multi-target deployment, either to a Kubernetes cluster or docker-compose on my little VPS, the situation was much worse on the frontend. There, I was still required to manually configure nginx with backend addresses and to use additional tooling for handling infrastructure changes, such as up- or downscaling of services, not to mention obtaining Let's Encrypt certificates, which simply felt wrong and not agile.

A little research revealed that I am not the only one with these concerns, and that there is a very powerful reverse proxy called Træfik that specifically addresses the requirements of containerized applications.

How it works

Træfik is basically an HTTP(S) server and reverse proxy built on the Go HTTP package, which does not sound that exciting at first. The notable thing about it, though, is its support for container orchestration frameworks such as Docker (vanilla, docker-compose, Swarm) and Kubernetes: it obtains all the information the operator already defined when setting up the actual container.
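To make the "reverse proxy built on the Go HTTP package" part concrete, here is a minimal sketch of the underlying primitive, httputil.ReverseProxy from the standard library. The dummy backend stands in for a containerized service; what Traefik adds on top is discovering such backends and wiring them up automatically, which this sketch does by hand:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// proxyFetch starts a dummy backend and a reverse proxy in front of it,
// then performs one request through the proxy and returns the body.
func proxyFetch() string {
	// Dummy backend, standing in for a containerized service.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello from backend")
	}))
	defer backend.Close()

	// The same primitive Traefik builds on: forward requests to one target.
	target, _ := url.Parse(backend.URL)
	proxy := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer proxy.Close()

	resp, err := http.Get(proxy.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(proxyFetch()) // prints the backend's response, relayed by the proxy
}
```

The hard part Traefik solves is not the forwarding itself, but keeping the set of targets in sync with the orchestrator, as described below.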

Traefik uses a declarative, rule-based approach to automatically set up reverse proxies and route requests accordingly. For instance, a docker container can be labeled with multiple statements about its listening ports, protocol, hostnames and paths, which are picked up by Traefik instantly. The same applies to changes in scaling or newly started containers - every change causes Traefik to adapt its own proxy configuration without any further operator interaction.

Example Træfik deployment strategy with docker-compose

On constrained hardware, such as a personal vServer, a lightweight approach like docker-compose can be an interesting alternative to a Kubernetes installation. As Traefik also supports docker-compose, it can be configured entirely with key-value pairs in the labels section of the service entries in the docker-compose file.

For instance, the docker-compose service definition below starts a Spring Boot service on port 8080 and tells Traefik that:

  • The service belongs to the kp-trending backend group
  • It should listen on the hostname given by APP_HOSTNAME, defined in the .env file or as an environment variable
  • Only requests below /api/v1/trends should be routed to an instance of that container
  • Port 8080 is listening for HTTP requests
      trends:                     # service name chosen for this example
        image: (repo)/(application)/trends
        labels:
          - traefik.backend=kp-trending
          - traefik.frontend.rule=Host:${APP_HOSTNAME}; PathPrefix:/api/v1/trends
          - traefik.port=8080
        environment:
          - SERVER_PORT=8080

Relevance for the proxy can be expressed with the traefik.enable label. In the example below, setting it to false prevents Traefik from setting up a frontend for the service.

      rabbitmq:
        image: "rabbitmq:3-management-alpine"
        hostname: "rabbitmq"
        environment:
          - NAME=rabbitmq
        labels:
          - traefik.enable=false

The Traefik config file

The configuration file is quite concise and fits on one screen. The file below sets up a basic Traefik installation with HTTPS, HTTP-to-HTTPS redirection and automatic Let's Encrypt certificate management.

Example: A full-featured file with Let's Encrypt support and HTTPS enforcement

    defaultEntryPoints = ["http", "https"]

    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]

    [acme]
    email = "(my mail here)"
    storage = "acme.json"
    entryPoint = "https"
    onHostRule = true
    # Use this for testing to avoid running into the Let's Encrypt request limit
    #caServer = ""
      [acme.httpChallenge]
      entryPoint = "http"
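To complete the picture, Traefik itself also runs as a compose service. The sketch below shows one plausible way to wire it up, assuming the config file above is saved as traefik.toml next to the compose file and an empty acme.json exists for certificate storage; image tag and paths are illustrative, not taken from the original setup. Traefik needs read access to the Docker socket to watch container events:

```yaml
  traefik:
    image: "traefik:1.7"            # a Traefik v1 tag, matching the TOML syntax above
    command: --docker               # enable the Docker provider
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.toml:/traefik.toml
      - ./acme.json:/acme.json      # must be created beforehand with mode 600
    labels:
      - traefik.enable=false        # do not proxy the proxy itself
```

Mounting the socket read-only limits the blast radius somewhat, but anyone who can reach the socket effectively controls the Docker host, so the frontend network should be treated accordingly.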

