
Traefik as an HTTP reverse proxy / load balancer for Micro-Services

Below, I am going to show you how to configure Traefik as an HTTP reverse proxy / load balancer for your micro-services. There are a number of load balancer options to choose from: commercial hardware load balancers like F5 LTM, Citrix NetScaler, A10, etc., or software like Nginx, HAProxy, etc. Most of those load balancers were designed primarily to handle legacy traffic workloads and are not the best fit for a micro-services architecture, for many reasons. In a micro-services architecture (using Docker, Kubernetes and such), where services are dynamic and come and go all the time, we need a load balancer that reacts dynamically to these changes, auto-detecting new and removed services without any user intervention. For those reasons (and more), Traefik was created. Note: While Traefik (today) might not be the fastest load balancer (it is definitely getting faster by the day), it is still, I believe, one of the best options in a micro-services architecture. To name just a few of the many features Traefik supports:
  1. Auto service discovery with backends like Consul, etcd, Zookeeper, etc.
  2. Fully integrated ACME support – automatically generates signed SSL certificates
  3. Works well with Docker Swarm, Kubernetes, Mesos/Marathon
  4. Full WebSockets support
  5. HTTP/2
  6. Configuration updates through a REST API
Tip: Traefik is a small application written in the Go language, so it integrates well with the rest of the Go ecosystem, like Docker, etc. Now that we know what benefits Traefik brings to the table, let's jump right into the configuration.

Installing Traefik – Getting Traefik to work

For the most part, installing Traefik is really simple. First, let's create a working directory.
mkdir traefik_config && cd traefik_config
Next, let's download the Traefik binary (the binary and the traefik.sample.toml sample template are available on the Traefik GitHub releases page).

# Download the Traefik binary and the sample template
Make the binary executable
chmod +x traefik_linux-amd64
Tip: Use the traefik.sample.toml as a reference for many of the traefik options available.

Configuring an initial Traefik test template

Create a template file, like the below.
cat traefik.toml
traefikLogsFile = "log/traefik.log"
accessLogsFile = "log/access.log"
logLevel = "DEBUG"

defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
    address = ":8090"

[web]
address = ":8095"

[file]
filename = "./rules.toml"
watch = true
Next, create the rules file.
cat rules.toml
[backends]
  [backends.backend]
    [backends.backend.loadbalancer]
      method = "wrr"
    [backends.backend.servers.server1]
      url = "http://localhost:8080"  # the backend application on port 8080
      weight = 1

[frontends]
  [frontends.frontend]
  backend = "backend"
    [frontends.frontend.routes.route1]
    rule = "Host:traefik.domain.com"  # example host rule
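Because the backend uses weighted round robin ("wrr"), additional servers can be given different weights. A hypothetical fragment (the second server's URL and weight are illustrative, not part of the original setup):

```toml
    [backends.backend.servers.server2]
      url = "http://localhost:8081"  # hypothetical second instance
      weight = 2                     # receives roughly twice the traffic of a weight-1 server
```

With watch = true set in traefik.toml, Traefik picks up changes to rules.toml without a restart.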
Now, run Traefik in test mode on a regular system (not from a container).
./traefik_linux-amd64 -c traefik.toml
To test, just try accessing the Traefik web dashboard. The dashboard will be available on port 8095, and the application port is on port 8090. For example:
# Dashboard
http://localhost:8095

# Frontend access port :8090 - forwards to backend port :8080
http://localhost:8090
To test Traefik with Docker, use the below configuration (Docker bind mounts need an absolute path, hence $PWD).
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik

Using Traefik with Docker-Compose or Docker-Stack

The example below shows how to use Traefik with Docker-Compose, including the scaling feature. At the end of the setup, I will also show you how to scale manually.

Creating a Docker-Compose Traefik configuration

The below docker-compose file uses Traefik as the proxy, and the emilevauge/whoami Docker image as the web application; emilevauge/whoami returns the host and header information of the container you connected to.
cat docker-compose.yml
version: '2'

services:
  whoami:
    image: emilevauge/whoami
    networks:
      - net1
    expose:
      - "80"
    labels:
      - "traefik.frontend.rule=Host:traefik.domain.com"
      - "traefik.port=80"
      - "traefik.backend=whoami-traefiktest"
      - "traefik.backend.loadbalancer.sticky=true"

  lb_proxy:
    image: traefik
    command: --web --docker --logLevel=DEBUG
    networks:
      - net1
    ports:
      - "90:80"
      - "7080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml

networks:
  net1:

Now, just run the below to bring up the containers.
docker-compose up -d
To access the dashboard, just connect to port 7080, like the below.
# Dashboard
http://localhost:7080

# Frontend access port :90 - forwards to backend port :80
curl -H "Host:traefik.domain.com" http://localhost:90
Notes on the above configuration:
  • The above configuration will pull the Traefik Docker image (if not already local)
  • Configures and starts Traefik, with the dashboard published on port 7080 (forwarded to container port 8080)
  • Creates a Docker network called traefiktest_net1
  • Pulls the emilevauge/whoami image (if not already local)
  • Configures and starts 1 instance of whoami listening on container port 80, reachable through Traefik on port 90
  • Last, it sets the sessions to be sticky (this can be removed if it's not needed)
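The Docker labels drive the same frontend/backend model seen earlier in rules.toml; roughly, they are equivalent to this file-provider configuration (a hypothetical sketch; the host rule is an example):

```toml
[backends]
  [backends.whoami-traefiktest]
    [backends.whoami-traefiktest.loadbalancer]
      method = "wrr"
      sticky = true

[frontends]
  [frontends.whoami]
  backend = "whoami-traefiktest"
    [frontends.whoami.routes.route1]
    rule = "Host:traefik.domain.com"
```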
Now, to test from the command line, you can just run the below. Note: the Host header has to match the frontend rule.
curl -H "Host:traefik.domain.com" http://localhost:90
Hostname: fc13746ee0a3
IP: ::1
IP: fe80::42:acff:fe16:4
GET / HTTP/1.1
User-Agent: curl/7.45.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-Proto: http
X-Forwarded-Server: 76a8357916ae
You can also test this by pointing a web browser at the frontend URL. To bring up another instance, just run the below. Tip: Change the 2 to whatever number you would like to scale to.
docker-compose scale whoami=2
Now, let's check the docker-compose status; just run the below.
docker-compose ps
           Name                         Command               State                     Ports
traefiktest_lb_proxy_1       /traefik --web --docker -- ...   Up      0.0.0.0:90->80/tcp, 0.0.0.0:7080->8080/tcp
traefiktest_whoami_1         /whoamI                          Up      80/tcp
traefiktest_whoami_2         /whoamI                          Up      80/tcp
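With a second instance up, you can confirm that more than one container is actually answering by counting distinct Hostname lines across repeated requests. A minimal sketch; the replies below are simulated samples (against the live stack you would loop curl with the Host header, as in the comment):

```shell
#!/bin/sh
# Live version (hypothetical host rule):
#   for i in $(seq 20); do curl -sH "Host:traefik.domain.com" http://localhost:90; done
# Simulated whoami replies, one "Hostname:" line per answering container:
replies="Hostname: fc13746ee0a3
Hostname: a1b2c3d4e5f6
Hostname: fc13746ee0a3
Hostname: a1b2c3d4e5f6"

# Count how many distinct containers served the requests.
printf '%s\n' "$replies" | sort -u | grep -c .   # prints 2
```

If the count stays at 1, sticky sessions (or a single instance) are the usual cause.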
Of course, you can also check the great Traefik Web UI; I included a screenshot below. To generate traffic for a test, just run the below. Note: I know you could use Apache ab for the test, but this was meant as a quick test. Keep in mind it will run at full speed on a single core (it is not a parallelized test).
while :; do curl -sH "Host:traefik.domain.com" http://localhost:90 > /dev/null 2>&1; done
Below is a screenshot of the health monitor in Traefik. Note: To see the load balancing between containers, use the curl option and not web access; since the sessions are set to sticky, a browser will always go to the same host. To clean up and remove the environment, just run the below.
docker-compose stop && docker-compose rm # (press y)

Manually scaling the environment

Above, I used the docker-compose scale option to scale the web application; below I will show you the manual option to scale the environment. To scale out and add another Docker container, just run the below. Note: You might need to update the name to an unused name in the below configuration.
docker run -d  \
--name traefiktest_whoami_3 \
--label "traefik.port=80" \
--label "traefik.frontend.rule=Host:traefik.domain.com" \
--label "traefik.backend=whoami-traefiktest" \
--label "traefik.backend.loadbalancer.sticky=true" \
--network traefiktest_net1 \
emilevauge/whoami

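If you script this, the next free instance name can be computed rather than guessed. A small sketch; the container list is a simulated `docker ps --format '{{.Names}}'` output matching the article:

```shell
#!/bin/sh
# Live version: names=$(docker ps --format '{{.Names}}')
names="traefiktest_lb_proxy_1
traefiktest_whoami_1
traefiktest_whoami_2"

# Highest existing whoami index, then +1 for the next free name.
last=$(printf '%s\n' "$names" | sed -n 's/^traefiktest_whoami_\([0-9][0-9]*\)$/\1/p' | sort -n | tail -1)
echo "traefiktest_whoami_$((last + 1))"   # prints traefiktest_whoami_3
```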
Using in a Docker Swarm environment

If you are trying to configure Traefik in a Docker Swarm environment, you will need to add the options below to your docker-compose.yml file.
# Add this to the command options
--docker.swarmmode --constraint 'node.role==manager'

# And add this below the volumes: option.
    deploy:
      restart_policy:
        condition: any
        delay: 2s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

Traefik REST API capability

Traefik can also be manipulated with an API. Below is a simple example of returning the health status.
curl -s "http://localhost:7080/health" | jq .
{
  "pid": 1,
  "uptime": "42h12m49.098541271s",
  "uptime_sec": 151969.098541271,
  "time": "2017-07-14 16:00:17.55684292 +0000 UTC",
  "unixtime": 1500048017,
  "status_code_count": {},
  "total_status_code_count": {
    "200": 1523,
    "404": 1,
    "500": 1
  },
  "count": 0,
  "total_count": 1525,
  "total_response_time": "17.287156001s",
  "total_response_time_sec": 17.287156001,
  "average_response_time": "11.33584ms",
  "average_response_time_sec": 0.01133584
}
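The health counters are self-consistent: average_response_time_sec is total_response_time_sec divided by total_count. A quick sketch that recomputes it from a captured /health response, trimmed to the two fields needed (against a live Traefik you would pipe `curl -s http://localhost:7080/health` instead):

```shell
#!/bin/sh
# Captured sample matching the /health output above.
cat > health.json <<'EOF'
{"total_count": 1525, "total_response_time_sec": 17.287156001}
EOF

# average = total_response_time_sec / total_count
awk -F'[:,}]' '{for (i = 1; i < NF; i++) {
                  if ($i ~ /"total_count"/)             c = $(i + 1)
                  if ($i ~ /"total_response_time_sec"/) t = $(i + 1)
                }}
               END {printf "%.8f\n", t / c}' health.json   # prints 0.01133584
```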
Below is another API example, returning all running services.
curl -s "http://localhost:7080/api" | jq .
{
  "docker": {
    "backends": {
      "backend-lb-proxy-traefiktest": {
        "servers": {
          "server-traefiktest_lb_proxy_1": {
            "url": "",
            "weight": 0
          }
        },
        "loadBalancer": {
          "method": "wrr"
        }
      },
      "backend-whoami-traefiktest": {
        "servers": {
          "server-traefiktest_whoami_1": {
            "url": "",
            "weight": 0
          },
          "server-traefiktest_whoami_3": {
            "url": "",
            "weight": 0
          }
        },
        "loadBalancer": {
          "method": "wrr"
        }
      }
    },
    "frontends": {
      "frontend-Host-traefik-domain-com": {
        "entryPoints": [
          "http"
        ],
        "backend": "backend-whoami-traefiktest",
        "routes": {
          "route-frontend-Host-traefik-domain-com": {
            "rule": "Host:traefik.domain.com"
          }
        },
        "passHostHeader": true,
        "priority": 0,
        "basicAuth": []
      },
      "frontend-Host-lb-proxy-traefiktest": {
        "entryPoints": [
          "http"
        ],
        "backend": "backend-lb-proxy-traefiktest",
        "routes": {
          "route-frontend-Host-lb-proxy-traefiktest": {
            "rule": "Host:lb-proxy.traefiktest."
          }
        },
        "passHostHeader": true,
        "priority": 0,
        "basicAuth": []
      }
    }
  }
}
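When scripting against the API, the backend names can be pulled out even without jq. A small sketch using a trimmed sample of the output above (the live version is noted in the comment):

```shell
#!/bin/sh
# Live version: curl -s "http://localhost:7080/api" > api.json
cat > api.json <<'EOF'
{"docker": {"backends": {"backend-lb-proxy-traefiktest": {}, "backend-whoami-traefiktest": {}}}}
EOF

# List each backend name once.
grep -o '"backend-[a-z-]*"' api.json | tr -d '"' | sort -u
# → backend-lb-proxy-traefiktest
# → backend-whoami-traefiktest
```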
For a full list of API capabilities and options, check out the Traefik API pages. Last, I would like to mention one more of the many great features in Traefik: it fully integrates with ACME SSL certificates; for full details, visit the Traefik web site. What do you use as a proxy for micro-services? Let me know in the comments below. You might also like: other articles related to Docker, Kubernetes and micro-services.