High Availability For A Private Docker Registry – Architecture

This is part two of how to configure a Docker Registry in a High Availability configuration. To read part one click here.

High Availability Docker Registry Diagram

Docker Swarm Configuration on CoreOS

To configure Docker Swarm you will need a minimum of 3 nodes; below I will use the names/IPs listed in part 1. To initialize the Docker Swarm cluster, just run the below on the first node, coreos1.
# Use if the server has multiple interfaces
# docker swarm init --advertise-addr 10.0.2.11
docker swarm init
Next, run the below on the other two nodes (coreos2 and coreos3). Replace the token with your Docker Swarm token.
docker swarm join \
--token SWMTKN-1-58uz09763fitew2samqtowgse75dc9px7gxk2qokf5uc5mu9f2-20ve9z6nj34o202ybf9tiqk3j \
10.0.2.11:2377
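Tip: If you ever need the token again, you can re-print the full join command at any time from a manager node (coreos1).
docker swarm join-token worker
docker swarm join-token manager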
For fail-over to work properly in recent versions, run the below from a manager node (coreos1); this is needed to allow the other two nodes to become managers in case the first node fails.
docker node promote coreos2
docker node promote coreos3
Now, let's verify the cluster functionality.
docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
8c6qmlmo9uym11obvyo3irgib     coreos1             Ready               Active              Reachable
ilmw12fa1kdyzsi5nrmfpi8bk     coreos3             Ready               Active              Reachable
ykafu6ww70nbklvrvgpc5qoiy *   coreos2             Ready               Active              Leader
Next, create a Docker Swarm network.
docker network create \
--driver overlay \
--subnet 10.20.1.0/24 \
--opt encrypted \
services
Next, create a Docker Swarm service.
docker service create \
--replicas 2 \
--name nginx \
--network services \
--publish 80:80 \
nginx
Now, verify the cluster network and service.
docker service ls
docker service ps nginx

docker network ls
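Because of the Swarm routing mesh, the published port 80 should answer on every node, not only the nodes running an nginx replica. A quick check, using the node IPs from part 1:
curl -I http://10.0.2.11
curl -I http://10.0.2.12
curl -I http://10.0.2.13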
Helpful Docker examples
A Docker Swarm busybox example.
docker service create \
--name busybox \
--network services \
busybox \
sleep 3000
Verify the busybox service in the Docker Swarm.
docker service ps busybox
Run a shell inside the busybox container that is part of the Docker Swarm (your task ID will differ).
docker exec -it busybox.1.4czfvm7771qy8lmbiz2bv4nbw /bin/sh
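While inside the busybox shell, you can also confirm the overlay network is working by reaching the nginx service through its DNS name; a minimal sketch (run inside the container, both services are on the services network):
wget -qO- http://nginx | head -5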
Inspect the nginx service details.
docker service inspect --pretty nginx
Scale the nginx service as part of Docker Swarm.
docker service scale nginx=3
docker service ps nginx
Destroying the Docker Swarm cluster
To remove nodes from the cluster, first demote the extra managers.
docker node demote coreos2
docker node demote coreos3
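Then, on coreos2 and coreos3 themselves, leave the swarm.
docker swarm leave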
Last node to leave the cluster – on coreos1.
docker swarm leave --force

Installing Minio Object Store

To install/configure the Docker Registry in a High Availability architecture, we need to configure the back-end storage to be shared. One of the many options available is using AWS S3 as the back-end storage, and one of the free open-source vendors created an S3-like option. In my configuration I will be using the Minio back-end object store (available from minio.io); I found it to be an awesome option for an S3-like back-end storage.

Note: Just for extras, I will also use an NFS back-end shared directory store, helping keep the fail-over file system always available.

One of the requirements for Distributed Minio is running a minimum of a 4-node cluster (i.e. 4 Docker instances). Since I started my configuration with only 3 VBox VMs (as in part 1) I will continue that route, but I will configure 4 Docker instances to meet the requirement.

Note: Ideally, in a production environment you should have 4 physical nodes, or there are a few other options like two nodes with 2 or 4 physical disks each; for the full requirements, please visit the Distributed Minio documentation.

Enough talk, let's get to work. A few notes on the Minio setup.
  1. Minio, like S3, requires an access key and a secret key. I am using minio and minio123 as these keys; please make sure to change them in your environment.
  2. Minio by default listens on port 9000. In order not to have any issues if more than one Minio instance lands on the same host, you should make sure each instance publishes its own port; I used 9001, 9002, and so on.
  3. I used the directory /mnt/docker_reg/minio on all Minio instances: for instance 1 I will be using /mnt/docker_reg/minio/f1, for instance 2 it will be /mnt/docker_reg/minio/f2, and so on (see the sketch after this list).
  4. Also, normally /mnt/docker_reg/minio/ would be a local mount; for good speed an SSD would make sense. In my case I used /mnt/docker_reg/minio/ as an NFS client mount, and f1, f2, … are subdirectories.
  5. Lastly, I ran into issues with the official distributed Minio image. Until this bug-fix is integrated in the official Minio image I am using the nitisht/minio:swarm4 image, which works great.
  6. Update: In my most recent testing the minio/minio:edge Docker image also worked without any issues, and I updated docker-compose-secrets.yaml accordingly.
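As mentioned in note 3, each Minio instance gets its own subdirectory. Assuming /mnt/docker_reg/minio is already mounted (NFS in my case), you can create them on each node like so (using bash brace expansion):
mkdir -p /mnt/docker_reg/minio/f{1..4}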
One additional note: All participating Minio nodes need to have their system clocks synced. This was added as part of the cloud-config in part 1; if you are not using a cloud-config (CoreOS), you can create a scheduled job with systemd, something like the below. Create two files in /etc/systemd/system/. First, the date.service file.
[Unit]
Description=Updates date and time

[Service]
Type=oneshot
ExecStart=/usr/bin/sh -c '/usr/sbin/ntpdate time.bnh.com'
date.timer file.
[Unit]
Description=Run date.service every 1 day

[Timer]
OnBootSec=15min
OnCalendar=*-*-* 00:00:00

[Install]
WantedBy=multi-user.target
Now let's enable and start the timer.
systemctl enable date.timer
systemctl start date.timer
Verify the timer is working by running the below; the output should look something like this.
systemctl list-timers --all
NEXT                         LEFT         LAST                         PASSED    UNIT                         ACTIVATES
Wed 2017-09-27 19:19:10 EDT  4h 0min left Tue 2017-09-26 17:12:48 EDT  22h ago   systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.ser
Thu 2017-09-28 00:00:00 EDT  8h left      Wed 2017-09-27 14:51:49 EDT  27min ago date.timer                   date.service
Thu 2017-09-28 00:00:00 EDT  8h left      Wed 2017-09-27 00:00:00 EDT  15h ago   logrotate.timer              logrotate.service
Thu 2017-09-28 02:52:07 EDT  11h left     Wed 2017-09-27 14:52:07 EDT  26min ago rkt-gc.timer                 rkt-gc.service
n/a                          n/a          n/a                          n/a       update-engine-stub.timer     update-engine-stub.service

5 timers listed.
Next, create a Docker Stack file called docker-compose-secrets.yaml, like the one below.
version: '3.2'

services:
  minio1:
    image: minio/minio:edge
    volumes:
      - /mnt/docker_reg/minio/f1:/export
    ports:
      - "9001:9000"
    networks:
      - minio_distributed
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.minio == minio1
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 10
        window: 60s
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key

  minio2:
    image: minio/minio:edge
    volumes:
      - /mnt/docker_reg/minio/f2:/export
    ports:
      - "9002:9000"
    networks:
      - minio_distributed
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.minio == minio2
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 10
        window: 60s
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key

  minio3:
    image: minio/minio:edge
    volumes:
      - /mnt/docker_reg/minio/f3:/export
    ports:
      - "9003:9000"
    networks:
      - minio_distributed
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.minio == minio3
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 10
        window: 60s
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key

  minio4:
    image: minio/minio:edge
    volumes:
      - /mnt/docker_reg/minio/f4:/export
    ports:
      - "9004:9000"
    networks:
      - minio_distributed
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.minio == minio3
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 10
        window: 60s
    command: server http://minio1/export http://minio2/export http://minio3/export http://minio4/export
    secrets:
      - secret_key
      - access_key

networks:
  minio_distributed:
    driver: overlay
    attachable: true

secrets:
  secret_key:
    external: true
  access_key:
    external: true
Tip: Make sure to specify version 3.2+, as prior to that attachable: true was not supported and you will run into network issues later. The placement constraint lines in the configuration above are added to pin each instance to a node; if you don't mind Minio instances jumping around while nodes go up/down you can just omit those lines. Note that minio4 is constrained to the node labeled minio3 (coreos3), since there are only three nodes for four instances. Let's create the S3 secret keys (you don't want these in your Docker file for security reasons).
echo "minio" | docker secret create access_key -
echo "minio123" | docker secret create secret_key -
We are now going to add labels to the nodes. The labels are used as placement constraints; this will ensure that each Minio Docker instance stays on its node. Verify your labels; the default is empty unless you already added some labels.
docker node inspect coreos1
[..] snip
"Labels": {},
[..] snip
Create the constraint labels.
docker node update --label-add minio=minio1 coreos1
docker node update --label-add minio=minio2 coreos2
docker node update --label-add minio=minio3 coreos3
Verify the new labels were added to the system.
docker node inspect coreos1
[..] snip
            "Labels": {
                "minio": "minio1"
            },
[..] snip
Next, let's deploy the stack; you do so by running the below.
docker stack deploy --compose-file=docker-compose-secrets.yaml minio_stack
Check the minio_stack Docker logs; give it some time to come up…
while :; do docker logs `docker ps -a |grep minio1|head -1|awk '{print $1}'`; sleep 2; done
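You can also watch the stack converge with the standard stack commands:
docker stack services minio_stack
docker stack ps minio_stack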
Lastly, log in to the UI to create your bucket; enter your access keys.
http://your-host.domain.com:9002/minio/
Once logged in, click on the + sign, like the screenshot below, and create a bucket called docker. Next, allow read/write privileges for * by clicking on the 3 dots on the left side of the bucket name (of course, in production you should adjust this to your needs).
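If you prefer the command line over the UI, the Minio mc client can do the same; a sketch assuming mc is installed locally and using my example keys (the policy syntax varies between mc versions):
mc config host add myminio http://your-host.domain.com:9002 minio minio123
mc mb myminio/docker
mc policy public myminio/docker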

Docker Registry Installation and configuration

Redis installation for Cache

Before we start the Docker Registry installation, we need to install and configure Redis for caching (Redis is optional). Installing the Redis cache is simple; just run the below on every node that will later run the Registry, or you can use separate nodes for that if you like.
docker run -d --name="registryv2-redis" \
       --network minio_stack_minio_distributed \
       -p 6379:6379 \
       --restart="always" sameersbn/redis

Docker Registry Setup Using Redis Cache

Now, we are ready to install the Docker Registry. First, create a file called docker-registry-config.yml like the below.
version: 0.1
log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
loglevel: debug
storage:
  s3:
    accesskey: minio
    secretkey: minio123
    region: us-east-1
    regionendpoint: http://minio1:9001
    bucket: docker
    encrypt: false
    secure: true
    v4auth: true
    chunksize: 5242880
    rootdirectory: /
  cache:
    blobdescriptor: redis
  delete:
    enabled: true
  maintenance:
    uploadpurging:
      enabled: true
      age: 168h
      interval: 24h
      dryrun: false
    readonly:
      enabled: false
http:
  addr: :5000
  headers:
       X-Content-Type-Options: [nosniff]
  debug:
       addr: localhost:5001
redis:
    addr: registryv2-redis:6379
    db: 0
    dialtimeout: 10ms
    readtimeout: 10ms
    writetimeout: 10ms
    pool:
      maxidle: 16
      maxactive: 64
      idletimeout: 300s
For a complete reference of all Docker Registry attributes, click here. Next, run the below on all nodes running a Minio instance.
docker run -d --name=registry \
       --network minio_stack_minio_distributed \
       -p 5000:5000 \
       -v $(pwd)/docker-registry-config.yml:/etc/docker/registry/config.yml \
       --restart="always" \
       registry:latest
Tip: You can run the Docker Registry on as few as two nodes if that is enough for your workload; each Registry will access the local Minio instance. To verify the Docker Registry is working and using the Redis cache, run the below in one window.
docker exec -t -i `docker ps -a |grep redis|awk '{print $1}'|head -1` redis-cli monitor
In another window run the below. This will pull the hello-world image, then push it to your private Docker Registry; you should see a bunch of activity in the Redis window.
docker pull hello-world 
docker tag hello-world localhost:5000/hello-world-1.0
docker push localhost:5000/hello-world-1.0
docker pull localhost:5000/hello-world-1.0
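You can also confirm the push landed in the Registry via the standard v2 HTTP API; the output should list the hello-world-1.0 repository.
curl http://localhost:5000/v2/_catalog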
Another way to verify Redis is working is by looking in the Docker Registry logs; you should see something like the below (look for the word redis).
docker logs 148cc9088407 | less
time="2017-09-08T17:13:05.850504777Z" level=info msg="using redis blob descriptor cache" environment=staging go.version=go1.7.6 instance.id=560567ff-4c00-4552-b176-49a083734f87 service=registry version=v2.6.2
[..] snip
time="2017-09-08T17:13:21.595916263Z" level=info msg="redis: connect registryv2-redis:6379" environment=staging go.version=go1.7.6 instance.id=560567ff-4c00-4552-b176-49a083734f87 redis.connect.duration=3.395927ms service=registry version=v2.6.2
[..] snip
Now, let's verify the object storage is working; see the below screen capture. Important Note: The above configuration is not using SSL, nor is it using any Authentication/Authorization tokens for access control; these are definitely required in any production configuration. Click here – to configure a secure registry using SSL, Tokens and LDAP.
Another way to run the Docker Registry
If you are using CoreOS, you can also use fleetctl to create the Docker Registry; you do so by creating a file called registry.service with the below content.
[Unit]
Description=Private Docker Registry
After=docker.service

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10s
ExecStartPre=/usr/bin/docker pull registry:latest
ExecStart=/usr/bin/docker run --name=registry --network minio_stack_minio_distributed -p 5000:5000 -v /var/tmp/docker-registry-config.yml:/etc/docker/registry/config.yml registry:latest
ExecStop=/usr/bin/docker stop registry
ExecStopPost=/usr/bin/docker kill registry
ExecStopPost=/usr/bin/docker rm registry

[X-Fleet]
Global=true
Next, create the service on all nodes.
fleetctl start registry.service
Verify the service is working.
fleetctl  list-units
UNIT        MACHINE            ACTIVE    SUB
registry.service    3e0abc5b.../10.0.2.12    active    running
registry.service    4f72b720.../10.0.2.13    active    running
registry.service    ba90d46d.../10.0.2.11    active    running
To remove/destroy the unit just run the below.
fleetctl destroy registry.service
Docker Registry SSL and Authentication
I hope to update this soon, once I have a chance. In the meantime you can check out Gotchas / Tips: Creating Your Own Private Docker Registry With Self Signed Certificate.

Docker Registry High Availability And Load Balancing

The last piece of the Docker Registry needed for High Availability is adding a load balancer in front of all Docker Registry instances. There are many options you can use for load balancing; the simplest might be something like Traefik or Consul, especially if you already have such a setup in place. You can see the details in – Using Traefik As a Load Balancer / HTTP Reverse Proxy For Micro-Services or Using Consul for Service Discovery In Multiple Data Centers – Part 1. Other widely used options are Nginx or HAProxy.

Helpful Docker aliases I use

dps () { docker ps -a ; }
dl () { docker logs -f $1 & }
drm () { docker stop $1 && docker rm $1; }
In CoreOS you do so by adding the below to the write_files: section.
  - path: /etc/profile.d/aliases.sh
    content: |
      dps () { docker ps -a ; }
      dl () { docker logs -f $1 & }
      drm () { docker stop $1 && docker rm $1; }
In part 3, I discuss Securing A Private Docker Registry By Using SSL, Tokens, and LDAP. I hope you enjoyed reading the High Availability Docker Registry article; give it a thumbs up by rating it or by just providing feedback. You might also like – Other articles related to Docker, Kubernetes / micro-services.