



High Availability For A Private Docker Registry – Architecture
Below is an update on how to build a Private Docker Registry; the previous article is available here – Gotchas / Tips Creating Your Own Private Docker Registry With Self Signed Certificate.
High Availability Docker Registry Diagram
One of the considerations when configuring your private Docker (cloud) environment, and an often overlooked component, is the high availability of the Docker Registry. The Docker Registry in most cases stores all your images, code, etc.; downtime of your private Docker Registry can have a critical effect on your business processes.
There are multiple options for configuring HA for a private Docker Registry: commercially available options like Docker Enterprise or CoreOS, as well as free open source alternatives.
Below I will show you one of many ways you can configure your own Private Docker Registry with a high availability design.
The following components will be used in the configuration:
- Installation of Container OS (CoreOS)
- Docker Swarm Configuration
- Installation and configuration of a 4-node HA Minio/S3 setup (minio.io)
- Load Balance / Reverse Proxy your Docker Registry
- The configuration below was built using VirtualBox, but the same applies to a physical hardware setup.
- There are a number of other S3 (object store) solutions like OpenIO, Scality, Ceph, etc., but not all of them are free… 😉 ; plus, in my testing Minio just worked awesome.
Docker OS Installation / configuration
You can install and configure Docker on almost any Linux distro, but there are Linux distros that do just one thing: run containers in the most optimal way, usually built from the ground up for that purpose. In the configuration below I will be using CoreOS, as I find it to be an optimal option; however, feel free to use what serves you best. To be fair, there are a few other options, like RancherOS, mentioned here – Managing Docker On Ubuntu 17.04 Using Rancher Or Portainer.
Docker OS Installation
First, let's download the CoreOS ISO; I used the Alpha channel. You can download the latest release from here, or all channels from here.
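If you would rather fetch the ISO from the command line, the download path below is the one the Alpha channel used at the time of writing (treat it as an assumption and check the download page if it has moved):

```bash
# Fetch the current Alpha channel CoreOS ISO
wget https://alpha.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso
```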
As mentioned above, I used VirtualBox for the CoreOS installation. If you are using VirtualBox, below are a few helpful considerations. First, create a private VirtualBox NAT network by going to File > Preferences… > Network.
Click Add new NAT network.

- Click on Port Forwarding

Once done, it should look something like this.

Now, select your guest VM and click Settings > Network.
Change the network attachment to NAT Network.

- While in Settings > USB, un-check Enable USB Controller; we will not use it
- Also, under Audio un-check Enable Audio
- Under System, make sure to give the VM at least 2048 MB of memory
- Last, under Storage, select the (empty) optical drive and attach the CoreOS ISO you downloaded. A CLI equivalent of this VirtualBox setup is sketched right after this list.
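If you prefer to script the VirtualBox side instead of clicking through the GUI, the sketch below creates the same NAT network plus SSH port-forwarding rules (host ports 2011-2013 map to port 22 on each node, matching the ssh -p 2011 example later on). The network name NatNetwork and the VM name coreos1 are illustrative assumptions:

```bash
# Create a NAT network matching the 10.0.2.0/24 addressing used in this article
VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable

# Forward host ports 2011-2013 to SSH (22) on each CoreOS node
VBoxManage natnetwork modify --netname NatNetwork \
  --port-forward-4 "ssh-coreos1:tcp:[]:2011:[10.0.2.11]:22"
VBoxManage natnetwork modify --netname NatNetwork \
  --port-forward-4 "ssh-coreos2:tcp:[]:2012:[10.0.2.12]:22"
VBoxManage natnetwork modify --netname NatNetwork \
  --port-forward-4 "ssh-coreos3:tcp:[]:2013:[10.0.2.13]:22"

# Attach a guest VM (here named coreos1) to the NAT network
VBoxManage modifyvm coreos1 --nic1 natnetwork --nat-network1 NatNetwork
```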
CoreOS configuration and install to disk
After the OS boots it will look something like the below.
First, we need to generate a password hash; do so by running the below.

```bash
openssl passwd -1 > /tmp/passwd
```

Next, we need to generate an SSH key for later use.
```bash
ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /tmp/core_ssh-key
```

Next, we need to get an etcd discovery token; you do so by going to discovery.etcd.io/new, like the below (a curl sketch follows the notes). Tip: You only need this if you will be using etcd.
```bash
# For a 3 node cluster (which we will use), request the URL below
https://discovery.etcd.io/new?size=3
```

Save the etcd discovery URL output; it will be added to cloud_config.yml. Notes on the cloud_config.yml below:
- The first line in cloud_config.yml must start with #cloud-config
- Replace usera's passwd with the openssl command output
- Replace the ssh-rsa key with the ssh-keygen output
- To work as root, just run sudo su - or prefix your command with sudo
- The 3 VLAN tags are absolutely not a requirement; feel free to modify them to your needs
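If you prefer to fetch the discovery token from a shell rather than a browser, a minimal sketch with curl (assuming outbound HTTPS access, possibly via the proxy configured later) is:

```bash
# Request a discovery URL sized for a 3 node etcd cluster
curl -w "\n" 'https://discovery.etcd.io/new?size=3'
# Paste the returned URL into the discovery: field of cloud_config.yml
```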
```bash
# vi the file
vi cloud_config.yml
# Now read in the ssh or passwd file
:r /tmp/core_ssh-key
```

I usually create the cloud_config.yml locally on my host (or the parent host in a VBox setup), then from the new guest I scp the newly generated cloud_config.yml, something like the below.

```bash
coreos # scp remote_host:/tmp/cloud_config.yml .
```
The table below lists the names and IP addresses used in this configuration (feel free to replace them to fit your needs).
CoreOS Cluster IP Addresses

| Name | IP Address |
|---|---|
| coreos1 | 10.0.2.11 |
| coreos2 | 10.0.2.12 |
| coreos3 | 10.0.2.13 |
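The no_proxy line in the cloud-config below references these host names; if they are not in your DNS, you may want to make them resolvable on each node. A hypothetical /etc/hosts style mapping (domain.com is the placeholder domain used throughout this article):

```
10.0.2.11  coreos1 coreos1.domain.com
10.0.2.12  coreos2 coreos2.domain.com
10.0.2.13  coreos3 coreos3.domain.com
```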
A full example of the first node's cloud_config.yml is below; make sure to replace the IP addresses per the table above for each of the 3 nodes.
```yaml
#cloud-config
hostname: coreos1
ssh_authorized_keys:
  - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7ChcV4YlUBFoZwhTMHntBmXuoI12wf//C/2791gyay/8ESRYD8H8TrKbUS+wKVBounkaXgaJBJYOdc9kt8NIFj7wCzRDBDvojjH5yckMMfaghuSoCHuIQ/T1h3DDC9IRl8MkXBLDe4uYY2ZDxDWTHhKrsAsBVrxu6lfD0VS3N1T08whBcmQZcaK6iRmWfW+eR4rCX/7o9tzYcK2hypanW+yf/lFVWGvwhhVafmUxP4bbv11Rf4ckvtVRgVA4pbXcuoeM8mQLOIY5qNmFoRwaYUyQbFzkYOUlr1OPgPSpftG6L/sun284ERbc+bfg8BAcADrLHs7TreXOw5CIjB8oN root@dc1-os4"
users:
  - name: usera
    passwd: $1$L0gy00B2$ChBC93/KC6BO5zOXZ47gJ1
    groups:
      - sudo
      - docker
coreos:
  etcd2:
    discovery-proxy: http://10.10.10.50:3172
    discovery: https://discovery.etcd.io/f72e8796fc47444fb6dde7a04db50257
    advertise-client-urls: http://10.0.2.11:2379
    initial-advertise-peer-urls: http://10.0.2.11:2380
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://10.0.2.11:2380,http://10.0.2.11:7001
  fleet:
    public-ip: 10.0.2.11
    metadata: region=dc1,public_ip=10.0.2.11
  flannel:
    interface: 10.0.2.11
  units:
    - name: etcd2.service
      command: start
      # See issue: https://github.com/coreos/etcd/issues/3600#issuecomment-165266437
      drop-ins:
        - name: "timeout.conf"
          content: |
            [Service]
            TimeoutStartSec=0
    - name: fleet.service
      command: start
    - name: date.service
      content: |
        [Unit]
        Description=Updates date and time
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/sh -c '/usr/sbin/ntpdate time.domain.com'
    - name: date.timer
      enable: true
      content: |
        [Unit]
        Description=Run date.service every 1 day
        [Timer]
        OnBootSec=5min
        OnCalendar=*-*-* 00:00:00
        [Install]
        WantedBy=multi-user.target
    # Network configuration should be here, e.g:
    # https://serverfault.com/questions/654586/coreos-bare-metal-vlan-networking
    - name: 00-eno0.network
      runtime: true
      content: |
        [Match]
        Name=enp0s3
        [Network]
        VLAN=v3000
        VLAN=v3012
        VLAN=v3100
        Address=10.0.2.11/24
        Gateway=10.0.2.2
        DNS=8.8.8.8
        DNS=8.8.4.4
        Domains=domain.com
        Search=domain.com
    - name: 00-v3000.netdev
      runtime: true
      content: |
        [NetDev]
        Name=v3000
        Kind=vlan
        [VLAN]
        Id=3000
    - name: 00-v3012.netdev
      runtime: true
      content: |
        [NetDev]
        Name=v3012
        Kind=vlan
        [VLAN]
        Id=3012
    - name: 00-v3100.netdev
      runtime: true
      content: |
        [NetDev]
        Name=v3100
        Kind=vlan
        [VLAN]
        Id=3100
    - name: 11-v3000.network
      runtime: true
      content: |
        [Match]
        Name=v3000
        [Network]
        Address=10.90.0.11/22
        Gateway=10.90.0.1
    - name: 11-v3012.network
      runtime: true
      content: |
        [Match]
        Name=v3012
        [Network]
        Address=10.90.12.11/22
        Gateway=10.90.12.1
    - name: 11-v3100.network
      runtime: true
      content: |
        [Match]
        Name=v3100
        [Network]
        Address=10.90.100.11/23
        Gateway=10.90.100.1
    # - name: 00-eno2.network
    #   runtime: true
    #   content: "[Match]\nName=eno2\n\n[Network]\nDHCP=yes\n\n[DHCP]\nUseMTU=9000\n"
    - name: 60-docker-wait-for-flannel-config.conf
      content: |
        [Unit]
        After=flanneld.service
        Requires=flanneld.service
        [Service]
        Restart=always
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker Socket for the API
        [Socket]
        ListenStream=2375
        Service=docker.service
        BindIPv6Only=both
        [Install]
        WantedBy=sockets.target
    - name: docker.service
      drop-ins:
        - name: 50-insecure-registry.conf
          content: |
            [Service]
            Environment=DOCKER_OPTS='--insecure-registry="10.0.2.0/24"'
    - name: settimezone.service
      command: start
      content: |
        [Unit]
        Description=Set the time zone
        [Service]
        ExecStart=/usr/bin/timedatectl set-timezone America/New_York
        RemainAfterExit=yes
        Type=oneshot
    - name: flanneld.service
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.0.2.0/24", "Backend": {"Type": "vxlan"}}'
write_files:
  - path: "/etc/motd"
    permissions: "0644"
    owner: "root"
    content: |
      Just another day!!
  - path: /etc/environment
    owner: root:root
    permissions: 0644
    content: |
      HTTP_PROXY=http://10.10.10.50:3172
      HTTPS_PROXY=http://10.10.10.50:3172
      http_proxy=http://10.10.10.50:3172
      https_proxy=http://10.10.10.50:3172
      no_proxy=localhost,127.0.0.0/8,127.0.0.1,::1,10.0.2.11,10.0.2.12,10.0.2.13,coreos1,coreos2,coreos3,coreos1.domain.com,coreos2.domain.com,coreos3.domain.com,/var/run/docker.sock
      COREOS_PRIVATE_IPV4=10.0.2.11
  - path: /etc/systemd/system/docker.service.d/environment.conf
    owner: root:root
    permissions: 0644
    content: |
      [Service]
      EnvironmentFile=/etc/environment
  - path: /etc/systemd/timesyncd.conf
    content: |
      [Time]
      NTP=time.domain.com
  - path: /etc/profile.d/aliases.sh
    content: |
      dps () { docker ps -a ; }
      dl () { docker logs -f $1 & }
      drm () { docker stop $1 && docker rm $1; }
```
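Since the three node files differ only in the hostname and node-specific addresses, you can stamp out the coreos2 and coreos3 configs from the coreos1 file. The sketch below is a convenience, not part of the original procedure; it rewrites only the patterns shown (the optional VLAN Address= lines, if you keep them, need the same treatment), so review the output by hand:

```bash
# Generate cloud_config files for coreos2/coreos3 from the coreos1 template
for n in 2 3; do
  sed -e "s/^hostname: coreos1/hostname: coreos${n}/" \
      -e "s#http://10\.0\.2\.11:#http://10.0.2.1${n}:#g" \
      -e "s/public-ip: 10\.0\.2\.11/public-ip: 10.0.2.1${n}/" \
      -e "s/public_ip=10\.0\.2\.11/public_ip=10.0.2.1${n}/g" \
      -e "s/interface: 10\.0\.2\.11/interface: 10.0.2.1${n}/" \
      -e "s#Address=10\.0\.2\.11/24#Address=10.0.2.1${n}/24#" \
      -e "s/COREOS_PRIVATE_IPV4=10\.0\.2\.11/COREOS_PRIVATE_IPV4=10.0.2.1${n}/" \
      cloud_config.yml > cloud_config_coreos${n}.yml
done
```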
Tip: You can validate the cloud config by going to the CoreOS validate page.
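If you have a booted CoreOS system handy, coreos-cloudinit also shipped a -validate flag you can try locally (hedged: availability may vary by release):

```bash
# Validate cloud-config syntax locally (flag availability depends on the CoreOS release)
coreos-cloudinit -validate -from-file=cloud_config.yml
```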
As part of the installation process, CoreOS will fetch the latest release and install it.
For that to work we need to set a proxy (if you are behind a proxy); you do so by running the below.

```bash
export http_proxy=http://10.10.10.50:3172
export https_proxy=http://10.10.10.50:3172
```

Finally, we are ready to install the OS to disk; do so by running the below.
```bash
coreos-install -d /dev/sda -C alpha -c cloud_config.yml
# Once the installation has completed you need to reboot; if on VBox, make sure
# to eject the CD first, then reboot by typing reboot.
reboot
```

Tip: To modify configuration data (what was specified in cloud_config.yml) after the OS is installed, you will need to make the modification in /var/lib/coreos-install/user_data, since /, /usr, etc. are read-only. The network configuration is located in /etc/systemd/network/static.network.
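A sketch of that post-install edit flow (the coreos-cloudinit re-apply step is an assumption; a reboot is the safest way to pick up changes):

```bash
# Edit the persisted cloud-config
sudo vi /var/lib/coreos-install/user_data
# Re-run cloud-init against it, or simply reboot
sudo coreos-cloudinit --from-file=/var/lib/coreos-install/user_data
sudo systemctl restart systemd-networkd
```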
General Helpful Tips
After the reboot, log in to the VBox guest (or physical host). If the VBox network was configured correctly, the below should work.

```bash
ssh -p 2011 usera@localhost
```
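To avoid remembering port numbers, a hypothetical ~/.ssh/config stanza on your workstation could map each node (the IdentityFile path is wherever you kept the private key generated earlier):

```
Host coreos1
    HostName localhost
    Port 2011
    User usera
    IdentityFile ~/.ssh/core_ssh-key
```

With that in place, ssh coreos1 replaces the -p 2011 form; repeat for coreos2 (port 2012) and coreos3 (port 2013).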
To verify the system is working as expected, let's check a few things; the below should all check out green.

```bash
systemctl status docker.service
systemctl status -r etcd2
# For full logs
journalctl -b -u etcd2
```

A few examples related to etcd (etcd2) verification:
```
coreos1 ~ # etcdctl cluster-health
member 5709cd63b00424cd is healthy: got healthy result from http://10.0.2.13:2379
member 79977d7e3f74f2f1 is healthy: got healthy result from http://10.0.2.12:2379
member fdd18af27fae07b6 is healthy: got healthy result from http://10.0.2.11:2379
cluster is healthy

coreos1 ~ # curl -s http://localhost:4001/v2/members | jq '.[] | .[].peerURLs '
[
  "http://10.0.2.13:2380"
]
[
  "http://10.0.2.12:2380"
]
[
  "http://10.0.2.11:2380"
]

coreos1 ~ # curl http://127.0.0.1:2379/v2/stats/leader | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   442  100   442    0     0  43984      0 --:--:-- --:--:-- --:--:-- 63142
{
  "leader": "79977d7e3f74f2f1",
  "followers": {
    "5709cd63b00424cd": {
      "latency": {
        "current": 0.002165,
        "average": 0.005032049314520726,
        "standardDeviation": 0.2289442159518092,
        "minimum": 0.000454,
        "maximum": 446.6701
      },
      "counts": {
        "fail": 5,
        "success": 3957232
      }
    },
    "fdd18af27fae07b6": {
      "latency": {
        "current": 0.001406,
        "average": 0.0046639311869352265,
        "standardDeviation": 0.016926932297001148,
        "minimum": 0.000465,
        "maximum": 8.367956
      },
      "counts": {
        "fail": 0,
        "success": 3615142
      }
    }
  }
}

coreos1 ~ # etcdctl member list
5665b37dfe704080: name=f7c35cd9952e4ecfb8a1a455c545bf12 peerURLs=http://10.0.2.13:2380 clientURLs=http://10.0.2.13:2379 isLeader=false
8ddcb7b548390858: name=910cc3158f5346b4a127254db9881a83 peerURLs=http://10.0.2.11:2380 clientURLs=http://10.0.2.11:2379 isLeader=false
deed1e4149e9f6b2: name=90fb31cff19b4bb589387b344a70f189 peerURLs=http://10.0.2.12:2380 clientURLs=http://10.0.2.12:2379 isLeader=true

coreos1 ~ # fleetctl list-machines
MACHINE     IP          METADATA
90fb31cf... 10.0.2.12   public_ip=10.0.2.12,region=dc1
910cc315... 10.0.2.11   public_ip=10.0.2.11,region=dc1
f7c35cd9... 10.0.2.13   public_ip=10.0.2.13,region=dc1

coreos1 ~ # fleetctl list-machines --full=true
MACHINE                             IP          METADATA
90fb31cff19b4bb589387b344a70f189    10.0.2.12   public_ip=10.0.2.12,region=dc1
910cc3158f5346b4a127254db9881a83    10.0.2.11   public_ip=10.0.2.11,region=dc1
f7c35cd9952e4ecfb8a1a455c545bf12    10.0.2.13   public_ip=10.0.2.13,region=dc1

etcdctl ls -r /

etcdctl set /coreos.com/network/config '{"Network": "10.0.2.0/24"}'
etcdctl get /coreos.com/network/config
```
To test etcd2 keys.
```bash
coreos1 ~ # etcdctl set /message "Hello world"
# or
coreos1 ~ # curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello world"
coreos1 ~ # etcdctl get /message
# or
coreos1 ~ # curl -L http://127.0.0.1:2379/v2/keys/message
```

If everything checks out (green) and works properly, we can move on to the next section, Docker Swarm configuration, and then finally to the Docker Registry setup; read all of this and more in part 2. Like what you're reading? Give it a thumbs up by rating the article. You might also like – other articles related to Docker and Kubernetes / micro-services.