Managing Docker On Ubuntu 17.04 Using Rancher Or Portainer


Installing And Managing Docker 17.03.1-ce On The Latest Ubuntu Beta

Updating Ubuntu to 17.04

First, I am going to upgrade Ubuntu from 16.10 to 17.04.
The requirements for the upgrade are below.

  • Install the update-manager-core package if it is not already installed
  • Make sure the Prompt line in /etc/update-manager/release-upgrades is set to normal
  • Run do-release-upgrade -d
  • The system will require a reboot once complete
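The steps above can be sketched as a small script. I have wrapped them in a function (the function name and the sed edit are my own wrapping of the steps, so review before running as root):

```shell
# Sketch of the upgrade steps above; run as root.
upgrade_to_1704() {
    # Install update-manager-core if it is not already installed
    apt-get -y install update-manager-core
    # Make sure the Prompt line is set to normal
    sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades
    # Start the release upgrade (the system will require a reboot once complete)
    do-release-upgrade -d
}
```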

Installing Docker 17.03.x On Ubuntu 17.04

apt-get -y install apt-transport-https ca-certificates curl
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu yakkety stable"

Modify /etc/apt/sources.list, changing zesty to yakkety.

# From
deb [arch=amd64] https://download.docker.com/linux/ubuntu zesty stable
# To
deb [arch=amd64] https://download.docker.com/linux/ubuntu yakkety stable
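If you prefer, the zesty-to-yakkety swap can be scripted. This helper (my own naming, not part of the original steps) takes the sources file as an argument so you can point it at /etc/apt/sources.list or a file under /etc/apt/sources.list.d/:

```shell
# Hypothetical helper: switch the Docker repo entry from zesty to yakkety in place.
switch_docker_channel() {
    sources_file="$1"   # e.g. /etc/apt/sources.list
    sed -i 's|ubuntu zesty stable|ubuntu yakkety stable|' "$sources_file"
}
# Usage: switch_docker_channel /etc/apt/sources.list
```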

Finally, update/install Docker.

apt-get update
apt-get -y install docker-ce

Verify the installation worked as expected and Docker is at version 17.03.1-ce.

docker -v
Docker version 17.03.1-ce, build c6d412e

Configure a proxy (if needed)

Note: If you need a proxy to access an outside network, create the below entries.

mkdir /etc/systemd/system/docker.service.d

cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://yourproxy.domain.com:port/"
Environment="NO_PROXY=localhost,127.0.0.1"

# Re-load the service
systemctl daemon-reload
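Creating that drop-in can be wrapped in a small function so the same snippet works on multiple hosts. The function name, arguments, and example port below are my own; the file contents match the http-proxy.conf above. Run systemctl daemon-reload afterwards as shown.

```shell
# Hedged sketch: write the Docker systemd proxy drop-in described above.
write_docker_proxy_conf() {
    proxy_url="$1"                                      # e.g. http://yourproxy.domain.com:3128
    dropin_dir="${2:-/etc/systemd/system/docker.service.d}"
    mkdir -p "$dropin_dir"
    cat > "$dropin_dir/http-proxy.conf" <<EOF
[Service]
Environment="HTTP_PROXY=$proxy_url"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
}
# Usage: write_docker_proxy_conf http://yourproxy.domain.com:3128
```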

Verify the settings, then restart the service if all looks good.

systemctl show --property Environment docker
systemctl restart docker

Working with Docker

Now that Docker is working, let's first pull the Docker Registry and some images.

Pulling the Docker Registry and Images

docker pull distribution/registry:master
# Simple base image (use for bash), etc.
docker pull ubuntu

# Image with ssh access.
docker pull rastasheep/ubuntu-sshd

Let's create and start an Ubuntu SSH container.

docker run -ti -d --name=ssh-test -p 2022:22 rastasheep/ubuntu-sshd
ec8b2c340d799674b4303342cc22d17c377d94cfbb6e132b734a97251381d746

# Container status
docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
ec8b2c340d79        rastasheep/ubuntu-sshd              "/usr/sbin/sshd -D"      21 hours ago        Exited (0) 21 hours ago                                        ssh-test

# Stop & remove Container
docker stop ec8b2c340d79
docker rm ec8b2c340d79

Now, let's create a Docker container with CPU/Memory resource controls.

Note: The below ssh-test container is set with a resource limit of 1 CPU core, and memory is limited to 10MB.

docker run -ti -d --name=ssh-test -p 2022:22 --cpus 1 -m "10m" rastasheep/ubuntu-sshd
ec8b2c340d799674b4303342cc22d17c377d94cfbb6e132b734a97251381d746

Now, let's test these resource limits. To stress the Docker container's CPU and memory, just run the below and watch the docker stats.
Run the below to stress the CPU (reference: how to CPU hog load test).

SSH to localhost on port 2022 (the Docker port exposed above).
To stress the CPU, create a file with the below content (don't worry, it won't blow up on you 🙂 ).

ssh localhost -p 2022
root@ec8b2c340d79:~# cat /tmp/stresscpu.pl
#!/usr/bin/perl
 
print "Eating the CPUs\n";
 
foreach $i (1..16) {
    $pid = fork();
    last if $pid == 0;
    print "Created PID $pid\n";
}
 
while (1) {
    $x++;
}
root@ec8b2c340d79:~#  chmod +x /tmp/stresscpu.pl

Run the CPU stress test; when completed, just hit CTRL+C to terminate the process.
Check top or docker stats to see it using only 1 core.

root@ec8b2c340d79:~# /tmp/stresscpu.pl
Eating the CPUs
Created PID 44
Created PID 45
Created PID 46
[..] snip
^C # when done hit CTRL+C.

Now, let's test the memory resource control.

Run the below to lock about 7-9MB of memory (roughly 98% of the container's ~10MB limit).

ssh localhost -p 2022
root@ec8b2c340d79:~#

# Lock about 7-9MB of memory - watch it in docker stats.
perl -e '$a = "A" x 2_000_000; sleep 3600' &

# But if you try to lock more than 10MB, the process will get killed.
perl -e '$a = "A" x 3_000_000; sleep 3600' &
[1] 24
root@ec8b2c340d79:~# 
[1]+  Killed                  perl -e '$a = "A" x 3_000_000; sleep 3600'

Now, let's check the container CPU and memory resource status.

# Without stress test
docker stats
CONTAINER           CPU %               MEM USAGE / LIMIT    MEM %               NET I/O             BLOCK I/O           PIDS
ec8b2c340d79        0.00%               4.047 MiB / 10 MiB   40.47%              31.5 kB / 25.4 kB   1.04 MB / 0 B       3

# With stress test
docker stats
CONTAINER           CPU %               MEM USAGE / LIMIT    MEM %               NET I/O             BLOCK I/O           PIDS
ec8b2c340d79        100.23%             8.277 MiB / 10 MiB   82.81%              31.5 kB / 25.4 kB   1.04 MB / 0 B       3

Let's explore Docker Volume and File System options.

For this test I am going to create an empty block file filled with zeros; this will be used as an exposed file system.

Creating an empty volume

Use fallocate (or dd) to create a 2GB empty blob.

fallocate -l 2g vol1

# Note: The count is the number of MiB to write; for this example it's 2048 MiB (2GB)
dd if=/dev/zero of=vol1 bs=1M count=2048

# Create ext3 file system and mount to make sure it works
mkfs -t ext3 vol1
mkdir /media/vol1
mount -t auto -o loop vol1 /media/vol1
ls /media/vol1
umount /media/vol1
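The fallocate/dd step can also be folded into a tiny helper with a fallback (my own wrapper, not part of the original steps): fallocate is fast but not supported on every file system, so fall back to dd when it fails.

```shell
# Hypothetical helper: create an empty blob of <size_mb> MiB at <path>,
# preferring fallocate and falling back to dd when fallocate is unsupported.
create_vol_blob() {
    path="$1"
    size_mb="$2"
    fallocate -l "${size_mb}M" "$path" 2>/dev/null ||
        dd if=/dev/zero of="$path" bs=1M count="$size_mb" status=none
}
# Usage: create_vol_blob vol1 2048   # 2048 MiB = the 2GB blob used above
```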

Create a Docker container with the above volume and mount

Notice the -v option for volume.

docker run -ti -d --name test-vol -v /media/vol1:/vol1 -p 2024:22 --cpus 1 -m "10m" rastasheep/ubuntu-sshd

# Add the ro option for read-only (default is read-write)
docker run -ti -d --name test-vol -v /media/vol1:/vol1:ro -p 2024:22 --cpus 1 -m "10m" rastasheep/ubuntu-sshd

Log in to the container and verify that /vol1 is a 2GB file system.

df -h
Filesystem                     Size  Used Avail Use% Mounted on
[..] snip
/dev/loop0                     2.0G  3.1M  1.9G   1% /vol1

Shared Volumes or File Systems

Note: To use shared volumes and/or shared file systems you need a Docker volume driver.
Flocker is a commonly used driver, but it requires a separate installation and configuration, described below.

Installing and Configuring Flocker

Note: I am still working on a full, easy/clean Flocker installation; it is still a work in progress, as I ran into some issues with the Ubuntu 17.x pre-release.

Configure and add the flocker repo

apt-get -y install apt-transport-https software-properties-common
add-apt-repository -y "deb https://clusterhq-archive.s3.amazonaws.com/ubuntu/$(lsb_release --release --short)/\$(ARCH) /"

Run the below to create the Flocker repo pinning entries.

cat > /etc/apt/preferences.d/buildbot-700 << EOF
Package: *
Pin: origin clusterhq-archive.s3.amazonaws.com
Pin-Priority: 700
EOF

Update the system repo cache and install the Flocker cluster system.

apt-get update
apt-get -y install --force-yes clusterhq-flocker-cli

Note: Additional Flocker configuration is required to get it working. I will update this section once that is available.

Now that Flocker is working, let's create a Flocker volume.

To create a shared Flocker volume, use one of the two options below.
Option 1: Use a pre-created volume

# Create flocker volume
docker volume create -d flocker --opt o=size=20GB my-named-volume

# Create docker with flocker volume
docker run -ti -d -v my-named-volume:/vol1 --name test-vol -p 2024:22 --cpus 1 -m "10m" rastasheep/ubuntu-sshd

Option 2: Create a Docker Flocker volume.

docker run -ti -d  --name test-vol --volume-driver=flocker -v my-named-volume:/vol1 -p 2024:22 --cpus 1 -m "10m" rastasheep/ubuntu-sshd

Note: You can migrate/use this volume on any other Docker host that is part of this Flocker cluster.

Initializing Docker Swarm

To use Docker Swarm you first need to initialize it.
Initialize Docker Swarm by running the below.

docker swarm init
Swarm initialized: current node (gq3fodrlv0rbfjvkwxirtqoqu) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4m5fooinsatravaxm31dgpa95ye7eyx5sk6kqo8nh4y2rx8d56-3pvsusrdnfhbsggxbu66pwwo5 \
    10.10.10.11:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Verify Docker Swarm is working

docker node list
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
gq3fodrlv0rbfjvkwxirtqoqu *  docker-node1   Ready   Active        Leader
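A quick way to exercise the new swarm is to deploy a small test service. The service name, image (nginx), and port below are my own example choices, not from the original setup:

```shell
# Hypothetical swarm smoke test - run on the manager node.
deploy_test_service() {
    docker service create --name web-test --replicas 2 -p 8081:80 nginx
    docker service ls            # should show web-test with 2/2 replicas
    docker service ps web-test   # shows which nodes the tasks landed on
}
# Clean up afterwards with: docker service rm web-test
```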

Docker and Docker Swarm Web UI (GUI)

I explored multiple Docker UI managers; some of them work better than others (especially out of the box).
Below I will show the two options I played with, Portainer and Rancher; both have their pluses and minuses.
Note: Tutum is also a nice option; it is now part of Docker, called Docker Cloud, but I have not had a chance to test it.

So lets jump right into it.

Installing Portainer

The first one to test was Portainer. Installation was very simple; it is a really small application (about 3MB).
Getting it up and running was no more than the install/pull command below, which exposes port 9000 for access with the web UI.

Just run the below to install and configure Portainer.

docker run -d -p 9000:9000 -v "/var/run/docker.sock:/var/run/docker.sock" portainer/portainer

After connecting to the UI for the first time, it will ask you to set an admin password.
Below are a few Portainer screenshots.

Portainer Main Dashboard


Portainer Container View

Portainer Container Monitor

Now, let's discuss Rancher Labs a bit. Rancher is a much broader application and therefore requires a bit more configuration.

Installing Rancher Labs

To install the Rancher server, just run the below; this will expose port 8080 for managing with the web UI.

docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

With Portainer, things were simple once the application connected to the /var/run/docker.sock socket, but Rancher requires a bit more configuration.

After connecting to the Rancher web UI for the first time, click Add Host; you will then get a form to complete, which will generate something like the below. Run that in your terminal.

docker run -e CATTLE_AGENT_IP="10.10.10.10"  -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.1 http://rancher.domain.com:8080/v1/scripts/B6CBE5237C400EEA68F7:1483142400000:LlhP1dkPf7h9vNisgXKWMnUMN8

Note: It is possible you will need to log in to the container and add your server's name and IP (and your proxy name and IP), then reboot that container to connect.

After issuing the above command, Rancher will configure a number of containers used for its own services, for example IPsec, etc.

Once setup completes and your container hosts are added, working in the UI is very nice and powerful.
Below are a few Rancher screenshots.

Rancher Environment Support


Rancher Stack Service


Rancher Host Monitor


Rancher Container View


Rancher Catalog


Come back to see more on – Configuration Automation with Chef and Docker

References

Rancher Installation
Docker Configuration Options

You might also like Using Chef Kitchen / Docker Build Behind a Corporate Proxy or Firewall.

What tools have you used to manage Docker on Ubuntu? Please let me know in the comments below.
