Every blog post I've read for the past couple of months mentions Docker, that's a fact! I've never been so stressed in years, because our jobs are changing. That is not my choice or my will, but what we were doing a couple of years ago, and what we are doing now, is going to disappear sooner than I thought. The world of infrastructure as we know it is dying, and the same goes for sysadmin jobs. I would never have thought this was something that could happen to me during my career, but here we are. Old Unix systems are slowly dying and Linux virtual machines are becoming less and less popular. One part of my career plan was to be excellent on two different systems, Linux and AIX, but I now have to recognize I probably made a mistake thinking this would save me from unemployment or from any bullshit job. We're all going to end up retired, that's certain, but the reality is that I'd prefer working on something fun and innovative to being stuck on old stuff forever. We've had Openstack for a while and we now have Docker. As no employer will look at a candidate with no Docker experience, I had to learn it (in fact I've been using Docker for more than a year now; my twitter followers already know this). I don't want to be one of the social rejects of a world that is changing too fast. Computer science is living through its car-industry crisis and we are the blue collars who will be left behind. There is no choice; there won't be a place for everyone and you won't be the only one fighting in the pit trying to be hired. You have to react now or slowly die … like all the sysadmins I see in banks getting worse and worse. Moving them to Openstack was a real challenge (still not completed); I can't imagine trying to make them work on Docker. On the other hand I'm also surrounded by excellent people (I have to say I met a true genius a couple of years ago) who are doing crazy things. Unfortunately for me they are not working with me (they are in big companies (i.e. RedHat/Oracle/Big Blue) or in other places where people tend to understand that something is changing). I feel like I'm bad at everything I do. Unemployable. Nevertheless I still have the energy to work on new things, and Docker is a part of it.

One of my challenges was/is to migrate all our infrastructure services to Docker, not just for fun but to be able to easily reproduce this infrastructure over and over again. The goal here is to run every infrastructure service in a Docker container and try, at least, to make them highly available. We are going to see how to do that on PowerSystems, using Ubuntu or RedHat ppc64le to run our Docker engine and containers. We will then create our own Docker base images (Ubuntu and RedHat ones) and push them into our custom-made registry. Then we will create containers for our applications (I'll just give some examples here: a webserver and grafana/influxdb). Finally, we will try Swarm to make these containers highly available by creating "global/replicas" services. This blog post is also here to prove that Power is an architecture on which you can do the exact same things as on x86. Having Ubuntu 16.04 LTS available on the ppc64le arch is a damn good thing, because it provides a lot of opensource products (graphite, grafana, influxdb, all the web servers, and so on). Let's do everything to become a killer DevOps. I have done this for sysadmin stuff, so why the hell wouldn't I be capable of providing the same effort on DevOps things? I'm not that bad, at least I try.
Installing the docker-engine
Red Hat Enterprise Linux ppc64el
Unfortunately for our "little" community, the current Red Hat Enterprise repositories for the ppc64le arch do not provide the Docker packages. IBM is providing a repository at this address: http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/. On my side I'm mirroring this repository on my local site (with wget) and creating my own repository, as my servers have no access to the internet. Keep in mind that this repository is not up to date with the latest version of Docker. At the time I'm writing this blog post, Docker 1.13 is available but this repository is still serving Docker 1.12. Not exactly what we want for a technology like Docker (we absolutely want to keep the engine up to date):
# wget --mirror http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/
# wget --mirror http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/misc_ppc64el/
# cat docker.repo
[docker-ppc64le-misc]
name=docker-ppc64le-misc
baseurl=http://nimprod:8080/dockermisc-ppc64el/
enabled=1
gpgcheck=0
[docker-ppc64le]
name=docker-ppc64le
baseurl=http://nimprod:8080/docker-ppc64el/
enabled=1
gpgcheck=0
# yum info docker.ppc64le
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Installed Packages
Name        : docker
Arch        : ppc64le
Version     : 1.12.0
Release     : 0.ael7b
Size        : 77 M
Repo        : installed
From repo   : docker-ppc64le
Summary     : The open-source application container engine
URL         : https://dockerproject.org
License     : ASL 2.0
Description : Docker is an open source project to build, ship and run any application as a
[..]
# yum search swarm
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
=============================================== N/S matched: swarm ===============================================
docker-swarm.ppc64le : Docker Swarm is native clustering for Docker.
[..]
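About the local repository itself: if the mirrored tree does not already ship usable repodata, it can be regenerated with createrepo and served over http. A minimal sketch (the paths below are assumptions matching my own layout; nimprod:8080 is my internal web server):

# createrepo /export/mirror/docker-ppc64el
# createrepo /export/mirror/dockermisc-ppc64el
# yum clean all && yum repolist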
# yum -y install docker
[..]
Downloading packages:
(1/3): docker-selinux-1.12.0-0.ael7b.noarch.rpm   |  27 kB  00:00:00
(2/3): libtool-ltdl-2.4.2-20.el7.ppc64le.rpm      |  50 kB  00:00:00
(3/3): docker-1.12.0-0.ael7b.ppc64le.rpm          |  16 MB  00:00:00
------------------------------------------------------------------------
Total                                   33 MB/s |  16 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libtool-ltdl-2.4.2-20.el7.ppc64le      1/3
  Installing : docker-selinux-1.12.0-0.ael7b.noarch   2/3
setsebool:  SELinux is disabled.
  Installing : docker-1.12.0-0.ael7b.ppc64le          3/3
rhel72/productid                                  | 1.6 kB  00:00:00
  Verifying  : docker-selinux-1.12.0-0.ael7b.noarch   1/3
  Verifying  : docker-1.12.0-0.ael7b.ppc64le          2/3
  Verifying  : libtool-ltdl-2.4.2-20.el7.ppc64le      3/3
Installed:
  docker.ppc64le 0:1.12.0-0.ael7b
Dependency Installed:
  docker-selinux.noarch 0:1.12.0-0.ael7b  libtool-ltdl.ppc64le 0:2.4.2-20.el7
Complete!
# systemctl start docker
# docker ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.0
[..]
Enabling the device-mapper direct disk mode (instead of loop)
By default on RHEL, after installing the docker packages and starting the engine, Docker uses an lvm loop device to create its pool (where the images and the containers will be stored). This is not recommended and not good for production use. That's why on every docker engine host I'm creating a dockervg volume group for this pool. Red Hat provides, with the Project Atomic, a tool called docker-storage-setup that configures the thin pool for you (on another volume group).
# git clone https://github.com/projectatomic/docker-storage-setup.git
# cd docker-storage-setup
# make install
Create a volume group on a physical volume, configure and run docker-storage-setup:
# docker-storage-setup --reset
# systemctl stop docker
# rm -rf /var/lib/docker
# pvcreate /dev/mapper/mpathb
  Physical volume "/dev/mapper/mpathb" successfully created
# vgcreate dockervg /dev/mapper/mpathb
  Volume group "dockervg" successfully created
# cat /etc/sysconfig/docker-storage-setup
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
VG=dockervg
SETUP_LVM_THIN_POOL=yes
DATA_SIZE=70%FREE
# /usr/bin/docker-storage-setup
  Rounding up size to full physical extent 104.00 MiB
  Logical volume "docker-pool" created.
  Logical volume "docker-pool" changed.
# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/dockervg-docker--pool --storage-opt dm.use_deferred_removal=true "
I don't know why, but on the version of docker I am running, the DOCKER_STORAGE_OPTIONS (in /etc/sysconfig/docker-storage) were not read. I had to manually edit the systemd unit to let Docker use my thinpooldev:
# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/dockervg-docker--pool --storage-opt dm.use_deferred_removal=true
# systemctl daemon-reload
# systemctl start docker
# docker info
[..]
Storage Driver: devicemapper
 Pool Name: dockervg-docker--pool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 20.45 MB
 Data Space Total: 74.94 GB
 Data Space Available: 74.92 GB
 Metadata Space Used: 77.82 kB
 Metadata Space Total: 109.1 MB
 Metadata Space Available: 109 MB
 Thin Pool Minimum Free Space: 7.494 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-10-14)
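A quick sanity check: the thin pool can also be watched directly from LVM, which is handy to spot a pool filling up before Docker complains:

# lvs dockervg
# lvs -o lv_name,data_percent,metadata_percent dockervg/docker-pool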
Ubuntu 16.04 LTS ppc64le
As always on Ubuntu, everything is super easy. I'm just deploying an Ubuntu 16.04 LTS and running a single apt install to install the docker engine. Neat. Just for your information, as my servers do not have any access to the internet, I'm using a tool called apt-mirror to mirror the official Ubuntu repositories. The tool can easily be found on github at this address: https://github.com/apt-mirror/apt-mirror. You then just have to specify which arch and which repositories you want to clone on your local site:
# cat /etc/apt/mirror.list
[..]
set defaultarch ppc64el
[..]
set use_proxy on
set http_proxy proxy:8080
set proxy_user benoit
set proxy_password mypasswd
[..]
deb http://ports.ubuntu.com/ubuntu-ports xenial main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports xenial-security main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports xenial-updates main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports xenial-backports main restricted universe multiverse
# /usr/local/bin/apt-mirror
Downloading 152 index files using 20 threads...
Begin time: Fri Feb 17 14:36:03 2017
[20]... [19]... [18]... [17]... [16]... [15]... [14]... [13]... [12]... [11]... [10]... [9]... [8]... [7]... [6]... [5].
After having downloaded the packages, create a repository based on these downloaded deb files, make it accessible through http, and install Docker:
# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
# uname -a
Linux dockermachine1 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 2016 ppc64le ppc64le ppc64le GNU/Linux
# apt install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
[..]
Setting up docker.io (1.10.3-0ubuntu6) ...
Adding group `docker' (GID 116) ...
# docker ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
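For reference, the docker hosts themselves just point apt at the local mirror. A minimal /etc/apt/sources.list could look like this (the host name and paths match my own setup, adapt them to wherever you publish the mirror):

# cat /etc/apt/sources.list
deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial main restricted universe multiverse
deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-updates main restricted universe multiverse
deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-security main restricted universe multiverse
# apt update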
On Ubuntu use aufs
I strongly recommend keeping aufs as the default filesystem to store containers and images. I'm simply creating and mounting /var/lib/docker on another disk with a lot of space available, and that's it:
# pvcreate /dev/mapper/mpathb
  Physical volume "/dev/mapper/mpathb" successfully created
# vgcreate dockervg /dev/mapper/mpathb
  Volume group "dockervg" successfully created
# lvcreate -n dockerlv -L99G dockervg
  Logical volume "dockerlv" created.
# mkfs.ext4 /dev/dockervg/dockerlv
[..]
# echo "/dev/mapper/dockervg-dockerlv /var/lib/docker/ ext4 errors=remount-ro 0 1" >> /etc/fstab
# systemctl stop docker
# mount /var/lib/docker
# systemctl start docker
# df -h | grep docker
/dev/mapper/dockervg-dockerlv   98G   61M   93G   1% /var/lib/docker
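You can then confirm the engine still uses aufs and stores everything on the new logical volume:

# docker info | grep -A1 "Storage Driver"
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs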
The docker-compose case
If you're installing Docker on an Ubuntu host everything is easy, as docker-compose is available in the official Ubuntu repositories. Just run an apt install docker-compose and you're done:
# apt install docker-compose
[..]
# docker-compose -v
docker-compose version 1.5.2, build unknown
On RedHat, compose is not available in the repository delivered by IBM. docker-compose is just a python program and can be downloaded and installed via pip. Download compose on a machine with internet access, then use pip to install it:
On the machine with access to the internet:
# mkdir compose
# pip install --proxy "http://benoit:mypasswd@myproxy:8080" --download="compose" docker-compose --force --upgrade
[..]
Successfully downloaded docker-compose cached-property six backports.ssl-match-hostname PyYAML ipaddress enum34 colorama requests jsonschema docker texttable websocket-client docopt dockerpty functools32 docker-pycreds
# scp -r compose dockerhost:~
docker_compose-1.11.1-py2.py3-none-any.whl     100%   83KB  83.4KB/s   00:00
cached_property-1.3.0-py2.py3-none-any.whl     100% 8359     8.2KB/s   00:00
six-1.10.0-py2.py3-none-any.whl                100%   10KB  10.1KB/s   00:00
backports.ssl_match_hostname-3.5.0.1.tar.gz    100% 5605     5.5KB/s   00:00
PyYAML-3.12.tar.gz                             100%  247KB 247.1KB/s   00:00
ipaddress-1.0.18-py2-none-any.whl              100%   17KB  17.1KB/s   00:00
enum34-1.1.6-py2-none-any.whl                  100%   12KB  12.1KB/s   00:00
colorama-0.3.7-py2.py3-none-any.whl            100%   19KB  19.5KB/s   00:00
requests-2.11.1-py2.py3-none-any.whl           100%  503KB 502.8KB/s   00:00
jsonschema-2.6.0-py2.py3-none-any.whl          100%   39KB  38.6KB/s   00:00
docker-2.1.0-py2.py3-none-any.whl              100%  103KB 102.9KB/s   00:00
texttable-0.8.7.tar.gz                         100% 9829     9.6KB/s   00:00
websocket_client-0.40.0.tar.gz                 100%  192KB 191.6KB/s   00:00
docopt-0.6.2.tar.gz                            100%   25KB  25.3KB/s   00:00
dockerpty-0.4.1.tar.gz                         100%   14KB  13.6KB/s   00:00
functools32-3.2.3-2.zip                        100%   33KB  33.3KB/s   00:00
docker_pycreds-0.2.1-py2.py3-none-any.whl      100% 4474     4.4KB/s   00:00
On the machine running docker:
# rpm -ivh python2-pip-8.1.2-5.el7.noarch.rpm
# cd compose
# pip install docker-compose -f ./ --no-index
[..]
Successfully installed colorama-0.3.7 docker-2.1.0 docker-compose-1.11.1 ipaddress-1.0.18 jsonschema-2.6.0
# docker-compose -v
docker-compose version 1.11.1, build 7c5d5e4
Creating your docker base images and running your first application (a web server)
Regardless of which Linux distribution you have chosen, you now need a docker base image to run your first containers. You have two choices: download an image from the internet and modify it to your own needs, or create an image yourself based on your current OS.
Downloading an image from the internet
From a machine with access to the internet, install the docker engine and pull the Ubuntu image. Using the docker save command, create a tar archive of the image. This one can then be imported on any docker engine using the docker load command:
- On the machine having access to the internet:
# docker pull ppc64le/ubuntu
# docker save ppc64le/ubuntu > /tmp/ppc64le_ubuntu.tar
- On the target docker host:
# docker load < ppc64le_ubuntu.tar
4fad21ac6351: Loading layer [==================================================>] 173.5 MB/173.5 MB
625e647dc584: Loading layer [==================================================>] 15.87 kB/15.87 kB
8505832e8bea: Loading layer [==================================================>] 9.216 kB/9.216 kB
9bca281924ab: Loading layer [==================================================>] 4.608 kB/4.608 kB
289bda1cbd14: Loading layer [==================================================>] 3.072 kB/3.072 kB
Loaded image: ppc64le/ubuntu:latest
# docker images
REPOSITORY      TAG     IMAGE ID      CREATED       SIZE
ppc64le/ubuntu  latest  1967d889e07f  3 months ago  167.9 MB
The problem is that this image is not customized for your/my own needs. By this I mean the repositories used by the image are "pointing" to the official Ubuntu repositories, which will obviously not work if you have no access to the internet. We now have to modify the image for our needs. Run a container, launch a shell, and modify the sources.list with your local repository. Then commit the image to validate the changes made inside it (you will generate a new image based on the current one plus your modifications):
# docker run -it ppc64le/ubuntu /bin/bash
# rm /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main/ xenial main" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main/ xenial-updates main" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main/ xenial-security main" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial restricted" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-updates restricted" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-security restricted" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial universe" >> /etc/apt/sources.list
# echo "#deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-updates universe" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-security universe" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial multiverse" >> /etc/apt/sources.list
# echo "#deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-updates multiverse" >> /etc/apt/sources.list
# echo "deb http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports/ xenial-security multiverse" >> /etc/apt/sources.list
# exit
# docker ps -a
# docker commit a9506bd5dd30 ppc64le/ubuntucust
sha256:423c13b604dee8d24dae29566cd3a2252e4060270b71347f8d306380b8b6817d
# docker images
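As a side note, the same customization could be baked into a dockerfile instead of an interactive commit; a minimal sketch, assuming a sources.list file containing the lines above sits next to the dockerfile:

# cat dockerfile
FROM ppc64le/ubuntu
COPY ./sources.list /etc/apt/sources.list
RUN apt-get -y update
# docker build -t ppc64le/ubuntucust .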
Test that the image works by building a new image based on the one just created. I'm creating a dockerfile to do this. I'm not explaining here how dockerfiles work; there are plenty of tutorials on the internet for that. To sum up, you need to know the basics of Docker to read this blog post.
# cat dockerfile
FROM ppc64le/ubuntucust
RUN apt-get -y update && apt-get -y install apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
EXPOSE 80
CMD [ "-D", "FOREGROUND" ]
ENTRYPOINT ["/usr/sbin/apache2"]
I'm building the image, calling it ubuntu_apache2 (this image will run a single apache2 server and expose port 80):
# docker build -t ubuntu_apache2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ppc64le/ubuntucust
 ---> 423c13b604de
Step 2 : RUN apt-get -y update && apt-get -y install apache2
 ---> Running in 5f868988bf5c
Get:1 http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main xenial InRelease [247 kB]
Get:2 http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main xenial-updates InRelease [102 kB]
Get:3 http://ubuntuppc64le.chmod666.org:8080/ubuntu-ports-main xenial-security InRelease [102 kB]
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
Processing triggers for libc-bin (2.23-0ubuntu4) ...
Processing triggers for systemd (229-4ubuntu11) ...
Processing triggers for sgml-base (1.26+nmu4ubuntu1) ...
 ---> 4256ac36c0f7
Removing intermediate container 5f868988bf5c
Step 3 : EXPOSE 80
 ---> Running in fc72a50d3f1d
 ---> 3c273b0e2c3f
Removing intermediate container fc72a50d3f1d
Step 4 : CMD -D FOREGROUND
 ---> Running in 112d87a2f1e6
 ---> e6ddda152e97
Removing intermediate container 112d87a2f1e6
Step 5 : ENTRYPOINT /usr/sbin/apache2
 ---> Running in 6dab9b99f945
 ---> bed93aae55b3
Removing intermediate container 6dab9b99f945
Successfully built bed93aae55b3
# docker images
REPOSITORY          TAG     IMAGE ID      CREATED             SIZE
ubuntu_apache2      latest  bed93aae55b3  About a minute ago  301.8 MB
ppc64le/ubuntucust  latest  423c13b604de  7 minutes ago       167.9 MB
ppc64le/ubuntu      latest  1967d889e07f  3 months ago        167.9 MB
Run a container with this image and expose port 80:
# docker run -d -it -p 80:80 ubuntu_apache2
49916e3703c1cf0a671be10984b3215478973c0fd085490a61142b8959495732
# docker ps
CONTAINER ID  IMAGE           COMMAND                 CREATED         STATUS         PORTS               NAMES
49916e3703c1  ubuntu_apache2  "/usr/sbin/apache2 -D"  12 seconds ago  Up 10 seconds  0.0.0.0:80->80/tcp  high_brattain
# ps -ef | grep -i apache
root   11282 11267  0 11:04 pts/1  00:00:00 /usr/sbin/apache2 -D FOREGROUND
33     11302 11282  0 11:04 pts/1  00:00:00 /usr/sbin/apache2 -D FOREGROUND
33     11303 11282  0 11:04 pts/1  00:00:00 /usr/sbin/apache2 -D FOREGROUND
root   11382  3895  0 11:04 pts/0  00:00:00 grep --color=auto -i apache
On another host, check that the service is running using curl (you can see here that you get the default index page of the Ubuntu apache2 server):
# curl mydockerhost
<body>
  <div class="main_page">
    <div class="page_header floating_element">
      <img src="/icons/ubuntu-logo.png" alt="Ubuntu Logo" class="floating_element"/>
      <span class="floating_element">
        Apache2 Ubuntu Default Page
[..]
Creating your own image
You can also create your own image from scratch. For RHEL based systems (Centos, Fedora), RedHat provides an awesome script doing the job for you. This script is called mkimage-yum.sh and can be downloaded directly from github. Have a look inside it if you want the exact details (mknod, yum installroot, …). The script will create a tar file and import it. After running the script you will have a new image available to use:
# wget https://raw.githubusercontent.com/docker/docker/master/contrib/mkimage-yum.sh
# chmod +x mkimage-yum.sh
# ./mkimage-yum.sh baserhel72
[..]
+ tar --numeric-owner -c -C /tmp/base.sh.bxma2T .
+ docker import - baserhel72:7.2
sha256:f8b80847b4c7fe03d2cfdeda0756a7aa857eb23ab68e5c954cf3f0cb01f61562
+ docker run -i -t --rm baserhel72:7.2 /bin/bash -c 'echo success'
success
+ rm -rf /tmp/base.sh.bxma2T
# docker images
REPOSITORY  TAG  IMAGE ID      CREATED             SIZE
baserhel72  7.2  f8b80847b4c7  About a minute ago  309.1 MB
[..]
I'm running a web server to be sure everything works OK (same thing as on Ubuntu: httpd installation and port 80 exposed). Here below are the dockerfile and the image build:
# cat dockerfile
FROM baserhel72:7.2
RUN yum -y update && yum -y upgrade && yum -y install httpd
EXPOSE 80
CMD [ "-D", "FOREGROUND" ]
ENTRYPOINT ["/usr/sbin/httpd"]
# docker build -t rhel_httpd .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM baserhel72:7.2
 ---> 0c22a33fc079
Step 2 : RUN yum -y update && yum -y upgrade && yum -y install httpd
 ---> Running in 74c79763c56f
[..]
Dependency Installed:
  apr.ppc64le 0:1.4.8-3.el7  apr-util.ppc64le 0:1.5.2-6.el7  httpd-tools.ppc64le 0:2.4.6-40.el7  mailcap.noarch 0:2.1.41-2.el7
Complete!
 ---> 73094e173c1b
Removing intermediate container 74c79763c56f
Step 3 : EXPOSE 80
 ---> Running in 045b86d1a6dc
 ---> f032c1569201
Removing intermediate container 045b86d1a6dc
Step 4 : CMD -D FOREGROUND
 ---> Running in 9edc1cc2540d
 ---> 6d5d27171cba
Removing intermediate container 9edc1cc2540d
Step 5 : ENTRYPOINT /usr/sbin/httpd
 ---> Running in 8280382d61f0
 ---> f937439d4359
Removing intermediate container 8280382d61f0
Successfully built f937439d4359
Again I'm launching a container and checking the service is available by curling the docker host. You can see that the image is based on RedHat … and the default page is the RHEL test page:
# docker run -d -it -p 80:80 rhel_httpd
# docker ps
CONTAINER ID  IMAGE       COMMAND                 CREATED        STATUS        PORTS               NAMES
30d090b2f0d1  rhel_httpd  "/usr/sbin/httpd -D F"  3 seconds ago  Up 1 seconds  0.0.0.0:80->80/tcp  agitated_boyd
# curl localhost
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
  <title>Test Page for the Apache HTTP Server on Red Hat Enterprise Linux</title>
[..]
Creating your own docker registry
We now have our base Docker images, but we want to make them available on every docker host without having to recreate them over and over again. To do so we are going to create what we call a docker registry. This registry will allow us to distribute our images across the different docker hosts. Neat. When you install Docker, the docker-distribution package is also installed and ships with a binary called "registry". Why not run the registry … in a Docker container?
- Verify you have the registry command on the system:
# which registry
/usr/bin/registry
# registry --version
registry github.com/docker/distribution v2.3.0+unknown
# yum whatprovides /usr/bin/registry
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
docker-distribution-2.3.0-2.ael7b.ppc64le : Docker toolset to pack, ship, store, and deliver content
Repo        : @docker
Matched from:
Filename    : /usr/bin/registry
It's a question of "chicken or egg", but you will obviously need a base image to build your registry image. As we have created our images locally, we will now use one of them (the RedHat one) to run the docker registry in a container. Here are the steps we are going to follow.
- Create a dockerfile based on the RedHat image we just created. This dockerfile will contain the registry binary (COPY ./registry), the registry config file (COPY ./config.yml) and a wrapper script allowing its execution (COPY ./entrypoint.sh). We will also secure the registry with a password using an htpasswd file (RUN htpasswd). Finally we will make the volumes /var/lib/registry and /certs available (VOLUME) and expose port 5000 (EXPOSE). Obviously the necessary directories will be created (RUN mkdir) and the needed tools installed (RUN yum). I'm here generating the htpasswd file with the user regimguser and the password regimguser:
# cat dockerfile
FROM ppc64le/rhel72:7.2
RUN yum update && yum upgrade && yum -y install httpd-tools
RUN mkdir /etc/registry && mkdir /certs
COPY ./registry /usr/bin/registry
COPY ./entrypoint.sh /entrypoint.sh
COPY ./config.yml /etc/registry/config.yml
RUN htpasswd -b -B -c /etc/registry/registry_passwd regimguser regimguser
VOLUME ["/var/lib/registry", "/certs"]
EXPOSE 5000
ENTRYPOINT ["./entrypoint.sh"]
CMD ["/etc/registry/config.yml"]
# cp /usr/bin/registry .
# cat entrypoint.sh
#!/bin/sh
set -e
exec /usr/bin/registry "$@"
# cat config.yml
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  tls:
    certificate: /certs/domain.crt
    key: /certs/domain.key
auth:
  htpasswd:
    realm: basic-realm
    path: /etc/registry/registry_passwd
# docker build -t registry .
Sending build context to Docker daemon 13.57 MB
Step 1 : FROM ppc64le/rhel72:7.2
 ---> 9005cbc9c7f6
Step 2 : RUN yum update && yum upgrade && yum -y install httpd-tools
 ---> Using cache
 ---> de34fdf3864e
Step 3 : RUN mkdir /etc/registry && mkdir /certs
 ---> Using cache
 ---> c801568b6944
Step 4 : COPY ./registry /usr/bin/registry
 ---> Using cache
 ---> 49927e0a90b8
Step 5 : COPY ./entrypoint.sh /entrypoint.sh
 ---> Using cache
[..]
Removing intermediate container 261f2b380556
Successfully built ccef43825f21
# docker images
REPOSITORY  TAG     IMAGE ID      CREATED            SIZE
<none>      <none>  16d35e8c1177  About an hour ago  361 MB
registry    latest  4287d4e389dc  2 hours ago        361 MB
We now need to generate certificates and place them in the right directories to make the registry secure:
- Generate an ssl certificate:
# cd /certs
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout /certs/domain.key -x509 -days 365 -out /certs/domain.crt
Generating a 4096 bit RSA private key
.............................................................................................................................................................++
..........................................................++
writing new private key to '/certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
[..]
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:dockerengineppc64le.chmod666.org
Email Address []:
# mkdir /etc/docker/certs.d/dockerengineppc64le.chmod666.org\:5000/
# cp /certs/domain.crt /etc/docker/certs.d/dockerengineppc64le.chmod666.org\:5000/ca.crt
# cp /certs/domain.crt /etc/pki/ca-trust/source/anchors/dockerengineppc64le.chmod666.org.crt
# update-ca-trust
# systemctl restart docker
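Every other docker host that will pull from this registry needs the same certificate in its trust store. A small loop can push it everywhere (node1 and node2 are placeholder host names, replace them with your own):

# for host in node1 node2; do
>   scp /certs/domain.crt ${host}:/etc/pki/ca-trust/source/anchors/dockerengineppc64le.chmod666.org.crt
>   ssh ${host} "update-ca-trust ; systemctl restart docker"
> done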
Now that everything is ok regarding the image and the certificates, let's run the registry container and upload/download an image into the registry:
- Run the container, expose port 5000 (-p 5000:5000), be sure the registry will be started when docker starts (--restart=always), let the container access the certificates we created before (-v /certs:/certs), and store the images in /var/lib/registry (-v /var/lib/registry:/var/lib/registry):
# docker run -d -p 5000:5000 --restart=always -v /certs:/certs -v /var/lib/registry:/var/lib/registry --name registry registry
51ad253616be336bcf5a1508bf48b059f01ebf20a0772b35b5686b4012600c46
# docker ps
CONTAINER ID  IMAGE     COMMAND                 CREATED         STATUS        PORTS                   NAMES
51ad253616be  registry  "./entrypoint.sh /etc"  10 seconds ago  Up 8 seconds  0.0.0.0:5000->5000/tcp  registry
# docker login https://dockerengineppc64le.chmod666.org:5000
Username (regimguser): regimguser
Password:
Login Succeeded
# docker tag grafana dockerengineppc64le.chmod666.org:5000/ppc64le/grafana
# docker push dockerengineppc64le.chmod666.org:5000/ppc64le/grafana
The push refers to a repository [dockerengineppc64le.chmod666.org:5000/ppc64le/grafana]
82bca1cb11d8: Pushed
9c1f2163c216: Pushing [==>                                                ] 22.83 MB/508.9 MB
1df85fc1eaaf: Mounted from ppc64le/ubuntucust
289bda1cbd14: Mounted from ppc64le/ubuntucust
9bca281924ab: Mounted from ppc64le/ubuntucust
8505832e8bea: Mounted from ppc64le/ubuntucust
625e647dc584: Mounted from ppc64le/ubuntucust
4fad21ac6351: Mounted from ppc64le/ubuntucust
[..]
latest: digest: sha256:88eef1b47ec57dd255aa489c8a494c11be17eb35ea98f38a63ab9f5690c26c1f size: 1984
# curl --cacert /certs/domain.crt -X GET https://regimguser:regimguser@dockerengineppc64le.chmod666.org:5000/v2/_catalog
{"repositories":["ppc64le/grafana","ppc64le/ubuntucust"]}
# docker pull dockerengineppc64le.chmod666.org:5000/ppc64le/grafana
Using default tag: latest
latest: Pulling from ppc64le/grafana
Digest: sha256:88eef1b47ec57dd255aa489c8a494c11be17eb35ea98f38a63ab9f5690c26c1f
Status: Image is up to date for dockerengineppc64le.chmod666.org:5000/ppc64le/grafana:latest
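The registry v2 API can also list the tags of a given repository; with the same credentials and certificate, this should return something like:

# curl --cacert /certs/domain.crt https://regimguser:regimguser@dockerengineppc64le.chmod666.org:5000/v2/ppc64le/grafana/tags/list
{"name":"ppc64le/grafana","tags":["latest"]}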
Running a more complex application (grafana + influxdb)
One of the applications I'm running is grafana, used with influxdb as a datasource. We will see here how to run grafana and influxdb in docker containers on a ppc64le RedHat distribution:
Build the grafana docker image
First create the dockerfile. You have now seen a lot of dockerfiles in this blog post so I'll not explain this one in detail. The docker engine is running on RedHat but the image used here is an Ubuntu one; grafana and influxdb are both available in the Ubuntu repositories.
# cat /data/docker/grafana/dockerfile
FROM ppc64le/ubuntucust
RUN apt-get update && apt-get -y install grafana gosu
VOLUME ["/var/lib/grafana", "/var/log/grafana", "/etc/grafana"]
EXPOSE 3000
COPY ./run.sh /run.sh
ENTRYPOINT ["/run.sh"]
Here is the entrypoint script that will run grafana when the container starts:
# cat /data/docker/grafana/run.sh
#!/bin/bash -e
: "${GF_PATHS_DATA:=/var/lib/grafana}"
: "${GF_PATHS_LOGS:=/var/log/grafana}"
: "${GF_PATHS_PLUGINS:=/var/lib/grafana/plugins}"
chown -R grafana:grafana "$GF_PATHS_DATA" "$GF_PATHS_LOGS"
chown -R grafana:grafana /etc/grafana
if [ ! -z "${GF_INSTALL_PLUGINS}" ]; then
  OLDIFS=$IFS
  IFS=','
  for plugin in ${GF_INSTALL_PLUGINS}; do
    grafana-cli plugins install ${plugin}
  done
  IFS=$OLDIFS
fi
exec gosu grafana /usr/sbin/grafana \
  --homepath=/usr/share/grafana \
  --config=/etc/grafana/grafana.ini \
  cfg:default.paths.data="$GF_PATHS_DATA" \
  cfg:default.paths.logs="$GF_PATHS_LOGS" \
  cfg:default.paths.plugins="$GF_PATHS_PLUGINS"
Then build the grafana image:
# cd /data/docker/grafana
# docker build -t grafana .
Step 3 : VOLUME ["/var/lib/grafana", "/var/log/grafana", "/etc/grafana"]
 ---> Running in 7baf11e2a2b6
 ---> f3449dd17ad4
Removing intermediate container 7baf11e2a2b6
Step 4 : EXPOSE 3000
 ---> Running in 89e10b7bfa5e
 ---> cdc65141d2f4
Removing intermediate container 89e10b7bfa5e
Step 5 : COPY ./run.sh /run.sh
 ---> 0a75c203bc8e
Removing intermediate container 885719ef1fde
Step 6 : ENTRYPOINT /run.sh
 ---> Running in 56f8b7d1274a
 ---> 4ca5c23b9aba
Removing intermediate container 56f8b7d1274a
Successfully built 4ca5c23b9aba
# docker images
REPOSITORY          TAG     IMAGE ID      CREATED         SIZE
grafana             latest  4ca5c23b9aba  32 seconds ago  676.8 MB
ppc64le/ubuntucust  latest  c9274707505e  12 minutes ago  167.9 MB
ppc64le/ubuntu      latest  1967d889e07f  3 months ago    167.9 MB
Run it and verify it works ok:
# docker run -d -it -p 443:3000 grafana
19bdd6c82a37a7275edc12e91668530fc1d52699542dae1e17901cce59f1230a
# docker ps
CONTAINER ID  IMAGE    COMMAND    CREATED         STATUS         PORTS                  NAMES
19bdd6c82a37  grafana  "/run.sh"  26 seconds ago  Up 24 seconds  0.0.0.0:443->3000/tcp  kickass_mcclintock
# docker logs 19bdd6c82a37
2017/02/17 15:28:36 [I] Starting Grafana
2017/02/17 15:28:36 [I] Version: master, Commit: NA, Build date: 1970-01-01 00:00:00 +0000 UTC
2017/02/17 15:28:36 [I] Configuration Info
Config files:
  [0]: /usr/share/grafana/conf/defaults.ini
  [1]: /etc/grafana/grafana.ini
Command lines overrides:
  [0]: default.paths.data=/var/lib/grafana
  [1]: default.paths.logs=/var/log/grafana
Paths:
  home: /usr/share/grafana
  data: /var/lib/grafana
[..]
Build the influxdb docker image
Same job for the influxdb image, which is also based on the Ubuntu image. Here is the dockerfile (as always: package installation, volumes, exposed ports). You can see I'm also including a configuration file for influxdb:
# cat /data/docker/influxdb/dockerfile
FROM ppc64le/ubuntucust
RUN apt-get update && apt-get -y install influxdb
VOLUME ["/var/lib/influxdb"]
EXPOSE 8086 8083
COPY influxdb.conf /etc/influxdb.conf
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/influxd"]
# cat influxdb.conf
[meta]
  dir = "/var/lib/influxdb/meta"
[data]
  dir = "/var/lib/influxdb/data"
  engine = "tsm1"
  wal-dir = "/var/lib/influxdb/wal"
[admin]
  enabled = true
# cat entrypoint.sh
#!/bin/bash
set -e
if [ "${1:0:1}" = '-' ]; then
  set -- influxd "$@"
fi
exec "$@"
Then build the influxdb image:
# docker build -t influxdb .
[..]
Step 3 : VOLUME ["/var/lib/influxdb"]
 ---> Running in f3570a5a6c91
 ---> 014035e3134c
Removing intermediate container f3570a5a6c91
Step 4 : EXPOSE 8086 8083
 ---> Running in 590405701bfc
 ---> 25f557aae499
Removing intermediate container 590405701bfc
Step 5 : COPY influxdb.conf /etc/influxdb.conf
 ---> c58397a5ae7b
Removing intermediate container d22132ec9925
Step 6 : COPY entrypoint.sh /entrypoint.sh
 ---> 25e931d39bbc
Removing intermediate container 680eacd6597e
Step 7 : ENTRYPOINT /entrypoint.sh
 ---> Running in 0695135e81c0
 ---> 44ed7385ae61
Removing intermediate container 0695135e81c0
Step 8 : CMD /usr/bin/influxd
 ---> Running in f59cbcd5f199
 ---> 073eeeb78055
Removing intermediate container f59cbcd5f199
Successfully built 073eeeb78055
# docker images
REPOSITORY          TAG     IMAGE ID      CREATED         SIZE
influxdb            latest  073eeeb78055  28 seconds ago  202.7 MB
grafana             latest  4ca5c23b9aba  11 minutes ago  676.8 MB
ppc64le/ubuntucust  latest  c9274707505e  23 minutes ago  167.9 MB
ppc64le/ubuntu      latest  1967d889e07f  3 months ago    167.9 MB
Run an influxdb container to verify it works ok:
# docker run -d -it -p 8080:8083 influxdb
c0c042c7bc1a361d1bcff403ed243651eac88270738cfc390e35dfd434cfc457
# docker ps
CONTAINER ID  IMAGE     COMMAND                 CREATED         STATUS         PORTS                   NAMES
c0c042c7bc1a  influxdb  "/entrypoint.sh /usr/"  4 seconds ago   Up 1 seconds   0.0.0.0:8080->8086/tcp  amazing_goldwasser
19bdd6c82a37  grafana   "/run.sh"               10 minutes ago  Up 10 minutes  0.0.0.0:443->3000/tcp   kickass_mcclintock
# docker logs c0c042c7bc1a
[.. InfluxDB ASCII-art banner ..]
2017/02/17 15:39:08 InfluxDB starting, version 0.10.0, branch unknown, commit unknown, built unknown
2017/02/17 15:39:08 Go version go1.6rc1, GOMAXPROCS set to 16
docker-compose
Now that we have our two images, one for grafana and one for influxdb, let's make them work together. To do so we will use docker-compose. docker-compose allows you to describe the containers you want to run in a yml file and link them together. You can see below there are two different entries: one for influxdb telling which image I'm going to use, the container name, the ports that will be exposed on the docker host (equivalent to -p 8080:8083 with a docker run command) and the volumes (-v with a docker run command). For the grafana container everything is almost the same except the "links" part. The grafana container should be able to "talk" to the influxdb one (to use influxdb as a datasource). The "links" stanza of the yml file means an entry containing the influxdb ip and name will be added to the /etc/hosts file of the grafana container. When you configure grafana you will then be able to use the "influxdb" name to access the database:
# cat docker-compose.yml
influxdb:
  image: influxdb:latest
  container_name: influxdb
  ports:
    - "8080:8083"
    - "80:8086"
  volumes:
    - "/data/docker/influxdb/var/lib/influxdb:/var/lib/influxdb"
grafana:
  image: grafana:latest
  container_name: grafana
  ports:
    - "443:3000"
  links:
    - influxdb
  volumes:
    - "/data/docker/grafana/var/lib/grafana:/var/lib/grafana"
    - "/data/docker/grafana/var/log/grafana:/var/log/grafana"
To create the containers just run "docker-compose up" (from the directory containing the yml file); this will create all the containers described in the yml file. Same for destroying them: run "docker-compose down".
# docker-compose up -d
Creating influxdb
Creating grafana
# docker ps
CONTAINER ID  IMAGE            COMMAND                 CREATED             STATUS             PORTS                                                   NAMES
5df7f3d58631  grafana:latest   "/run.sh"               About a minute ago  Up About a minute  0.0.0.0:443->3000/tcp                                   grafana
727dfc6763e1  influxdb:latest  "/entrypoint.sh /usr/"  About a minute ago  Up About a minute  8083/tcp, 0.0.0.0:80->8086/tcp, 0.0.0.0:8080->8086/tcp  influxdb
# docker-compose down
Stopping grafana ... done
Stopping influxdb ... done
Removing grafana ... done
Removing influxdb ... done
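Before tearing the stack down, you can verify the link works by resolving the influxdb name from inside the grafana container (the resolved address below is just illustrative):

# docker exec -it grafana getent hosts influxdb
172.17.0.3      influxdb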
Just to prove everything is working, I'm logging into the influxdb container and pushing some data to the database using the NOAA_data.txt file provided by the influxdb guys (these are just test data).
# docker exec -it 15845e92152f /bin/bash
# apt-get install influxdb-client
# cd /var/lib/influxdb ; influx -import -path=NOAA_data.txt -precision=s
2017/02/17 17:00:35 Processed 1 commands
2017/02/17 17:00:35 Processed 76290 inserts
2017/02/17 17:00:35 Failed 0 inserts

I'm finally logging into grafana (from a browser) and configuring the access to the database. Right after doing this I can create graphs based on the data.
Creating a swarm cluster
Be very careful when starting with swarm. There are two different types of "swarm": the swarm before docker 1.12 (called docker-swarm) and the swarm starting from docker 1.12 (called swarm mode). As the first version of swarm is already deprecated, we will use the swarm mode embedded with docker 1.12; in this case there is no need to install additional software, swarm mode ships with the docker binaries. Swarm mode is driven with the "docker service" commands, used to create what we call services (multiple docker containers running across the swarm cluster, with rules/constraints applied on them: create the containers on all the hosts, only on a couple of nodes, and so on). First initialize the swarm mode on the machines (I'll only use two nodes in my swarm cluster in the examples below), and on all the worker nodes be sure you are logged in to the registry (certificates copied, docker login done):
We will set up the swarm cluster on two nodes just to show you a simple example of the power of this technology. The first step is to choose a leader (there is one leader among the managers; the leader is responsible for the orchestration and the management of the swarm cluster, and if the leader has an issue one of the other managers will take the lead) and a worker (you can have as many workers as you want in the swarm cluster). In the examples below the manager/leader prompt is (node1(manager)#) and the worker prompt is (node2(worker)#). Use the "docker swarm init" command to create your leader. The advertise address is the public address of the machine. The command prints the commands to launch on the other managers or workers to let them join the cluster. Be sure port tcp 2377 is reachable from all the nodes to the leader/managers. Last thing to add: swarm services rely on an overlay network, you need to create it to be able to create your swarm services:
node1(manager)# docker swarm init --advertise-addr 10.10.10.49
Swarm initialized: current node (813ompnl4c7f4ilkxqy0faj59) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-69tw66gb9jwfl8y46ujeemj3p5v85ikrqvwmqzb2x32kqmek8e-a9dv25loilaor6jfmcdq8je6h \
    10.10.10.49:2377
To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-69tw66gb9jwfl8y46ujeemj3p5v85ikrqvwmqzb2x32kqmek8e-9e82z5k7qrzxsk2autu9ajt3r \
    10.10.10.49:2377
node1(manager)# docker node ls
ID                           HOSTNAME             STATUS  AVAILABILITY  MANAGER STATUS
813ompnl4c7f4ilkxqy0faj59 *  swarm1.chmod666.org  Ready   Active        Leader
node1(manager)# docker network create -d overlay mynet
8mv5ydu9vokx
node1(manager)# docker network ls
8mv5ydu9vokx  mynet  overlay  swarm
On the worker node, run the command to join the cluster and verify all the nodes are Ready and Active. This means you are ready to use the swarm cluster:
node2(worker)# docker swarm join --token SWMTKN-1-69tw66gb9jwfl8y46ujeemj3p5v85ikrqvwmqzb2x32kqmek8e-a9dv25loilaor6jfmcdq8je6h 10.10.10.49:2377
This node joined a swarm as a worker.
node1(manager)# docker node ls
ID                           HOSTNAME             STATUS  AVAILABILITY  MANAGER STATUS
813ompnl4c7f4ilkxqy0faj59 *  swarm1.chmod666.org  Ready   Active        Leader
bh7mhv3hg1x98b9j6lu00c3ef    swarm2.chmod666.org  Ready   Active
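If you lose the join commands, a manager can print them again at any time, and docker info confirms the swarm state of a node:

node1(manager)# docker swarm join-token worker
node2(worker)# docker info | grep -i swarm
Swarm: active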
The cluster is up and ready. Before working with it, we need a way to share our application data among the cluster. The best solution (from my point of view) is to use gluster, but for the convenience of this blog post I'll just create a small nfs server on the leader node and mount the data on the worker node (for a production server the nfs server should be externalized, i.e. mounted from a NAS server):
node1(manager)# exportfs
/nfs
node2(worker)# mount | grep nfs
[..]
swarm1.chmod666.org:/nfs on /nfs type nfs4 (rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.10.10.48,local_lock=none,addr=10.10.10.49)
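For reference, a minimal export on the leader could look like this (the allowed subnet is an assumption, adjust it to your network):

node1(manager)# cat /etc/exports
/nfs 10.10.10.0/24(rw,sync,no_root_squash)
node1(manager)# exportfs -a
node2(worker)# mkdir -p /nfs ; mount -t nfs4 swarm1.chmod666.org:/nfs /nfs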
Running an application in a swarm cluster
We now have the swarm cluster ready to run some services, but we first need a service. I'll use a web application I created myself called whoami (inspired by the emilevauge/whoami application), which just displays the hostname and the ip address of the node running the service. I'm first creating a dockerfile allowing me to build a container image ready to run any cgi ksh script. The dockerfile copies a configuration file into /etc/httpd/conf.d and serves the files in /var/www/mysite under a /whoami/ alias:
# cd /data/dockerfile/httpd
# cat dockerfile
FROM swarm1.chmod666.org:5000/ppc64le/rhel72:latest
RUN yum -y install httpd
RUN mkdir /var/www/mysite && chown apache:apache /var/www/mysite
EXPOSE 80
COPY ./mysite.conf /etc/httpd/conf.d
VOLUME ["/var/www/html", "/var/www/mysite"]
CMD [ "-D", "FOREGROUND" ]
ENTRYPOINT ["/usr/sbin/httpd"]
# cat mysite.conf
Alias /whoami/ "/var/www/mysite/"
<Directory "/var/www/mysite">
  AddHandler cgi-script .ksh
  DirectoryIndex whoami.ksh
  Options Indexes FollowSymLinks ExecCGI
  AllowOverride None
  Require all granted
</Directory>
I'm then building the image and pushing it into my private registry. The image is then available for download on any node of the swarm cluster:
# docker build -t httpd .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM dockerengineppc64le.chmod666.org:5000/ppc64le/rhel72:latest
 ---> 9005cbc9c7f6
Step 2 : RUN yum -y install httpd
 ---> Using cache
 ---> 1bc91df747cd
[..]
 ---> Using cache
 ---> afb3cf77eb8a
Step 8 : ENTRYPOINT /usr/sbin/httpd
 ---> Using cache
 ---> 187da163e084
Successfully built 187da163e084
# docker tag httpd swarm1.chmod666.org:5000/ppc64le/httpd
# docker push swarm1.chmod666.org:5000/ppc64le/httpd
The push refers to a repository [swarm1.chmod666.org:5000/ppc64le/httpd]
92d958e708cc: Layer already exists
[..]
latest: digest: sha256:3b1521432c9704ca74707cd2f3c77fb342a957c919787efe9920f62a26b69e26 size: 1156
Now that the image is ready we can create the application; it's just a single ksh script and a css file.
# ls /nfs/docker/whoami/
table-responsive.css  whoami.ksh
# cat whoami.ksh
#!/usr/bin/bash
hostname=$(hostname)
uname=$(uname -a)
ip=$(hostname -I)
date=$(date)
env=$(env)
echo ""
echo "<html>"
echo "<head>"
echo "  <title>Docker exemple</title>"
echo "  <link href="table-responsive.css" media="screen" type="text/css" rel="stylesheet" />"
echo "</head>"
echo "<body>"
echo "<h1><span class="blue"><</span>Docker<span class="blue">></span> <span class="yellow">on PowerSystems ppc64le</span></h1>"
echo "<h2>Created with passion by <a href="http://chmod666.org" target="_blank">chmod666.org</a></h2>"
echo "<table class="container">"
echo "  <thead>"
echo "    <tr>"
echo "      <th><h1>type</h1></th>"
echo "      <th><h1>value</h1></th>"
echo "    </tr>"
echo "  </thead>"
echo "  <tbody>"
echo "    <tr>"
echo "      <td>hostname</td>"
echo "      <td>${hostname}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>uname</td>"
echo "      <td>${uname}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>ip</td>"
echo "      <td>${ip}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>date</td>"
echo "      <td>${date}</td>"
echo "    </tr>"
echo "    <tr>"
echo "      <td>httpd env</td>"
echo "      <td>SERVER_SOFTWARE:${SERVER_SOFTWARE},SERVER_NAME:${SERVER_NAME},SERVER_PROTOCOL:${SERVER_PROTOCOL}</td>"
echo "    </tr>"
echo "  </tbody>"
echo "</table>"
echo "</body>"
echo "</html>"
Just to be sure the web application works, run the image on the worker node (without swarm):
# docker run -d -p 80:80 -v /nfs/docker/whoami/:/var/www/mysite --name httpd swarm1.chmod666.org:5000/ppc64le/httpd
a75095b23bc31715ac95d9bb57a7a161b06ef3e6a0f4eb4ed708cf60d03c0e5d
# curl localhost/whoami/
[..]
hostname  a75095b23bc3
uname     Linux a75095b23bc3 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
ip        172.17.0.2
date      Wed Feb 22 14:33:59 UTC 2017
# docker rm a75095b23bc3 -f
a75095b23bc3

We are now ready to create a swarm service with our application. Verify the swarm cluster health and create a service in global mode. The global mode means swarm will create one docker container per node.
node1(manager)# docker node ls
ID                           HOSTNAME             STATUS  AVAILABILITY  MANAGER STATUS
813ompnl4c7f4ilkxqy0faj59 *  swarm1.chmod666.org  Ready   Active        Leader
bh7mhv3hg1x98b9j6lu00c3ef    swarm2.chmod666.org  Ready   Active
node1(manager)# docker service create --name whoami --mount type=bind,source=/nfs/docker/whoami/,destination=/var/www/mysite --mode global --publish 80:80 --network mynet swarm1.chmod666.org:5000/ppc64le/httpd
7l8c4stcl3zgiijf6oe2hvu1r
node1(manager)# docker service ls
ID            NAME    REPLICAS  IMAGE                                   COMMAND
7l8c4stcl3zg  whoami  global    swarm1.chmod666.org:5000/ppc64le/httpd

Verify there is one container available on each swarm node:
node1(manager)# docker service ps 7l8c4stcl3zg
ID                         NAME       IMAGE                                   NODE                 DESIRED STATE  CURRENT STATE          ERROR
2sa543un5v4hpvwgouyorhndm  whoami     swarm1.chmod666.org:5000/ppc64le/httpd  swarm1.chmod666.org  Running        Running 2 minutes ago
5061eogr8wimt9al6uss1wet2  \_ whoami  swarm1.chmod666.org:5000/ppc64le/httpd  swarm2.chmod666.org  Running        Running 2 minutes ago

I'm now accessing the webservice with both dns names (swarm1 and swarm2) and verifying I'm reaching a different container with each http request:
- When accessing swarm1.chmod666.org, I see a docker hostname and ip.
- When accessing swarm2.chmod666.org, I see a docker hostname and ip different from the first ones.
You will now say: "OK, that's great! But that's not redundant." In fact it is, because swarm embeds a very cool feature called swarm mesh routing. When you create a service in the swarm cluster with the --publish option, each swarm node listens on this port, even the nodes on which the containers of the service are not running; if you access any node on this port you will reach a container. By this I mean that by accessing swarm1.chmod666.org you may reach a container running on swarm2.chmod666.org, and the next http request may reach any other container of the service. Let's try creating a service with 10 replicas and access the same node over and over again.
node1(manager)# docker service create --name whoami --mount type=bind,source=/nfs/docker/whoami/,destination=/var/www/mysite --replicas 10 --publish 80:80 --network mynet swarm1.chmod666.org:5000/ppc64le/httpd
el7nyiuga1vxtfgzktpfahucw
node1(manager)# docker service ls
ID            NAME    REPLICAS  IMAGE                                   COMMAND
el7nyiuga1vx  whoami  10/10     swarm1.chmod666.org:5000/ppc64le/httpd
node2(worker)# docker service ps el7nyiuga1vx
ID                         NAME       IMAGE                                   NODE                 DESIRED STATE  CURRENT STATE           ERROR
bed84pmdjy6c0758g3r52mmsq  whoami.1   swarm1.chmod666.org:5000/ppc64le/httpd  swarm2.chmod666.org  Running        Running 46 seconds ago
dgdj4ygqdr476e156osk8dd95  whoami.2   swarm1.chmod666.org:5000/ppc64le/httpd  swarm2.chmod666.org  Running        Running 46 seconds ago
ba2ni51fo96eo6c4qfir90t7q  whoami.3   swarm1.chmod666.org:5000/ppc64le/httpd  swarm2.chmod666.org  Running        Running 48 seconds ago
9qkwigxkrqje48do39ru3cv2h  whoami.4   swarm1.chmod666.org:5000/ppc64le/httpd  swarm2.chmod666.org  Running        Running 40 seconds ago
3hgwwdly23ovafv1g0jvegu16  whoami.5   swarm1.chmod666.org:5000/ppc64le/httpd  swarm2.chmod666.org  Running        Running 43 seconds ago
0f3y844yqfbll2lmb954ro3cy  whoami.6   swarm1.chmod666.org:5000/ppc64le/httpd  swarm1.chmod666.org  Running        Running 51 seconds ago
0955dz84rv4gpb4oqv8libahd  whoami.7   swarm1.chmod666.org:5000/ppc64le/httpd  swarm1.chmod666.org  Running        Running 42 seconds ago
c05hrs9h0mm6ghxxdxc1afco9  whoami.8   swarm1.chmod666.org:5000/ppc64le/httpd  swarm1.chmod666.org  Running        Running 50 seconds ago
03qcbiuxlk13p60we0ke6vqka  whoami.9   swarm1.chmod666.org:5000/ppc64le/httpd  swarm1.chmod666.org  Running        Running 54 seconds ago
0otgw4ncka81hlxgyt82z36zj  whoami.10  swarm1.chmod666.org:5000/ppc64le/httpd  swarm1.chmod666.org  Running        Running 48 seconds ago
node1(manager)# docker ps
CONTAINER ID  IMAGE                                          COMMAND                 CREATED        STATUS        PORTS                   NAMES
a25404371765  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  5 minutes ago  Up 4 minutes  80/tcp                  whoami.7.0955dz84rv4gpb4oqv8libahd
07c38a306a68  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  5 minutes ago  Up 4 minutes  80/tcp                  whoami.4.9qkwigxkrqje48do39ru3cv2h
e88a8c8a3639  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  5 minutes ago  Up 5 minutes  80/tcp                  whoami.8.c05hrs9h0mm6ghxxdxc1afco9
f73a84cc6622  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  5 minutes ago  Up 5 minutes  80/tcp                  whoami.1.bed84pmdjy6c0758g3r52mmsq
757be5ec73a4  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  5 minutes ago  Up 5 minutes  80/tcp                  whoami.3.ba2ni51fo96eo6c4qfir90t7q
51ad253616be  registry                                       "./entrypoint.sh /etc"  45 hours ago   Up 2 hours    0.0.0.0:5000->5000/tcp  registry
node2(worker)# docker ps
CONTAINER ID  IMAGE                                          COMMAND                 CREATED        STATUS        PORTS   NAMES
f015b0da7f2e  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  6 minutes ago  Up 5 minutes  80/tcp  whoami.5.3hgwwdly23ovafv1g0jvegu16
4b7452245406  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  6 minutes ago  Up 5 minutes  80/tcp  whoami.10.0otgw4ncka81hlxgyt82z36zj
71722a2d7f38  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  6 minutes ago  Up 5 minutes  80/tcp  whoami.6.0f3y844yqfbll2lmb954ro3cy
01bc73d6fdf7  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  6 minutes ago  Up 5 minutes  80/tcp  whoami.9.03qcbiuxlk13p60we0ke6vqka
438c0d553550  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  6 minutes ago  Up 5 minutes  80/tcp  whoami.2.dgdj4ygqdr476e156osk8dd95

Let's now try accessing the service.
I’m modifying my whoami.ksh just to print the information I need (the hostname).
# cat /nfs/docker/whoami/whoami.ksh
#!/usr/bin/bash
hostname=$(hostname)
uname=$(uname -a)
ip=$(hostname -I)
date=$(date)
env=$(env)
echo ""
echo "hostname: ${hostname}"
echo "ip: ${ip}"
echo "uname:${uname}"
# for i in $(seq 1 10) ; do echo "[CALL $1]" ; curl -s http://swarm1.chmod666.org/whoami/ ; done
[CALL ]
hostname: f015b0da7f2e
ip: 10.255.0.14 10.255.0.2 172.18.0.7 10.0.0.12 10.0.0.2
uname:Linux f015b0da7f2e 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 4b7452245406
ip: 10.255.0.11 10.255.0.2 172.18.0.6 10.0.0.9 10.0.0.2
uname:Linux 4b7452245406 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 438c0d553550
ip: 10.0.0.5 10.0.0.2 172.18.0.4 10.255.0.7 10.255.0.2
uname:Linux 438c0d553550 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 71722a2d7f38
ip: 10.255.0.10 10.255.0.2 172.18.0.5 10.0.0.8 10.0.0.2
uname:Linux 71722a2d7f38 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 01bc73d6fdf7
ip: 10.255.0.6 10.255.0.2 172.18.0.3 10.0.0.4 10.0.0.2
uname:Linux 01bc73d6fdf7 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: a25404371765
ip: 10.255.0.9 10.255.0.2 172.18.0.7 10.0.0.7 10.0.0.2
uname:Linux a25404371765 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 07c38a306a68
ip: 10.255.0.8 10.255.0.2 172.18.0.6 10.0.0.6 10.0.0.2
uname:Linux 07c38a306a68 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: e88a8c8a3639
ip: 10.255.0.4 10.255.0.2 172.18.0.5 10.0.0.3 10.0.0.2
uname:Linux e88a8c8a3639 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: f73a84cc6622
ip: 10.255.0.12 10.255.0.2 172.18.0.4 10.0.0.10 10.0.0.2
uname:Linux f73a84cc6622 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux
[CALL ]
hostname: 757be5ec73a4
ip: 10.255.0.13 10.255.0.2 172.18.0.3 10.0.0.11 10.0.0.2
uname:Linux 757be5ec73a4 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 2015 ppc64le ppc64le ppc64le GNU/Linux

I'm here doing ten calls and I'm reaching a different docker container on each call; I can see that by checking the hostname. It shows that the routing mesh is working correctly.
HAproxy
To access the service through a single ip, I'm installing an haproxy server on another host (an Ubuntu ppc64le host) and pointing its configuration at my swarm nodes. The haproxy will check the availability of the web application and round-robin the requests between the two docker hosts. If one of the docker swarm nodes fails, all requests will be sent to the remaining live node.
# apt-get install haproxy
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  liblua5.3-0
Suggested packages:
[..]
Setting up haproxy (1.6.3-1ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu13) ...
Processing triggers for ureadahead (0.100.0-19) ...
# cat /etc/haproxy.conf
frontend http_front
   bind *:80
   stats uri /haproxy?stats
   default_backend http_back

backend http_back
   balance roundrobin
   server swarm1.chmod666.org 10.10.10.48:80 check
   server swarm2.chmod666.org 10.10.10.49:80 check
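Before starting it, the configuration can be validated and the service enabled (assuming the config lives at the package default /etc/haproxy/haproxy.cfg):

# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
# systemctl enable haproxy && systemctl restart haproxy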
I'm again changing the whoami.ksh script just to print the hostname. Then from another host I'm running 10000 http requests against the public ip of my haproxy server and counting how many requests landed on each container. By doing this we can see two things: the haproxy service correctly spreads the requests across the swarm nodes (I'm reaching ten different containers), and the swarm mesh routing works, all the requests being almost equally spread among the running containers. You can see the session spread in the haproxy stats page and in the curl example:

# cat /nfs/docker/whoami/whoami.ksh
#!/usr/bin/bash
hostname=$(hostname)
uname=$(uname -a)
ip=$(hostname -I)
date=$(date)
env=$(env)
echo ""
echo "${hostname}"
# for i in $(seq 1 10000) ; do curl -s http://10.10.10.50/whoami/ ; done | sort | uniq -c
    999 01bc73d6fdf7
   1003 07c38a306a68
    993 438c0d553550
    998 4b7452245406
   1006 71722a2d7f38
    996 757be5ec73a4
   1004 a25404371765
   1004 e88a8c8a3639
    995 f015b0da7f2e
   1002 f73a84cc6622

I'm finally shutting down one of the worker nodes. We can see two things here. The service was created with 10 replicas: shutting down one node results in the creation of 5 more containers on the remaining node. And by checking the haproxy stats page we see that one node is detected as down, so all the requests are sent to the remaining one. We have our highly available docker service (to be totally redundant we would also need the haproxy itself running on two different hosts with a "floating" ip, which I'll not explain here):
# docker ps
CONTAINER ID  IMAGE                                          COMMAND                 CREATED             STATUS         PORTS                   NAMES
82fe21465b96  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  About a minute ago  Up 29 seconds  80/tcp                  whoami.5.2d0t99pjide4w7nenzrribjph
71a4c51460ef  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  About a minute ago  Up 21 seconds  80/tcp                  whoami.9.5f9qkx6t47vvjt8b9k5jhj79h
5830f0696cca  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  About a minute ago  Up 32 seconds  80/tcp                  whoami.6.eso8uwhx6ij2we2iabmzx3tdu
dbc2b731c547  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  About a minute ago  Up 16 seconds  80/tcp                  whoami.2.8tc8zoxrpdell4f4d8zsr0rlw
050aacdf8126  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  About a minute ago  Up 23 seconds  80/tcp                  whoami.10.ej8ahxzzp8bw3pybc6fib17qh
a25404371765  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  2 hours ago         Up 2 hours     80/tcp                  whoami.7.0955dz84rv4gpb4oqv8libahd
07c38a306a68  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  2 hours ago         Up 2 hours     80/tcp                  whoami.4.9qkwigxkrqje48do39ru3cv2h
e88a8c8a3639  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  2 hours ago         Up 2 hours     80/tcp                  whoami.8.c05hrs9h0mm6ghxxdxc1afco9
f73a84cc6622  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  2 hours ago         Up 2 hours     80/tcp                  whoami.1.bed84pmdjy6c0758g3r52mmsq
757be5ec73a4  swarm1.chmod666.org:5000/ppc64le/httpd:latest  "/usr/sbin/httpd -D F"  2 hours ago         Up 2 hours     80/tcp                  whoami.3.ba2ni51fo96eo6c4qfir90t7q
51ad253616be  registry                                       "./entrypoint.sh /etc"  2 days ago          Up 4 hours     0.0.0.0:5000->5000/tcp  registry

Conclusion
What we have reviewed in this blog post is pretty cool. The PowerSystems ecosystem is capable of doing the exact same things as the x86 one. Everything here is proven. PowerSystems are definitely ready to run Linux. The mighty RedHat and the incredible Ubuntu both provide a viable way to enter the world of DevOps on PowerSystems. We no longer need to recompile everything or hunt for this or that package not available on Linux; the Ubuntu repository is huge, and I was super impressed by the variety of packages available that run on Power. A few days ago RedHat finally joined the OpenPower foundation, and I can assure you that this is big news. Maybe people still don't believe in the spread of PowerSystems, but things are slowly changing, and with the first OpenPower servers running on Power9 I can assure you (at least I want to believe) that things will change. Regarding Docker, I was/am a big x86 user of the solution; I run this blog and all my "personal" services on Docker, and I have to recognize that the ppc64le Linux distributions provide the exact same value as x86. Hire me if you want to do such things (DevOps on Power). They'll probably never do anything about Linux on Power in my company (I still have faith, as we have purchased 120 pairs of Power sockets of RedHat ppc64le).
Last words: sorry for not publishing more blog posts these days, but I'm not living the best part of my life, at work (nobody cares about what I'm doing; I'm just nothing …) and personally (various health problems for me and the people I love). Please accept my apologies.