Chapter 4. Docker Configuration and Development
4.0 Introduction
If you have read all the chapters so far, you have learned all the basics of using Docker. You can install the Docker engine, start and manage containers, create and share images, and you have a good understanding of container networking including networking across multiple hosts. This chapter will now look at more advanced Docker topics, first for developers and then for configuration.
Recipe 4.1 looks at how to configure the Docker engine, then Recipe 4.2 shows how to compile Docker from source. Recipe 4.3 presents how to run all the tests to verify your build and Recipe 4.4 shows how to use this newly built binary instead of the official released Docker engine.
Developers might also want to look at the nsenter utility in Recipe 4.5. While not needed for using Docker, it is of use to better understand how Docker leverages Linux namespaces to create containers. Recipe 4.6 is a sneak peek at the underlying library used to manage containers. Originally called libcontainer, runc has been donated to the Open Container Initiative to be the seed source code to help drive a standard for container runtime and image format.
To dive deeper into configuration and how to access the Docker engine, Recipe 4.7 presents how to access Docker remotely and Recipe 4.8 introduces the application programming interface (API) exposed by Docker. The Docker client uses this API to manage containers. Accessing this API remotely and securely is described in Recipe 4.9, which shows how to set up TLS-based access to the Docker engine. To finish the configuration topics, Recipe 4.12 shows how to change the underlying storage driver that provides a union filesystem to support Docker images.
If you are a user of Docker, you will benefit from looking at Recipe 4.10 and Recipe 4.11. These two recipes present docker-py, a Python module to communicate with the Docker API. This is not the only client library available for Docker, but it provides an easy entry point to learn the API.
4.1 Managing and Configuring the Docker Daemon
Solution
Use the docker init script to manage the Docker daemon. On most Ubuntu/Debian-based systems, it is located in the /etc/init.d/docker file. Like most other init services, it can be managed via the service command. The Docker daemon runs as root:
# service docker status
docker start/running, process 2851
# service docker stop
docker stop/waiting
# service docker start
docker start/running, process 3119
The configuration file is located in /etc/default/docker. On Ubuntu systems, all configuration variables are commented out. The /etc/default/docker file looks like this:
# Docker Upstart and SysVinit configuration file

# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"

# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"

# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
For example, if you wanted to configure the daemon to listen on a TCP socket to enable remote API access, you would edit this file as explained in Recipe 4.7.
Discussion
On systemd-based systems like Ubuntu 15.04 or CentOS 7, you need to modify the systemd unit file for Docker. It can be a drop-in file in the /etc/systemd/system/docker.service.d directory or the /etc/systemd/system/docker.service file itself. For more details on Docker daemon configuration using systemd, see this article from the Docker documentation.
Finally, although you can start Docker as a Linux daemon, you can also start it interactively by using the docker -d command or, starting with Docker 1.8, the docker daemon command. You would then pass the options directly on the command line. Check the help to see what options can be set:
$ docker daemon --help

Usage: docker daemon [OPTIONS]

Enable daemon mode

  --api-cors-header=     Set CORS headers in the remote API
  -b, --bridge=          Attach containers to a network bridge
  --bip=                 Specify network bridge IP
  -D, --debug=false      Enable debug mode
  --default-gateway=     Container default gateway IPv4 address
  ...
4.2 Compiling Your Own Docker Binary from Source
Solution
Use Git to clone the Docker repository from GitHub and use a Makefile to create your own binary.
Docker is built within a Docker container. In a Docker host, you can clone the Docker repository and use the Makefile rules to build a new binary.
This binary is obtained by running a privileged Docker container. The Makefile contains several targets, including a binary target:
$ cat Makefile
...
default: binary

all: build
	$(DOCKER_RUN_DOCKER) hack/make.sh

binary: build
	$(DOCKER_RUN_DOCKER) hack/make.sh binary
...
Therefore, it is as easy as sudo make binary:
Tip
The hack directory in the root of the Docker repository has been moved to the project directory. Therefore, the make.sh script is in fact at project/make.sh. It uses scripts for each bundle that are stored in the project/make/ directory.
$ sudo make binary
...
docker run --rm -it --privileged \
    -e BUILDFLAGS -e DOCKER_CLIENTONLY -e DOCKER_EXECDRIVER \
    -e DOCKER_GRAPHDRIVER -e TESTDIRS -e TESTFLAGS \
    -e TIMEOUT \
    -v "/tmp/docker/bundles:/go/src/github.com/docker/docker/bundles" \
    "docker:master" hack/make.sh binary
---> Making bundle: binary (in bundles/1.9.0-dev/binary)
Created binary: /go/src/github.com/docker/docker/bundles/1.9.0-dev/binary/docker-1.9.0-dev
You see that the binary target of the Makefile launches a privileged Docker container from the docker:master image, with a set of environment variables, a volume mount, and a call to the hack/make.sh binary command.
With the current state of Docker development, the new binary will be located in the bundles/1.9.0-dev/binary/ directory. The version number might differ, depending on the state of Docker releases.
Discussion
To ease this process, you can clone the repository that accompanies this cookbook. A Vagrantfile is provided that starts an Ubuntu 14.04 virtual machine, installs the latest stable Docker release, and clones the Docker repository:
$ git clone https://github.com/how2dock/docbook
$ cd docbook/ch04/compile/
$ vagrant up
Once the machine is up, ssh to it and go to the /tmp/docker directory, which should have been created during the Vagrant provisioning process. Then run make.
The first time you run the Makefile, the stable Docker version installed on the machine will pull the base image used by the Docker build process (ubuntu:14.04) and then build the docker:master image defined in /tmp/docker/Dockerfile. This can take a bit of time the first time you do it:
$ vagrant ssh
$ cd /tmp/docker
$ sudo make binary
docker build -t "docker:master" .
Sending build context to Docker daemon 55.95 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:14.04
...
Once this completes, you will have a new Docker binary:
$ cd bundles/1.9.0-dev/binary/
$ ls
docker  docker-1.9.0-dev  docker-1.9.0-dev.md5  docker-1.9.0-dev.sha256
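The .md5 and .sha256 files sitting next to the binary let you check the build artifact. As a sketch, checksum verification can be done from Python with the standard library; sha256_of is a hypothetical helper, and the file names are the ones shown above:

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file in 1 MB chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Compare against the checksum file shipped next to the binary:
#   expected = open("docker-1.9.0-dev.sha256").read().split()[0]
#   assert sha256_of("docker-1.9.0-dev") == expected
```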
See Also
- How to contribute to Docker on GitHub
4.3 Running the Docker Test Suite for Docker Development
Solution
Use the Makefile test target to run the four sets of tests present in the Docker source. Alternatively, pick only the set of tests that matters to you:
$ cat Makefile
...
test: build
	$(DOCKER_RUN_DOCKER) hack/make.sh binary cross \
	test-unit test-integration \
	test-integration-cli test-docker-py

test-unit: build
	$(DOCKER_RUN_DOCKER) hack/make.sh test-unit

test-integration: build
	$(DOCKER_RUN_DOCKER) hack/make.sh test-integration

test-integration-cli: build
	$(DOCKER_RUN_DOCKER) hack/make.sh binary test-integration-cli

test-docker-py: build
	$(DOCKER_RUN_DOCKER) hack/make.sh binary test-docker-py
...
You can see in the Makefile that you can choose which set of tests to run. If you run all of them with make test, the binary will also be built:
$ sudo make test
....
---> Making bundle: test-docker-py (in bundles/1.9.0-dev/test-docker-py)
+++ exec docker daemon --debug --storage-driver vfs \
    --exec-driver native \
    --pidfile /go/src/github.com/docker/docker/bundles/1.9.0-dev/test-docker-py/docker.pid
........................................................
----------------------------------------------------------------------
Ran 56 tests in 75.366s

OK
Depending on test coverage, if all the tests pass, you have some confidence that your new binary works.
See Also
- Official Docker development environment documentation
4.4 Replacing Your Current Docker Binary with a New One
Problem
You have built a new Docker binary and run the unit and integration tests described in Recipe 4.2 and Recipe 4.3. Now you would like to use this new binary on your host.
Solution
Start from within the virtual machine set up in Recipe 4.2.
Stop the current Docker daemon. On Ubuntu 14.04, edit the /etc/default/docker file to uncomment the DOCKER variable, which defines where to find the binary, and set it to DOCKER="/usr/local/bin/docker". Copy the new binary to /usr/local/bin/docker, and finally, restart the Docker daemon:
$ pwd
/tmp/docker
$ sudo service docker stop
docker stop/waiting
$ sudo vi /etc/default/docker
$ sudo cp bundles/1.8.0-dev/binary/docker-1.8.0-dev /usr/local/bin/docker
$ sudo service docker restart
stop: Unknown instance:
$ docker version
Client:
 Version:      1.8.0-dev
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   3e596da
 Built:        Tue Aug 11 16:51:56 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.0-dev
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   3e596da
 Built:        Tue Aug 11 16:51:56 UTC 2015
 OS/Arch:      linux/amd64
You are now using the latest Docker version from the master development branch (i.e., master branch at Git commit 3e596da at the time of this writing).
Discussion
The Docker bootstrap script used in the Vagrant virtual machine provisioning installs the latest stable version of Docker with the following:
sudo curl -sSL https://get.docker.com/ubuntu/ | sudo sh
This puts the Docker binary in /usr/bin/docker, which may conflict with your new binary installation. Either remove it or replace it with the new one if you see any conflicts when running docker version.
4.5 Using nsenter
Solution
Use nsenter. Starting with Docker 1.3, docker exec allows you to easily enter a running container, so there is no need to do things like running an SSH server and exposing port 22 or using the now-deprecated attach command.
nsenter was created to solve the problem of entering the namespaces (hence, nsenter) of a container prior to the availability of docker exec. Nonetheless, it is a useful tool that merits a short recipe in this book.
Let’s start a container that sleeps for the duration of this recipe, and for completeness, let’s enter the running container with docker exec:
$ docker pull ubuntu:14.04
$ docker run -d --name sleep ubuntu:14.04 sleep 300
$ docker exec -ti sleep bash
root@db9675525fab:/#
nsenter gives the same result. Conveniently, it is available as an image on Docker Hub. Pull the image, run the container, and use nsenter:
$ docker pull jpetazzo/nsenter
$ sudo docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
At this point, it is useful to have a look at the Dockerfile for nsenter and check the CMD option. You will see that it runs a script called installer. This small Bash script does nothing but detect whether a mount point exists at /target. If it does, it copies a script called docker-enter and a binary called nsenter to that mount point. In the docker run command, since you specified a volume (i.e., -v /usr/local/bin:/target), running the container has the effect of copying nsenter onto your local machine. Quite a nice trick with a powerful effect:
$ which docker-enter nsenter
/usr/local/bin/docker-enter
/usr/local/bin/nsenter
Note
To copy the files into /usr/local/bin, I run the container with sudo. If you do not want to use this mount-point convenience, you can copy the files locally with a command like this:
$ docker run --rm jpetazzo/nsenter cat /nsenter \
    > /tmp/nsenter && chmod +x /tmp/nsenter
You are now ready to enter the container. You can pass a command if you do not want an interactive shell in the container:
$ docker-enter sleep
root@db9675525fab:/#
$ docker-enter sleep hostname
db9675525fab
docker-enter is nothing more than a wrapper around nsenter. You could use nsenter directly after finding the process ID of the container with docker inspect, like so:
$ docker inspect --format {{.State.Pid}} sleep
9302
$ sudo nsenter --target 9302 --mount --uts --ipc --net --pid
root@db9675525fab:/#
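Under the hood, nsenter is a thin wrapper over the setns(2) system call, which attaches the calling process to a namespace file found under /proc/<pid>/ns/. As an illustration only (this is not part of the Docker tooling), the same call can be made from Python via ctypes; enter_namespace is a hypothetical helper and, like nsenter, it must run as root:

```python
import ctypes
import os

# Load libc to reach setns(2); use_errno lets us read errno on failure.
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def enter_namespace(pid, ns="net"):
    """Join one namespace of process <pid>, as nsenter does (requires root)."""
    fd = os.open("/proc/%d/ns/%s" % (pid, ns), os.O_RDONLY)
    try:
        if libc.setns(fd, 0) != 0:  # 0 = accept any namespace type
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)

# As root, with the container PID found via docker inspect:
#   enter_namespace(9302, "net")
# Sockets opened afterward by this process live in the container's
# network namespace.
```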
Discussion
Starting with Docker 1.3, you do not need to use nsenter; use docker exec instead:
$ docker exec -h

Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Run a command in a running container

  -d, --detach=false         Detached mode: run command in the background
  --help=false               Print usage
  -i, --interactive=false    Keep STDIN open even if not attached
  -t, --tty=false            Allocate a pseudo-TTY
See Also
- GitHub page for Jerome Petazzoni's nsenter repository
4.6 Introducing runc
Solution
The Open Container Project (OCP) was established in June 2015, and the specifications coming from that project are not done yet. However, Docker Inc. donated its libcontainer codebase as an early implementation of a standard runtime for containers. This runtime is called runc.
Warning
The OCP was just launched, so the specifications are not out yet. Expect many changes until the specifications and reference implementations are considered stable and official.
This recipe will give you a quick feel for runc, including instructions to compile the Go codebase. As always, I prepared a Vagrant box that gives you a Docker host, a Go 1.4.2 installation, and a clone of the runc code. To get started with this box, use the following:
$ git clone https://github.com/how2dock/docbook.git
$ cd docbook/ch04/runc
$ vagrant up
$ vagrant ssh
Once you are on a terminal inside this VM, you need to grab all the dependencies of runc by using the go get command. Once this completes, you can build and install runc. Verify that you have a working runc in your path:
Warning
Expect a change in the build process sometime soon. Most likely the build will use Docker itself.
$ cd go/src
$ go get github.com/opencontainers/runc
$ cd github.com/opencontainers/runc/
$ make
$ sudo make install
$ runc -v
runc version 0.2
To run a container with runc, you need a root filesystem describing your container image. The easiest way to get one is to use Docker itself and the docker export command. So let’s pull a Docker image, start a container, and export it to a tarball:
$ cd ~
$ mkdir foobar
$ cd foobar
$ docker run --name foobar -d ubuntu:14.04 sleep 30
$ docker export -o foobar.tar foobar
$ sudo tar -xf foobar.tar
$ rm foobar.tar
To run this container, you need to generate a configuration file. This is most easily done with the runc spec command. To get a container started quickly, you need to make only one change: the location of the root filesystem. Edit the path to it in the JSON file; an excerpt is shown here:
$ runc spec > config.json
$ vi config.json
...
    "root": {
        "path": "./",
        "readonly": true
...
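Since the spec is plain JSON, this edit can also be scripted. A small sketch, assuming the config.json layout shown above; set_root is a hypothetical helper, not part of runc:

```python
import json

def set_root(spec, path, readonly=True):
    """Return a copy of a runc spec with the rootfs location replaced."""
    spec = dict(spec)
    spec["root"] = {"path": path, "readonly": readonly}
    return spec

# After `runc spec > config.json`:
#   with open("config.json") as f:
#       spec = json.load(f)
#   with open("config.json", "w") as f:
#       json.dump(set_root(spec, "./"), f, indent=4)
```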
You are now ready to start your container with runc as root, and you will get a shell inside your container:
$ sudo runc
#
This is the low-level plumbing of Docker and what should evolve to become the Open Container standard runtime. You can now explore the configuration file and see how you can define a start-up command for your container, as well as a network namespace and various volume mounts.
Discussion
The Open Container Project is good news. In late 2014, CoreOS had started developing an open standard for container images, including a new trust mechanism, appc. CoreOS also developed a container runtime implementation for running appc-based containers. As part of the OCP, appc developers will help develop the new runc specification. This will avoid fragmentation in the container image format and runtime implementation.
If you look at an application container image (i.e., ACI) manifest, you will see strong similarities with the configuration file obtained from runc spec in the preceding solution section. You might see some of the rkt implementation features being ported back into runc.
4.7 Accessing the Docker Daemon Remotely
Solution
Switch the listening protocol that the Docker daemon is using by editing the configuration file in /etc/default/docker and issue a remote API call.
In /etc/default/docker, add a line that sets DOCKER_OPTS so that the daemon listens on a TCP socket on port 2375. Then restart the Docker daemon with sudo service docker restart:
$ cat /etc/default/docker
...
# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
DOCKER_OPTS="-H tcp://127.0.0.1:2375"
...
You will then be able to use the Docker client by specifying a host accessed using TCP:
$ docker -H tcp://127.0.0.1:2375 images
REPOSITORY   TAG     IMAGE ID       CREATED      VIRTUAL SIZE
ubuntu       14.04   04c5d3b7b065   6 days ago   192.7 MB
Warning
This method is unencrypted and unauthenticated. You should not use this on a publicly routable host. This would expose your Docker daemon to anyone. You will need to properly secure your Docker daemon if you want to do this in production. (See Recipe 4.9.)
Discussion
With the Docker daemon listening over TCP, you can now use curl to make API calls and explore the responses. This is a good way to learn the Docker remote API:
$ curl -s http://127.0.0.1:2375/images/json | python -m json.tool
[
    {
        "Created": 1418673175,
        "Id": "04c5d3b7b0656168630d3ba35d8889bdaafcaeb32bfbc47e7c5d35d2",
        "ParentId": "d735006ad9c1b1563e021d7a4fecfd384e2a1c42e78d8261b83d6271",
        "RepoTags": [
            "ubuntu:14.04"
        ],
        "Size": 0,
        "VirtualSize": 192676726
    }
]
We pipe the output of the curl command through python -m json.tool to make the returned JSON object readable. The -s option suppresses curl's progress information.
4.8 Exploring the Docker Remote API to Automate Docker Tasks
Problem
After being able to access the Docker daemon remotely (see Recipe 4.7), you want to explore the Docker remote API in order to write programs. This will allow you to automate Docker tasks.
Solution
The Docker remote API is fully documented. It is currently on version 1.21. It is a REST API, in the sense that it manipulates resources (e.g., images and containers) through HTTP calls using various HTTP methods (e.g., GET, POST, DELETE). The attach and pull APIs are not purely REST, as noted in the documentation.
You already saw how to make the Docker daemon listen on a TCP socket (Recipe 4.7) and use curl to make API calls. Tables 4-1 and 4-2 show a summary of the remote API calls that are available.
Table 4-1. Remote API calls for containers

| Action on containers | HTTP method | URI |
|---|---|---|
| List containers | GET | /containers/json |
| Create container | POST | /containers/create |
| Inspect a container | GET | /containers/(id)/json |
| Start a container | POST | /containers/(id)/start |
| Stop a container | POST | /containers/(id)/stop |
| Restart a container | POST | /containers/(id)/restart |
| Kill a container | POST | /containers/(id)/kill |
| Pause a container | POST | /containers/(id)/pause |
| Remove a container | DELETE | /containers/(id) |
Table 4-2. Remote API calls for images

| Action on images | HTTP method | URI |
|---|---|---|
| List images | GET | /images/json |
| Create an image | POST | /images/create |
| Tag an image into a repository | POST | /images/(name)/tag |
| Remove an image | DELETE | /images/(name) |
| Search images | GET | /images/search |
For example, let’s download the Ubuntu 14.04 image from the public registry (a.k.a. Docker Hub), create a container from that image, and start it. Remove it and then remove the image. Note that in this toy example, running the container will cause it to exit immediately because you are not passing any commands:
$ curl -X POST -d "fromImage=ubuntu" -d "tag=14.04" \
    http://127.0.0.1:2375/images/create
$ curl -X POST -H 'Content-Type: application/json' \
    -d '{"Image":"ubuntu:14.04"}' \
    http://127.0.0.1:2375/containers/create
{"Id":"6b6bd46f483a5704d4bced62ff58a0ac5758fb0875ec881fa68f0e...",\
"Warnings":null}
$ docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS ...
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS ...
6b6bd46f483a   ubuntu:14.04   "/bin/bash"   16 seconds ago ...
$ curl -X POST http://127.0.0.1:2375/containers/6b6bd46f483a/start
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED ...
6b6bd46f483a   ubuntu:14.04   "/bin/bash"   About a minute ago ...
Now let’s clean things up:
$ curl -X DELETE http://127.0.0.1:2375/containers/6b6bd46f483a
$ curl -X DELETE http://127.0.0.1:2375/images/04c5d3b7b065
[{"Untagged":"ubuntu:14.04"}
,{"Deleted":"04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2"}
,{"Deleted":"d735006ad9c1b1563e021d7a4fecfd75ed36d4384e2a1c42e78d8261b83d6271"}
,{"Deleted":"70c8faa62a44b9f6a70ec3a018ec14ec95717ebed2016430e57fec1abc90a879"}
,{"Deleted":"c7b7c64195686444123ef370322b5270b098c77dc2d62208e8a9ce28a11a63f9"}
,{"Deleted":"511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158"}]
$ docker ps -a
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS ...
$ docker images
REPOSITORY   TAG   IMAGE ID   CREATED   VIRTUAL SIZE
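This kind of sequence is easy to automate. The following is a minimal sketch using only the Python standard library; build_request and run_container are hypothetical helpers, and the daemon is assumed to listen on 127.0.0.1:2375 as in Recipe 4.7:

```python
import json
from urllib.request import Request, urlopen

BASE = "http://127.0.0.1:2375"  # assumes the TCP socket from Recipe 4.7

def build_request(method, path, body=None):
    """Prepare an HTTP request for a remote API call without sending it."""
    data = json.dumps(body).encode() if body is not None else None
    req = Request(BASE + path, data=data, method=method)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    return req

def run_container(image, cmd):
    """POST /containers/create, then start the container; return its ID."""
    with urlopen(build_request("POST", "/containers/create",
                               {"Image": image, "Cmd": cmd})) as resp:
        cid = json.loads(resp.read().decode())["Id"]
    urlopen(build_request("POST", "/containers/%s/start" % cid)).close()
    return cid

# With a daemon listening on 2375:
#   cid = run_container("ubuntu:14.04", ["sleep", "30"])
```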
Tip
After enabling remote API access, you can set the DOCKER_HOST variable to the daemon's TCP endpoint. This relieves you from passing it to the docker command with the -H option. For example, instead of docker -H tcp://127.0.0.1:2375 ps, you can export DOCKER_HOST=tcp://127.0.0.1:2375 and then simply use docker ps.
Discussion
Although you can use curl or write your own client, existing Docker clients like docker-py (see Recipe 4.10) can ease calling the API.
The list of APIs presented in Table 4-1 and Table 4-2 is not exhaustive, and you should check the complete API documentation for all API calls, query parameters, and response examples.
4.9 Securing the Docker Daemon for Remote Access
Solution
Set up TLS-based access to your Docker daemon. This will use public-key cryptography to encrypt and authenticate communication between a Docker client and the Docker daemon that you have set up with TLS.
The basic steps to test this security feature are described on the Docker website. The documentation shows how to create your own certificate authority (CA) and use it to sign server and client certificates. In a properly set up infrastructure, you would instead request server certificates from the CA that you use routinely.
To conveniently test this TLS setup, I created an image containing a script that creates the CA and the server and client certificates and keys. You can use this image to create a container and generate all the needed files.
You start with an Ubuntu 14.04 machine, running the latest Docker version (see Recipe 1.1). Download the image and start a container. You will need to mount a volume from your host and bind mount it to /tmp/ca inside the Docker container. You will also need to pass the hostname as an argument when running the container (in the following example, <hostname>). Once the container has run, all CA, server, and client keys and certificates will be available in your working directory:
$ docker pull runseb/dockertls
$ docker run -ti -v $(pwd):/tmp/ca runseb/dockertls <hostname>
$ ls
cakey.pem  ca.pem  ca.srl  clientcert.pem  client.csr  clientkey.pem
extfile.cnf  makeca.sh  servercert.pem  server.csr  serverkey.pem
Stop the running Docker daemon. Create an /etc/docker directory and a ~/.docker directory. Copy the CA, server key, and server certificates to /etc/docker. Copy the CA, client key, and certificate to ~/.docker:
$ sudo service docker stop
$ sudo mkdir /etc/docker
$ mkdir ~/.docker
$ sudo cp {ca,servercert,serverkey}.pem /etc/docker
$ cp ca.pem ~/.docker/
$ cp clientkey.pem ~/.docker/key.pem
$ cp clientcert.pem ~/.docker/cert.pem
Edit the /etc/default/docker configuration file (you need to be root) to specify DOCKER_OPTS (replace test with your own hostname):
DOCKER_OPTS="-H tcp://<test>:2376 --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/servercert.pem \
    --tlskey=/etc/docker/serverkey.pem"
Then restart the Docker service with sudo service docker restart and try to connect to the Docker daemon:
$ docker -H tcp://test:2376 --tlsverify images
REPOSITORY          TAG      IMAGE ID       CREATED          VIRTUAL SIZE
runseb/dockertls    latest   5ed60e0f6a7c   17 minutes ago   214.7 MB
Discussion
Tip
The runseb/dockertls convenience image is automatically built from the https://github.com/how2dock/docbook/ch04/tls Dockerfile. Check it out.
By setting a few environment variables (DOCKER_HOST and DOCKER_TLS_VERIFY), you can easily configure the TLS connection from the CLI:
$ export DOCKER_HOST=tcp://test:2376
$ export DOCKER_TLS_VERIFY=1
$ docker images
REPOSITORY          TAG      IMAGE ID       CREATED          VIRTUAL SIZE
runseb/dockertls    latest   5ed60e0f6a7c   19 minutes ago   214.7 MB
You can still use curl as discussed in Recipe 4.7, but you need to specify the client key and certificate:
$ curl --insecure --cert ~/.docker/cert.pem --key ~/.docker/key.pem \
    -s https://test:2376/images/json | python -m json.tool
[
    {
        "Created": 1419280147,
        "Id": "5ed60e0f6a7ce3df3614d20dcadf2e4d43f4054da64d52709c1559ac",
        "ParentId": "138f848eb669500df577ca5b7354cef5e65b3c728b0c241221c611b1",
        "RepoTags": [
            "runseb/dockertls:latest"
        ],
        "Size": 0,
        "VirtualSize": 214723529
    }
]
Note that you used the --insecure curl option, because you created your own certificate authority. By default, curl checks certificates against the CAs contained in the default CA bundle installed on your server. If you were to get server and client keys and certificates from a trusted CA listed in the default CA bundle, you would not have to make an --insecure connection. However, this does not mean that the connection is not properly using TLS.
4.10 Using docker-py to Access the Docker Daemon Remotely
Solution
Install the docker-py Python module with pip. In a Python script or interactive shell, create a connection to a remote Docker daemon and start making API calls.
Note
Although this recipe is about docker-py, it serves as an example that you can use your own client to communicate with the Docker daemon and you are not restricted to the default Docker client. Docker clients exist in several programming languages (e.g., Java, Groovy, Perl, PHP, Scala, Erlang, etc.), and you can write your own by studying the API reference.
docker-py is a Python client for Docker. It can be installed from source or simply fetched from the Python Package Index by using the pip command. First install python-pip, and then get the docker-py package. On Ubuntu 14.04:
$ sudo apt-get install python-pip
$ sudo pip install docker-py
The documentation tells you how to create a connection to the Docker daemon. Create an instance of the Client() class by passing it a base_url argument that specifies how the Docker daemon is listening. If it is listening locally on a Unix socket:
$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from docker import Client
>>> c = Client(base_url="unix://var/run/docker.sock")
>>> c.containers()
[]
If it is listening over TCP, as you set it up in Recipe 4.7:
$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from docker import Client
>>> c = Client(base_url="tcp://127.0.0.1:2375")
>>> c.containers()
[]
You can explore the methods available via docker-py by using help(c) at the Python prompt in an interactive session.
Discussion
The docker-py documentation covers a few basics. Of note is the integration with Boot2Docker (Recipe 1.7), which has a helper function to set up the connection. Since the latest Boot2Docker uses TLS for added security in accessing the Docker daemon, the setup is slightly different from what we just presented. In addition, there is currently a bug that is worth mentioning for those interested in testing docker-py.
Start Boot2Docker:
$ boot2docker start
Waiting for VM and Docker daemon to start...
................oooo
Started.
Writing /Users/sebgoa/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/sebgoa/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/sebgoa/.boot2docker/certs/boot2docker-vm/key.pem

To connect the Docker client to the Docker daemon, please set:
    export DOCKER_HOST=tcp://192.168.59.103:2376
    export DOCKER_CERT_PATH=/Users/sebgoa/.boot2docker/certs/boot2docker-vm
    export DOCKER_TLS_VERIFY=1
This returns a set of environment variables that need to be set. Boot2Docker provides a nice convenience utility, $(boot2docker shellinit), to set everything up. However, for docker-py to work, you need to edit your /etc/hosts file and set a different DOCKER_HOST. In /etc/hosts, add a line with the IP of boot2docker and its local DNS name (i.e., boot2docker), and then export DOCKER_HOST=tcp://boot2docker:2376. Then in a Python interactive shell:
>>> from docker.client import Client
>>> from docker.utils import kwargs_from_env
>>> client = Client(**kwargs_from_env())
>>> client.containers()
[]
4.11 Using docker-py Securely
Solution
After setting up a Docker host as explained in Recipe 4.9, verify that you can connect to the Docker daemon with TLS.
For example, assuming a host with the hostname dockerpytls and client certificate, key, and CA located in the default location at ~/.docker/, try this:
$ docker -H tcp://dockerpytls:2376 --tlsverify ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
Tip
Make sure you have installed docker-py:

sudo apt-get -y install python-pip
sudo pip install docker-py
Once this is successful, open a Python interactive shell and create a docker-py client instance by using the following configuration:
tls_config = docker.tls.TLSConfig(
    client_cert=('/home/vagrant/.docker/cert.pem',
                 '/home/vagrant/.docker/key.pem'),
    ca_cert='/home/vagrant/.docker/ca.pem')
client = docker.Client(base_url='https://host:2376', tls=tls_config)
This is equivalent to calling the Docker daemon on the command line as follows:
$ docker -H tcp://host:2376 --tlsverify \
    --tlscert /path/to/client-cert.pem \
    --tlskey /path/to/client-key.pem \
    --tlscacert /path/to/ca.pem ...
4.12 Changing the Storage Driver
Solution
This recipe illustrates how to change the storage backend used by Docker. You will start from an Ubuntu 14.04 installation with a 3.13 kernel and Docker 1.7 set up with Another Union File System (AUFS), and you will switch to the overlay filesystem. As before, you can grab a Vagrantfile from the repository that accompanies this book. Let’s do it:
$ git clone https://github.com/how2dock/docbook.git
$ cd docbook/ch04/overlay
$ vagrant up
$ vagrant ssh
$ uname -r
3.13.0-39-generic
$ docker info | grep Storage
Storage Driver: aufs
$ docker version | grep Server
Server version: 1.7.0
The overlay filesystem is available in the Linux kernel starting with 3.18. Therefore, to switch storage backends, you first need to upgrade your machine's kernel to 3.18 and restart:
$ cd /tmp
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.18-vivid/\
linux-headers-3.18.0-031800-generic_3.18.0-031800.201412071935_amd64.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.18-vivid/\
linux-headers-3.18.0-031800_3.18.0-031800.201412071935_all.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.18-vivid/\
linux-image-3.18.0-031800-generic_3.18.0-031800.201412071935_amd64.deb
$ sudo dpkg -i linux-headers-3.18.0-*.deb linux-image-3.18.0-*.deb
$ sudo update-grub
$ sudo shutdown -r now
Once the machine has restarted, connect to it again. You can now edit the Docker configuration file and specify overlay as the storage driver by using the -s option when starting the Docker daemon:
$ uname -r
3.18.0-031800-generic
$ sudo su
# service docker stop
# echo DOCKER_OPTS=\"-s overlay\" >> /etc/default/docker
# service docker start
You have now switched the storage backend for Docker:
$ docker info | grep Storage
Storage Driver: overlay
Warning
AUFS has been the default storage backend for 3.13–3.16 kernels, especially on Ubuntu systems. Overlay is in the upstream kernel starting with 3.18, whereas AUFS is not available upstream. Consider switching to overlay.
Discussion
Docker can use multiple storage backends to store images and container filesystems. The storage abstraction in Docker tries to minimize the space used by images and container filesystems by keeping them in layers and tracking only the modifications from layer to layer. It relies on union-based filesystems to accomplish this.
You can choose between the following storage backends:
- vfs
- devicemapper
- btrfs
- aufs
- overlay
Analyzing the differences in stability and performance of each of these solutions as Docker storage backends is beyond the scope of this recipe.
See Also
- Docker-supported filesystems