Docker data volumes and sharing data

Docker containers are temporary.

You can verify this by starting a basic ubuntu image and creating a test file or directory. As soon as you exit the container, all your changes disappear.
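For example, a sketch of that experiment (using the stock ubuntu image from Docker Hub):

```shell
docker run -it ubuntu /bin/bash   # start an interactive container
# inside the container:
touch /test_file                  # create a test file
exit                              # leaving the container stops it
```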

Let’s try to bring that container up again:
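A sketch of the commands involved (`<container_id>` comes from `docker ps -a`):

```shell
docker ps -a                      # list stopped containers and their IDs
docker start -ai <container_id>   # restart and reattach to the same container
# note: a plain `docker run -it ubuntu` would create a brand-new container
# from the image instead, without any of your earlier changes
```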

Thankfully, Docker provides a solution for keeping data persistent.

Docker Data Volumes

Docker data volumes are used when:

  1. You want to persist data, even through container restarts
  2. You want to share data between the host filesystem and the Docker container
  3. You want to share data with other Docker containers.

Data persistence

There’s no way to directly create a “data volume” in Docker, so instead we create a data volume container with a volume attached to it. For any other container that you then want to connect to this data volume container, use Docker’s --volumes-from option to grab the volume from this container and apply it to the current container.

Let’s create a data volume container to store our volume:
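A minimal sketch of such a container (the name datacontainer and the /data mount point follow the description below):

```shell
docker create -v /data --name datacontainer ubuntu
```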

This creates a container named datacontainer based on the ubuntu image, with a volume at /data.

If we repeat our initial test with the --volumes-from flag, anything we write to the /data directory in the current container will be saved to the /data volume of our datacontainer.
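For example (a sketch, assuming the datacontainer created above):

```shell
docker run -it --volumes-from datacontainer ubuntu /bin/bash
# inside the container:
echo "persist me" > /data/my_file
exit
```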

Now rerun the container and check if /data/my_file is persisted.
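A sketch of the check:

```shell
docker run -it --volumes-from datacontainer ubuntu /bin/bash
cat /data/my_file   # still there: the file lives in the volume, not the container
exit
```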

You can also create as many data volume containers as you’d like, but you are tied to the mount point defined inside the data volume container (/data in our example).

Sharing data between containers – shared volumes

Sometimes there is a need to share data between the host and the container itself. Docker gives you the option to run a container and override one of its directories with the contents of a directory on the host system.

Let’s imagine you’re running your application and you want to keep the logs outside the container.
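A hedged example using the official nginx image (the port mapping is illustrative):

```shell
docker run -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
```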

We set up a volume that links the /var/log/nginx directory from the nginx container to ~/nginxlogs on our host.

If you make any changes to the ~/nginxlogs folder, you’ll be able to see them from inside the Docker container in real-time as well.

Additional links:


Docker compose

Docker is a great tool but for complex applications with a lot of components, orchestrating all the containers to start up and shut down together (not to mention talk to each other) can quickly become difficult.

Docker Compose makes dealing with the orchestration processes of Docker containers (such as starting up, shutting down, and setting up inter-container linking and volumes) really easy.

Installing Docker Compose
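One common way to install it at the time of writing (the 1.8.0 version number is an example; check the releases page for the current one):

```shell
sudo curl -L "https://github.com/docker/compose/releases/download/1.8.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```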

Running a Container with Docker Compose
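The compose file itself isn’t shown here; a minimal sketch, assuming the hello-world image and a service named my-test:

```shell
# create a working directory and a one-service docker-compose.yml in it
mkdir -p ~/hello-world && cd ~/hello-world
cat > docker-compose.yml <<'EOF'
my-test:
  image: hello-world
EOF
```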


While still in the ~/hello-world directory, execute the following command to create the container:
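The command in question:

```shell
docker-compose up -d   # -d detaches, so the container group runs in the background
```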

To show the group of Docker containers (both stopped and currently running), use the following command:
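With the compose file in the current directory:

```shell
docker-compose ps
```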

To stop all running Docker containers for an application group, run the following command in the same directory as the docker-compose.yml file used to start the Docker group:
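That is:

```shell
docker-compose stop
```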

If you want to start from scratch you can use the rm command to fully delete all the containers that make up your container group:
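As in:

```shell
docker-compose rm
```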

Additional links:




Docker installation on Ubuntu 12.04 LTS

Docker requires a 64-bit installation regardless of your Ubuntu version. Additionally, your kernel must be 3.10 at minimum.

To check current kernel version:
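For example:

```shell
uname -r
```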

Create /etc/apt/sources.list.d/docker.list and add below line:
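For Ubuntu 12.04 (precise) the entry would look like this (substitute your release codename; the repository URL is as documented for the Docker packages of that era):

```shell
echo "deb https://apt.dockerproject.org/repo ubuntu-precise main" | sudo tee /etc/apt/sources.list.d/docker.list
```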

For Ubuntu Trusty, Vivid, and Wily, it’s recommended to install the linux-image-extra kernel package. The linux-image-extra package allows you to use the aufs storage driver.
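For example:

```shell
sudo apt-get update
sudo apt-get install -y linux-image-extra-$(uname -r)
```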

Install docker:
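A sketch (docker-engine was the apt package name for this generation of Docker):

```shell
sudo apt-get update
sudo apt-get install -y docker-engine
sudo service docker start
```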


To avoid the error “Cannot connect to the Docker daemon. Is the docker daemon running on this host?”, add your user to the docker group.
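The usual fix (docker is the group created by the package):

```shell
sudo usermod -aG docker $USER
```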

Log out and log back in. This ensures your user is running with the correct permissions.


Additional links:


Docker – Intro


Docker provides an integrated technology suite that enables development and IT operations teams to build, ship, and run distributed applications anywhere.

“Docker wasn’t on anyone’s roadmap in 2014, it is on everyone’s roadmap for 2015”


  • Docker Images – blueprints of our application
  • Docker Containers – created from Docker images; the running instances of our application
  • Docker Daemon – builds, runs, and distributes Docker images
  • Docker Client – runs on our local machine and connects to the daemon
  • Docker Hub – a registry of Docker images


Quick and easy install script provided by Docker:
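The script can be fetched and piped straight to a shell:

```shell
curl -fsSL https://get.docker.com/ | sh
```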

If you’re looking for detailed steps, or for how to install on Windows / OSX, please check the official Docker documentation.

If you’re running Docker under Linux, you need to add your user to the docker group to avoid the error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Log out and log back in afterwards. This ensures your user is running with the correct permissions.
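The command in question adds your user to the docker group:

```shell
sudo usermod -aG docker $USER
```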

On Linux, the 3.10.x kernel is the minimum requirement for Docker. For OSX, 10.8 “Mountain Lion” or newer is required.

Both OSX and Windows require Docker Toolbox to be installed in order to be able to run docker containers.

Early versions of Docker Toolbox require you to install a VM with Docker Machine using the VirtualBox provider:
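A sketch (the machine name default is an assumption):

```shell
docker-machine create --driver virtualbox default
```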

However, the latest versions of Docker Toolbox take care of this part. You can check the docker-machine version, status, and IP:
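For example (again assuming a machine named default):

```shell
docker-machine --version
docker-machine status default
docker-machine ip default
```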

Docker Images

Docker at its core is a way to separate an application and the dependencies needed to run it from the operating system itself. To make this possible Docker uses containers and images.

A Docker image is basically a template for a filesystem. When you run a Docker image, an instance of this filesystem is made live and runs on your system inside a Docker container. By default this container can’t touch the original image itself or the filesystem of the host where Docker is running. It’s a self-contained environment.

Let’s search for the official Ubuntu Docker image.

Limit the search to Ubuntu images with at least 1000 stars/rating:
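Both lookups sketched below (the stars filter flag differs between Docker versions; older clients used -s 1000, newer ones --filter=stars=1000):

```shell
docker search ubuntu
docker search --filter=stars=1000 ubuntu
```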

  • docker run – run and interact with the image:
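For example:

```shell
docker run -it ubuntu /bin/bash
```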

View previous images and history:
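That is:

```shell
docker images           # images available locally
docker history ubuntu   # the layer history of an image
```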


Remove docker instance:
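A sketch (use docker rmi instead when removing images rather than containers):

```shell
docker ps -a             # find the container ID or name
docker rm <container_id>
```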


While you can use the docker rmi command to remove specific images, there’s a tool called docker-gc that will clean up images that are no longer used by any containers in a safe manner.

  • docker load loads an image from a tar archive on STDIN, including images and tags (as of 0.7).
  • docker save saves an image to a tar archive streamed to STDOUT with all parent layers, tags & versions (as of 0.7).
  • docker history shows history of image.
  • docker tag tags an image to a name (local or registry).



Docker containers wrap up your application in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.

Containers are for software, not for data

Docker containers are consumables. Docker containers should be used to scale applications, for firing more app servers with the same setup etc. Data belongs onto the filesystem but not into a container that can neither be cloned in an easy way nor incrementally backed up in a reasonable way. Containers are for software, not for data.

Docker container commands

If you want to map a directory on the host to a docker container, docker run -v $HOSTDIR:$DOCKERDIR. Also see Volumes.

  • docker start starts a container so it is running.
  • docker stop stops a running container.
  • docker restart stops and starts a container.
  • docker pause pauses a running container, “freezing” it in place.
  • docker unpause will unpause a running container.
  • docker wait blocks until running container stops.
  • docker kill sends a SIGKILL to a running container.
  • docker attach will connect to a running container.
  • docker logs gets logs from container. (You can use a custom log driver, but logs is only available for json-file and journald in 1.10)
  • docker inspect looks at all the info on a container (including IP address).
  • docker events gets events from container.
  • docker port shows public facing port of container.
  • docker top shows running processes in container.
  • docker stats shows containers’ resource usage statistics.
  • docker diff shows changed files in the container’s FS.
  • docker stats --all shows the same statistics for all containers, including stopped ones.

Additional links:


How to enable SSL on webmail.domain.tld

In order to secure your Roundcube webmail with your SSL certificate, you need to follow the steps below in your Plesk CP.

  1. Go to Server -> SSL Certificates -> Add your SSL certificate here if you haven’t already.
  2. Go to Server -> IP Addresses -> [your public IP] -> Change the SSL certificate to the certificate you added in Step 1
  3. Add the HTTPS redirect rule to /etc/apache2/plesk.conf.d/, somewhere after “RewriteEngine On”.
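The rule itself isn’t shown; a hedged sketch of such an HTTPS redirect (the hostname pattern is an assumption, adjust to your setup):

```apache
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} ^webmail\. [NC]
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```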

Enjoy your https://webmail.domain.tld.

Setup SSH Keys for remote access

SSH keys are a more secure way to connect to your servers / VPS compared to simple password authentication. Once you’ve set up an SSH key pair, it can be deployed on all your servers to allow secured access. To add an additional layer of protection you can also password-protect your SSH keys.

Creating SSH keys

There are a few options and tools to generate SSH keys, but for Windows-based systems PuTTYgen is the way to go.


I use SSH-2 RSA and 2048 bits for the generated key (increasing the number of bits makes the key harder to crack by brute-force methods).

After generating the keys you should store them in a safe place (especially the private one). If you lose your keys and have disabled username/password logins, you will no longer be able to log in!

NOTE: PuTTY and OpenSSH use different formats for public SSH keys. If the SSH key you copied starts with “---- BEGIN SSH2 PUBLIC KEY ----”, it is in the wrong format. Be sure to follow the instructions carefully. Your key should start with “ssh-rsa AAAA…”.

Deploy the Public Key on your Server

You need to upload the public key in the file ~/.ssh/authorized_keys on your server.

1. Log in to your destination server using PuTTY.

2. If your SSH folder does not yet exist, create it manually:
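For example:

```shell
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```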

3. Paste/insert the content of the public key into the authorized_keys file.
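One way to do this from a shell (the key string is a placeholder for your actual public key):

```shell
mkdir -p ~/.ssh
echo "ssh-rsa AAAAB3NzaC1yc2E... your_comment" >> ~/.ssh/authorized_keys
```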

Create the Putty Profile

1. Open putty.exe, specify the hostname (FQDN or IP), and set the connection type to SSH.

2. Navigate to Connection -> Data -> Login details -> Autologin username -> specify the username for the account you want to log in as.

3. Navigate to Connection -> SSH -> Auth -> Browse and select the private key file saved from PuTTYgen.


4. Go to Session and hit Save button to keep the settings.

5. Enjoy

Disable Password Login

You can go further and add the extra security that SSH keys offer by disabling password login to your server. Before you do this it is essential you keep your SSH key files in a safe place and take a backup… in another safe place.

When password login is disabled you won’t be able to login without these keys.

On Debian/Ubuntu systems, SSH password authentication can be disabled by editing /etc/ssh/sshd_config.
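The relevant directive in /etc/ssh/sshd_config:

```
PasswordAuthentication no
```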

Please don’t forget to restart your SSH daemon service:
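On Debian/Ubuntu that would be:

```shell
sudo service ssh restart
```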

Now your servers should be secured with SSH keys.

If you’re on Linux as a client, things are much easier:
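A sketch of generating a key pair with OpenSSH (accept the defaults or set a passphrase when prompted):

```shell
ssh-keygen -t rsa -b 2048
```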

The generated key pair can now be found under ~/.ssh/.

Now it’s time to place the public key on the server that we intend to use:
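With OpenSSH this is a one-liner (user@your-server is a placeholder):

```shell
ssh-copy-id user@your-server
```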

While a password runs the risk of eventually being cracked, SSH keys are practically impossible to break by brute force.

Creating home dir for already created users in Linux

I created some users with:
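Something along the lines of (the username is a placeholder):

```shell
sudo useradd someuser    # note: without -m, no home directory is created
```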

but I forgot to specify the parameter -m to create the home directory and to have the skeleton files copied into it.

Some say mkhomedir_helper can do the trick, but I ran it and nothing changed.

I ended up doing:
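A hedged reconstruction of the manual fix (someuser is a placeholder; adjust paths and group to your system):

```shell
sudo mkdir /home/someuser
sudo cp -rT /etc/skel /home/someuser        # copy the skeleton files
sudo chown -R someuser:someuser /home/someuser
sudo chmod 700 /home/someuser
```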