It’s 2022, and much of the IT industry works with containers every day. Where did they come from, how did they achieve global recognition, and what does Docker have to do with it?
#1. Let's start with the basics
What is Docker
Docker developers give it this definition: "Docker helps developers bring their ideas to life by conquering the complexity of app development". Sounds promising, doesn't it?
More specifically, Docker is a tool that makes it easy for developers, system administrators, and anyone else to run different applications in isolated containers on the same server.
Containers are unaware of the other containers deployed alongside them; they are completely isolated from one another. In each container, you can configure the environment that particular application needs.

Unlike virtual machines, containers do not require heavyweight resources, which allows more efficient use of the server.
What is a container
Until recently, applications were deployed on physical servers, which made rapid deployment difficult.
- All servers were configured manually (or almost manually). Connecting the server, installing the OS, setting up the right environment, network and other settings took a lot of time.
- There were problems with flexible scaling. Imagine you have an online store deployed on your server. In normal times, the application copes with the flow of users, but on New Year's Eve the audience grows, because everyone wants to buy gifts. Then it turns out that the online store cannot cope with the load, and you must either add resources to the server or spin up several more instances of the service. Yes, you can plan for the holiday in advance and anticipate the influx of buyers, but what do you do with the resources that sit idle after the New Year?
- It was necessary to use resources more efficiently. If you host a modest application on a large, powerful physical server and it needs at most 20% of the capacity, what do you do with the rest? Maybe add one or more applications alongside it? That seems like an option, until you discover that the applications need different versions of the same package to work.
Programmers are smart and creative people, so they started thinking about how to avoid these complexities. And so virtualization was born!
Virtualization is a technology that creates a virtual representation of resources, separate from the hardware. For example, you can give an operating system (hereinafter, OS) only part of a disk rather than the whole thing, by creating a virtual representation of it.
There are many different types of virtualization, and one of them is hardware virtualization.
The idea is to take a server and split it into pieces. Say you have a server with a host OS installed, and virtual machines (hereinafter, VMs) with guest OSes run inside it. Between the host OS and the VMs sits a layer, the hypervisor, which manages resource sharing and isolates the guest OSes.
Hardware virtualization has a big plus: inside a VM you can run OSes completely different from the host OS, but at the cost of the hypervisor's overhead.

It would seem the problems of resource utilization and application isolation are solved, but what about installing the OS and setting up the environment: are we still doing that manually, on each VM? And why pay for a hypervisor if you don't need to keep Windows and Linux on the same server — isn't the host OS kernel enough?

This is where container virtualization comes in. Here, virtualization happens at the OS level: there is a host OS plus special mechanisms that create isolated containers. The host OS plays the role of the hypervisor — it shares resources between containers and ensures their isolation.
A container is an isolated process that uses the main OS kernel. Working with containers helps to solve the following problems:
- resource utilization (one server can run several containers);
- application isolation;
- OS installation (in fact, we use the host OS);
- environment settings for the application (you can set up the environment once and quickly clone it between containers).
Why Containers and Docker
As we already know, a container is an isolated process that works with its own piece of the file system, memory, kernel, and other resources — and it believes all those resources belong to it alone.
All the mechanisms for creating containers are built into the Linux kernel, but in practice people usually use ready-made container runtimes such as Docker, containerd, and CRI-O, which automate the deployment and management of containers. Containers have several important properties:
- Short life cycle. Any container can be stopped, restarted, or deleted, and the data inside it is lost with it. So when designing applications suitable for containerization, follow the rule: do not store important data in the container. This design approach is called stateless.
- Containers are small and light; their size is measured in megabytes. That's because only the processes and OS dependencies the application actually needs are packed into the container. Lightweight containers take up little disk space and start quickly.
- Containerization provides process isolation: applications running inside a container do not have direct access to the host OS.
- Containers make it easier to move from a monolith to a microservice architecture.
- There is no need to pay for a hypervisor, and you can run more containers than VMs on the same resources.
- Images are stored in special repositories (registries), and each image contains the entire environment needed to run the application, which lets you automate deployment across different hosts.
Now let's discuss the benefits of Docker.
- Community. There is a huge repository of open-source images, and you can download a ready-made image for a specific task.
- Flexibility. Docker lets you create base container templates (images) and reuse them on different hosts. Docker containers run just as easily on a local device as in any cloud infrastructure.
- Deployment speed. An image contains the entire environment and settings the application needs to work, so you don't have to set everything up from scratch each time.
- No problems with package dependencies and versions. Docker lets you package any programming language and technology stack into a container, eliminating incompatibilities between libraries and technologies on the same host.
#2. How does Docker work?
As you already know, the Linux kernel has all the necessary mechanisms for creating containers out of the box:
- capabilities — let you grant a process a subset of the extended privileges normally reserved for root: for example, deleting other users' files, terminating other processes (the kill command), or changing file attributes.
- namespace — a Linux abstraction that creates an isolated environment inside the OS: a box with its own users, its own network, its own processes, and everything else. Changes made inside a namespace are visible only to members of that namespace. There are six types of namespaces: IPC, Network, Mount, PID, User, UTS.
- Network namespace is responsible for the resources associated with the network. Each namespace will have its own network interfaces, its own routing tables.
- User namespace specializes in users and groups within a namespace.
- PID namespace manages a set of process IDs. The first process created in a new namespace gets PID = 1, and child processes are assigned subsequent PIDs.
- cgroup groups multiple processes into a group and manages resources for that group.
Traditionally, limits in Linux are set per process, and this is inconvenient: you could give a single process no more than n megabytes of memory, but how do you limit an application that has more than one process? This is why cgroups appeared: they let you combine processes into a group and attach limits to that group.
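On any Linux machine you can peek at these three building blocks directly through /proc, without Docker. A minimal sketch (Linux-only; the exact output varies by kernel and distribution):

```shell
# Capabilities: bitmasks of the extended privileges held by this process
grep Cap /proc/self/status

# Namespaces: one entry per namespace this process belongs to
ls /proc/self/ns

# cgroup membership of the current process
cat /proc/self/cgroup
```

Every process belongs to some set of namespaces and some cgroup; a container runtime simply creates fresh ones and starts your process inside them.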
Let's take a look at how Docker creates a container from capabilities, namespace, and cgroup.
Docker is a very thin layer around the kernel. It creates a container from a Docker image with the given settings. When you ask Docker to create a container, it automatically creates a set of namespaces and cgroups for that container.
The PID namespace is needed so that processes inside a container cannot see or influence processes running in other containers or on the host system.

Network namespace: each container gets its own network stack, which means it cannot access the sockets or network interfaces of another container.

A similar story holds for all the other namespaces: each container has its own directory tree, its own hostname, and so on.
When creating a Docker container, you can specify how much memory or CPU to give that particular container, and the OS enforces the limit. This control is needed so that one container doesn't accidentally kill the entire system by eating all the memory or overloading the processor.
By default, when creating a container, Docker drops most of the capabilities inside it, leaving only a small subset — changing file attributes, chroot, and a few others. This is done for security, so that an attacker who manages to break out of the container does not end up with full root rights.
There are a few terms to learn before getting started with Docker.
An image is a template for your future containers. The image describes what should be installed in the container and what actions should be performed when the container is started.
In the practical part, you will use the docker pull command to download the busybox image from a special Docker image repository — Docker Hub.
A container is an executable instance of an image. It can be created, started, stopped and deleted. You can also connect storage to a container, connect containers to one or more networks, and communicate with containers using the Docker API or CLI.
You can see the list of running containers with the docker ps command.
Docker daemon (dockerd) is a background process in the operating system that handles Docker API requests and manages Docker objects: images, containers, networks, and volumes.
Docker client is a command-line tool (CLI, Command Line Interface) through which the user interacts with the daemon.
When you use the docker run command, the Docker client sends that command to dockerd. The same goes for the other commands.
Docker Hub is a public Docker registry, that is, a repository of all available Docker images. If necessary, you can deploy your own private Docker registries and use them to pull images.
#4. Starting and initial setup of Docker
To follow along, you will need:
- basic command-line skills;
- Docker installed on your machine.
After installing Docker, it's worth checking that it works.
To do this, run:
```
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:62af9efd515a25f84961b70f973a798d2eca956b1b2b026d0a4a63a3b0b6a3f2
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...
```
Now that Docker is installed, let's run a first container in it. Take the busybox image as the basis for the container and enter the command in the terminal:
```
$ docker pull busybox
```
Note: you may see an error after running the command. If you're on a Mac, make sure the Docker engine is running. If you're on Linux, prefix commands with sudo, or create a docker group to get rid of the problem.
To see a list of all images on your system, use the docker images command:
```
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED      SIZE
busybox      latest   ff4a8eb070e1   2 days ago   1.24MB
```
Excellent! Now let's run a Docker container based on this image, using the docker run command:
```
$ docker run busybox
$
```
Don't be alarmed that nothing happened. There's no mistake here; everything is going according to plan. When you call docker run, the Docker client finds the image (busybox in our case), creates a container from it, and runs a command inside.
When you ran docker run busybox, you didn't pass a command, so the container started, did nothing, and exited.
Let's pass the command and see what happens:
```
$ docker run busybox echo "hello from busybox"
hello from busybox
```
Hooray, at least some result! The Docker client ran the echo command in the busybox container and then exited. And it all happened pretty quickly.
Okay, you've started a container, but how can you see which containers are running on the server right now? There is the docker ps command for that:
```
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```
There are currently no running containers, so you see an empty list. Try the more useful variant, docker ps -a:
```
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS                     PORTS     NAMES
533d8a396c4c   busybox   "echo 'hello from bu…"   7 minutes ago   Exited (0) 7 minutes ago             zealous_hugle
```
A list of all the containers you have launched appears. Notice that the STATUS column shows these containers exited a few minutes ago.
So you started a container, executed one command, and the container exited. What's the point? Is there a way to run more than one command?
Of course there is! Let's execute docker run -it busybox sh:
```
$ docker run -it busybox sh
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # uptime
 13:44:12 up 15 min,  0 users,  load average: 0.02, 0.01, 0.00
/ #
```
Running with the -it flags connects you to an interactive terminal in the container. Now you can run as many commands in the container as you want.
Try running your favorite commands in the container. It's also worth spending some time exploring the options of docker run, since it's the command you will use most often.
To see a list of all the flags it supports, run docker run --help.
Once you've learned how to create containers, you need to practice deleting them. As you saw, even after a container stops, information about it remains on the host. Run docker run a few more times and you'll accumulate orphaned containers that take up disk space.
Disk space isn't unlimited, so you have to clean up and remove unneeded containers. The docker rm command will help:
```
# what containers are there
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS                       PORTS     NAMES
75f77b63681e   busybox   "sh"                     17 minutes ago   Exited (130) 3 seconds ago             optimistic_dev
c433938c56ad   busybox   "echo 'hello from bu…"   33 minutes ago   Exited (0) 33 minutes ago              zealous_trutle

# delete containers by CONTAINER ID
$ docker rm 75f77b63681e c433938c56ad

# check that the containers are deleted
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```
If there are a lot of containers on the host to delete, copying each CONTAINER ID gets tedious. To make life easier, use docker container prune:
```
$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
b98719700a7e620c5b2b177f7a4057885bd6816f716f3e23e517337224e3e26e
```
#5. Web Application Deployment
So far you've looked at launching docker and played around with containers. It's time to move on to something more realistic and deploy a web application with Docker.
The first step is to start a very simple static site. To do this, grab a Docker image from Docker Hub, run it, and verify that you have a working web server.
The image you'll be using is a one-page website created specially for demonstration and hosted in the registry as ifireice/static-site.
You can download the image with docker pull and run it right away with docker run. The --rm flag automatically removes the container when you exit it, and the -it flag attaches an interactive terminal, which you can leave with Ctrl+C — the container is then destroyed.

```
$ docker pull ifireice/static-site
$ docker run --rm -it ifireice/static-site
```
Since the image is not yet on the host, the Docker client will first download it from the registry and then run it. If everything goes according to plan, you should see a message in the terminal:

```
Nginx is running...
```
The server is running, but how can I see the site? What port is the site running on? How to access the container?
The client doesn't publish any ports, so you need to re-run the container and publish the ports. Press Ctrl+C to stop the container.
You also want the running container detached from the terminal, so that it keeps working after the terminal closes — this is detached mode:
```
$ docker run -d -P --name static-site ifireice/static-site
303ded27b9c7b2f6943446b8fffc18ab5b987cb3af5151bde0ab990d9a3deb9c
```
- -d — detach from the terminal;
- -P — publish all exposed ports to random host ports;
- --name — assign a name to the container.
Now you can see the port mappings with the docker port [CONTAINER] command:
```
$ docker port static-site
80/tcp -> 0.0.0.0:55000
```
Open http://localhost:55000 in a browser. You can also specify a custom host port to which the Docker client will forward connections to the container:
```
$ docker run -p 8888:80 ifireice/static-site
Nginx is running...
```
To stop a container, run docker stop with the container ID. Here you can also use the name static-site that you gave the container when you started it:

```
$ docker stop static-site
static-site
```
To deploy the same site on a remote server, you only need to install Docker there and run the same command.
#6. Creating a Docker Image
Now that you've seen how to run a web server inside a Docker container, you're probably wondering how to create your own Docker image.
Remember docker images, the command that lists locally available images?
```
$ docker images
REPOSITORY             TAG      IMAGE ID       CREATED          SIZE
ifireice/django-app    latest   g762f0b4c5ca   13 seconds ago   945MB
ifireice/static-site   latest   77f3deb226bc   15 minutes ago   141MB
ubuntu                 18.04    b117236c4bde   5 days ago       62.1MB
busybox                latest   0f08ebe7a1f4   5 days ago       1.14MB
```
Here is a list of images downloaded from the registry, as well as images we created:
- TAG — refers to a specific image snapshot;
- IMAGE ID — the unique identifier of the image.
Images can be committed with changes and have multiple versions. If you don't specify a version, the client defaults to the latest tag. For example, you can pull a specific version of the ubuntu image:

```
$ docker pull ubuntu:18.04
```
You can either download a new image from the registry or create your own.
Let's say you want to create an image that wraps a simple Django app in a container that displays a random cat image. First, clone this application to your local computer (not to the Docker container):
```
$ git clone https://github.com/ifireice/docker-article.git
$ cd docker-article
```
Now this application needs to be packaged into an image. This is where a couple of definitions come in handy.
- Base images are images that have no parent image. Usually these are OS images: ubuntu, busybox, or debian.
- Child images are built on base images and add extra functionality.
There are also such concepts as official and custom images.
- The official images are maintained by the Docker community. Usually their name consists of one word, for example, python, ubuntu, busybox and hello-world.
- Custom images are created by users. They are built on top of base images and add functionality, and they are supported not by the community but by the user who created them. Such images are usually named username/image_name.
You will be creating a custom Python-based image because you are using a Django application. You will also need a Dockerfile.
Dockerfile is a simple text file with a list of commands that the Docker client calls when creating an image. The commands are almost like in Linux, which means you don't need to learn another language to create a Dockerfile.
The application directory already has a Dockerfile, but you will create yours from scratch. So rename the existing one and create an empty file named Dockerfile in the django-app directory.
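That step can be sketched as a couple of shell commands. The paths here are illustrative (a scratch directory stands in for your clone of the repo):

```shell
# work in a scratch copy of the project directory (illustrative path)
mkdir -p /tmp/django-app
cd /tmp/django-app
touch Dockerfile                # stand-in for the Dockerfile shipped with the repo

mv Dockerfile Dockerfile.orig   # keep the original for reference
touch Dockerfile                # your new, empty Dockerfile
ls -1
```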
Start by defining a base image. To do this, use the FROM keyword — in our case, FROM python:3.8.
Then set the working directory and copy all application files:
```
# set the working directory for the application
WORKDIR /usr/src/app

# copy all files into the container
COPY . .
```
Now that you have the files, you can install the dependencies:
```
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
```
Next, declare the port to be opened. The application runs on port 5000, so specify it with EXPOSE 5000.
The last step is to specify the very simple command that runs the application: python ./manage.py runserver 0.0.0.0:5000. Use the CMD instruction, which tells the container what command to run at startup:

```
CMD ["python", "./manage.py", "runserver", "0.0.0.0:5000"]
```
Dockerfile is ready and looks like this:
```
FROM python:3.8

# set the working directory for the application
WORKDIR /usr/src/app

# copy all files into the container
COPY . .

# install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# the port the container should expose
EXPOSE 5000

# run command
CMD ["python", "./manage.py", "runserver", "0.0.0.0:5000"]
```
Once you have a Dockerfile, you need to build the image. To do this, use docker build, passing an optional -t flag with the tag name plus the path to the directory containing the Dockerfile.
To save (push) the finished image to Docker Hub, you need to create an account there. Pushing the image lets you later pull it and deploy a container from it on any server.
The tag must contain your Docker Hub username, otherwise nothing will work.
```
$ docker build -t username/poets .
[+] Building 8.4s (10/10) FINISHED
 => [internal] load build definition from Dockerfile       0.0s
 => => transferring dockerfile: 354B                       0.0s
 => [internal] load .dockerignore                          0.0s
 => => transferring context: 2B                            0.0s
....
 => => writing image sha256:558d4c28ee06f39f9beb816cb63185e27619c65310cb3021e36e8a6f7663c5d9  0.0s
 => => naming to docker.io/username/poets
```
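The build log above shows Docker loading a .dockerignore file, which lists paths excluded from the build context. A minimal, purely illustrative sketch of such a file for a Django project (the entries are assumptions, not part of the tutorial's repo):

```
# .dockerignore — paths excluded from the build context (illustrative)
.git
__pycache__/
*.pyc
.env
```

A smaller build context makes docker build faster and keeps files like .env out of the image.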
If the base image is not yet on the local machine, the Docker client will first download it and then build your image, so the command output may look different.
If everything went well, the image is ready! Run it, remembering to change username to your own:

```
$ docker run -p 8888:5000 username/poets
```
The command took port 5000 inside the container and mapped it to port 8888 on the host. Now, if you access port 8888 on the host machine, the request is forwarded to the container on port 5000. You can see what the application returns by requesting the corresponding path in a browser.
Congratulations! You have successfully created your first Docker image.
Before pushing, log in to your Docker Hub account:

```
$ docker login --username username
Password:
Login Succeeded

Logging in with your password grants your terminal complete access to your account.
For better security, log in with a limited-privilege personal access token.
Learn more at https://docs.docker.com/go/access-tokens/
```
When you type the password, it is not displayed in the console. This is normal: it shouldn't be visible to onlookers.
To save the image to the registry, simply type docker push username/poets. It is important that the tag has the format username/image_name — that's how the Docker client knows where to push the image:

```
$ docker push username/poets
```
Once the image is stored in the registry, it can be seen in Docker Hub at https://hub.docker.com/r/username/poets .
Well, you already know how to pick up and run an image from the registry!
The main takeaways of this work:
- looked at virtual machines and containers and learned how they differ;
- learned what Docker is and covered the basic terminology: image, container, Docker daemon, Docker client, registry;
- launched Hello Docker;
- launched a container based on an image from Docker Hub;
- created your own image and pushed it to Docker Hub.
#8. More Docker Resources
If you want to learn more about Docker, check out the links below: