Docker Detailed Illustrations

1. Introduction to Docker

1.1 Virtualization

1.1.1 What is virtualization?

In computing, virtualization is a resource management technique that abstracts various physical resources of a computer, such as servers, networks, memory, and storage, and presents them in a transformed form. It breaks down the barriers imposed by the physical layout of those resources so that users can work with them more flexibly than the original configuration allows. The new, virtual portions of resources are not constrained by how the existing resources are deployed, whether geographically or physically. Generally speaking, virtualized resources include computing power and data storage.

In real production environments, virtualization is mainly used to solve two problems: high-performance physical hardware whose capacity would otherwise sit idle, and aging hardware whose capacity is no longer sufficient. By reorganizing and reusing this hardware and making the underlying physical layer transparent, virtualization makes the fullest possible use of physical resources. There are many kinds of virtualization technology, such as software virtualization, hardware virtualization, memory virtualization, network virtualization (VIP), desktop virtualization, service virtualization, and virtual machines.

1.1.2 Types of Virtualization

(1) Full virtualization architecture: virtual hardware -> virtual operating system

In this architecture the virtual machine monitor (hypervisor) runs like an ordinary user application on top of the host operating system, for example VMware Workstation. Virtualization products of this type provide virtualized hardware to the guest operating system.

(2) OS-layer virtualization architecture: no hardware is virtualized; all instances share the host operating system's kernel

(3) Hardware-layer virtualization

Hardware-layer virtualization offers high performance and strong isolation because the hypervisor runs directly on the hardware, which lets it control how each VM's operating system accesses hardware resources. Products that use this approach include VMware ESXi and Citrix XenServer.

Hypervisor is an intermediate software layer running between the physical server and the operating system. It allows multiple operating systems and applications to share a set of basic physical hardware. Therefore, it can also be regarded as a "meta" operating system in a virtual environment. It can coordinate access to all physical devices and virtual machines on the server. It is also called a virtual machine monitor (VMM).

The hypervisor is the core of all virtualization technology. When the server boots and runs the hypervisor, it allocates the appropriate amount of memory, CPU, network, and disk to each virtual machine and loads each virtual machine's guest operating system. It makes the hardware and software stack more efficient and flexible to manage and allows hardware performance to be used more fully. Common products include VMware, KVM, and Xen.

1.2 What is Docker

1.2.1 Container technology is similar to the OS-layer virtualization architecture

In the world of computing, containers have a long and storied history. Containers differ from hypervisor virtualization (HV): hypervisor virtualization runs one or more independent virtual machines on physical hardware through an intermediate layer, while containers run directly in user space on top of the operating system kernel. For this reason, container virtualization is also called "operating-system-level virtualization". Container technology allows multiple independent user-space instances to run on the same host.

Because containers are "guests" of the operating system, they can only run an operating system that is the same as, or similar to, the underlying host, which does not seem very flexible. For example, you can run Red Hat Enterprise Linux on an Ubuntu server, but you cannot run Microsoft Windows on an Ubuntu server.

Compared with fully isolated hypervisor virtualization, containers are sometimes considered less secure. Opponents of this view counter that a full virtual machine virtualizes an entire operating system, which undoubtedly enlarges the attack surface, and that the potential exposure of the hypervisor layer itself must also be taken into account.

Despite their limitations, containers are widely deployed in a variety of applications. Container technology is popular in very large, multi-tenant service deployments, in lightweight sandboxes, and in isolated environments with less stringent security requirements. The most common example is the chroot jail, which creates an isolated directory environment in which to run a process. If a process running inside the jail is compromised, the intruder finds himself "jailed": trapped in the directory created by the container, without sufficient privileges to do further damage to the host machine.
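
As a concrete illustration, a minimal chroot jail can be sketched with a few shell commands. This assumes a statically linked busybox binary is available on the host; the /tmp/jail path is only an example:

    mkdir -p /tmp/jail/bin
    cp "$(command -v busybox)" /tmp/jail/bin/   # busybox must be statically linked
    sudo chroot /tmp/jail /bin/busybox sh       # the shell now sees /tmp/jail as /
    # Inside the jail, the rest of the host filesystem is not visible.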

More recent container technologies include OpenVZ, Solaris Zones, and Linux Containers (LXC). With these technologies a container is no longer just a simple runtime environment: within its own isolation boundary, a container looks more like a complete host. Docker benefits from modern Linux kernel features such as control groups (cgroups) and namespaces, which make the isolation between containers, and between containers and the host, much more thorough. Containers have independent network and storage stacks as well as their own resource management, allowing many containers on the same host to coexist peacefully.
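
These kernel features can be observed without Docker. The sketch below, which assumes the util-linux unshare tool is installed and is run with root privileges, starts a shell in its own UTS, PID, and mount namespaces:

    sudo unshare --uts --pid --fork --mount-proc /bin/bash
    # Inside the new namespaces:
    hostname demo-container   # changes the hostname only inside this UTS namespace
    ps aux                    # shows only this shell and its children; the shell is PID 1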

Containers are considered a lean technology because they require limited overhead. Compared with traditional virtualization and paravirtualization, containers do not require an emulation layer and a hypervisor layer, but instead use the system call interface of the operating system. This reduces the overhead required to run a single container and allows more containers to run on the host.

Despite their illustrious history, containers have yet to gain widespread acceptance. One very important reason is the complexity of container technology: containers themselves are relatively complex, not easy to install, and difficult to manage and automate. Docker was born to change all this.

1.2.2 Comparison between containers and virtual machines

(1) Essential difference: hypervisor virtualization runs one or more complete guest operating systems on virtualized hardware through an intermediate layer, whereas a container shares the host's kernel and isolates only user space, relying on kernel features such as namespaces and control groups.

(2) Difference in use: a virtual machine has to boot a full guest operating system and carries the overhead of the hypervisor, while a container starts in about a second, consumes far fewer resources, and therefore allows many more instances to run on the same host; on the other hand, a container can only run an operating system compatible with the host's kernel.

1.2.3 Docker Features

(1) Easy to get started.

It only takes a few minutes for users to "Dockerize" their applications. Docker relies on a copy-on-write model, so modifying an application is very fast: you can essentially change the code as you wish and see the result right away.

You can then create a container to run your application. Most Docker containers start in less than a second. By removing the overhead of the hypervisor, Docker containers have high performance, and more containers can be run on the same host machine, allowing users to make the best possible use of system resources.
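
A quick illustration of this start-up speed (the alpine image is used here only because it is small; any image would do):

    docker pull alpine                                        # download a small base image once
    time docker run --rm alpine echo "hello from a container"
    # Subsequent runs typically finish in well under a second, since no guest OS has to boot.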

(2) Logical separation of responsibilities

With Docker, developers only need to care about the application running inside the container, while operations staff only need to care about how to manage the container. Docker is designed to strengthen the consistency between the environment in which developers write code and the production environment in which the application is deployed. This reduces the familiar finger-pointing of "it worked fine in development and testing, so any problem after going live must be an operations issue."

(3) Fast and efficient development life cycle

One of Docker's goals is to shorten the cycle of code development, testing, deployment, and production operation, making your applications portable, easy to build, and easy to collaborate on. (To put it simply, Docker is like a big box that can hold many objects: if you need those objects, you can ship the whole box directly rather than fetching each item one by one.)

(4) Encourage the use of service-oriented architecture

Docker also encourages service-oriented architecture and microservices architecture. Docker recommends that a single container run only one application or process, thus forming a distributed application model. In this model, applications or services can be represented as a series of internally interconnected containers, making it very simple to deploy distributed applications, expand or debug applications, and also improving the introspection of the program. (Of course, you can run multiple applications in one container)
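
As a rough sketch of this model, the commands below run a web server and a database as two separate containers on a shared user-defined network; the container names, images, and password are purely illustrative:

    docker network create app-net
    docker run -d --name db  --network app-net -e POSTGRES_PASSWORD=example postgres:16
    docker run -d --name web --network app-net -p 8080:80 nginx
    # On app-net, the web container can reach the database simply by the DNS name "db".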

1.3 Docker components

1.3.1 Docker Client and Server

Docker is a client-server (C/S) architecture program. The Docker client only needs to make requests to the Docker server or daemon, and the server or daemon will do all the work and return the results. Docker provides a command-line tool Docker and a complete set of RESTful APIs. You can run the Docker daemon and client on the same host, or you can connect from a local Docker client to a remote Docker daemon running on another host.
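
For example, the docker command-line client can talk to the local daemon or, if a remote daemon has been configured to listen on a TCP socket, to a daemon on another host (the address below is only an example):

    docker version                                    # reports both client and server (daemon) versions
    docker -H tcp://192.168.1.10:2375 info            # point the client at a remote daemon
    DOCKER_HOST=tcp://192.168.1.10:2375 docker info   # the same thing via an environment variable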

1.3.2 Docker Image

Images are the building blocks of Docker. Users run their own containers based on the image. Images are also the "build" part of the Docker lifecycle. The image is a layered structure based on the union file system, which is built step by step by a series of instructions. For example:

  • Add a file;
  • Execute a command;
  • Open a port.

You can also think of an image as the "source code" of a container. The image is small and very portable, making it easy to share, store, and update.
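
A minimal sketch of such a layered build, assuming a file named app.sh exists in the build context (the image name, file, and port are illustrative):

    cat > Dockerfile <<'EOF'
    # Each instruction below adds a layer to the image.
    FROM alpine:3.19
    # add a file
    COPY app.sh /app/app.sh
    # execute a command
    RUN chmod +x /app/app.sh
    # open a port
    EXPOSE 8080
    CMD ["/app/app.sh"]
    EOF
    docker build -t demo-image .
    docker history demo-image   # each instruction above shows up as a layer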

1.3.3 Registry

Docker uses a registry to store the images users build. There are two kinds of registry: public and private. Docker, Inc. operates the public registry, called Docker Hub. Users can register an account on Docker Hub to save and share their own images. (Note: downloading images from Docker Hub can be slow, so you may want to run your own private registry.)
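
The commands below sketch both workflows, using the demo-image built in the earlier sketch (or any local image); the account name myuser and the tags are only examples, and pushing to Docker Hub assumes you have registered an account there:

    # Docker Hub (public registry)
    docker login
    docker tag demo-image myuser/demo-image:1.0
    docker push myuser/demo-image:1.0
    # A simple private registry, run locally from the official registry image
    docker run -d -p 5000:5000 --name registry registry:2
    docker tag demo-image localhost:5000/demo-image:1.0
    docker push localhost:5000/demo-image:1.0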

1.3.4 Docker Container

Docker helps you build and deploy containers; you only need to package your applications or services into a container. A container is started from an image, and one or more processes can run inside it. We can think of the image as the build or packaging stage of the Docker lifecycle, and the container as the start or execution stage. Once a container is started, we can log in to it and install any software or services we need.
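
A rough sketch of that lifecycle, using example names and an example package:

    docker run -d --name demo ubuntu:22.04 sleep infinity   # start a long-running container
    docker exec -it demo /bin/bash                           # open a shell inside it
    # inside the container: apt-get update && apt-get install -y nginx
    docker ps                                                # list running containers
    docker stop demo && docker rm demo                       # stop and remove it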

So a Docker container is:

  • An image format;
  • A series of standard operations;
  • An execution environment.

Docker borrows the concept of standard containers. Standard containers transport goods around the world, and Docker applied this model to its own design, with the only difference being that containers transport goods, while Docker transports software.

Like shipping containers, Docker does not care what is inside the container when performing these operations, whether it is a web server, a database, or an application server; every container "loads" its contents in the same way.

Docker also doesn't care where you ship the container: we can build the container in our own laptop, upload it to the Registry, then download it to a physical or virtual server for testing, and then deploy the container to a specific host. Like standard shipping containers, Docker containers are easily replaceable, stackable, easy to distribute, and as versatile as possible.

Using Docker, we can quickly build an application server, a message bus, a set of utilities, a continuous integration (CI) testing environment, or any other application, service, or tool. We can build a complete test environment locally, or quickly replicate a complex application stack for production or development.

Summary

The above is the full content of this article. I hope it offers some useful reference for your study or work. Thank you for your support of 123WORDPRESS.COM.

