Docker large-scale project containerization transformation

Virtualization and containerization are two unavoidable topics in cloud-based projects. Virtualization is purely a platform-level operation: a project running on Linux needs no modification to run in a virtual machine. Containerization, by contrast, requires a good deal of detailed adaptation work. Its advantages over virtualization are equally clear: near bare-metal performance, containers that start and stop in seconds, a consistent environment across development, testing, and deployment (the DevOps idea), plus the microservice capabilities discussed in the previous article. Plenty of articles already introduce Docker fundamentals, so we will not repeat them here. Instead, this article describes the problems we faced, and the solutions we found, when containerizing a real project.

Our task was to containerize a large project: hundreds of thousands of lines of C++ code and dozens of applications. Several questions troubled us for a long time. How do we minimize changes to the original code, or ideally avoid changing it at all? How do we make the transition so quiet that business programmers barely notice? How do we keep business images small, and how do we build them quickly? If classifying containers required reorganizing the code and architecture, it would be a disaster for a project of this size. If the development model changed too drastically, dozens or hundreds of business programmers would have to relearn and readapt, at staggering cost. Image size directly affects how easily containers can be updated in the field, especially for overseas deployments with slow networks. And automated, fast image builds are the key to agile development.

1. How to get started

The first problem is usually how to move a Linux project into a container at all. Start by finding a public base image that provides a Linux distribution and a gcc toolchain. From this base image, first build a build image used for compilation and CI checks (code checking, unit tests, and so on). After compiling and checking with the build image, create a runtime image, also derived from the base image, and copy the compiled libraries and executables into it via a Dockerfile. That is the simplest possible image.
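The flow above can be sketched as a multi-stage Dockerfile. This is a minimal illustration only: the image tags, paths, build commands, and binary name (`server`) are assumptions, not details from the original project.

```dockerfile
# Build stage: a base image with the gcc toolchain; compiles the code
# and runs the CI checks (unit tests etc.).
FROM gcc:12 AS build
WORKDIR /src
COPY . .
RUN make -j"$(nproc)" && make test

# Runtime stage: starts again from a slim base image and copies in
# only the compiled libraries and executables.
FROM debian:bookworm-slim
WORKDIR /app
COPY --from=build /src/build/bin/ /app/bin/
COPY --from=build /src/build/lib/ /usr/local/lib/
RUN ldconfig
CMD ["/app/bin/server"]
```

The key point is that the heavy toolchain stays in the build stage; only the artifacts reach the runtime image.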

An image built this way runs, but it has two problems: the build takes very long (one hour for our project), and the business layer of the image is very large (1 GB for ours). Neither problem is fatal, but both become very troublesome once the project is used commercially.

2. Container Layering

Layering is a core concept in Docker: every image can "inherit" from another image, in much the same sense as inheritance in object-oriented programming. Beyond the reuse this brings, layering means that when only the upper layers change, the lower layers do not need to be rebuilt or re-shipped, so an update transfers far less data. Who knew object-oriented inheritance could be this useful! Guided by this feature, we split the third-party libraries used by the project into a layer of their own, and the build process changed accordingly.
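The resulting layered setup can be sketched as below. The image tags (`myproj/base`, `myproj/thirdparty`, `myproj/business`) and paths are hypothetical names for illustration:

```dockerfile
# Dockerfile for the third-party layer: built from the shared base
# image, rebuilt only when a dependency actually changes.
#   docker build -t myproj/thirdparty .
FROM myproj/base
COPY thirdparty/lib/ /usr/local/lib/
RUN ldconfig

# --- separate Dockerfile for the business layer ---
# Built on every release; because it "inherits" the third-party layer,
# only this thin top layer needs to be rebuilt and shipped.
#   docker build -t myproj/business .
# FROM myproj/thirdparty
# COPY build/bin/ /app/bin/
# CMD ["/app/bin/server"]
```

Rarely-changing content sinks to the bottom layers; frequently-changing business code stays in a thin top layer.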

Although the process gained one extra step, the effect was immediate: build time for the business layer dropped from one hour to 12 minutes, and its size shrank to about 100 MB.

3. Business Container Classification

According to Docker best practices, a container should run only one program, or one category of programs; running dozens of processes in a single container, as before, is clearly inappropriate. Clearly classified containers are also easier to manage and operate on. Microservice best practices go further and recommend splitting the project code itself into microservices, each maintained by a different team and independent of the others. We will not debate the merits of that approach here. Our project was large: hundreds of thousands of lines, dozens of programs, dozens of developers, countless shared modules that referenced one another freely, with each program composed of a varying set of modules. Classifying containers along those lines would have forced huge changes to the project and major adjustments to the organizational structure, an almost impossible task. So how can we classify containers while keeping the original development model unchanged? Sometimes the best way to advance a new technology is to make the change imperceptible.

The method is actually very simple. Each container runs a script, conventionally named docker-entrypoint.sh, that decides which processes to start once the container is up. We build one unified image for the entire project; to classify containers, we only vary docker-entrypoint.sh so that each container type starts its own set of processes, with its own environment variables, configuration files, and so on. All of that is easy!
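A minimal sketch of such an entrypoint script, assuming a hypothetical `CONTAINER_ROLE` environment variable and illustrative role names (none of these are from the original project):

```shell
#!/bin/sh
# docker-entrypoint.sh -- one unified image ships every binary; the
# CONTAINER_ROLE environment variable selects which subset of
# processes this particular container starts.
set -e

ROLE="${CONTAINER_ROLE:-all}"

start_gateway() { echo "starting gateway processes"; }
start_worker()  { echo "starting worker processes"; }

# Map a role name to the processes it should launch.
dispatch() {
    case "$1" in
        gateway) start_gateway ;;
        worker)  start_worker ;;
        all)     start_gateway; start_worker ;;
        *)       echo "unknown role: $1" >&2; return 1 ;;
    esac
}

dispatch "$ROLE"
# A real image would end by exec-ing the long-running supervisor so it
# stays PID 1, e.g.:  exec "$@"
```

Deploying a container of a given type then just means setting one environment variable, e.g. `docker run -e CONTAINER_ROLE=worker myproj/business`.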

Summary

That is the full content of this article. We hope it serves as a useful reference for your study or work.
