Docker image compression and optimization operations

Docker's popularity stems largely from its lightweight containers, rapid deployment, and efficient resource utilization. The quality of a Docker image, however, depends mainly on the quality of its Dockerfile. Two images with identical functionality can differ in size depending on how their Dockerfiles are written: a Docker image is built up layer by layer from read-only layers, one per Dockerfile instruction, so the size of the image is determined entirely by the size of the intermediate layers that the instructions generate.

Let's walk through a small example to see how a Docker image is formed.

We have a Dockerfile:

FROM ubuntu:14.04
ADD run.sh /
VOLUME /data
CMD ["./run.sh"]

This simple Dockerfile does the following: starting from the ubuntu:14.04 base image, it copies run.sh into the root directory, declares a volume mount point, and runs run.sh when a container starts from the image.

The following figure shows the resulting Docker image:

As the figure shows, the four instructions form four layers. Assuming ubuntu:14.04 is 150 MB and run.sh is 1 MB, the FROM ubuntu:14.04 layer is 150 MB and the ADD run.sh / layer is 1 MB, while the VOLUME /data and CMD ["./run.sh"] layers are 0 bytes, since they add no files and generate no data in the filesystem.

The whole image is therefore 151 MB. With this understanding of how Docker images are generated, let's turn to optimizing and compressing them.
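To make the layer arithmetic concrete, here is a small Python sketch that sums per-instruction layer sizes the way Docker accumulates read-only layers. It is purely illustrative; the sizes are the assumed figures from the example above, not measured values:

```python
# Illustrative model: an image's size is the sum of its layer sizes.
# Sizes are the assumed example figures (ubuntu:14.04 = 150 MB, run.sh = 1 MB).
layers = [
    ("FROM ubuntu:14.04", 150),  # base image layer
    ("ADD run.sh /",        1),  # adds one 1 MB file
    ("VOLUME /data",        0),  # metadata only, no filesystem change
    ('CMD ["./run.sh"]',    0),  # metadata only
]

total = sum(size for _, size in layers)
print(f"image size: {total} MB")  # → image size: 151 MB
```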

One caveat: the number of layers alone does not determine the size of the image. Merging layers only helps when the Dockerfile contains a pattern such as:

RUN yum install ***

RUN yum uninstall ***

These two instructions install a tool and later uninstall it. Intuitively the net size change should be 0, but that is not how a Docker image works: RUN yum uninstall *** only hides the files from later layers, while the earlier layer keeps its full size. To actually get a net change of 0, the two steps must be compressed into a single layer, written like this:

RUN yum install *** && \
    yum uninstall ***

Written this way, the install and uninstall cancel out inside one layer and the image shrinks accordingly.
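The effect can be modelled in a few lines of Python. This is an illustrative sketch of copy-on-write layering, not Docker's actual implementation, and the 50 MB figure is an invented example: a delete in a later layer only hides a file, so the earlier layer's bytes still ship with the image, whereas merging both operations into one layer leaves nothing to commit:

```python
# Sketch of copy-on-write layers: each layer records the files it adds.
# A deletion in a later layer hides a file but cannot shrink earlier layers.

def image_size(layers):
    # Every layer's added bytes count toward the image, even if a later
    # layer deletes ("whites out") the file.
    return sum(sum(files.values()) for files in layers)

# Two separate RUN instructions -> two layers:
install_layer   = {"/usr/bin/tool": 50}  # RUN yum install   (adds 50 MB)
uninstall_layer = {}                     # RUN yum uninstall (hides the file, adds nothing)
print(image_size([install_layer, uninstall_layer]))  # 50 — the 50 MB still ships

# Merged into one RUN: install and uninstall happen inside a single layer,
# so the committed filesystem change is empty:
merged_layer = {}
print(image_size([merged_layer]))  # 0
```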

Therefore, there are two main points for compressing images:

1. Choose a smaller base image; that is, the image named after FROM should be as small as possible.

2. Merge layers in the Dockerfile where the situation calls for it, as in the example above. Note that merging layers indiscriminately will not reduce the image size.
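Putting both points together, here is a hedged sketch of what the optimized Dockerfile pattern can look like. The package names are placeholders and alpine is just one example of a small base image:

```dockerfile
# Point 1: start from a small base image.
FROM alpine:3.12

# Point 2: install, use, and remove build-time tools inside ONE layer,
# so nothing left over is committed to the image.
RUN apk add --no-cache build-base && \
    echo "build something here" && \
    apk del build-base
```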

Additional knowledge: how to build Anaconda + Jupyter into a Docker image

Recently, business needs required us to build a Jupyter image to provide a service; because Docker is lightweight, the result is easy to migrate.

Here is a brief introduction to what I did and the pitfalls I encountered:

First, install Anaconda. There are Python 2 and Python 3 versions; the versions differ but the build process is the same. There are two approaches. The first is to build the image with a Dockerfile, but the Anaconda2-5.0.1-Linux-x86_64.sh script prompts interactively, which does not work during a build, so I originally used docker commit instead. It turns out a Dockerfile build is possible after all: run the Anaconda2-5.0.1-Linux-x86_64.sh script on your local machine first, ADD the generated anaconda2 folder to the corresponding location in the image, and modify the environment variables to add it to PATH.
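A rough sketch of that Dockerfile approach follows. The base image and paths are assumptions for illustration; anaconda2/ is the directory produced by running the installer locally first:

```dockerfile
FROM centos:7

# anaconda2/ is the directory produced by running
# Anaconda2-5.0.1-Linux-x86_64.sh on the local machine beforehand.
ADD anaconda2 /opt/anaconda2

# Put conda, python, jupyter, etc. on the PATH.
ENV PATH=/opt/anaconda2/bin:$PATH
```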

Let's take python2 as an example:

1. Download the Anaconda2-5.0.1-Linux-x86_64.sh script from the Anaconda official website. When downloading, check whether your system is 32-bit or 64-bit.

2. scp the script into the base container and install bzip2, which the installer needs for decompression:

yum install bzip2

3. Run the script (answer yes to all the prompts):

sh Anaconda2-5.0.1-Linux-x86_64.sh

4. Update anaconda

conda update anaconda

5. Install Jupyter

conda install jupyter

6. Create a login password

[root@localhost ~]# ipython
 
Python 3.5.2 (default, Aug 4 2017, 02:13:48)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.1.0 -- An enhanced Interactive Python. Type '?' for help.
 
In [1]: from notebook.auth import passwd
In [2]: passwd()
Enter password:
Verify password:
Out[2]: 'sha1:5311cd8b9da9:70dd3321fccb5b5d77e66080a5d3d943ab9752b4'
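Under the hood, passwd() returns a string of the form algorithm:salt:digest, where the digest is the hash of the passphrase concatenated with the salt. Below is a small Python re-implementation of the matching verification logic, written purely for illustration under that format assumption; in practice, use the functions that ship with the notebook package:

```python
import hashlib

def make_passwd(passphrase, salt, algorithm="sha1"):
    # Same "algo:salt:digest" shape that notebook.auth.passwd() produces,
    # with digest = hash(passphrase + salt). Illustrative re-implementation.
    h = hashlib.new(algorithm)
    h.update(passphrase.encode("utf-8") + salt.encode("ascii"))
    return ":".join((algorithm, salt, h.hexdigest()))

def check_passwd(hashed, passphrase):
    # Split the stored string back apart and recompute the digest.
    algorithm, salt, _ = hashed.split(":", 2)
    return make_passwd(passphrase, salt, algorithm) == hashed

stored = make_passwd("secret", "5311cd8b9da9")
print(check_passwd(stored, "secret"))  # True
print(check_passwd(stored, "wrong"))   # False
```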

7. Generate configuration files

jupyter notebook --generate-config --allow-root

Note: You may encounter an encoding error at this step: UnicodeEncodeError: 'ascii' codec can't encode characters in position...

The solution: in the anaconda2 folder, open lib/python2.7/site.py and find this block:

if 0:
    # Enable to support locale aware default string encodings.
    import locale
    loc = locale.getdefaultlocale()
    if loc[1]:
        encoding = loc[1]

Change the 0 after "if" to 1, save, and restart Anaconda.

8. Modify the configuration file:

vi ~/.jupyter/jupyter_notebook_config.py

Add the following content:

c.NotebookApp.ip = '*'
c.NotebookApp.password = u'sha1:5311cd8b9da9:70dd3321fccb5b5d77e66080a5d3d943ab9752b4'  # the hash generated in step 6
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8888  # any free port; the default is 8888

9. Save the image

docker commit <container ID> <image name>

10. Start a container from the image to provide the service:

docker run --privileged -d -p 8889:8888 -v /sys/fs/cgroup:/sys/fs/cgroup --name jupyter jupyter2:v2 /usr/sbin/init

Note: There is a big pitfall on CentOS 7: systemctl cannot be used inside the container (for example, to turn off the firewall) and reports: Failed to get D-Bus connection: Operation not permitted

So the container has to be started with init, and in a Dockerfile you can set this as the startup command with CMD.

11. Enter the container:

docker exec -it jupyter /bin/bash

12. Turn off the firewall

systemctl stop firewalld.service

13. Start Jupyter

jupyter notebook --notebook-dir=/root/ --allow-root

14. Enter the server IP and the mapped port number in the browser to access it. Done!

That covers the Docker image compression and optimization operations; I hope it gives you a useful reference.
