Continuous delivery using Jenkins and Docker under Docker

1. What is Continuous Delivery

Continuous delivery means keeping the software stably and continuously in a state where it can be released at any time, with the process of producing a releasable product completed within a short period of time. Its goal is to build, test, and release software faster and more frequently. This approach reduces the cost, time, and risk of software development.

2. Comparison between Continuous Delivery and Traditional Delivery

The release cycle of traditional delivery can be summarized as follows: a development phase, followed by a user acceptance testing (UAT) phase, and finally a release performed by the operations team.

Disadvantages of traditional delivery:

Slow delivery: The customer receives the product long after the requirements were specified. This results in an unsatisfactory time to market and delays customer feedback.

Long feedback cycle: The feedback cycle concerns not only customers but also developers. Suppose you accidentally create a bug and learn about it during the UAT phase. How long does it take to fix something you worked on two months ago? Even small mistakes can take weeks.

Dangerous hot fixes: Hot fixes usually cannot wait for a full UAT phase, so they are often tested differently (with a shortened UAT phase) or not tested at all.

Stress: Unpredictable releases are stressful for operations teams. What's more, release cycles are often tightly scheduled, which puts additional pressure on developers and testers.

To be able to deliver products continuously without spending a fortune on an operations team working 24/7, we need automation. That is why continuous delivery is all about turning each stage of the traditional delivery process into a sequence of scripts, called an automated deployment pipeline or continuous delivery pipeline.

We can then run the process after every code change, continually delivering the product to users, without the need for manual steps.

Advantages of Continuous Delivery:

Fast delivery: Customers can use the product as soon as the development is completed, which greatly shortens the time to market. Remember, software only generates revenue when it is in the hands of users.

Fast feedback cycles: Let’s say you create a bug in your code and it goes into production on the same day. How much time would it take to fix something you were working on that day? Probably not that much. This, together with a fast rollback strategy, is the best way to keep production stable.

Low-risk releases: If you release every day, the process becomes repeatable and therefore safer.

Flexible release options: If you need to release immediately, everything is already ready, so there is no additional time/cost associated with a release decision.

Needless to say, we could get all these benefits simply by eliminating all the delivery stages and developing directly on production. However, that would result in a reduction in quality. In fact, the whole difficulty in introducing continuous delivery is the concern that quality will decrease as manual steps are removed. We will show how to handle it in a safe way, so that the delivered products consistently have fewer bugs and are better adapted to customer needs.

3. How to achieve continuous delivery

The automated deployment pipeline consists of three stages.

Each stage replaces a phase of the traditional delivery process, as follows:

Continuous Integration: Checks to ensure that code written by different developers is integrated together

Automated acceptance testing: This will replace the manual QA phase and check whether the features implemented by the developers meet the customer’s requirements

Configuration management: This will replace the manual phase - configuring the environment and deploying software

1. Continuous Integration

The continuous integration phase provides the first feedback to the developers. It checks out code from the repository (git, svn), compiles the code, runs unit tests, and verifies the code quality. If any step fails, the pipeline execution is stopped and the first thing the developer should do is to fix the continuous integration build.
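As a rough sketch, assuming a Maven-based Java project with a Checkstyle plugin configured (the build tool and the repository URL are just illustrative assumptions, not something prescribed by continuous integration itself), the phase boils down to commands like these:

# check out the code (the repository URL is a placeholder)
$ git clone https://github.com/example/app.git && cd app

# compile the code and run the unit tests
$ mvn compile
$ mvn test

# verify code quality with static analysis
$ mvn checkstyle:check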

2. Automated Acceptance Testing

The automated acceptance testing phase is a set of tests written in conjunction with QAs that should replace the manual UAT phase. It acts as a quality check gate to decide whether a product is ready for release. If any acceptance test fails, the pipeline execution is stopped and no further steps are run. It prevents movement to the configuration management phase and thus prevents release.

3. Configuration Management

The configuration management phase is responsible for tracking and controlling changes in the software and its environment. It involves preparing and installing necessary tools, scaling the number of service instances and their distribution, infrastructure inventory, and all tasks related to application deployment.

Configuration management is a solution to the problems that come with manually deploying and configuring applications in a production environment. Configuration management tools such as Ansible, Chef, or Puppet support storing configuration files in a version control system and tracking every change made on production servers.

Another area in which manual work of the operations team is replaced is application monitoring. This is typically done by streaming logs and metrics from the running systems to a common dashboard that is monitored by developers (or the DevOps team).

4. Tools

1. Docker Ecosystem

Docker, as the leader in containerization, has dominated the software industry in recent years. It allows packaging applications in environment-agnostic images, thus treating the server as a resource farm rather than a machine that must be configured for each application.

Docker was a clear choice as it fits perfectly into the world of (micro)services and the continuous delivery process.

2. Jenkins

Jenkins is the most popular automation server on the market. It helps in creating continuous integration and continuous delivery pipelines, and generally helps in creating scripts for any other automation. Highly plugin-oriented, it has a great community that constantly extends it with new features.

More importantly, it allows pipelines to be written as code and supports distributed build environments.

3. Ansible

Ansible is an automation tool that helps with software provisioning, configuration management, and application deployment. It uses an agentless architecture and is well integrated with Docker.

4. GitHub

GitHub is definitely the number one hosted version control service. It provides a very stable system, a web-based UI, and free hosting for public repositories.

Nonetheless, continuous delivery works with any source control management service or tool, whether it is hosted in the cloud or self-hosted, and whether it is based on Git, SVN, Mercurial, or anything else.

5. Docker Practice

1. Docker Overview

Docker is an open source project designed to facilitate application deployment using software containers. The following is quoted from the official Docker page:

A Docker container wraps a piece of software in a complete file system that contains everything it needs to run: code, runtime, system tools, system libraries—anything that can be installed on a server. This ensures that the software will always run the same, regardless of its environment.

Thus, in a similar manner to virtualization, Docker allows applications to be packaged into images that can be run anywhere.

2. Virtualization and containerization

Without Docker, isolation and other benefits can be achieved using hardware virtualization (commonly known as virtual machines). The most popular solutions are VirtualBox, VMware, and Parallels.

A virtual machine emulates a computer architecture and provides the functionality of a physical computer. If each application is delivered and run as a separate virtual machine image, we can achieve complete isolation of the applications. The following figure shows the concept of virtualization:

Each application is launched as a standalone image that includes all dependencies and the guest operating system. The image is run by a hypervisor, which emulates the physical computer architecture.

This deployment method is supported by many tools, such as Vagrant, and is well suited to development and testing environments. However, virtualization has three significant disadvantages:

Low performance: A virtual machine emulates the entire computer architecture to run the guest operating system, so each operation has a large overhead.

High resource consumption: Emulation requires a lot of resources and must be performed separately for each application. That is why, on a standard desktop machine, only a few applications can run at the same time this way.

Large images: Each application is delivered with a complete operating system, so deployment on a server means sending and storing a lot of data.

Docker changes this picture: containers share the kernel of the host operating system instead of each running its own guest OS, which makes them lighter, faster to start, and cheaper to store.

3. Installation of Docker

The Docker installation process is quick and easy. Most Linux operating systems support it these days, and many of them provide dedicated binaries. Mac and Windows are also well supported with native apps.

However, it is important to understand that Docker is internally based on the Linux kernel and its specifics, which is why on Mac and Windows it uses a virtual machine (xhyve on Mac and Hyper-V on Windows) to run the Docker Engine environment.

Here we only cover the installation on Ubuntu 16.04, using the official commands:

$ sudo apt-get update
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
$ sudo apt-add-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable'
$ sudo apt-get update
$ sudo apt-get install -y docker-ce

If an error message is displayed, you can edit the Docker repository entry manually and run the installation again:

$ cd /etc/apt/sources.list.d
$ sudo vi docker.list
  deb https://download.docker.com/linux/ubuntu zesty edge
$ sudo apt update
$ sudo apt install docker-ce

This time there is no error, but the download turns out to be very slow, because the docker-ce package is relatively large and is downloaded from an overseas server. You can switch to a domestic (Chinese) mirror instead; the commands are as follows:

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce

To test whether the installation is complete, run docker -v or docker info; if they print basic information about Docker, the installation was successful.
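For example (the exact version string depends on the release you installed):

$ docker -v         # prints the client version, e.g. "Docker version 17.03.1-ce, build c6d412e"
$ sudo docker info  # prints detailed information about the Docker daemon and host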

4. Run Docker

Now that the Docker environment is installed, we can run the classic first example, hello-world:

$ docker run hello-world

When you see a message like the following, it means Docker is running correctly:
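The output begins with lines similar to these (abbreviated):

Hello from Docker!
This message shows that your installation appears to be working correctly.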

Let's take a step-by-step look at what's happening under the hood:

1. You run the docker run command with the Docker client.

2. The Docker client contacts the Docker daemon and asks it to create a container from the image named hello-world.

3. The Docker daemon checks whether it has the hello-world image locally; since it does not, it requests the hello-world image from the remote Docker Hub registry.

4. The Docker Hub registry contains the hello-world image, so it is pulled down to the Docker daemon.

5. The Docker daemon creates a new container from the hello-world image, which starts the executable that produces the output.

6. The Docker daemon streams this output to the Docker client.

7. The Docker client sends it to your terminal.

5. Build the image

There are two ways to build an image: with the docker commit command, or with an automated Dockerfile build. Here we will only cover the Dockerfile method.

Manually creating each Docker image with the commit command can be laborious, especially in the case of build automation and the continuous delivery process. Fortunately, there is a built-in language for specifying all the instructions that should be executed to build a Docker image.

1. Create a file named Dockerfile and enter the following content:

# base image pulled from Docker Hub
FROM ubuntu:16.04
# install Python on top of the base system
RUN apt-get update && \
    apt-get install -y python

2. Execute the command to build the image:

docker build -t ubuntu_with_python .

3. We can use the docker images command to see the image we just created.
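Its output looks roughly like the listing below (the image IDs, creation times, and sizes are placeholders and will differ on your machine):

$ docker images
REPOSITORY           TAG      IMAGE ID      CREATED      SIZE
ubuntu_with_python   latest   <image_id>    <created>    <size>
ubuntu               16.04    <image_id>    <created>    <size>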

6. Docker containers

We can use the docker ps command to view running containers, and docker ps -a to view all containers, including stopped ones. Containers are stateful.

Start a container from the image and view the container's status.

To stop a running Docker container, use the command: docker stop <container_id>
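A minimal session might look like this (the container name my_container is arbitrary, and ubuntu_with_python is the image built above):

$ docker run -d --name my_container ubuntu_with_python sleep 1000   # start a container in the background
$ docker ps          # lists running containers; STATUS shows "Up ..."
$ docker stop my_container
$ docker ps -a       # lists all containers; the stopped one now shows "Exited ..."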

7. Run Tomcat and access it externally

1. Run the tomcat image:

docker run -d tomcat

However, an external browser cannot access Tomcat on port 8080 yet, because the container's port has not been published to the Docker host (here, the virtual machine).

So when we start the container, we need to use the -p option to map a port of the Docker host to a port of the container.

2. Start with the -p option

docker run -d -p 8080:8080 tomcat

Enter the virtual machine's IP address and the port (for example, http://<vm_ip>:8080) in the browser to access Tomcat.
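You can also verify the port mapping from the command line with curl (the IP address is a placeholder for your Docker host):

$ curl -I http://<vm_ip>:8080    # Tomcat should answer with an HTTP status line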

6. Jenkins Practice

1. Introduction to Jenkins

Jenkins is an open source automation server written in Java. It is the most popular tool for implementing Continuous Integration and Continuous Delivery processes due to very active community-based support and a large number of plugins.

Jenkins stands out among continuous integration tools and is the most widely used software of its kind, largely thanks to its rich feature set and extensibility.

2. Install Jenkins

The Jenkins installation process is quick and easy. There are many different ways to do this, but since we are already familiar with the Docker tool and the benefits it brings, we will start with a Docker-based solution. It’s also the easiest, most predictable, and smartest approach.

There are some environmental requirements for installing Jenkins:

Java 8
256 MB of free memory
1 GB+ of free disk space

However, it is important to understand that the requirements strictly depend on what you plan to do with Jenkins. If Jenkins serves as the continuous integration server for a whole team, then 1 GB+ of free memory and 50 GB+ of free disk space are recommended even for a small team. Needless to say, Jenkins also performs computations and transfers a lot of data over the network, so CPU and bandwidth are critical.

There are two ways to install Jenkins:

1. Using a Docker image

2. Without using a Docker image

1. Install Jenkins using a Docker image

Use command:

docker run -p <host_port>:8080 -v <host_volume>:/var/jenkins_home jenkins:2.60.1
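A concrete invocation might look like this (the host port 49001, the volume name jenkins_home, and the container name jenkins are just example values):

$ docker run -d -p 49001:8080 -v jenkins_home:/var/jenkins_home --name jenkins jenkins:2.60.1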

Open the URL (http://<host_ip>:<host_port>) in a web browser; when the Jenkins setup wizard appears, the installation was successful.

Jenkins then asks for an initial admin password, which can be found in the startup log:
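If Jenkins runs in a container as above, the password can be read from the log or directly from the Jenkins home volume (the container name jenkins comes from the example command above):

$ docker logs jenkins                                                      # the password appears in the startup log
$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword   # or read the file directly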

2. Install Jenkins without using a Docker image

The installation is also very simple, just execute the following command:

$ wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
$ sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
$ sudo apt-get update
$ sudo apt-get install jenkins
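After the package installation, Jenkins runs as a system service listening on port 8080 by default. Assuming a systemd-based Ubuntu, a quick way to check and restart it is:

$ sudo systemctl status jenkins    # shows whether the Jenkins service is running
$ sudo systemctl restart jenkins   # restarts the service if needed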

3. A simple Jenkins application (hello world)

In the tradition of starting with hello world, let's look at the steps to create our first Jenkins pipeline:

Click New Item.

Enter hello world as the project name, select Pipeline, and click OK.

There are many options. We will skip them now and go directly to the pipeline part.

In the Script text box, we can enter the pipeline script:

pipeline {
    agent any
    stages {
        stage("Hello") {
            steps {
                echo 'Hello World'
            }
        }
    }
}

Click Save and then Build Now. In the console output of the build, we can see the Hello stage executing and printing Hello World.

7. Continuous Integration Pipeline

1. Introduction to pipeline

A pipeline can be understood as a sequence of automated operations; it can be seen as a simple chain of scripts that provides the following additional benefits:

Action grouping: Actions are grouped into stages (also called gates or quality gates), which introduce structure into the process with a clearly defined rule: if one stage fails, no further stages are executed

Visibility: All aspects of the process are visualized, which facilitates rapid failure analysis and promotes team collaboration

Feedback: Team members are informed of any issues as they occur so they can respond quickly

2. Pipeline structure

A Jenkins pipeline consists of two kinds of elements: stages and steps. The following example shows how to use them:

3. Hello world of pipeline

pipeline {
    agent any
    stages {
        stage('First Stage') {
            steps {
                echo 'Step 1. Hello World'
            }
        }
        stage('Second Stage') {
            steps {
                echo 'Step 2. Second time Hello'
                echo 'Step 3. Third time Hello'
            }
        }
    }
}

After the build succeeds, you can see both stages and their steps in the Stage View and in the console output.

4. Pipeline rules

Agent: This specifies where the execution takes place; it can define a label to match equally-labelled agents, or docker to run the pipeline in a dynamically provisioned container that provides the environment for pipeline execution

Triggers: This defines how the pipeline is automatically triggered and can be used to set up a time-based schedule using cron or pollScm to check for changes in the repository (we will cover this in detail in the Triggers and Notifications section)

Options: This specifies options for a particular pipeline, such as timeout (the maximum time the pipeline should run) or retries (the number of times the pipeline should be rerun after a failure)

Environment: This defines a set of key-value pairs that are used as environment variables during the build process

Parameters: This defines a list of user input parameters

Stage: This allows for logical grouping of steps

When: This determines whether the stage should be executed based on the given condition
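To see how these directives fit together, here is a minimal sketch of a declarative pipeline (the stage names, cron schedule, environment variable, and parameter are just illustrative placeholders, not required values):

pipeline {
    agent any
    triggers {
        pollSCM('H/15 * * * *')                 // poll the repository for changes every 15 minutes
    }
    options {
        timeout(time: 10, unit: 'MINUTES')      // abort the build if it runs longer than 10 minutes
    }
    environment {
        APP_ENV = 'staging'                     // available as an environment variable in every step
    }
    parameters {
        string(name: 'VERSION', defaultValue: '1.0.0', description: 'Version to build')
    }
    stages {
        stage('Build') {
            steps {
                echo "Building version ${params.VERSION} for ${env.APP_ENV}"
            }
        }
        stage('Deploy') {
            when {
                branch 'master'                 // run this stage only for the master branch (multibranch pipelines)
            }
            steps {
                echo 'Deploying...'
            }
        }
    }
}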

This covers the most basic knowledge; some more advanced features will be covered later.

Summary

The above is my introduction to achieving continuous delivery with Jenkins and Docker. I hope it is helpful to you. If you have any questions, please leave me a message and I will reply to you in time!
