How to use Docker containers to implement proxy forwarding and data backup

Preface

When we deploy applications to servers as Docker containers, we usually need to consider two aspects: network and storage.

In terms of network, some applications need to listen on ports, and some even need to be accessible from outside the server.

For security reasons, proxy forwarding is usually more appropriate than opening firewall ports directly.

In terms of storage, since containers are ephemeral and not suited to persisting data, data is generally saved to the server's disk via mounted volumes.

However, the server cannot guarantee absolute security, so the data also needs to be backed up to the cloud.

Proxy forwarding

By default, containers on different Docker networks are isolated from each other. Related applications (for example a web front-end container, a back-end server container, and a database container) are generally placed on a dedicated bridge network (hereinafter referred to as a subnet), so that these containers can communicate with each other while remaining isolated from the outside.

For containers that need to be reachable from outside the subnet, you can map their ports to the server host. The overall structure is as follows:
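As a concrete sketch of the setup described above (the network name, image names, and port numbers are illustrative, not from the article), the subnet and the port mapping can be created with plain docker commands:

```shell
# Create a dedicated bridge network ("subnet") for the related containers.
docker network create app-net

# Containers on app-net can reach each other by container name,
# but are isolated from containers on other networks.
docker run -d --name db  --network app-net postgres:15
docker run -d --name api --network app-net my-api-image

# Only the web front end is exposed to the host:
# map container port 80 to host port 1234.
docker run -d --name web --network app-net -p 1234:80 my-web-image
```

After this, `curl localhost:1234` works on the host, while `db` and `api` remain reachable only from inside `app-net`.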


The above port mapping only allows the server (host) to access the containers' network services. If we want to reach a container on the server from a local machine over the Internet, that generally still fails: for security, the server enables a firewall by default and opens only a few ports, such as 22.

In a traditional deployment, the usual solution is to forward requests through a reverse proxy server, for example with an Nginx configuration like the following:

# Forwarding by path
server {
    listen 80;
    server_name www.xx.com;

    location /a {
        proxy_pass http://localhost:1234;
    }
    location /b {
        proxy_pass http://localhost:2234;
    }
}

# Forwarding by domain name
server {
    listen 80;
    server_name www.yy.com;

    location / {
        proxy_pass http://localhost:1234;
    }
}

So the problem seems solved at this point. But what if Nginx itself is also running in a container?

As mentioned above, a subnet is isolated from containers outside it, so an Nginx container outside the subnet cannot reach those services.

You might think of attaching the Nginx container to each subnet. Containers do support joining multiple networks, but the drawback of this approach is that every time a new subnet is added, the Nginx container's network configuration must be modified and the container restarted.

A better way is to run Nginx in host network mode: give up the isolation between the Nginx container and the server, and share the host's network stack and ports directly. The Nginx container can then reach every container that has mapped ports.
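A minimal sketch of this (the image tag and config path are assumptions, and note that `--network host` behaves this way on Linux hosts only):

```shell
# Run Nginx with the host's network stack: no -p flags are needed,
# because Nginx listens directly on the host's ports (e.g. 80/443).
docker run -d --name nginx-proxy \
  --network host \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro \
  nginx:stable
```

With this in place, `proxy_pass http://localhost:1234;` inside the Nginx config resolves to the host's port 1234, i.e. to whichever container has that port mapped.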

As shown in the following figure:


Data backup

Application Scenario

Taking both speed and security into account, companies usually have some servers that are reachable only from the intranet. However, the data on these servers, and the servers themselves, can be modified or fail at any time.

Data backup is therefore particularly important. Here we discuss backing up relatively small amounts of data.

Take the knowledge base server I recently built for my team as an example.

This web application is a small Python service deployed on an intranet server as a container. It supports online editing and stores its data as Markdown (.md) files.

Once a container fails, its internal data is no longer accessible, so it is clearly unsafe to keep the data inside the container. Mounting files is the way to let the container and the server share the data for reading and writing.
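Such a mount could look like the following sketch (all paths, port numbers, and the image name are hypothetical): the md files live on the host disk, so they survive container failure or replacement.

```shell
# Keep the knowledge base's md files on the host disk by mounting
# a host directory over the container's data directory.
docker run -d --name wiki \
  -p 8000:8000 \
  -v /srv/wiki/data:/app/data \
  my-wiki-image
```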

So how do we back up the data? Here we choose a private GitHub repository, for three reasons:

  • Safe: the data is not easily lost or stolen.
  • Convenient: a few git commands are enough to back up.
  • Fast: the backed-up files are small in both size and number.

Although the method has been determined, there are still two problems to be solved:

  • Accessing the GitHub repository requires authentication.
  • The data needs to be committed and pushed to GitHub on a schedule, or automatically.

Implementation

First, following the single-responsibility principle for containers, we should create a separate container to perform the backup task.

Here we can use docker-compose or another orchestration tool to create and manage the multiple containers.

Next comes authentication. Create an SSH key on the server and add the public key to your GitHub account settings, so that pushes to the corresponding repository are authorized.

At this point, however, only the server can push code, not the container, so you also need to copy (or mount) the .ssh directory into the container.
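One way to do this, sketched with assumed paths and an assumed image name, is to mount the server's .ssh directory read-only into the backup container rather than baking the key into the image:

```shell
# Give the backup container the server's SSH credentials by mounting
# ~/.ssh read-only, alongside the same data volume the app uses.
docker run -d --name wiki-backup \
  -v /srv/wiki/data:/data \
  -v /root/.ssh:/root/.ssh:ro \
  my-backup-image
```

Mounting read-only keeps the key out of the image itself, so the image can be shared or rebuilt without leaking credentials.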

Finally, the implementation of automatic backup. Ideally we would commit and push every time a file changes, but there is currently no simple way to watch files from inside the container, so the next best option is a scheduled task: run the corresponding git commands every 5 minutes to commit and push the files to the repository.

Here you can use a lightweight container based on the busybox image, mount the project data into it so that the files stay in sync, and then start the cron service to run the backup.
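The scheduled task could look like the following sketch, assuming the data is mounted at /data, the git remote and branch are already configured, and busybox crond is available in the image (the script path and commit message format are assumptions):

```shell
#!/bin/sh
# backup.sh - commit and push the mounted data each time cron runs it.
cd /data || exit 1

# Stage everything, but commit only if something actually changed,
# so the history is not polluted with empty commits.
git add -A
git diff --cached --quiet || git commit -m "auto backup: $(date '+%Y-%m-%d %H:%M')"
git push origin main
```

A busybox crontab entry such as `*/5 * * * * /bin/sh /backup.sh` runs the script every 5 minutes, and starting the container with `crond -f` as its entrypoint keeps cron in the foreground so the container stays alive.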

Summary

The above is the full content of this article. I hope it provides some reference value for your study or work. If you have any questions, feel free to leave a message. Thank you for your support of 123WORDPRESS.COM.

You may also be interested in:
  • Detailed explanation of how to copy and backup docker container data
  • Detailed explanation of psql database backup and recovery in docker
  • Docker uses the mysqldump command to back up and export mysql data in the project
  • Database backup in docker environment (postgresql, mysql) example code
  • Detailed explanation of docker command to backup linux system
  • Detailed explanation of backup, recovery and migration of containers in Docker
  • Detailed explanation of Docker data backup and recovery process
