Using Docker to build a Kong cluster

Building a Kong cluster with Docker is actually quite simple, but the introduction on the official website is terse, and beginners often don't know where to start. After a good deal of trial and error I finally got it working, so here is the procedure.

The main idea: every Kong node connects to the same database (that's really all there is to it).

The difficulty: getting Kong instances on different hosts to connect to that shared database.

Requirements:

1. Two hosts: 172.16.100.101 and 172.16.100.102

Steps:

1. Install the database on 101 (Cassandra is used here):

docker run -d --name kong-database \
       -p 9042:9042 \
       cassandra:latest
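
Cassandra takes a while to finish starting, and the migration in the next step will fail if it runs before the database is ready. One way to wait, assuming the stock cassandra image's usual log output, is to follow the logs until CQL clients are being accepted:

docker logs -f kong-database
# wait for a line similar to:
#   Starting listening for CQL clients on /0.0.0.0:9042
# then press Ctrl+C and continue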

2. Run the database migrations (think of this as initializing the database):

docker run --rm \
  --link kong-database:kong-database \
  -e "KONG_DATABASE=cassandra" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
  kong:latest kong migrations up

3. Install Kong on 101:

docker run -d --name kong \
  --link kong-database:kong-database \
  -e "KONG_DATABASE=cassandra" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
  -p 8000:8000 \
  -p 8443:8443 \
  -p 8001:8001 \
  -p 8444:8444 \
  kong:latest
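
Before moving on, it is worth confirming that this node came up correctly. Kong's Admin API listens on port 8001, and a plain GET against its root returns the node's configuration and version information:

curl -i http://172.16.100.101:8001/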

Note: the three steps above are all performed on 101; the official Docker installation guide is at https://getkong.org/install/docker/. The fourth step below is performed on the other host, 102. --link works between containers on the same host, but it cannot associate containers across different hosts, so the following configuration points Kong directly at the database host's IP instead.

4. Install another Kong on 102 to form a multi-node Kong cluster:

docker run -d --name kong \
 -e "KONG_DATABASE=cassandra" \
 -e "KONG_CASSANDRA_CONTACT_POINTS=172.16.100.101" \
 -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
 -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
 -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
 -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
 -p 8000:8000 \
 -p 8443:8443 \
 -p 8001:8001 \
 -p 8444:8444 \
 kong:latest
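
To check that this second node actually reached the database on 101, you can query the Admin API's /status endpoint, which reports server statistics along with database status:

curl -i http://172.16.100.102:8001/status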

5. Because the Cassandra database is used here, the db_update_propagation configuration parameter must be adjusted; its default value is 0, and 5 is a reasonable setting. Enter the container:

docker exec -it kong bash        # enter the kong container
cd /etc/kong                     # go to the configuration directory
cp kong.conf.default kong.conf   # copy kong.conf.default to kong.conf
vi kong.conf                     # set db_update_propagation = 5
exit                             # exit the container

docker restart kong              # restart kong

Note: this configuration item must be changed on both the 101 and 102 Kong nodes. For an introduction to db_update_propagation, see the official documentation.
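
Editing kong.conf inside the container works, but the change is lost if the container is ever recreated. Kong can also take any configuration property from an environment variable with the KONG_ prefix, so an alternative sketch is to set the value when starting the container, which makes step 5 unnecessary on that node. For example, on 102 (log options omitted for brevity):

docker run -d --name kong \
 -e "KONG_DATABASE=cassandra" \
 -e "KONG_CASSANDRA_CONTACT_POINTS=172.16.100.101" \
 -e "KONG_DB_UPDATE_PROPAGATION=5" \
 -p 8000:8000 \
 -p 8443:8443 \
 -p 8001:8001 \
 -p 8444:8444 \
 kong:latest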

6. Verify the kong cluster

You can register an API on 101 as follows:

curl -i -X POST \
 --url http://172.16.100.101:8001/apis/ \
 --data 'name=example-api' \
 --data 'hosts=example.com' \
 --data 'upstream_url=http://mockbin.org'

Then check whether the API is successfully registered:

curl -i http://172.16.100.101:8001/apis/example-api

If registration succeeded, an HTTP 200 response containing the API object is returned.

You can also query it from host 102:

curl -i http://172.16.100.102:8001/apis/example-api

If the same result is returned, both nodes are reading the same API record from the shared database, which means your Kong cluster has been built successfully.
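As a final end-to-end check, you can also send a request through the proxy port (8000) of either node with the registered host name; both nodes should forward it to the upstream (allow a few seconds for db_update_propagation if the API was just created):

curl -i -X GET \
 --url http://172.16.100.102:8000/ \
 --header 'Host: example.com'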

Supplementary knowledge: using docker-compose to create a Hadoop cluster

Download the Docker images

First, pull the five Docker images that will be used:

docker pull bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-resourcemanager:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-historyserver:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-nodemanager:1.1.0-hadoop2.7.1-java8

Set the Hadoop configuration parameters

Create a hadoop.env file with the following content:

CORE_CONF_fs_defaultFS=hdfs://namenode:8020
CORE_CONF_hadoop_http_staticuser_user=root
CORE_CONF_hadoop_proxyuser_hue_hosts=*
CORE_CONF_hadoop_proxyuser_hue_groups=*

HDFS_CONF_dfs_webhdfs_enabled=true
HDFS_CONF_dfs_permissions_enabled=false

YARN_CONF_yarn_log___aggregation___enable=true
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
YARN_CONF_yarn_log_server_url=http://historyserver:8188/applicationhistory/logs/
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
YARN_CONF_yarn_resourcemanager_hostname=resourcemanager
YARN_CONF_yarn_timeline___service_hostname=historyserver
YARN_CONF_yarn_resourcemanager_address=resourcemanager:8032
YARN_CONF_yarn_resourcemanager_scheduler_address=resourcemanager:8030
YARN_CONF_yarn_resourcemanager_resource___tracker_address=resourcemanager:8031
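
For reference, the bde2020 images translate each of these variables into a property in the matching Hadoop config file: the CORE_CONF_, HDFS_CONF_ and YARN_CONF_ prefixes select core-site.xml, hdfs-site.xml and yarn-site.xml respectively, a single underscore becomes a dot, and a triple underscore becomes a hyphen. Assuming that documented convention, the first line above should end up in core-site.xml roughly as:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode:8020</value>
</property>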

Create a docker-compose file

Create a docker-compose.yml file with the following content:

version: "2"

services:
 namenode:
  image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
  container_name: namenode
  volumes:
   - hadoop_namenode:/hadoop/dfs/name
  environment:
   - CLUSTER_NAME=test
  env_file:
   - ./hadoop.env

 resourcemanager:
  image: bde2020/hadoop-resourcemanager:1.1.0-hadoop2.7.1-java8
  container_name: resourcemanager
  depends_on:
   - namenode
   - datanode1
   - datanode2
   - datanode3
  env_file:
   - ./hadoop.env

 historyserver:
  image: bde2020/hadoop-historyserver:1.1.0-hadoop2.7.1-java8
  container_name: historyserver
  depends_on:
   - namenode
   - datanode1
   - datanode2
   - datanode3
  volumes:
   - hadoop_historyserver:/hadoop/yarn/timeline
  env_file:
   - ./hadoop.env

 nodemanager1:
  image: bde2020/hadoop-nodemanager:1.1.0-hadoop2.7.1-java8
  container_name: nodemanager1
  depends_on:
   - namenode
   - datanode1
   - datanode2
   - datanode3
  env_file:
   - ./hadoop.env

 datanode1:
  image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
  container_name: datanode1
  depends_on:
   - namenode
  volumes:
   - hadoop_datanode1:/hadoop/dfs/data
  env_file:
   - ./hadoop.env

 datanode2:
  image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
  container_name: datanode2
  depends_on:
   - namenode
  volumes:
   - hadoop_datanode2:/hadoop/dfs/data
  env_file:
   - ./hadoop.env

 datanode3:
  image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
  container_name: datanode3
  depends_on:
   - namenode
  volumes:
   - hadoop_datanode3:/hadoop/dfs/data
  env_file:
   - ./hadoop.env

volumes:
 hadoop_namenode:
 hadoop_datanode1:
 hadoop_datanode2:
 hadoop_datanode3:
 hadoop_historyserver:
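
Before starting anything, you can sanity-check the file (indentation mistakes in depends_on lists are easy to make) with docker-compose's built-in validator, which prints the resolved configuration or an error:

sudo docker-compose config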

Create and start a Hadoop cluster

sudo docker-compose up
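
Running docker-compose up in the foreground streams every container's logs to the terminal. To run the cluster in the background instead, add the -d (detached) flag:

sudo docker-compose up -d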

After starting the hadoop cluster, you can use the following command to view the container information of the hadoop cluster:

# View the containers in the cluster and their exposed ports
sudo docker-compose ps

     Name              Command            State   Ports
-----------------------------------------------------------
datanode1         /entrypoint.sh /run.sh   Up     50075/tcp
datanode2         /entrypoint.sh /run.sh   Up     50075/tcp
datanode3         /entrypoint.sh /run.sh   Up     50075/tcp
historyserver     /entrypoint.sh /run.sh   Up     8188/tcp
namenode          /entrypoint.sh /run.sh   Up     50070/tcp
nodemanager1      /entrypoint.sh /run.sh   Up     8042/tcp
resourcemanager   /entrypoint.sh /run.sh   Up     8088/tcp

# View the IP address of the namenode container
sudo docker inspect namenode | grep IPAddress

You can also view the cluster status through the NameNode web UI at http://<namenode IP>:50070.

Submitting a job

To submit a job, first log in to one of the cluster's nodes; here we log in to the namenode container.

sudo docker exec -it namenode /bin/bash

Prepare the data and submit the job:

cd /opt/hadoop-2.7.1

# Create the user directory
hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/root

# Prepare the input data
hdfs dfs -mkdir input
hdfs dfs -put etc/hadoop/*.xml input

# Submit the job
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input output 'dfs[a-z.]+'

# View the job results
hdfs dfs -cat output/*

Clean up the data

hdfs dfs -rm input/*
hdfs dfs -rmdir input/
hdfs dfs -rm output/*
hdfs dfs -rmdir output/

Stop the cluster

You can terminate the cluster by pressing CTRL+C or by using "sudo docker-compose stop".

After the cluster is stopped, the containers it created are not deleted. You can use "sudo docker-compose rm" to delete the stopped containers, or "sudo docker-compose down" to stop and remove them in one step.

After the containers are deleted, "sudo docker volume ls" shows the volumes the cluster used; these can be removed with "sudo docker volume rm".
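
Putting the teardown together, a minimal sketch looks like this (docker-compose prefixes each volume name with the project name, i.e. the directory containing docker-compose.yml, so substitute your own prefix for <project>):

sudo docker-compose down                          # stop and remove the containers
sudo docker volume ls                             # list the named volumes left behind
sudo docker volume rm <project>_hadoop_namenode   # repeat for each listed volume

# or remove the containers and their named volumes in one step:
sudo docker-compose down -v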

That's all for this walkthrough of using Docker to build a Kong cluster. I hope it can serve as a useful reference.
