Tutorial on how to quickly deploy a Nebula Graph cluster using Docker Swarm

1. Introduction

This article describes how to deploy a three-node Nebula Graph cluster with Docker Swarm, and then put haproxy and keepalived in front of it for load balancing and high availability.

2. Building the Nebula cluster

2.1 Environment preparation

Machine preparation

IP             Memory (GB)   CPU (cores)
192.168.1.166  16            4
192.168.1.167  16            4
192.168.1.168  16            4

Before installation, make sure Docker is installed on all machines.
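A quick way to check, for example, is to confirm the Docker daemon is running and recent enough (any release with Swarm mode built in, 17.06 or later, should be fine):

# Run on each of the three machines
docker version --format '{{.Server.Version}}'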

2.2 Initialize swarm cluster

Execute the following on the 192.168.1.166 machine:

$ docker swarm init --advertise-addr 192.168.1.166
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.
To add a worker to this swarm, run the following command:
 docker swarm join \
 --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
 192.168.1.166:2377
 
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
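If the join command is lost, it can be printed again at any time on the manager node:

docker swarm join-token worker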

2.3 Joining a worker node

Following the prompt from the init command, run the join command on 192.168.1.167 and 192.168.1.168 to add them as worker nodes.

docker swarm join \
 --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
 192.168.1.166:2377

2.4 Verify the cluster

docker node ls
 
ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
h0az2wzqetpwhl9ybu76yxaen *   KF2-DATA-166   Ready    Active         Reachable        18.06.1-ce
q6jripaolxsl7xqv3cmv5pxji     KF2-DATA-167   Ready    Active         Leader           18.06.1-ce
h1iql1uvm7123h3gon9so69dy     KF2-DATA-168   Ready    Active                          18.06.1-ce

2.5 Configure Docker Stack

vi docker-stack.yml

Add the following content:

version: '3.6'
services:
  metad0:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad0:/data/meta
      - logs-metad0:/logs
    networks:
      - nebula-net

  metad1:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad1:/data/meta
      - logs-metad1:/logs
    networks:
      - nebula-net

  metad2:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad2:/data/meta
      - logs-metad2:/logs
    networks:
      - nebula-net

  storaged0:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12002
        protocol: tcp
        mode: host
    volumes:
      - data-storaged0:/data/storage
      - logs-storaged0:/logs
    networks:
      - nebula-net

  storaged1:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12004
        protocol: tcp
        mode: host
    volumes:
      - data-storaged1:/data/storage
      - logs-storaged1:/logs
    networks:
      - nebula-net

  storaged2:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12006
        protocol: tcp
        mode: host
    volumes:
      - data-storaged2:/data/storage
      - logs-storaged2:/logs
    networks:
      - nebula-net

  graphd1:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.166
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:13000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3699
        protocol: tcp
        mode: host
      - target: 13000
        published: 13000
        protocol: tcp
        # mode: host
      - target: 13002
        published: 13002
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd:/logs
    networks:
      - nebula-net

  graphd2:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.167
      - --log_dir=/logs
      - --v=2
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:13001/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3640
        protocol: tcp
        mode: host
      - target: 13000
        published: 13001
        protocol: tcp
        mode: host
      - target: 13002
        published: 13003
        protocol: tcp
        # mode: host
    volumes:
      - logs-graphd2:/logs
    networks:
      - nebula-net

  graphd3:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.168
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:13002/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3641
        protocol: tcp
        mode: host
      - target: 13000
        published: 13002
        protocol: tcp
        # mode: host
      - target: 13002
        published: 13004
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd3:/logs
    networks:
      - nebula-net

networks:
  nebula-net:
    external: true
    attachable: true
    name: host

volumes:
  data-metad0:
  logs-metad0:
  data-metad1:
  logs-metad1:
  data-metad2:
  logs-metad2:
  data-storaged0:
  logs-storaged0:
  data-storaged1:
  logs-storaged1:
  data-storaged2:
  logs-storaged2:
  logs-graphd:
  logs-graphd2:
  logs-graphd3:

Edit nebula.env

Add the following content

TZ=UTC
USER=root


2.6 Start the Nebula cluster

docker stack deploy nebula -c docker-stack.yml
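Once the stack is deployed, you can check that all nine services come up and stay up (standard Docker Swarm commands):

# List the services of the nebula stack and their replica counts
docker stack services nebula
# Show the tasks of an individual service, e.g. the first metad
docker service ps nebula_metad0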

3. Cluster load balancing and high availability configuration

The Nebula Graph client (as of 1.x) does not provide load balancing; it simply picks a graphd at random to connect to. For production use, you therefore need to provide load balancing and high availability yourself.

[Figure 3.1: deployment architecture (data service, load balancing, and high availability layers)]

The deployment architecture is divided into three layers, as shown in Figure 3.1:

Data service layer: the Nebula Graph cluster deployed above (the graphd, metad, and storaged services).

Load balancing layer: load-balances client requests and distributes them to the data service layer below.

High availability layer: provides high availability for haproxy itself, ensuring that the load balancing layer, and with it the whole cluster, keeps serving.

3.1 Load Balancing Configuration

haproxy is deployed with docker-compose. Create the following three files.

Add the following to the Dockerfile:

FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EXPOSE 3640


Add the following to docker-compose.yml

 version: "3.2"
 services:
 haproxy:
  container_name: haproxy
  build: .
  volumes:
  - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
  ports:
  -3640:3640
  restart: always
  networks:
  - app_net
 networks:
 app_net:
  external: true


Add the following to haproxy.cfg

global
    daemon
    maxconn 30000
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning

defaults
    log-format %hr\ %ST\ %B\ %Ts
    log global
    mode http
    option http-keep-alive
    timeout connect 5000ms
    timeout client 10000ms
    timeout server 50000ms
    timeout http-request 20000ms

# customize your own frontends && backends && listen conf
# CUSTOM

listen graphd-cluster
    bind *:3640
    mode tcp
    maxconn 300
    balance roundrobin
    server server1 192.168.1.166:3699 maxconn 300 check
    server server2 192.168.1.167:3699 maxconn 300 check
    server server3 192.168.1.168:3699 maxconn 300 check

listen stats
    bind *:1080
    stats refresh 30s
    stats uri /stats
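Before starting, the configuration can be syntax-checked with haproxy's check mode, run here through the same image so nothing extra needs to be installed on the host:

docker run --rm -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg haproxy:1.7 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg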

3.2 Start haproxy

docker-compose up -d
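A quick smoke test that the listener is accepting TCP connections (assuming nc is available):

# Run on each machine where haproxy was started
nc -zv 127.0.0.1 3640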

3.3 High Availability Configuration

Note: To configure keepalived, you need to reserve a virtual IP (VIP) in advance. In the following configuration, 192.168.1.99 is the virtual IP.

Make the following configuration on 192.168.1.166, 192.168.1.167, and 192.168.1.168

Install keepalived

apt-get update && apt-get upgrade && apt-get install keepalived -y

Edit the keepalived configuration file /etc/keepalived/keepalived.conf (apply the following configuration on all three machines; give each a different priority value, which determines which node becomes MASTER).

192.168.1.166 machine configuration

global_defs {
    router_id lb01    # identification; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 52
    priority 999
    # Interval, in seconds, between sync checks of the MASTER and BACKUP load balancers
    advert_int 1
    # Authentication type (PASS or AH) and password; under the same vrrp_instance,
    # MASTER and BACKUP must use the same password to communicate
    authentication {
        auth_type PASS
        auth_pass amber1
    }
    virtual_ipaddress {
        # Virtual IP 192.168.1.99/24 bound to interface ens160 with alias ens160:1;
        # identical on primary and backup
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

192.168.1.167 machine configuration

global_defs {
    router_id lb01    # identification; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 888
    # Interval, in seconds, between sync checks of the MASTER and BACKUP load balancers
    advert_int 1
    # Authentication type (PASS or AH) and password; under the same vrrp_instance,
    # MASTER and BACKUP must use the same password to communicate
    authentication {
        auth_type PASS
        auth_pass amber1
    }
    virtual_ipaddress {
        # Virtual IP 192.168.1.99/24 bound to interface ens160 with alias ens160:1;
        # identical on primary and backup
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

192.168.1.168 machine configuration

global_defs {
    router_id lb01    # identification; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 777
    # Interval, in seconds, between sync checks of the MASTER and BACKUP load balancers
    advert_int 1
    # Authentication type (PASS or AH) and password; under the same vrrp_instance,
    # MASTER and BACKUP must use the same password to communicate
    authentication {
        auth_type PASS
        auth_pass amber1
    }
    virtual_ipaddress {
        # Virtual IP 192.168.1.99/24 bound to interface ens160 with alias ens160:1;
        # identical on primary and backup
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

keepalived related commands

# Start keepalived
systemctl start keepalived
# Enable keepalived at boot
systemctl enable keepalived
# Restart keepalived
systemctl restart keepalived
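After keepalived is started on all three machines, the VIP should be held by the MASTER (192.168.1.166 with the priorities above). A simple way to verify:

# On the MASTER, 192.168.1.99 should be listed under ens160
ip addr show ens160
# From any machine, graphd should now be reachable through the VIP
nc -zv 192.168.1.99 3640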

4. Others

How do you deploy offline? Just point the image entries in docker-stack.yml at a private registry. If you have any questions, please feel free to contact us.
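For example, the images can be mirrored into a private registry ahead of time and the image: lines updated to match (registry.example.com below is a placeholder for your own registry):

# On a machine with internet access
docker pull vesoft/nebula-metad:nightly
docker tag vesoft/nebula-metad:nightly registry.example.com/vesoft/nebula-metad:nightly
docker push registry.example.com/vesoft/nebula-metad:nightly
# Repeat for vesoft/nebula-storaged and vesoft/nebula-graphd, then point
# the image: entries in docker-stack.yml at registry.example.com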
