Implementation of services in docker accessing host services

1. Scenario

I use Windows with WSL2 for daily development and testing, but WSL2 often runs into network problems. For example, today I was testing a project whose core function is synchronizing Postgres data to ClickHouse using the open-source component synch.

Components required for testing

  1. Postgres
  2. Kafka
  3. ZooKeeper
  4. Redis
  5. synch container

At the start of testing, the chosen solution was to orchestrate the five services above with docker-compose, using `network_mode: host`. Given Kafka's listener security mechanism, this network mode avoids having to expose each port individually.

The docker-compose.yaml file is as follows

version: "3"
 
services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true
    # Point postgres at the mounted config files
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"
 
  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"
 
  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data # data mount
    network_mode: "host"
 
  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "
      sleep 30 &&
      synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"
 
  # A consumer consumes a database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c
      "sleep 30 &&
      synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"
 
  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"
 
volumes:
  redis:
  kafka:
  zookeeper:

During the test we needed Postgres together with the wal2json component. Installing the component separately inside the container proved very troublesome and failed after several attempts, so we chose to install the Postgres service on the host machine instead, with the synch service in the container using the host machine's IP and port.
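For reference, synch's Postgres source relies on the wal2json logical-decoding output plugin, which means the host's postgresql.conf must have logical replication enabled. A minimal sketch of the relevant settings (the values and file path are illustrative, matching the compose file above; on Debian/Ubuntu the plugin itself is typically installed via a package such as `postgresql-12-wal2json`, though the exact package name depends on your Postgres version and repository):

```
# /var/lib/postgresql/postgresql.conf (host machine)
listen_addresses = '*'      # accept connections from containers, not just localhost
port = 5433                 # matches the port used in synch.yaml below
wal_level = logical         # required for logical decoding (wal2json)
max_replication_slots = 10  # synch needs at least one free replication slot
max_wal_senders = 10
```

After changing these settings, Postgres must be restarted (a reload is not enough for `wal_level`).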

But after restarting the services, synch still failed to start, and the log showed that it could not connect to Postgres. The synch configuration file is as follows:

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit, recommend setting 20000 when production
  insert_interval: 60 # how many seconds to submit, recommend setting 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true
 
redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO
 
source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:
 
clickhouse:
  # Shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # enable cluster mode when not empty, and hosts must be more than one if enabled.
  distributed_suffix: _all # distributed tables suffix, available in cluster
 
kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

This was very strange. I first confirmed that Postgres was started and listening normally on its port (5433 here). Yet connecting via both localhost and the host's eth0 address produced errors.

2. Solution

Googling led to a highly upvoted Stack Overflow answer that solved the problem. The original answer is as follows:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.

Otherwise, read below

Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

See the source post for more details
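The --add-host variant mentioned in the answer also has a docker-compose equivalent (available with Docker Engine 20.10+); a minimal sketch, using the producer service from the compose file above as an example:

```yaml
services:
  producer:
    image: long2ice/synch
    # Maps host.docker.internal to the host's gateway IP inside the container,
    # so the container can reach services listening on the host without host networking.
    extra_hosts:
      - "host.docker.internal:host-gateway"
```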

In host mode, services in containers access services on the host machine

Changing the Postgres host in the synch configuration to host.docker.internal resolved the error. The host machine's /etc/hosts file looks like this:

root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
 
10.111.130.24 host.docker.internal

You can see the mapping between the host IP and the name host.docker.internal: accessing that name resolves to the host IP, which reaches the host's services.
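The resolution step can be illustrated with a quick lookup against a copy of that hosts file (the scratch path and contents below are a hypothetical reproduction of the WSL2-generated file shown above):

```shell
# Reproduce the WSL2-generated mapping in a scratch file
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1 localhost
10.111.130.24 host.docker.internal
EOF

# A hosts-file lookup: the first column is the IP for the matching hostname
awk '$2 == "host.docker.internal" {print $1}' /tmp/hosts.demo
# prints: 10.111.130.24
```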

Finally, the synch service configuration is as follows

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit, recommend setting 20000 when production
  insert_interval: 60 # how many seconds to submit, recommend setting 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true
 
redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO
 
source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:
 
clickhouse:
  # Shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # enable cluster mode when not empty, and hosts must be more than one if enabled.
  distributed_suffix: _all # distributed tables suffix, available in cluster
 
kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

3. Conclusion

When starting a container with `--network="host"`, if you want to access services on the host machine from inside the container, change the IP to `host.docker.internal`.

4. References

https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach

This concludes this article on services in Docker accessing host services, published on 123WORDPRESS.COM.
