Implementation of services in docker accessing host services

1. Scenario

I use Windows with WSL2 for daily development and testing, but WSL2 often runs into network problems. For example, today I was testing a project whose core function is to synchronize PostgreSQL data to ClickHouse using the open-source component synch.

Components required for testing

  1. postgres
  2. Kafka
  3. zookeeper
  4. redis
  5. synch container

At the start of testing, the approach chosen was to orchestrate the five services above with docker-compose, with network_mode set to host for each of them. Given Kafka's listener/advertised-listener mechanism, this network mode also avoids having to specify exposed ports separately.

The docker-compose.yaml file is as follows

version: "3"
 
services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true
    # Point postgres at the mounted config and hba files via command-line options
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"
 
  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"
 
  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data # Data directory
    network_mode: "host"
 
  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "
      sleep 30 &&
      synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"
 
  # A consumer consumes a database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c
      "sleep 30 &&
      synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"
 
  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"
 
volumes:
  redis:
  kafka:
  zookeeper:
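
With the file in place, the stack can be brought up and sanity-checked with the usual compose commands. A minimal sketch (the v1 docker-compose CLI is assumed; substitute "docker compose" if you use the v2 plugin):

# Start all five services in the background
docker-compose up -d

# Check that every container is running and that the postgres healthcheck passes
docker-compose ps

# Follow the producer log to see whether synch starts up cleanly
docker-compose logs -f producer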

During testing we needed PostgreSQL together with the wal2json output plugin. Installing the plugin separately inside the container proved troublesome and several attempts failed, so we instead installed the PostgreSQL service on the host machine and pointed the synch service in the container at the host machine's IP and port.
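
For reference, a rough sketch of the host-side installation (this assumes a Debian/Ubuntu WSL2 distribution with the PGDG apt repository configured and PostgreSQL 12; package names and paths differ elsewhere):

# Install PostgreSQL and the wal2json logical decoding plugin on the host
sudo apt-get install -y postgresql-12 postgresql-12-wal2json

# wal2json needs logical replication; set these in postgresql.conf, e.g.:
#   wal_level = logical
#   max_replication_slots = 10
#   port = 5433          # to match the port used in synch.yaml below
sudo service postgresql restart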

However, after restarting the stack, the synch service still failed to start, and its log showed that it could not connect to postgres. The synch configuration file was as follows

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit, recommend setting 20000 when production
  insert_interval: 60 # how many seconds to submit, recommend setting 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true
 
redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO
 
source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:
 
clickhouse:
  # Shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # enable cluster mode when not empty, and hosts must be more than one if enabled.
  distributed_suffix: _all # distributed tables suffix, available in cluster
 
kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

This was very strange. I first confirmed that postgres was running and listening normally on its port (5433 here), yet connecting from the container via both localhost and the host's eth0 address reported errors.
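
The kind of checks used to confirm postgres itself was healthy look roughly like this, run on the WSL2 host (port 5433 and the abc123 password come from the synch config above; 10.111.130.24 is the host's eth0 address shown in /etc/hosts below):

# Confirm postgres is listening on 5433 on the host
ss -lntp | grep 5433

# Connecting from the host itself works, both via localhost and via the eth0 address
PGPASSWORD=abc123 psql -h 127.0.0.1 -p 5433 -U postgres -c '\conninfo'
PGPASSWORD=abc123 psql -h 10.111.130.24 -p 5433 -U postgres -c '\conninfo'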

2. Solution

After some googling, a highly upvoted Stack Overflow answer solved the problem. The original answer is as follows

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.

Otherwise, read below

Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

See the source post for more details
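
For completeness, the --add-host variant mentioned in the answer can be tried by hand with something like this (a sketch; the alpine image and ping are used here only to probe name resolution):

# Docker 20.10+ on Linux: map host.docker.internal to the host gateway explicitly
docker run --rm --add-host host.docker.internal:host-gateway alpine \
  ping -c 1 host.docker.internal

In docker-compose the same mapping can be declared per service with an extra_hosts entry such as "host.docker.internal:host-gateway".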

In host mode, services in containers access services on the host machine

Changing the postgres host in the synch configuration to host.docker.internal resolves the error. Checking the host machine's /etc/hosts file shows the following

root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
 
10.111.130.24 host.docker.internal

You can see the mapping between the host IP and the host.docker.internal name. Accessing that name resolves to the host IP, which in turn reaches the service running on the host.
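
The same mapping can be double-checked from inside one of the running containers (a sketch; it assumes the synch image ships a shell and cat, which seems safe since the compose file already runs sh -c in it, and that Docker propagates the host's /etc/hosts entries when network_mode is host):

# The containers use host networking, so the host.docker.internal entry should appear here
docker exec producer cat /etc/hosts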

Finally, the synch service configuration is as follows

core:
  debug: true # when set True, will display sql information.
  insert_num: 20000 # how many num to submit, recommend setting 20000 when production
  insert_interval: 60 # how many seconds to submit, recommend setting 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true
 
redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max len, will delete redundant ones with FIFO
 
source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:
 
clickhouse:
  # Shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # enable cluster mode when not empty, and hosts must be more than one if enabled.
  distributed_suffix: _all # distributed tables suffix, available in cluster
 
kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
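
After updating synch.yaml, the producer and consumer just need to be restarted so they pick up the new host setting (service names as in the compose file above):

# Restart the synch containers and watch their logs
docker-compose restart producer consumer
docker-compose logs -f producer consumer

With this change the postgres connection error from earlier no longer appears.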

3. Conclusion

When starting a container with --network="host", if you want to access services running on the host from inside the container, use `host.docker.internal` instead of the host IP.

4. References

https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
