## Installation Script

### Ubuntu / CentOS

There appears to be a problem with the Debian installation; the installation source issue needs to be resolved first.

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh --mirror Aliyun    # or --mirror AzureChinaCloud
```

If you are using an overseas cloud vendor such as AWS or GCP, you do not need to add `--mirror`.

On CentOS, after the script finishes you still need to run `sudo systemctl start docker.service` manually, otherwise later commands will complain that docker is not started.

## Log related

### Grep String

The correct approach is to grep the container logs, for example to view the token of a Jupyter Notebook (a concrete sketch is given in the Jupyter Notebook section below).

### Other supported parameters

- `-f`: similar to the `tail -f` command
- `--since`: start from a timestamp, such as `2013-01-02T13:23:37`; relative times such as `42m` are also supported
- `--until`: same as above, but in the other direction
- `-t, --timestamps`: display timestamps
- `--tail N` (default `all`): display only the last N lines

## Mounting techniques

Some images, Grafana for example, ship files inside the Docker image. If you bind-mount the corresponding directory directly and the host directory is empty, the directory inside the container is hidden by the empty mount. How to deal with this situation?

Simple and crude method 1 (idea only): run the container once, copy the files out with the `docker cp` command, delete the container, copy the files into the target host directory, and then mount it (a sketch of this workflow is given further below).

A more elegant method 2, taking ClickHouse as an example:

```bash
# Step 1.1: Create a docker volume (purpose: expose the CH Server configuration)
docker volume create --driver local \
  --opt type=none \
  --opt device=/home/centos/workspace/clickhouse/configs \
  --opt o=bind \
  ch-server-configs

# Step 1.2: Create a volume for the database data
docker volume create --driver local \
  --opt type=none \
  --opt device=/home/centos/workspace/clickhouse/data \
  --opt o=bind \
  ch-server-data

# Step 2: Start (note: when there is a lot of stored data, the second startup takes a
# long time to initialize; connecting before initialization finishes will fail)
sudo docker run -d --name mkt-ch-server \
  -v ch-server-configs:/etc/clickhouse-server \
  -v ch-server-data:/var/lib/clickhouse \
  --restart always \
  -p 9000:9000 -p 8123:8123 \
  --ulimit nofile=262144:262144 yandex/clickhouse-server
```

This way, the configuration files shipped inside the Docker image are not wiped out the first time the volume is mounted.

## Scheduled tasks

For example, MySQL needs to export a data backup regularly. This is best done with crond on the host machine.
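Returning to method 1 from the Mounting techniques section: a minimal sketch of the run-once-then-`docker cp` workflow, using the same ClickHouse image and host path as method 2 (the temporary container name `ch-tmp` is illustrative):

```bash
# Run the image once without any mounts so the container still contains its built-in files
docker run -d --name ch-tmp yandex/clickhouse-server

# Copy the built-in configuration out of the container onto the host
docker cp ch-tmp:/etc/clickhouse-server/. /home/centos/workspace/clickhouse/configs

# Remove the temporary container; the pre-populated host directory can now be bind-mounted
docker rm -f ch-tmp
```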
A crontab entry for a nightly backup at 01:00:

```bash
0 1 * * * docker exec mysqldump xxxx
```

## Common Docker images and their installation commands

### MySQL

#### Install

```bash
docker run --name some-mysql --restart always \
  -v /my/own/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
```

#### Dump data

Method 1: there is already a MySQL docker container running locally. The following command runs mysqldump inside the container; you can also pass the connection parameters directly to dump a remote MySQL instead.

```bash
docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /path-to-data/all-databases.sql
```

Method 2: there is no MySQL docker container locally.

```bash
# The container is deleted after use; enter the password at the prompt
docker run -i --rm mysql:5.7 mysqldump --all-databases \
  -h 172.17.0.1 -uroot -p | gzip -9 > /home/centos/workspace/mysql-data/backup.sql.gz
```

#### Restore data

Use the same approach as the dump commands above, but change the command-line tool to `mysql`.

### Python Proxy

Sometimes you need to do some crawling, so make full use of the cloud server's IP as a crawler proxy. The simplest way I have found so far to set up a crawler proxy:

```bash
docker run --name py-proxy -d --restart always -p 8899:8899 abhinavsingh/proxy.py
```

Notice:
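One quick way to check that the proxy works from another machine (a sketch, assuming the default proxy.py port 8899 is reachable through the firewall; replace `<server-ip>` with the cloud server's address):

```bash
# Send a request through the proxy; the returned origin IP should be the cloud server's
curl -x http://<server-ip>:8899 https://httpbin.org/ip
```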
### Jupyter Notebook

After using it for a while, I find the Notebook bundled with the tensorflow image simpler, because there are no strange permission issues when mounting a host directory. The bash script is as follows:

```bash
sudo docker run --name notebook -d --restart always \
  -p 127.0.0.1:8888:8888 \
  -v /path-to-workspace/notebooks:/tf \
  tensorflow/tensorflow:latest-py3-jupyter
```

If you need to link to Apache Spark and the like, refer to the following script:

```bash
sudo docker run --name pyspark-notebook -d \
  --net host --pid host -e TINI_SUBREAPER=true -p 8888:8888 \
  -v /path-to-workspace/notebooks:/tf \
  tensorflow/tensorflow:latest-py3-jupyter
```

### Grafana

```bash
ID=$(id -u)
docker run \
  -d --restart always \
  -p 3000:3000 \
  --name=grafana \
  --user $ID -v /path-to-data/grafana-data:/var/lib/grafana \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource" \
  -e "GF_SECURITY_ADMIN_PASSWORD=aaabbbccc" \
  grafana/grafana
```

Some quick explanations:

- `--user $ID` runs the container as the current host user, so Grafana can write to the bind-mounted data directory without permission errors
- `-v /path-to-data/grafana-data:/var/lib/grafana` persists dashboards and the Grafana database on the host
- `GF_INSTALL_PLUGINS` installs the listed plugins when the container starts
- `GF_SECURITY_ADMIN_PASSWORD` sets the initial admin password
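Tying back to the Grep String tip in the log section: to pull the notebook's login token out of the container logs, a minimal sketch assuming the container is named `notebook` as above:

```bash
# Jupyter prints its access URL (which contains the token) to the container's log output
docker logs notebook 2>&1 | grep token
```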
## Summary

The above is the full content of this article. I hope it provides some reference value for your study or work. Thank you for your support of 123WORDPRESS.COM.