Environment Setup Overview

You can download the YAML files used in this document from: http://xiazai.jb51.net/202105/yuanma/javayaml_jb51.rar

1. What is K8S?

K8S is short for Kubernetes, a leading distributed architecture solution based on container technology. It aims to automate resource management and maximize resource utilization across multiple data centers. If a system is designed according to the philosophy of Kubernetes, the low-level code and functional modules of a traditional architecture that have little to do with the business itself can be handed over to K8S. We no longer have to worry about selecting and deploying a load balancer, introducing or developing a complex service governance framework, or building service monitoring and fault-handling modules ourselves. In short, the solutions provided by Kubernetes greatly reduce development costs and let you focus on the business itself, and its powerful automation mechanisms also greatly reduce the difficulty and cost of later operation and maintenance.

2. Why use K8S?

Docker, an emerging container technology, has been adopted by many companies. Its move from single machines to clusters is inevitable, and the rapid development of cloud computing is accelerating this process. Kubernetes is currently the only Docker-based distributed system solution that is widely recognized and favored by the industry. It is foreseeable that in the next few years a large number of new systems will choose it, whether they run on an enterprise's own servers or are hosted on public clouds.

3. What are the benefits of using K8S?

Using Kubernetes means comprehensively adopting a microservices architecture. The core idea of microservices is to decompose a huge monolithic application into many small, interconnected services. A microservice may be backed by multiple instance replicas, and the number of replicas can be adjusted as the system load changes; the load balancer embedded in the K8S platform plays an important role here. The microservice architecture also allows each service to be developed by a dedicated team, with developers free to choose their own technologies, which is very valuable for large teams. In addition, each microservice is developed, upgraded, and scaled independently, which makes the system highly stable and able to evolve through rapid iteration.

4. Environment composition

The construction of the entire environment includes: the Docker environment, the docker-compose environment, the K8S cluster, the GitLab code repository, the SVN repository, the Jenkins automated deployment environment, and the Harbor private image repository. This document walks through building all of these components.
Server Planning
IP | Hostname | Node Role | Operating System |
---|---|---|---|
192.168.0.10 | test10 | K8S Master | CentOS 8.0.1905 |
192.168.0.11 | test11 | K8S Worker | CentOS 8.0.1905 |
192.168.0.12 | test12 | K8S Worker | CentOS 8.0.1905 |
Installation Environment
Software | Version | Description |
---|---|---|
Docker | 19.03.8 | Providing a container environment |
docker-compose | 1.25.5 | Define and run applications consisting of multiple containers |
K8S | 1.18.2 | It is an open source containerized application management platform for multiple hosts. The goal of Kubernetes is to make the deployment of containerized applications simple and powerful. Kubernetes provides a mechanism for application deployment, planning, updating, and maintenance. |
GitLab | 12.1.6 | Code repository |
Harbor | 1.10.2 | Private image repository |
Jenkins | 2.222.3 | Continuous Integration Delivery |
Docker is an open source application container engine written in Go and released under the Apache 2.0 license.
Docker allows developers to package their applications and dependent packages into a lightweight, portable container and then publish it to any popular Linux machine, and also implement virtualization.
This document builds a Docker environment based on Docker version 19.03.8.
Create the install_docker.sh script on all servers. The script content is as follows.
#Use the Alibaba Cloud mirror center
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
#Install the yum tools
dnf install yum*
#Install the Docker dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
#Configure Docker's yum source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Install the containerd plug-in
dnf install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
#Install Docker version 19.03.8
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8
#Set Docker to start at boot
systemctl enable docker.service
#Start Docker
systemctl start docker.service
#Check the Docker version
docker version
Give executable permissions to the install_docker.sh script on each server and execute the script as shown below.
#Give the install_docker.sh script executable permissions
chmod a+x ./install_docker.sh
#Execute the install_docker.sh script
./install_docker.sh
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure all the services your application needs; then, with a single command, you create and start all the services from that configuration.
Note: Install docker-compose on each server
#Download and install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
#Give docker-compose executable permissions
chmod a+x /usr/local/bin/docker-compose
#View the docker-compose version
[root@binghe ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
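To see how docker-compose is used once it is installed, here is a minimal, hypothetical docker-compose.yml; the web service name, the nginx:1.19 image, and port 8080 are placeholders chosen only for illustration.

version: "3"
services:
  web:
    # A single nginx container published on host port 8080
    image: nginx:1.19
    ports:
      - "8080:80"

With this file in the current directory, docker-compose up -d creates and starts the service, and docker-compose down stops and removes it.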
Kubernetes is an open source platform for managing containerized applications on multiple hosts in a cloud platform. The goal of Kubernetes is to make the deployment of containerized applications simple and powerful. Kubernetes provides a mechanism for application deployment, planning, updating, and maintenance.
This document builds the K8S cluster based on K8S version 1.18.2.
Create the install_k8s.sh script file on all servers. The content of the script file is as follows.
##################Configure the Alibaba Cloud image accelerator: Start##########################
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
##################Configure the Alibaba Cloud image accelerator: End##########################
#Install nfs-utils
yum install -y nfs-utils
#Install the wget download tool
yum install -y wget
#Start nfs-server
systemctl start nfs-server
#Configure nfs-server to start automatically at boot
systemctl enable nfs-server
#Shut down the firewall
systemctl stop firewalld
#Disable the firewall at boot
systemctl disable firewalld
#Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
#Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
##############################Modify /etc/sysctl.conf: Start##############################
#If the configuration exists, modify it
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
#If the configuration does not exist, append it
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
##############################Modify /etc/sysctl.conf: End##############################
#Execute the following command to make the modified /etc/sysctl.conf take effect
sysctl -p
##############################Configure the K8S yum source: Start##############################
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[Kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
##############################Configure the K8S yum source: End##############################
#Uninstall old versions of K8S
yum remove -y kubelet kubeadm kubectl
#Install kubelet, kubeadm, and kubectl. Version 1.18.2 is installed here; you can also install version 1.17.2.
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
#Change the Docker cgroup driver to systemd, i.e. change this line in the /usr/lib/systemd/system/docker.service file
#    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
#to
#    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
#If you do not modify it, you may encounter the following error when adding a worker node:
#    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
#    Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
#Set a Docker registry mirror to improve the download speed and stability of Docker images
#If access to https://hub.docker.io is fast and stable, you can skip this step; generally no configuration is required
#curl -sSL https://kuboard.cn/install-script/set_mirror.sh | sh -s ${REGISTRY_MIRROR}
#Reload the configuration files
systemctl daemon-reload
#Restart Docker
systemctl restart docker
#Set kubelet to start at boot and start kubelet
systemctl enable kubelet && systemctl start kubelet
#View the Docker version
docker version
Grant executable permissions to the install_k8s.sh script on each server and execute the script
#Give the install_k8s.sh script executable permissions
chmod a+x ./install_k8s.sh
#Run the install_k8s.sh script
./install_k8s.sh
The following operations are performed only on the test10 server (the Master node).
Note: The following commands need to be executed manually on the command line.
# Only execute on the master node
# The export command is only valid in the current shell session. If you open a new shell window and want to continue the installation, re-execute the export commands here.
export MASTER_IP=192.168.0.10
# Replace k8s.master with the dnsName you want to use
export APISERVER_NAME=k8s.master
# The network segment used by the Kubernetes Pod network. It is created by Kubernetes after installation and does not need to exist in the physical network beforehand.
export POD_SUBNET=172.18.0.1/16
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
Create the init_master.sh script file on the test10 server. The file content is as follows.
#!/bin/bash
# Terminate the script when an error occurs
set -e
if [ ${#POD_SUBNET} -eq 0 ] || [ ${#APISERVER_NAME} -eq 0 ]; then
  echo -e "\033[31;1mMake sure you have set the environment variables POD_SUBNET and APISERVER_NAME \033[0m"
  echo Current POD_SUBNET=$POD_SUBNET
  echo Current APISERVER_NAME=$APISERVER_NAME
  exit 1
fi
# View the full configuration options at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
rm -f ./kubeadm-config.yaml
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF
# Initialize the cluster with kubeadm
kubeadm init --config=kubeadm-config.yaml --upload-certs
# Configure kubectl
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config
# Install the calico network plugin
# Reference documentation: https://docs.projectcalico.org/v3.13/getting-started/kubernetes/self-managed-onprem/onpremises
echo "Install calico-3.13.1"
rm -f calico-3.13.1.yaml
wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml
Grant executable permissions to the init_master.sh script file and execute the script.
# Grant executable permissions to the init_master.sh file
chmod a+x ./init_master.sh
# Run the init_master.sh script
./init_master.sh
(1) Ensure that all container groups are in the Running state
# Execute the following command and wait 3 to 10 minutes until all container groups are in the Running state.
watch kubectl get pod -n kube-system -o wide
The specific implementation is as follows.
[root@test10 ~]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide            test10: Sun May 10 11:01:32 2020

NAME                                       READY   STATUS    RESTARTS   AGE    IP              NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-5b8b769fcd-5dtlp   1/1     Running   0          118s   172.18.203.66   test10   <none>           <none>
calico-node-fnv8g                          1/1     Running   0          118s   192.168.0.10    test10   <none>           <none>
coredns-546565776c-27t7h                   1/1     Running   0          2m1s   172.18.203.67   test10   <none>           <none>
coredns-546565776c-hjb8z                   1/1     Running   0          2m1s   172.18.203.65   test10   <none>           <none>
etcd-test10                                1/1     Running   0          2m7s   192.168.0.10    test10   <none>           <none>
kube-apiserver-test10                      1/1     Running   0          2m7s   192.168.0.10    test10   <none>           <none>
kube-controller-manager-test10             1/1     Running   0          2m7s   192.168.0.10    test10   <none>           <none>
kube-proxy-dvgsr                           1/1     Running   0          2m1s   192.168.0.10    test10   <none>           <none>
kube-scheduler-test10                      1/1     Running   0          2m7s   192.168.0.10    test10   <none>           <none>
(2) View the Master node initialization results
# View the initialization results of the Master node
kubectl get nodes -o wide
The specific implementation is as follows.
[root@test10 ~]# kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION         CONTAINER-RUNTIME
test10   Ready    master   3m28s   v1.18.2   192.168.0.10   <none>        CentOS Linux 8 (Core)   4.18.0-80.el8.x86_64   docker://19.3.8
Execute the following command on the Master node (test10 server) to obtain the join command parameters.
kubeadm token create --print-join-command
The specific implementation is as follows.
[root@test10 ~]# kubeadm token create --print-join-command
W0510 11:04:34.828126   56132 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
Among them, there is the following line of output.
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2 --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
This line of code is the obtained join command.
Note: The token in the join command is valid for 2 hours. Within 2 hours, you can use this token to initialize any number of worker nodes.
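If the token has expired, simply run kubeadm token create --print-join-command again to get a fresh one. kubeadm can also list existing tokens or create one with a custom lifetime; the 48-hour TTL below is just an example value:

# List the tokens that currently exist on the master node
kubeadm token list
# Create a join command whose token is valid for 48 hours
kubeadm token create --ttl 48h --print-join-command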
Execute on all worker nodes. Here, it is executed on the test11 server and the test12 server.
Manually execute the following commands in the command line.
# Only execute on worker nodes
# 192.168.0.10 is the intranet IP of the master node
export MASTER_IP=192.168.0.10
# Replace k8s.master with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=k8s.master
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
# Replace with the join command output by the kubeadm token create command on the master node
kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
The specific implementation is as follows.
[root@test11 ~]# export MASTER_IP=192.168.0.10
[root@test11 ~]# export APISERVER_NAME=k8s.master
[root@test11 ~]# echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
[root@test11 ~]# kubeadm join k8s.master:6443 --token 8nblts.62xytoqufwsqzko2     --discovery-token-ca-cert-hash sha256:1717cc3e34f6a56b642b5751796530e367aa73f4113d09994ac3455e33047c0d
W0510 11:08:27.709263   42795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
According to the output results, the Worker node has joined the K8S cluster.
Note: kubeadm join… is the join output by the kubeadm token create command on the master node.
Execute the following command on the Master node (test10 server) to view the initialization results.
kubectl get nodes -o wide
The specific implementation is as follows.
[root@test10 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
test10   Ready    master   20m     v1.18.2
test11   Ready    <none>   2m46s   v1.18.2
test12   Ready    <none>   2m46s   v1.18.2
Note: Adding the -o wide parameter after the kubectl get nodes command will output more information.
If the IP address of the Master node changes, the worker nodes will fail to connect to the cluster. In that case you need to reinstall the K8S cluster and make sure all nodes use fixed intranet IP addresses.
After restarting the server, use the following command to check the running status of the Pod.
#View the running status of all Pods
kubectl get pods --all-namespaces
You may find that many Pods are not in the Running state. In that case, use the following command to delete the Pods that are not running properly.
kubectl delete pod <pod-name> -n <pod-namespace>
Note: If the Pod is created using a controller such as Deployment or StatefulSet, K8S will create a new Pod as a replacement, and the restarted Pod usually works normally.
Here, pod-name is the name of the Pod running in K8S, and pod-namespace is its namespace. For example, to delete a Pod named pod-test in the namespace pod-test-namespace, you can use the following command.
kubectl delete pod pod-test -n pod-test-namespace
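If the abnormal Pod is managed by a Deployment, an alternative (a hedged suggestion rather than part of the original procedure) is to let the controller recreate all of its Pods at once; the Deployment name and namespace below are placeholders:

# Recreate all Pods of a Deployment (available in kubectl 1.15 and later)
kubectl rollout restart deployment pod-test-deployment -n pod-test-namespace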
ingress-nginx acts as a reverse proxy: it brings external traffic into the cluster, exposes Services inside Kubernetes to the outside, and matches Services by domain name through Ingress objects, so that services inside the cluster can be accessed directly by domain name. Compared with traefik, nginx-ingress offers better performance.
Note: On the Master node (executed on the test10 server)
Create the ingress-nginx-namespace.yaml file. Its main function is to create the ingress-nginx namespace. The file content is as follows.
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx
Run the following command to create the ingress-nginx namespace.
kubectl apply -f ingress-nginx-namespace.yaml
Create the ingress-nginx-mandatory.yaml file, the main function of which is to install ingress-nginx. The file contents are as follows.
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---
Run the following command to install the ingress controller.
kubectl apply -f ingress-nginx-mandatory.yaml
Next, create a NodePort Service whose main purpose is to expose the nginx-ingress-controller Pod.
Create the service-nodeport.yaml file with the following content.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Run the following command to install it.
kubectl apply -f service-nodeport.yaml
Check the deployment of the ingress-nginx namespace, as shown below.
[root@test10 k8s]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-796ddcd9b-vfmgn        1/1     Running   1          10h
nginx-ingress-controller-58985cc996-87754   1/1     Running   2          10h
Enter the following command on the Master node's command line to view the port mapping of ingress-nginx.
kubectl get svc -n ingress-nginx
The details are as follows.
[root@test10 k8s]# kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP   10.96.247.2   <none>        80/TCP                       7m3s
ingress-nginx          NodePort    10.96.40.6    <none>        80:30080/TCP,443:30443/TCP   4m35s
Therefore, you can access ingress-nginx through the IP address of the Master node (test10 server) and port number 30080, as shown below.
[root@test10 k8s]# curl 192.168.0.10:30080
default backend - 404
You can also access ingress-nginx by opening http://192.168.0.10:30080 in your browser, as shown below.
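To route a real domain through this controller, an Ingress object is still needed. The following is only a minimal sketch: the service name my-service, its port 8080, and the host test.binghe.com are hypothetical placeholders, and the extensions/v1beta1 API version matches what this document uses elsewhere for K8S 1.18.2.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: test.binghe.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 8080

After applying it with kubectl apply -f, requests to test.binghe.com:30080 (with the domain resolving to a node IP) are forwarded by nginx-ingress to my-service.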
GitLab is a web-based Git repository management tool developed by GitLab Inc. under the MIT license, with wiki and issue-tracking features. It uses Git as the code management tool and provides a web service on top of it.
Note: On the Master node (executed on the test10 server)
Create the k8s-ops-namespace.yaml file, which is mainly used to create the k8s-ops namespace. The file contents are as follows.
apiVersion: v1
kind: Namespace
metadata:
  name: k8s-ops
  labels:
    name: k8s-ops
Run the following command to create a namespace.
kubectl apply -f k8s-ops-namespace.yaml
Create the gitlab-redis.yaml file. The content of the file is as follows.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
        - name: redis
          image: sameersbn/redis
          imagePullPolicy: IfNotPresent
          ports:
            - name: redis
              containerPort: 6379
          volumeMounts:
            - mountPath: /var/lib/redis
              name: data
          livenessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 10
            timeoutSeconds: 5
      volumes:
        - name: data
          hostPath:
            path: /data1/docker/xinsrv/redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis
First, execute the following command in the command line to create the /data1/docker/xinsrv/redis directory.
mkdir -p /data1/docker/xinsrv/redis
Run the following command to install gitlab-redis.
kubectl apply -f gitlab-redis.yaml
Create gitlab-postgresql.yaml and the content of the file is as follows.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  selector:
    matchLabels:
      name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
        - name: postgresql
          image: sameersbn/postgresql
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_USER
              value: gitlab
            - name: DB_PASS
              value: passw0rd
            - name: DB_NAME
              value: gitlab_production
            - name: DB_EXTENSION
              value: pg_trgm
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: data
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -h
                - localhost
                - -U
                - postgres
            initialDelaySeconds: 5
            timeoutSeconds: 1
      volumes:
        - name: data
          hostPath:
            path: /data1/docker/xinsrv/postgresql
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql
First, execute the following command to create the /data1/docker/xinsrv/postgresql directory.
mkdir -p /data1/docker/xinsrv/postgresql
Next, install gitlab-postgresql as shown below.
kubectl apply -f gitlab-postgresql.yaml
(1) Configure username and password
First, use base64 encoding to convert the username and password in the command line. In this example, the username used is admin and the password is admin.1231
The transcoding is shown below.
[root@test10 k8s]# echo -n 'admin' | base64
YWRtaW4=
[root@test10 k8s]# echo -n 'admin.1231' | base64
YWRtaW4uMTIzMQ==
The encoded username is YWRtaW4= and the encoded password is YWRtaW4uMTIzMQ==.
You can also decode a base64-encoded string, for example, a password string, as shown below.
[root@test10 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode
admin.1231
Next, create the secret-gitlab.yaml file, which is mainly used to configure GitLab's username and password. The file content is as follows.
apiVersion: v1
kind: Secret
metadata:
  namespace: k8s-ops
  name: git-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW4uMTIzMQ==
Apply the configuration file as shown below.
kubectl create -f ./secret-gitlab.yaml
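Alternatively (a hedged shortcut rather than part of the original procedure), kubectl can base64-encode the values and create an equivalent Secret directly:

kubectl create secret generic git-user-pass -n k8s-ops --from-literal=username=admin --from-literal=password=admin.1231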
(2) Install GitLab
Create a gitlab.yaml file with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  selector:
    matchLabels:
      name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
        - name: gitlab
          image: sameersbn/gitlab:12.1.6
          imagePullPolicy: IfNotPresent
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: GITLAB_TIMEZONE
              value: Beijing
            - name: GITLAB_SECRETS_DB_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_SECRETS_SECRET_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_SECRETS_OTP_KEY_BASE
              value: long-and-random-alpha-numeric-string
            - name: GITLAB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: git-user-pass
                  key: password
            - name: GITLAB_ROOT_EMAIL
              value: [email protected]
            - name: GITLAB_HOST
              value: gitlab.binghe.com
            - name: GITLAB_PORT
              value: "80"
            - name: GITLAB_SSH_PORT
              value: "30022"
            - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
              value: "true"
            - name: GITLAB_NOTIFY_PUSHER
              value: "false"
            - name: GITLAB_BACKUP_SCHEDULE
              value: daily
            - name: GITLAB_BACKUP_TIME
              value: "01:00"
            - name: DB_TYPE
              value: postgres
            - name: DB_HOST
              value: postgresql
            - name: DB_PORT
              value: "5432"
            - name: DB_USER
              value: gitlab
            - name: DB_PASS
              value: passw0rd
            - name: DB_NAME
              value: gitlab_production
            - name: REDIS_HOST
              value: redis
            - name: REDIS_PORT
              value: "6379"
          ports:
            - name: http
              containerPort: 80
            - name: ssh
              containerPort: 22
          volumeMounts:
            - mountPath: /home/git/data
              name: data
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 180
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
      volumes:
        - name: data
          hostPath:
            path: /data1/docker/xinsrv/gitlab
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30088
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: k8s-ops
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: gitlab.binghe.com
      http:
        paths:
          - backend:
              serviceName: gitlab
              servicePort: http
Note: When configuring GitLab, do not use an IP address for the listening host; use a host name or domain name instead. In the configuration above, I use the host name gitlab.binghe.com.
Execute the following command on the command line to create the /data1/docker/xinsrv/gitlab directory.
mkdir -p /data1/docker/xinsrv/gitlab
Install GitLab as shown below.
kubectl apply -f gitlab.yaml
Check the k8s-ops namespace deployment as shown below.
[root@test10 k8s]# kubectl get pod -n k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          11s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h
You can also use the following command to view it.
[root@test10 k8s]# kubectl get pod --namespace=k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          36s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h
The two have the same effect.
Next, check the port mapping for GitLab as shown below.
[root@test10 k8s]# kubectl get svc -n k8s-ops
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                     AGE
gitlab       NodePort    10.96.153.100   <none>        80:30088/TCP,22:30022/TCP   2m42s
postgresql   ClusterIP   10.96.203.119   <none>        5432/TCP                    32m
redis        ClusterIP   10.96.107.150   <none>        6379/TCP                    10h
At this point, GitLab can be accessed through the host name gitlab.binghe.com and port 30088 of the Master node (test10). Since I use a virtual machine to build the environment, when accessing gitlab.binghe.com (which maps to the virtual machine) from the local machine, you need to configure the local hosts file by adding the following entry.
192.168.0.10 gitlab.binghe.com
Note: In Windows operating systems, the hosts file is located in the following directory.
C:\Windows\System32\drivers\etc
Next, you can access GitLab in your browser through the link: http://gitlab.binghe.com:30088, as shown below.
At this point, you can log in to GitLab with the username root and password admin.1231.
Note: The username here is root instead of admin because root is the default super user of GitLab.
At this point, the installation of GitLab on K8S is complete.
Harbor is an open source container image registry developed by VMware. Harbor extends the open source Docker Registry with enterprise-level features such as a management UI, role-based access control, AD/LDAP integration, and audit logging, which is enough to meet basic enterprise needs.
Note: Here, the Harbor private registry is installed on the Master node (test10 server). In an actual production environment, it is recommended to install it on a separate server.
wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz
tar -zxvf harbor-offline-installer-v1.10.2.tgz
After successful decompression, a harbor directory will be generated in the current directory of the server.
Note: Here, I changed the port of Harbor to 1180. If you do not change the port of Harbor, the default port is 80.
(1) Modify the harbor.yml file
cd harbor
vim harbor.yml
The modified configuration items are as follows.
hostname: 192.168.0.10
http:
  port: 1180
harbor_admin_password: binghe123
###Also comment out the https section, otherwise the following error is reported during installation: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
(2) Modify the daemon.json file
Modify the /etc/docker/daemon.json file. If it does not exist, create it and add the following content to the /etc/docker/daemon.json file.
[root@binghe ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.0.10:1180"]
}
You can also use the ip addr command on the server to view all the IP address segments of the local machine and configure them in the /etc/docker/daemon.json file. Here, the contents of the file after my configuration are as follows.
{ "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"], "insecure-registries":["192.168.175.0/16","172.17.0.0/16", "172.18.0.0/16", "172.16.29.0/16", "192.168.0.10:1180"] }
After the configuration is complete, enter the following command to install and start Harbor
[root@binghe harbor]# ./install.sh
After successful installation, enter http://192.168.0.10:1180 in the browser address bar to open the link, enter the username admin and password binghe123, and log in to the system.
Next, we select User Management and add an administrator account to prepare for the subsequent packaging and uploading of Docker images.
Create a new user (here named binghe) with the password Binghe123 and click OK. At this time, the binghe account is not yet an administrator. Select the binghe account and click "Set as Administrator".
At this point, the binghe account is set as an administrator. At this point, the installation of Harbor is complete.
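To verify that images can be pushed to Harbor, a quick hedged test is to log in with the account created above and push any local image; the library project is Harbor's default public project, and nginx:latest is just a placeholder image that must already exist locally:

# Log in to the Harbor registry
docker login 192.168.0.10:1180 -u binghe -p Binghe123
# Tag a local image with the registry address and push it
docker tag nginx:latest 192.168.0.10:1180/library/nginx:test
docker push 192.168.0.10:1180/library/nginx:test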
If you need to change the port of Harbor after installing it, you can follow the steps below to change the port of Harbor. Here, I will take the example of changing port 80 to port 1180.
(1) Modify the harbor.yml file
cd harbor
vim harbor.yml
The modified configuration items are as follows.
hostname: 192.168.0.10
http:
  port: 1180
harbor_admin_password: binghe123
###Also comment out the https section, otherwise the following error is reported during installation: ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
(2) Modify the docker-compose.yml file
vim docker-compose.yml
The modified configuration items are as follows.
ports:
  - 1180:80
(3) Modify the config.yml file
cd common/config/registry
vim config.yml
The modified configuration items are as follows.
realm: http://192.168.0.10:1180/service/token
(4) Restart Docker
systemctl daemon-reload
systemctl restart docker.service
(5) Restart Harbor
[root@binghe harbor]# docker-compose down
Stopping harbor-log ... done
Removing nginx ... done
Removing harbor-portal ... done
Removing harbor-jobservice ... done
Removing harbor-core ... done
Removing redis ... done
Removing registry ... done
Removing registryctl ... done
Removing harbor-db ... done
Removing harbor-log ... done
Removing network harbor_harbor
[root@binghe harbor]# ./prepare
prepare base dir is set to /mnt/harbor
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[root@binghe harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db ... done
Creating redis ... done
Creating registry ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating harbor-portal ... done
Creating nginx ... done
[root@binghe harbor]# docker ps -a
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS
Jenkins is an open source continuous integration (CI) tool with a user-friendly interface. It originated from Hudson (which became commercial) and is mainly used for continuously and automatically building and testing software projects and monitoring externally run tasks. Jenkins is written in Java and can run in popular servlet containers such as Tomcat or standalone. It is usually used together with version control (SCM) tools and build tools; commonly used version control tools include SVN and Git, and common build tools include Maven, Ant, and Gradle.
The biggest problem with using NFS is write permission. You can use the Kubernetes securityContext/runAsUser setting to specify the UID of the user that runs Jenkins inside the container, and set the permissions of the NFS directory accordingly so that the Jenkins container can write to it; alternatively, you can place no restrictions and allow all users to write. For simplicity, all users are allowed to write here. See the sketch below.
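A minimal sketch of the securityContext approach, assuming the Jenkins process runs as UID 1000 (the default user in the official Jenkins image); these are standard Pod-spec fields, but the concrete values are assumptions:

spec:
  securityContext:
    # Run the Jenkins process as UID 1000 and make mounted volumes writable by group 1000
    runAsUser: 1000
    fsGroup: 1000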
If nfs has been installed before, this step can be omitted. Find a host and install nfs. Here, I will take the installation of nfs on the Master node (test10 server) as an example.
Enter the following command in the command line to install and start nfs.
yum install nfs-utils -y
systemctl start nfs-server
systemctl enable nfs-server
Create the /opt/nfs/jenkins-data directory on the Master node (test10 server) as the NFS shared directory, as shown below.
mkdir -p /opt/nfs/jenkins-data
Next, edit the /etc/exports file as shown below.
vim /etc/exports
Add the following line of configuration to the /etc/exports file.
/opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)
The IP range here is the IP range of the Kubernetes nodes. The all_squash option maps all accessing users to the nfsnobody user; no matter which user you access as, you are eventually squashed to nfsnobody. Therefore, as long as the owner of /opt/nfs/jenkins-data is changed to nfsnobody, any accessing user will have write permission.
This option is very effective when the user who starts the process is different on many machines due to irregular user uids, but at the same time has write permissions to a shared directory.
Next, grant permissions on the /opt/nfs/jenkins-data directory and reload NFS, as shown below.
#Authorize the /opt/nfs/jenkins-data/ directory
chown -R 1000 /opt/nfs/jenkins-data/
#Reload nfs-server
systemctl reload nfs-server
Use the following command to verify on any node in the K8S cluster:
#View the directories exported by the NFS server
showmount -e NFS_IP
If you can see /opt/nfs/jenkins-data, it means everything is OK.
The details are as follows.
[root@test10 ~]# showmount -e 192.168.0.10
Export list for 192.168.0.10:
/opt/nfs/jenkins-data 192.168.175.0/24
[root@test11 ~]# showmount -e 192.168.0.10
Export list for 192.168.0.10:
/opt/nfs/jenkins-data 192.168.175.0/24
In fact, Jenkins can read its previous data as long as the corresponding directory is mounted; but because a Deployment cannot use volumeClaimTemplates, a StatefulSet is used here.
First, create a pv. The pv is used by StatefulSet. Each time StatefulSet is started, it will create a pvc through the volumeClaimTemplates template. Therefore, a pv must exist before the pvc can be bound.
Create the jenkins-pv.yaml file with the following content.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  nfs:
    path: /opt/nfs/jenkins-data
    server: 192.168.0.10
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Ti
I have given 1T of storage space here, which can be configured according to actual needs.
Run the following command to create a pv.
kubectl apply -f jenkins-pv.yaml
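You can confirm that the volume was registered, and later that the StatefulSet's claim binds to it, with standard kubectl queries:

# The PV shows as Available until the Jenkins PVC claims it, then as Bound
kubectl get pv jenkins
# After the StatefulSet below is created, check the claim with
kubectl get pvc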
Create a ServiceAccount. Because Jenkins later needs to be able to dynamically create slave Pods, it must have the corresponding permissions.
Create the jenkins-service-account.yaml file with the following content.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
In the above configuration, a RoleBinding and a ServiceAccount are created, and the permissions of the RoleBinding are bound to this user. Therefore, the jenkins container must be run using this ServiceAccount, otherwise it will not have the permissions of the RoleBinding.
The permissions in the Role are easy to understand: Jenkins needs to create and delete slave Pods, so the permissions above are required. The secrets permission is used for HTTPS certificates.
Run the following command to create a serviceAccount.
kubectl apply -f jenkins-service-account.yaml
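To confirm that the binding works, kubectl auth can-i can impersonate the ServiceAccount (the manifests above do not specify a namespace, so the default namespace is assumed here):

kubectl auth can-i create pods -n default --as=system:serviceaccount:default:jenkins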
Create the jenkins-statefulset.yaml file with the following content.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: docker.io/jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 32100
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 4
              memory: 4Gi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
  # pvc template, corresponding to the previous pv
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Ti
When deploying Jenkins, pay attention to the number of replicas: you need as many PVs as there are replicas, and storage consumption grows accordingly. Here I use only one replica, so only one PV was created earlier.
Install Jenkins using the following command.
kubectl apply -f jenkins-statefulset.yaml
Create the jenkins-service.yaml file, which exposes Jenkins through a NodePort Service. The file content is as follows.
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  # type: LoadBalancer
  selector:
    name: jenkins
  # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      nodePort: 31888
      targetPort: 8080
      protocol: TCP
    - name: jenkins-agent
      port: 32100
      nodePort: 32100
      targetPort: 32100
      protocol: TCP
  type: NodePort
Use the following command to install the Service.
kubectl apply -f jenkins-service.yaml
The Jenkins web interface needs to be accessed from outside the cluster, and here we choose to use ingress. Create the jenkins-ingress.yaml file with the following content.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 31888
      host: jekins.binghe.com
Here, it should be noted that host must be configured as a domain name or host name, otherwise an error will be reported, as shown below.
The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.0.10": must be a DNS name, not an IP address
Use the following command to install ingress.
kubectl apply -f jenkins-ingress.yaml
Finally, since I am using a virtual machine to build the relevant environment, when accessing jekins.binghe.com mapped to the virtual machine on this machine, you need to configure the local hosts file and add the following configuration items to the local hosts file.
192.168.0.10 jekins.binghe.com
Note: In Windows operating systems, the hosts file is located in the following directory.
C:\Windows\System32\drivers\etc
Next, you can access Jenkins in your browser through the link: http://jekins.binghe.com:31888.
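On first access, Jenkins asks for the initial administrator password. As a hedged pointer (jenkins-0 follows the StatefulSet naming convention, and the default namespace is assumed), it can be read from the running Pod or straight from the NFS directory:

# Read the password from inside the Pod
kubectl exec -it jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
# Or read it directly from the NFS share on the Master node
cat /opt/nfs/jenkins-data/secrets/initialAdminPassword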
Apache Subversion, often abbreviated as SVN, is an open source version control system. Subversion was developed by CollabNet Inc in 2000 and has now grown into a project of the Apache Software Foundation and is also part of a rich developer and user community.
Compared with RCS and CVS, SVN uses a branch management system, and its design goal is to replace CVS. Most free version control services on the Internet are based on Subversion.
Here, we take the installation of SVN on the Master node (binghe101 server) as an example.
Run the following command on the command line to install SVN.
yum -y install subversion
Execute the following commands in sequence.
#Create /data/svn
mkdir -p /data/svn
#Initialize SVN
svnserve -d -r /data/svn
#Create a code repository
svnadmin create /data/svn/test
mkdir /data/svn/conf
cp /data/svn/test/conf/* /data/svn/conf/
cd /data/svn/conf/
[root@binghe101 conf]# ll
total 20
-rw-r--r-- 1 root root 1080 May 12 02:17 authz
-rw-r--r-- 1 root root  885 May 12 02:17 hooks-env.tmpl
-rw-r--r-- 1 root root  309 May 12 02:17 passwd
-rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf
Configure the authz file,
vim authz
The content after configuration is as follows.
[aliases]
# joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil, Ltd./OU=Research Institute/CN=Joe Average

[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
SuperAdmin = admin
binghe = admin,binghe

# [/foo/bar]
# harry = rw
# &joe = r
# * =

# [repository:/baz/fuz]
# @harry_and_sally = rw
# * = r

[test:/]
@SuperAdmin=rw
@binghe=rw
Configure the passwd file
vim passwd
The content after configuration is as follows.
[users]
# harry = harryssecret
# sally = sallyssecret
admin = admin123
binghe = binghe123
Configure svnserve.conf
vim svnserve.conf
The configured file is as follows.
### This file controls the configuration of the svnserve daemon, if you
### use it to allow access to this repository.  (If you only allow
### access through http: and/or file: URLs, then this file is
### irrelevant.)

### Visit http://subversion.apache.org/ for more information.

[general]
### The anon-access and auth-access options control access to the
### repository for unauthenticated (aka anonymous) users and
### authenticated users, respectively.
### Valid values are "write", "read", and "none".
### Setting the value to "none" prohibits both reading and writing;
### "read" allows read-only access, and "write" allows complete
### read/write access to the repository.
### The sample settings below are the defaults and specify that anonymous
### users have read-only access to the repository, while authenticated
### users have read and write access to the repository.
anon-access = none
auth-access = write
### The password-db option controls the location of the password
### database file.  Unless you specify a path starting with a /,
### the file's location is relative to the directory containing
### this configuration file.
### If SASL is enabled (see below), this file will NOT be used.
### Uncomment the line below to use the default password file.
password-db = /data/svn/conf/passwd
### The authz-db option controls the location of the authorization
### rules for path-based access control.  Unless you specify a path
### starting with a /, the file's location is relative to the
### directory containing this file.  The specified path may be a
### repository relative URL (^/) or an absolute file:// URL to a text
### file in a Subversion repository.  If you don't specify an authz-db,
### no path-based access control is done.
### Uncomment the line below to use the default authorization file.
authz-db = /data/svn/conf/authz
### The groups-db option controls the location of the file with the
### group definitions and allows maintaining groups separately from the
### authorization rules.  The groups-db file is of the same format as the
### authz-db file and should contain a single [groups] section with the
### group definitions.  If the option is enabled, the authz-db file cannot
### contain a [groups] section.  Unless you specify a path starting with
### a /, the file's location is relative to the directory containing this
### file.  The specified path may be a repository relative URL (^/) or an
### absolute file:// URL to a text file in a Subversion repository.
### This option is not being used by default.
# groups-db = groups
### This option specifies the authentication realm of the repository.
### If two repositories have the same authentication realm, they should
### have the same password database, and vice versa.  The default realm
### is repository's uuid.
realm = svn
### The force-username-case option causes svnserve to case-normalize
### usernames before comparing them against the authorization rules in the
### authz-db file configured above.  Valid values are "upper" (to upper-
### case the usernames), "lower" (to lowercase the usernames), and
### "none" (to compare usernames as-is without case conversion, which
### is the default behavior).
# force-username-case = none
### The hooks-env options specifies a path to the hook script environment
### configuration file. This option overrides the per-repository default
### and can be used to configure the hook script environment for multiple
### repositories in a single file, if an absolute path is specified.
### Unless you specify an absolute path, the file's location is relative
### to the directory containing this file.
# hooks-env = hooks-env

[sasl]
### This option specifies whether you want to use the Cyrus SASL
### library for authentication. Default is false.
### Enabling this option requires svnserve to have been built with Cyrus
### SASL support; to check, run 'svnserve --version' and look for a line
### reading 'Cyrus SASL authentication is available.'
# use-sasl = true
### These options specify the desired strength of the security layer
### that you want SASL to provide. 0 means no encryption, 1 means
### integrity-checking only, values larger than 1 are correlated
### to the effective key length for encryption (e.g. 128 means 128-bit
### encryption). The values below are the defaults.
# min-encryption = 0
# max-encryption = 256
Next, copy the svnserve.conf file from the /data/svn/conf directory to the /data/svn/test/conf/ directory, as shown below.
[root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
cp: overwrite '/data/svn/test/conf/svnserve.conf'? y
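The password-db and authz-db settings above point to /data/svn/conf/passwd and /data/svn/conf/authz. Their exact contents are not shown here; the following is only a minimal sketch, written as shell here-documents, that matches the binghe/binghe123 account used later to connect via TortoiseSVN. Adjust users and permissions to your own needs.
# create the password database (user binghe / password binghe123, as used later with TortoiseSVN)
cat > /data/svn/conf/passwd <<'EOF'
[users]
binghe = binghe123
EOF

# create the path-based authorization rules: give binghe read/write access to the whole repository
cat > /data/svn/conf/authz <<'EOF'
[/]
binghe = rw
EOF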
(1) Create the svnserve.service service
Create svnserve.service file
vim /usr/lib/systemd/system/svnserve.service
The contents of the file are shown below.
[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
Documentation=man:svnserve(8)

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
#ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
ExecStart=/usr/bin/svnserve --daemon $OPTIONS
PrivateTmp=yes

[Install]
WantedBy=multi-user.target
Next, execute the following command to make the configuration take effect.
systemctl daemon-reload
After the command is executed successfully, modify the /etc/sysconfig/svnserve file.
vim /etc/sysconfig/svnserve
The modified file content is as follows.
# OPTIONS is used to pass command-line arguments to svnserve.
#
# Specify the repository location in -r parameter:
OPTIONS="-r /data/svn"
(2) Start SVN
First check the SVN status as shown below.
[root@test10 conf]# systemctl status svnserve.service
● svnserve.service - Subversion protocol daemon
   Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:svnserve(8)
As you can see, SVN is not started at this time. Next, you need to start SVN.
systemctl start svnserve.service
Set the SVN service to start automatically at boot.
systemctl enable svnserve.service
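Optionally, before connecting with a client, you can confirm that svnserve is listening on the default port 3690 and that the repository answers. A minimal check, using the repository URL referenced below:
# confirm svnserve is listening on the default SVN port
ss -tlnp | grep 3690
# query the repository over the svn protocol (prompts for the SVN user's credentials)
svn info svn://192.168.0.10/test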
Next, you can download and install TortoiseSVN, open the URL svn://192.168.0.10/test, and enter the username binghe and password binghe123 to connect to the SVN repository.
Alternatively, SVN can be run as a Docker container instead of being installed directly on the host. Pull the SVN image.
docker pull docker.io/elleflorio/svn-server
Start the SVN container
docker run -v /usr/local/svn:/home/svn \
  -v /usr/local/svn/passwd:/etc/subversion/passwd \
  -v /usr/local/apache2:/run/apache2 \
  --name svn_server \
  -p 3380:80 -p 3690:3690 \
  -e SVN_REPONAME=repos \
  -d docker.io/elleflorio/svn-server
Enter the SVN container
docker exec -it svn_server bash
After entering the container, you can configure the SVN repository by referring to the method of installing SVN on a physical machine.
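The in-container steps are not shown in detail; as a hedged sketch, the same svnadmin-based procedure used on the physical machine can be repeated inside the container, assuming the image serves repositories from the mounted /home/svn directory (paths are assumptions based on the volume mounts in the docker run command above).
# inside the svn_server container: create a repository under the mounted /home/svn directory ...
svnadmin create /home/svn/test
# ... then edit /home/svn/test/conf/svnserve.conf, passwd and authz
# following the same configuration steps described for the physical-machine installation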
Note: JDK and Maven must be installed before installing Jenkins. In this setup, Jenkins is also installed on the Master node (the binghe101 server, 192.168.0.10).
Run the following commands to download the repo file and import the GPG key:
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
Run the following command to install Jenkins.
yum install jenkins
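Before continuing, it may be worth confirming that the prerequisites mentioned in the note above are in place. A quick check, assuming the JDK and Maven paths used elsewhere in this document (adjust if yours differ):
/usr/local/jdk1.8.0_212/bin/java -version   # JDK that Jenkins will run on
/usr/local/maven-3.6.3/bin/mvn -version     # Maven used by the build jobs
rpm -q jenkins                              # confirm the Jenkins package installed successfully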
Next, modify the Jenkins default port as shown below.
vim /etc/sysconfig/jenkins
The two modified configurations are shown below.
JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
JENKINS_PORT="18080"
At this point, the Jenkins port has been changed from 8080 to 18080.
Enter the following command in the command line to start Jenkins.
systemctl start jenkins
Configure Jenkins to start automatically at boot.
systemctl enable jenkins
Check the running status of Jenkins.
[root@test10 ~]# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
   Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 71 (limit: 26213)
   Memory: 550.8M
This indicates that Jenkins was started successfully.
After the first installation, you need to configure the Jenkins operating environment. First, access the link http://192.168.0.10:18080 in the browser address bar to open the Jenkins interface.
Follow the prompts and use the following command to find the password value on the server, as shown below.
[root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword 71af861c2ab948a1b6efc9f7dde90776
Copy the password 71af861c2ab948a1b6efc9f7dde90776 into the text box and click Continue. You will be redirected to the Customize Jenkins page as shown below.
Here, you can directly select "Install recommended plug-ins". After that, you will be redirected to a page for installing the plug-in, as shown below.
Some plug-in downloads may fail at this step; these failures can be ignored and the plug-ins installed again later.
Plug-ins that need to be installed
More plug-ins can be installed later: click Manage Jenkins -> Manage Plugins to install the Docker, SSH, and Maven plug-ins; other plug-ins can be added as needed. As shown in the figure below.
(1) Configure JDK and Maven
Open the Global Tool Configuration page in Jenkins and configure JDK and Maven there, as shown below.
Next, we will start configuring JDK and Maven.
Since I installed Maven in the /usr/local/maven-3.6.3 directory on the server, I need to configure it in "Maven Configuration", as shown in the figure below.
Next, configure the JDK as shown below.
Note: Do not check "Install automatically"
Next, configure Maven as shown below.
Note: Do not check "Install automatically"
(2) Configure SSH
Enter the Configure System interface of Jenkins and configure SSH as shown below.
Find SSH remote hosts and configure it.
After the configuration is complete, click the Check connection button; a successful connection message is displayed, as shown below.
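Before relying on the Jenkins SSH configuration, it can also be worth confirming from the command line that the Jenkins host can reach the target host over SSH. A minimal check, assuming the worker node 192.168.0.11 and the root user are what was configured above (adjust host and user to your setup):
# run on the Jenkins server (192.168.0.10); host and user are assumptions
ssh root@192.168.0.11 "hostname && docker version --format '{{.Server.Version}}'"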
At this point, the basic configuration of Jenkins is complete.
To enable this, the pom.xml of the module containing the Spring Boot startup class must include the configuration for packaging the project into a Docker image, as shown below.
<properties>
    <docker.repostory>192.168.0.10:1180</docker.repostory>
    <docker.registry.name>test</docker.registry.name>
    <docker.image.tag>1.0.0</docker.image.tag>
    <docker.maven.plugin.version>1.4.10</docker.maven.plugin.version>
</properties>
<build>
    <finalName>test-starter</finalName>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <!-- Dockerfile maven plugin, official website: https://github.com/spotify/docker-maven-plugin -->
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>${docker.maven.plugin.version}</version>
            <executions>
                <execution>
                    <id>default</id>
                    <goals>
                        <!-- If you don't want to use Docker packaging, comment out these goals -->
                        <goal>build</goal>
                        <goal>push</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <contextDirectory>${project.basedir}</contextDirectory>
                <!-- Read the Harbor repository username and password from the Maven settings.xml -->
                <useMavenSettingsForAuth>true</useMavenSettingsForAuth>
                <repository>${docker.repostory}/${docker.registry.name}/${project.artifactId}</repository>
                <tag>${docker.image.tag}</tag>
                <buildArgs>
                    <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                </buildArgs>
            </configuration>
        </plugin>
    </plugins>
    <resources>
        <!-- Treat all files and folders under src/main/resources as resource files -->
        <resource>
            <directory>src/main/resources</directory>
            <targetPath>${project.build.directory}/classes</targetPath>
            <includes>
                <include>**/*</include>
            </includes>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
Next, create a Dockerfile in the root directory of the module where the SpringBoot startup class is located. The content example is as follows.
# Base image: requires pulling the Java 8 Docker image from the official registry
# and pushing it to your own Harbor private repository first (see the note below)
FROM 192.168.0.10:1180/library/java:8
# Image author
MAINTAINER binghe
# Mount point for temporary files
VOLUME /tmp
# Copy the built jar into the container
ADD target/*.jar app.jar
# Command executed automatically when the container starts
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar"]
Modify it according to the actual situation.
Note: FROM 192.168.0.10:1180/library/java:8 assumes that the base image has already been pushed to Harbor by running the following commands.
docker pull java:8
docker tag java:8 192.168.0.10:1180/library/java:8
docker login 192.168.0.10:1180
docker push 192.168.0.10:1180/library/java:8
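Before wiring this into Jenkins, it can be useful to verify the Maven Docker build locally on the Master node. This is only a sketch, using the Maven path and Harbor credentials that appear later in the Jenkins build step; adjust them if yours differ.
# log in to the Harbor registry (credentials as used in the Jenkins build step below)
docker login 192.168.0.10:1180 -u binghe -p Binghe123
# build the project; the dockerfile-maven-plugin builds and pushes the image during install
/usr/local/maven-3.6.3/bin/mvn -f ./pom.xml clean install -Dmaven.test.skip=true
# confirm the image was created locally
docker images | grep test-starter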
Create a YAML file named test.yaml in the root directory of the module containing the Spring Boot startup class, with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-starter
  template:
    metadata:
      labels:
        app: test-starter
    spec:
      containers:
      - name: test-starter
        image: 192.168.0.10:1180/test/test-starter:1.0.0
        ports:
        - containerPort: 8088
      nodeSelector:
        clustertype: node12
---
apiVersion: v1
kind: Service
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  ports:
  - name: http
    port: 8088
    nodePort: 30001
  type: NodePort
  selector:
    app: test-starter
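The nodeSelector above only schedules the Pod onto nodes carrying the clustertype=node12 label, so the target worker node has to be labeled first. The label value node12 is taken from the YAML above; which node should carry it is an assumption (here, test12). A quick manual deployment check could look like this:
# label the worker node so the nodeSelector in test.yaml can match it (node choice is an assumption)
kubectl label node test12 clustertype=node12
# apply the manifest and check that the Deployment and Service come up
kubectl apply -f test.yaml
kubectl get pods -l app=test-starter -o wide
kubectl get svc test-starter
# the Service is exposed as a NodePort on 30001; test it against any cluster node IP
curl http://192.168.0.12:30001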
Upload the project to the SVN code repository; in this example the repository address is svn://192.168.0.10/test.
Next, configure automatic publishing in Jenkins. The steps are as follows.
Click New Item.
Enter a description in the Description text box as shown below.
Next, configure the SVN information.
Note: The steps for configuring GitLab are the same as those for SVN and will not be repeated here.
Locate the "Build Module" of Jenkins and use Execute Shell to build and publish the project to the K8S cluster.
The executed commands are as follows.
# Delete the existing local image; this does not affect the image in the Harbor repository
docker rmi 192.168.0.10:1180/test/test-starter:1.0.0
# Use Maven to compile and build the Docker image; after this completes, the image is rebuilt in the local Docker engine
/usr/local/maven-3.6.3/bin/mvn -f ./pom.xml clean install -Dmaven.test.skip=true
# Log in to the Harbor repository
docker login 192.168.0.10:1180 -u binghe -p Binghe123
# Upload the image to the Harbor repository
docker push 192.168.0.10:1180/test/test-starter:1.0.0
# Stop and delete the version currently running in the K8S cluster
/usr/bin/kubectl delete -f test.yaml
# Republish the Docker image to the K8S cluster
/usr/bin/kubectl apply -f test.yaml
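Hard-coding the Harbor password in the build shell works, but a safer variant is to keep the credentials in Jenkins and read them from environment variables. A sketch, assuming a Jenkins username/password credential has been bound to the variables HARBOR_USER and HARBOR_PASS (these names are illustrative, not part of the original setup):
# HARBOR_USER / HARBOR_PASS are assumed to be injected by a Jenkins credentials binding
echo "$HARBOR_PASS" | docker login 192.168.0.10:1180 -u "$HARBOR_USER" --password-stdin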
This is the end of this article about building a continuous integration and delivery environment based on Docker + K8S + GitLab/SVN + Jenkins + Harbor (environment construction).