Background

The Agile model is widely used, and testing plays a particularly important role in it. Because new versions are released frequently, test cases must also be executed frequently to ensure that no new bugs are introduced into a release. The time and resources required for a complete testing pass are significant, and analyzing the test results consumes a large share of them. Providing complete, comprehensive testing in a shorter time while still ensuring quality is a problem we are eager to solve, and it is key to keeping agile development running smoothly.

Jenkins enables an unattended testing process: once development is complete and the test environment is deployed successfully, the downstream testing tasks are executed immediately. Jenkins thus saves human resources, while Docker can scale containers rapidly, saving a great deal of equipment resources and time so that testing finishes quickly. This is an important part of a Jenkins Pipeline (pipeline as code), as shown in Figure 1.

Figure 1. Jenkins Pipeline

This article introduces how to use the Docker Swarm cluster functionality together with the script distribution capability of Selenium Grid to build a dynamically scalable execution environment for Selenium automation scripts. Compared with using physical machines as the execution environment, this approach greatly reduces maintenance work, such as managing the many browser types and versions, and it also greatly reduces the hardware investment in the script execution environment.

Building a Docker Swarm Cluster

Introduction to Swarm

Swarm is the cluster management tool officially provided by Docker. It abstracts several Docker hosts into a single whole and manages the Docker resources on these hosts through a unified entry point. Swarm is essentially a scheduler plus a router: Swarm itself does not run containers; it only accepts requests sent by Docker clients and schedules suitable nodes to run the containers. This means that even if Swarm goes down for some reason, the nodes in the cluster keep running as usual, and when Swarm resumes it collects the cluster information and rebuilds its state. Swarm is similar to Kubernetes, but lighter weight and with fewer features.

Environment Preparation

To build the Docker Swarm cluster for this example, I prepared two machines. One node serves as the manager node and also as a worker node; the other serves only as a worker node. Machine M1, the manager, has the IP 10.13.181.1 (used in the commands below); machine M2 joins as the second worker.
Starting from version 1.12.0, Docker Engine natively integrates Swarm mode, so as long as Docker is installed on each machine you can use Docker Swarm directly. Installing Docker itself is not covered here; please follow the official Docker documentation. After the installation is complete, start the Docker service on each machine.

Note: it is best to turn off the firewall on each machine, otherwise you may run into Swarm cluster network connection problems.

Command to stop the firewall:

systemctl stop firewalld.service

Command to disable the firewall at startup:

systemctl disable firewalld.service

Follow these steps to build the cluster:

1. Create the manager node. We use machine M1 as the manager node and execute the following command on it to initialize the cluster environment:

sudo docker swarm init --advertise-addr 10.13.181.1

After this command executes, it returns a token for joining the cluster, which other workers use to join this cluster.

Listing 1. Example of the join-cluster command

docker swarm join --token SWMTKN-1-5p3kzxhsvlqonst5wr02hdo185kcpdajcu9omy4z5dpmlsyrzj-3phtv1qkfdly2kchzxh0h1xft 10.13.181.1:2377

If you need to retrieve the join command again later, run the following on the manager:

sudo docker swarm join-token worker

2. Machine M1 also serves as a worker node. In Swarm mode a manager acts as a worker by default, so no join command is needed on M1; it already participates in running containers.

3. Add machine M2 to the cluster as a worker node. Execute the command from Listing 1 on machine M2 so that M2 joins the cluster.

4. Create a cluster network by running the following command:

sudo docker network create -d overlay seleniumnet

Here seleniumnet is the name of the overlay network we create.

5. Create the Selenium Grid services on the newly created cluster network.

a. Create the Selenium Grid Hub service. Based on the cluster network seleniumnet, publish port 4444 of the service on port 4444 of the cluster, and set the grid timeout to 120 seconds (you can raise or lower this value), as shown in Listing 2.

Listing 2. Creating the Selenium Grid Hub service

sudo docker service create --name selenium-hub --network seleniumnet -p 4444:4444 -e GRID_TIMEOUT=120 selenium/hub

b. Create the Selenium Grid Firefox node service and connect it to the newly created Hub service, as shown in Listing 3.

Listing 3. Creating the Selenium Grid Firefox node service

sudo docker service create \
  --name node-firefox \
  --replicas 5 \
  -p 7900:5900 \
  --network seleniumnet \
  -e HUB_PORT_4444_TCP_ADDR=selenium-hub \
  -e HUB_PORT_4444_TCP_PORT=4444 \
  selenium/node-firefox-debug bash -c 'SE_OPTS="-host $HOSTNAME" /opt/bin/entry_point.sh'

Parameter description: -p 7900:5900 exposes the container's internal VNC port 5900 on port 7900 of the host, allowing users to watch the execution inside the container from outside via VNC.

c. Create the Selenium Grid Chrome node service and connect it to the newly created Hub service, as shown in Listing 4.

Listing 4. Creating the Selenium Grid Chrome node service

sudo docker service create \
  --name node-chrome \
  --replicas 3 \
  -p 7901:5900 \
  --network seleniumnet \
  -e HUB_PORT_4444_TCP_ADDR=selenium-hub \
  -e HUB_PORT_4444_TCP_PORT=4444 \
  selenium/node-chrome-debug bash -c 'SE_OPTS="-host $HOSTNAME" /opt/bin/entry_point.sh'

Parameter description: -p 7901:5900 exposes the container's internal VNC port 5900 on port 7901 of the host, allowing users to watch the execution inside the container from outside via VNC.

6. Check whether the environment was built successfully. Execute the following command on machine M1 to check whether each service started successfully:

sudo docker service ls

You can see that the Selenium Hub, the Firefox nodes, and the Chrome nodes all started successfully, with 5 Firefox node replicas and 3 Chrome node replicas, as shown in Figure 2.

Figure 2. Docker service list

Then open the Selenium Hub URL, using the IP of any machine in the cluster plus port 4444, to check whether the started Firefox and Chrome nodes were successfully registered with the Hub node, as shown in Figure 3.

Hub URL: 10.13.181.1:4444

Figure 3. Selenium Hub interface

As Figure 3 shows, 5 Firefox nodes and 3 Chrome nodes have been successfully registered with the Hub node. This means 5 Firefox nodes and 3 Chrome nodes are now available in the Docker Swarm environment for executing Selenium automation scripts. The same registration check can also be done programmatically, as sketched below.
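The following is a minimal sketch of such a check, assuming a Selenium Grid 3 hub (as started from the selenium/hub image above), which exposes a JSON status API at /grid/api/hub; replace the IP with your own manager node's address.

import requests

# Ask the Grid 3 hub for its status; 10.13.181.1 is machine M1 in this example.
response = requests.get("http://10.13.181.1:4444/grid/api/hub")
response.raise_for_status()

# The response includes a "slotCounts" section with the total and free browser
# slots; with 5 Firefox and 3 Chrome node replicas (one slot each by default),
# a total of 8 is expected.
print(response.json().get("slotCounts"))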
Expansion Methods

Users can dynamically scale the number of nodes at any time according to the number of scripts to be executed, improving the execution efficiency of the automation scripts. For example, if we need 10 containers that can run the Firefox browser, the corresponding command is:

sudo docker service scale node-firefox=10

Running Jenkins Jobs on Docker Swarm

When running a Jenkins job on Docker Swarm, no extra configuration is needed in Jenkins itself. Instead, the automation script calls the Selenium Hub to drive WebDriver remotely, which is what makes the Selenium scripts run inside the Docker containers. In the scenario of this article, the automation script only needs to point at the remote Selenium Hub, as shown below:

http://9.111.139.104:4444/wd/hub
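As a small convenience (an assumption of this write-up, not something from the original article), the hub address can be read from an environment variable set by the Jenkins job, so the same script runs unchanged against any Grid:

import os

# SELENIUM_HUB_URL is a hypothetical variable name a Jenkins job could set;
# fall back to the hub address used in this article.
HUB_URL = os.environ.get("SELENIUM_HUB_URL", "http://9.111.139.104:4444/wd/hub")

The Jenkins job then only needs to export this variable; the script itself stays identical across environments.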
Running Automation Scripts in Selenium Grid

Basic Concepts

Selenium Grid is used for distributed automated testing: one set of Selenium code can run in different environments, which makes it easy to run the tests in the different containers provided by Docker. Selenium Grid has two concepts:

Hub: the central entry point that accepts test requests and distributes them to the registered nodes.
Node: an instance that registers with the hub and actually runs the test sessions.

That is to say, there can be only one main hub in a Selenium Grid, but multiple nodes can be set up locally or remotely. The test script points at the hub, and the hub assigns the test cases to local or remote nodes for execution.

Implementation

To run automation scripts in Selenium Grid, we first need to create a remote driver object, which is what the source code in Figure 4 does. The input parameter selhub in the screenshot is the URL of the Selenium Hub: http://9.111.139.104:4444/wd/hub

Figure 4. Screenshot of the automation script code

By creating the driver in this way, the automation script runs inside a Docker container.
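Since Figure 4 is only a screenshot, here is a minimal Python sketch of the same idea, assuming the Selenium 3 Python bindings (contemporary with the Grid 3 images used above); the requested capabilities decide whether the hub routes the session to a Firefox or a Chrome node:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Create a remote driver against the Selenium Hub; the hub picks a free
# Firefox node (use DesiredCapabilities.CHROME to target a Chrome node).
driver = webdriver.Remote(
    command_executor="http://9.111.139.104:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.FIREFOX.copy(),
)

driver.get("https://www.example.com")
print(driver.title)  # executed inside one of the Grid node containers
driver.quit()

Because the capabilities are the only browser-specific part, the same script can be pointed at Firefox or Chrome nodes without any other change.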
Conclusion

In continuous integration testing, deploying tests to Docker Swarm and automatically allocating the nodes that execute them through Selenium Grid can improve test efficiency, widen the scope of testing, better guarantee the quality of deliverables in rapid iterations, and save testing resources.

Original link: https://www.ibm.com/developerw ... .html