Recently, I needed to test Zoom video conferencing and simulate 100 people joining a meeting at the same time. After some research, I found that Zoom lets participants join a meeting directly through a URL link (only in the Chrome or Firefox browser, because the underlying protocol is WebRTC). Following this line of thought, we can use Selenium automation to start multiple browser processes at the same time, with each process representing one conference participant, which gives the effect of a multi-party meeting. However, there are two difficulties:
The Chrome browser has better support for the audio and video sources a video conference needs. When initializing the Chrome browser in a Selenium script, you only need to add the following two options:

chrome_options.add_argument("--use-fake-ui-for-media-stream")
chrome_options.add_argument("--use-fake-device-for-media-stream")

With these, the browser uses a virtual video and audio source after joining the conference. One issue still needs to be considered: the quality of this virtual stream clearly differs from a real camera and microphone, so will it affect the test results? We will not discuss that topic here for the time being.

The remaining headache is how to run 100 Chrome browser processes. You may think this is just a resource problem: wouldn't adding more servers solve it? But once we have the server resources, how do we schedule tasks across them? Fortunately, there is Selenium Grid, one of the three major components of Selenium, which is designed specifically for distributed test execution. So I designed a test plan based on Selenium Grid.
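To make the plan concrete, here is a minimal sketch (not the original test script) of a single simulated participant joining a meeting through the grid. The hub address and the browser join link below are placeholders, and the exact join URL format for your meeting may differ:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
# Automatically accept the camera/microphone permission prompt
chrome_options.add_argument("--use-fake-ui-for-media-stream")
# Feed a generated test pattern and tone instead of real devices
chrome_options.add_argument("--use-fake-device-for-media-stream")

# Hub address and meeting join link are placeholders
driver = webdriver.Remote(
    command_executor="http://hub_ip:4444/wd/hub",
    options=chrome_options,
)
driver.get("https://zoom.us/wc/join/meeting_id")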
According to the design above, it should in theory be possible to simulate 100 people joining the meeting at the same time. Next, let's explore how to use Docker to build a Selenium Grid distributed environment.

Starting nodes directly with the Selenium jar package

In fact, at the beginning I started the nodes directly from the jar package. That was acceptable for a few nodes, but it becomes very troublesome when there are many: for example, to restart the nodes, I had to kill them all manually and then start them again one by one. Any manual, repetitive work can be scripted, so I wrote two shell scripts: one starts the requested number of nodes based on a passed parameter, and the other kills all node processes. Even with the scripts, though, it is still inconvenient. Starting the nodes adds many Java processes, and it is impossible to view the log of a single node because all nodes print their logs to the same console at the same time. So I considered using Docker to manage the Selenium Grid nodes.

Starting nodes with the docker command

There are ready-made images on GitHub: https://github.com/SeleniumHQ/docker-selenium . The documentation lists all the available image names. Since I mainly use the Chrome browser, I pulled the following three images: selenium/hub, selenium/node-chrome, and selenium/node-chrome-debug. The selenium/node-chrome-debug image also starts a VNC server, so while a script is running you can connect to the VNC server from your local machine and watch the execution on screen.

Pull the images:

$ docker pull selenium/hub
$ docker pull selenium/node-chrome
$ docker pull selenium/node-chrome-debug

Start the hub:

$ docker run -d -p 4444:4444 -e GRID_MAX_SESSION=100 --name hub selenium/hub

Start a local node (hub and node on the same machine):

$ docker run -d -p 5555:5555 -e NODE_MAX_INSTANCES=5 -e NODE_MAX_SESSION=5 --shm-size=2g --link hub:hub --name node1 selenium/node-chrome

Start a remote node (hub and node on different machines):

$ docker run -d -p port:5555 -e HUB_HOST=remote_ip -e HUB_PORT=remote_port -e REMOTE_HOST=http://ip:port -e NODE_MAX_INSTANCES=5 -e NODE_MAX_SESSION=5 --shm-size=2g --name node1 selenium/node-chrome

Note that the startup commands given in many online tutorials assume the hub and node are on the same machine. If the hub and node are on different machines and you follow those tutorials, the containers will start without errors, but the node will not be able to reach the hub over the network.

With the docker command you can now view the log of a single node, but the same problem remains as with the jar package: starting many nodes is inconvenient and requires running the command many times by hand. Is there a better solution? Of course: docker-compose can be used to orchestrate the containers.

Starting with docker-compose

Docker Compose is a Docker command-line tool for defining and running applications composed of multiple containers. It is equivalent to putting multiple docker commands into one file and then executing them all at once with docker-compose.
Again, there are two situations.

Hub and node on the same machine

You can use the following docker-compose.yml:

version: "3"
services:
  selenium-hub:
    image: selenium/hub
    container_name: selenium-hub
    ports:
      - "4444:4444"
    environment:
      - GRID_MAX_SESSION=50
      - GRID_TIMEOUT=900
      - START_XVFB=false
  chrome:
    image: selenium/node-chrome
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
      - NODE_MAX_INSTANCES=5
      - NODE_MAX_SESSION=5

Then run the following in the console (-d means run in the background):

$ docker-compose up -d

What if you want to start multiple nodes at the same time? Very simple (num is the number of nodes to start):

$ docker-compose up -d --scale chrome=num

To shut the nodes down, run:

$ docker-compose down

Hub and node on different machines

You can use the following docker-compose.yml:

version: "3"
services:
  # selenium-chrome-1
  selenium-chrome-node-1:
    image: selenium/node-chrome
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "5556:5555"
    restart: always
    stdin_open: true
    environment:
      HUB_HOST: hub_ip
      HUB_PORT: 4444
      NODE_MAX_INSTANCES: 5
      NODE_MAX_SESSION: 5
      REMOTE_HOST: http://nodeip:5556
      GRID_TIMEOUT: 60000
    shm_size: "2gb"
  # selenium-chrome-2
  selenium-chrome-node-2:
    image: selenium/node-chrome
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "5555:5555"
    restart: always
    stdin_open: true
    container_name: node1
    environment:
      HUB_HOST: hub_ip
      HUB_PORT: 4444
      NODE_MAX_INSTANCES: 5
      NODE_MAX_SESSION: 5
      REMOTE_HOST: http://nodeip:5555
      GRID_TIMEOUT: 60000
    shm_size: "2gb"
  # selenium-chrome-3
  selenium-chrome-node-3:
    image: selenium/node-chrome
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "5557:5555"
    restart: always
    stdin_open: true
    environment:
      HUB_HOST: hub_ip
      HUB_PORT: 4444
      NODE_MAX_INSTANCES: 5
      NODE_MAX_SESSION: 5
      REMOTE_HOST: http://nodeip:5557
      GRID_TIMEOUT: 60000
    shm_size: "2gb"
  # selenium-chrome-4
  selenium-chrome-node-4:
    image: selenium/node-chrome
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "5558:5555"
    restart: always
    stdin_open: true
    environment:
      HUB_HOST: hub_ip
      HUB_PORT: 4444
      NODE_MAX_INSTANCES: 5
      NODE_MAX_SESSION: 5
      REMOTE_HOST: http://nodeip:5558
      GRID_TIMEOUT: 60000
    shm_size: "2gb"
  # selenium-chrome-5
  selenium-chrome-node-5:
    image: selenium/node-chrome
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "5559:5555"
    restart: always
    stdin_open: true
    environment:
      HUB_HOST: hub_ip
      HUB_PORT: 4444
      NODE_MAX_INSTANCES: 5
      NODE_MAX_SESSION: 5
      REMOTE_HOST: http://nodeip:5559
      GRID_TIMEOUT: 60000
    shm_size: "2gb"

The command to start the nodes is (the hub must already be running):

$ docker-compose up -d

The command to shut the nodes down is:

$ docker-compose down

Remaining issues

When I build the Selenium Grid environment this way, the local node executes normally, but the remote nodes often time out, even though the console at http://hub_ip:4444/grid/console shows every node as connected. From the information I have found so far, it seems I need Docker Swarm, a Docker cluster-management tool that abstracts several Docker hosts into a single whole and manages the Docker resources on those hosts through one unified entry point. I have not studied it yet; once I reach a conclusion about Docker Swarm, I will write it up and share it with you.

Summary

Using Docker to build a Selenium Grid distributed environment is very convenient: a node can basically be started or shut down with a single command.
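To round things off, here is a rough sketch (not from the original article) of what the test-script side might look like once the grid is up: it opens many remote Chrome sessions in parallel, each one representing a participant. The hub address, the meeting join link, and the participant count are placeholders, and the actual in-meeting steps are omitted.

import time
from concurrent.futures import ThreadPoolExecutor

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

HUB_URL = "http://hub_ip:4444/wd/hub"               # Grid hub (placeholder)
MEETING_URL = "https://zoom.us/wc/join/meeting_id"  # join link (placeholder)
PARTICIPANTS = 100

def join_meeting(index):
    # One task = one remote Chrome session = one simulated participant
    opts = Options()
    opts.add_argument("--use-fake-ui-for-media-stream")
    opts.add_argument("--use-fake-device-for-media-stream")
    driver = webdriver.Remote(command_executor=HUB_URL, options=opts)
    try:
        driver.get(MEETING_URL)
        # ... enter a display name, click "Join", and so on
        time.sleep(600)  # stay in the meeting for the duration of the test
    finally:
        driver.quit()

with ThreadPoolExecutor(max_workers=PARTICIPANTS) as pool:
    # list() consumes the iterator so any exception in a worker surfaces here
    list(pool.map(join_meeting, range(PARTICIPANTS)))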
I hope this article gives you some ideas and helps you solve some problems in your daily work. This concludes the practical road to building a Selenium Grid distributed environment with Docker.