VMware vSphere 6.0 Server Virtualization Deployment and Installation Guide (Detailed Steps)

1. Key points for early planning of VMware vSphere deployment

1. Advantages of vSphere

(Omitted)

2. How to use existing equipment in the virtualization environment

During virtualization, most users will consider whether existing servers, storage, switches, and other basic equipment can be reused. This must be assessed comprehensively against the performance and specifications of the servers and storage.

If the servers were purchased within the past year or two, consider consolidating and expanding them for use as virtualization hosts. Generally speaking, most servers shipped with a low standard configuration can be expanded to a very high one. For example, the IBM 3850 X6 server can be expanded to a maximum of 4 CPUs and 1.5 TB of memory. Taking the CPU as an example, the IBM 3850 X6 ships standard with 2 CPUs, which may be 6-core or 8-core. If there are two or more IBM 3850 X6 servers, the CPUs from both can be consolidated into one of them, and the other can be fitted with 4 new 8-core CPUs. Memory can be handled the same way: concentrate the existing modules in one server and populate the other with multiple 8 GB sticks. Servers from other manufacturers can be treated similarly: first consolidate parts across servers, then upgrade.

If existing servers are used in the virtualization rollout, it is recommended to add memory and network cards first, then redundant power supplies and CPUs. As for hard disks: in virtualization projects, shared storage takes priority, with local disks added second.

Besides serving as a virtualization host, an original server can also be converted into a storage server. For example, if a server has a low configuration not worth upgrading but holds many local hard disks, the disks from other machines can be concentrated in it. Installed with Openfiler (available in 32-bit and 64-bit versions), Windows Server 2008 R2, or Windows Server 2012, this machine becomes a storage server that provides iSCSI network storage to the virtualization environment over a Gigabit network, usable for data backup or capacity expansion.

3. Server performance and capacity planning

Early in a virtualization project comes virtual machine capacity planning: determining the maximum number of virtual machines that can be placed on one physical server. This is a composite question, covering the host's CPU, memory, and disk (both capacity and performance), plus the resources the virtual machines themselves require. In practice, the system should always keep at least 30% spare capacity; resource utilization on a host should not exceed 80%, let alone approach 100%, because once those levels are reached the entire system responds sluggishly.

When estimating virtualization capacity on CPU alone, physical CPUs and virtual CPUs can be planned at a ratio of 1:4 to 1:10 or even higher. For example, a physical host with four 8-core CPUs and sufficient memory and storage can provide 4 × 8 × 5 = 160 vCPUs at a 1:5 ratio; assuming each virtual machine needs 2 vCPUs, 80 virtual machines can be created. In actual virtualization projects, most virtual machines have modest CPU requirements: even a VM allocated 4 or more vCPUs often shows actual CPU usage below 10%, consuming less than half a physical core's worth of host CPU resources.
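The vCPU arithmetic above can be sketched in a few lines (the figures mirror the 1:5 example in the text):

```python
def vcpu_capacity(cpus, cores_per_cpu, ratio):
    """Schedulable vCPUs for one host at a given physical:virtual CPU ratio."""
    return cpus * cores_per_cpu * ratio

# Figures from the example: four 8-core CPUs, 1:5 oversubscription.
vcpus = vcpu_capacity(cpus=4, cores_per_cpu=8, ratio=5)
vms = vcpus // 2          # each VM is given 2 vCPUs
print(vcpus, vms)         # 160 80
```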

In virtualization projects, memory is the most heavily used resource and has the highest requirements. In practice, physical-host memory utilization often approaches 80% or even 90%, because many virtual machines are planned on the same host and each is allocated generous memory (usually more than it actually uses), reducing the host's available memory. When sizing memory for a physical host, consider how many virtual machines will run on it and how much memory they need in total. Each virtual machine generally needs 1 GB to 4 GB or more, and some memory must be reserved for VMware ESXi itself. Typically, a host with four 8-core CPUs should have 96 GB or more of memory; a host with two 6-core CPUs should have 32 to 64 GB.
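As a sketch of that memory sizing (the 20-VM count, 3 GB average, 8 GB ESXi reserve, and 80% utilization cap are illustrative assumptions, not figures from the text):

```python
def host_memory_gb(vm_count, avg_vm_gb, esxi_reserve_gb=8, max_util=0.8):
    """Host memory needed so that VM memory plus the hypervisor reserve
    stays within the utilization cap."""
    return (vm_count * avg_vm_gb + esxi_reserve_gb) / max_util

# 20 VMs at 3 GB each -> 68 GB of demand -> 85 GB host memory at an 80% cap
print(round(host_memory_gb(20, 3)))  # 85
```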

4. Statistics and calculation of existing server capacity

If you want to migrate existing physical servers to virtual machines, compile a table of each server's CPU model and count, CPU utilization, installed memory and memory utilization, and hard disk count, size, RAID level, and used space, and then calculate from these figures. The calculation method is:

Actual CPU resources = CPU frequency × number of CPU cores × CPU utilization

Actual memory resources = installed memory × memory utilization

Actual hard disk space = total hard disk capacity − free space

Suppose the calculation shows 91.1944 GHz of CPU resources currently in use. Taking 3.0 GHz CPU cores as an example, about 30 cores would be needed at 100% load; allowing for a 60%–75% target load across the project, plus management and other overhead, at least 40 CPU cores are required. With servers carrying two 6-core CPUs each, that works out to about 4 physical hosts. As for memory, suppose 182 GB is currently in use; adding management overhead and headroom brings the estimate to about 360 GB, so 96 GB to 128 GB per server is sufficient.
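The sizing arithmetic above can be reproduced directly (the two-6-core-CPUs-per-host figure is an assumption consistent with the example's result):

```python
import math

# Figures from the example: 91.1944 GHz in use, 3.0 GHz cores, 75% target load.
used_ghz  = 91.1944
core_ghz  = 3.0
cores_100 = used_ghz / core_ghz                 # ~30.4 cores at 100% load
cores     = math.ceil(cores_100 / 0.75)         # 41 cores at a 75% target load
hosts     = math.ceil(cores / (2 * 6))          # hosts with two 6-core CPUs each
print(round(cores_100), cores, hosts)           # 30 41 4
```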

5. Server selection for virtualization

When implementing virtualization, use the existing servers if they can meet the virtualization needs. If they cannot fully meet the needs, reuse some of them and purchase new servers for the remainder.

If you are purchasing new servers, there are many products to choose from. If the organization's server room uses racks, rack-mounted servers are preferred. The principles of server procurement are:

(1) If a 2U server meets the requirements, use a 2U server. A 2U server usually supports up to 2 CPUs and ships standard with 1; in that case, configure the second CPU.

If a 2U server is insufficient, use a 4U server. A 4U server usually supports up to 4 CPUs and ships standard with 2; when purchasing, it is advisable to configure all 4. If the number of servers is not constrained, buying twice as many 2U servers is usually cheaper than buying 4U servers, and the performance is generally sufficient.

(2) CPU: choose Intel CPUs with 6 or 8 cores. CPUs with 10 or more cores are considerably more expensive and are not recommended, unless the organization has higher requirements for CPU performance and tighter constraints on space.

(3) Memory: configure the server with as much memory as is feasible; in a virtualization project, memory matters more than CPU. Generally, a 2U server with two 6-core CPUs gets 64 GB of memory, and a 4U server with four 6-core or 8-core CPUs gets 128 GB or more.

(4) Network card: When selecting a server, you should also consider the number of network cards in the server. At least a 2-port Gigabit network card should be configured for the server. A 4-port Gigabit network card is recommended.

(5) Power supply: redundant power supplies should be configured. Generally speaking, two 450 W units meet the needs of a 2U server, and two 750 W units meet the needs of a 4U server.

(6) Hard disk: if virtual machines will reside on the server's local storage rather than networked storage, configure the server with 6 disks in RAID 5 or 8 disks in RAID 50. Since server drive bays are limited, avoid disks that are too small; the most cost-effective disk at present is the 600 GB SAS drive. Here, 2.5-inch SAS drives spin at 10,000 rpm and 3.5-inch SAS drives at 15,000 rpm; choosing 2.5-inch drives yields higher IOPS.

As for brand, Huawei, IBM, HP, or Dell are all options. Where rack space is at a premium, consider a blade server such as the Huawei Tecal E6000, which occupies 8U and holds up to 10 blades; each blade can carry 2 CPUs, 2 SAS drives, 12 memory slots, and a dual-port network card.

6. Storage device selection

In virtualization projects, dedicated storage devices are recommended over servers' local hard disks. Only when virtual machines reside on shared storage can HA, FT, vMotion, and similar features be implemented and used readily. With VMware vSphere, a recommended approach is to install VMware ESXi on a local boot device: a small solid-state drive (5.2 to 10 GB), an SD card (8 GB is sufficient), or even a 1 GB USB flash drive. If the server has no local disk, an 8 to 16 GB boot partition can be allocated to the server instead.

When selecting a storage device, you need to consider the storage capacity, disk performance, number of interfaces, and interface bandwidth required for the entire virtualization system. In terms of capacity, the capacity of the entire storage design should be more than twice the actual used capacity. For example, if the entire data center has used 1TB of disk space (all used space added together), then when designing storage, at least 2TB of storage space should be designed (this is the space after RAID is configured, not the space of all disks added together without RAID).

Another important parameter in storage design is IOPS (Input/Output Operations Per Second), the number of read/write (I/O) operations per second, mostly used to measure random-access performance in databases and similar workloads. Storage-side IOPS differ from host-side I/O: one host I/O may require several storage accesses to complete. For example, writing even a minimal data block involves three steps (send the write request, write the data, receive the write acknowledgment), that is, three storage-side accesses. Every disk system has an IOPS ceiling; if the actual IOPS of the designed storage system exceed the ceiling of the disk group, responses slow down and performance suffers.

As a rule of thumb, a 15,000 rpm disk delivers about 150 IOPS, a 10,000 rpm disk about 100, and an ordinary SATA drive about 70 to 80. For desktop virtualization, plan 3 to 5 IOPS per virtual machine; for an ordinary virtualized server, plan 15 to 30 (depending on the actual workload). A system designed to run 100 virtual machines concurrently therefore needs at least 2,000 IOPS, which on 10,000 rpm SAS disks means at least 20 disks. Of course, this is only a simple calculation; many more factors come into play in an actual deployment.
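The disk-count rule of thumb above can be written as a one-line calculation (figures taken from the text; RAID write penalties and controller cache are ignored):

```python
import math

# IOPS figures quoted above: ~100 IOPS per 10k-rpm SAS disk,
# 15-30 IOPS per ordinary server VM.
def disks_needed(vm_count, iops_per_vm, iops_per_disk):
    """Minimum disk count to satisfy the aggregate IOPS demand."""
    return math.ceil(vm_count * iops_per_vm / iops_per_disk)

print(disks_needed(100, 20, 100))  # 20 disks for 100 VMs at 20 IOPS each
```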

When planning storage, also consider the number and speed of the storage interfaces. Generally speaking, for a system with 4 hosts and 1 storage array, an array with 2 controllers and 4 SAS host interfaces is usually appropriate. With more hosts, or if the hosts need redundant paths, consider an array with FC interfaces and connect the storage and servers through a Fibre Channel switch.

7. Network and switch selection

In a virtualized environment, each physical server generally carries a higher network card density: virtualization hosts commonly have 6, 8, or even more network interface cards (NICs), whereas non-virtualized servers have only 2 or 4. This becomes a problem in the data center, because edge or distribution switches are placed in the rack to simplify cabling and then uplinked to the network core. In that design, a typical 48-port switch can serve only 4 to 8 virtualization hosts, so fully populating a rack requires more edge or distribution switches.

In a virtualized environment, when multiple workloads are consolidated onto these hosts, network traffic increases based on the number of workloads running on the host, and network utilization will no longer be as low as it was on each physical server in the past.

In order to accommodate the increased network traffic from the integrated workload, it may be necessary to increase the number of uplinks from the edge or distribution switch to the network core, which places higher demands on the switch's backplane bandwidth and uplink lines.

Another key change comes from the dynamic nature of the latest generation of virtualization products, with features such as live migration and dynamic resource management across multiple hosts. The dynamic nature of virtualization means that no assumptions can be made about how traffic flows between servers.

When performing dynamic migration between virtual machines or migrating virtual machines from one storage to another, in order to reduce the migration time and not affect key businesses, a large amount of network resources will be occupied during the migration. In addition, although the number of concurrent migrations can be reduced during migration, in some applications, multiple virtual machines may be migrated at the same time, which places higher requirements on the switch backplane bandwidth and switch performance.

In addition, virtualization reduces visibility at the network layer of the data center. Network engineers cannot see inside the virtual switch and cannot easily determine which physical NIC backs which virtual switch, which is essential information for troubleshooting. To reduce failure rates, also consider configuring redundant line cards and power supplies for the switch, and choose a higher-end switch where possible.

In most cases, the physical host is configured with a 4-port Gigabit network card, and for redundancy, every two network cards may be bonded together for load balancing and failover.

For the virtualization environment of a small or medium-sized enterprise, Huawei S5700 series Gigabit switches will meet most needs; the series comes in 24-port and 48-port models. If higher network performance is required, choose Huawei S9300 series switches. If the plan calls for a host's virtual machines to sit in only one network segment (or two at most), performance requirements are modest, and price is sensitive, Huawei's entry-level S1700 series is an option. Both VMware ESXi and Hyper-V Server support VLAN tagging on their virtual switches: connect the host NIC to a trunk port on the physical switch, then assign VLANs on the virtual switch side. This way, even with only one or two physical NICs, virtual machines can be placed in different VLANs.

2. Actual deployment environment

Server configuration information

Storage configuration information

Storage capacity division

3. Network topology diagram

4. Install ESXi host

1. Install ESXi

1) Download the ESXi 6.0 U1 installation image (VMware-VMvisor-Installer-6.0.0.update01-3073146.x86_64.ISO), burn it to a CD, insert it into the server's optical drive, and boot the server from the CD. After a moment the installer enters its boot screen.

You can also burn the installation image directly to a USB drive and install ESXi through the server's USB boot, but be aware that the USB drive will be cleared during burning.

2) The ESXi 6.0 installation program is automatically booting

3) The ESXi installation welcome screen appears; press "Enter" to continue

4) On the end-user license agreement screen, press "F11" to accept and continue

5) At this point the system starts automatically querying available storage devices

6) On the Select Disk page, select the storage device on which you want to install ESXi. If there is an iSCSI storage device, you can also select it, and then press "Enter" to proceed to the next step

When selecting a disk, do not rely on the order of the disks in the list; the order is determined by the BIOS.

If you select an SSD, the SSD and all underlying HDDs in the same disk group will be erased.

If you select an HDD and the disk group has more than two disks, only the selected HDD will be erased.

If you select an HDD and the disk group has two or fewer disks, the SSD and the selected HDD will be erased.

7) Select the keyboard type; the default is fine. Press "Enter" to continue

8) Set the administrator (root) password, which must be at least 7 characters, and press "Enter" to continue

9) The system now collects and verifies the installation information

10) After confirming that the above configuration is correct, you can press "F11" to start installing ESXi

11) Installing the ESXi operating system

12) After the installation is complete, press "Enter" to restart the server

2. Configure ESXi

1) In the ESXi main interface, press "F2" to pop up the login box, enter the administrator root password, and then press "Enter" to log in to the ESXi system

2) The ESXi System Customization menu and its configuration items are as follows:

3) The menu for configuring the network is as follows:

Select "IPv4 Configuration" and press "Enter" to enter the IP address configuration interface

4) The configuration menu for IPv4 is as follows:

Disable IPv4 configuration for management network

Use dynamic IPv4 address and network configuration

Set static IPv4 address and network configuration

Select "Set static IPv4 address and network configuration", configure the static IP address, subnet mask, gateway, and press "Enter" to confirm.

5) Select "DNS Configuration" and press "Enter" to enter the DNS and host name configuration screen (if there is no DNS or domain controller, steps 5 and 6 can be skipped)

6) Configure the DNS address and host name, press "Enter" to confirm

7) After the network configuration is completed, press "ESC" to pop up the interface for confirming the network configuration, press "Y" to save all the above configurations

At this point, all ESXi configurations are complete, and subsequent operations will be performed on the vSphere Client.

5. Install vCenter Server for Windows

1. Create a virtual machine for vCenter

A single ESXi server can be managed directly with the vSphere Client, but management is limited and operations such as vMotion are not possible. Therefore, multiple ESXi servers are usually managed by connecting the vSphere Client or vSphere Web Client to a vCenter Server.

VMware vCenter is the powerful centralized host and virtual machine management component of the VMware vSphere suite. Many advanced vSphere features can only be configured and used under vCenter, and many vSphere management modules can only be installed integrated into a vCenter environment, not standalone. Through vCenter, one or more ESXi hosts can be managed and configured, which makes VMware vCenter the primary management platform for VMware vSphere.

The machine to install and deploy VMware vCenter For Windows must meet the following conditions:

Windows Server 2008, with at least 20 GB of free space on drive C and at least 8 GB of memory; otherwise an error message is displayed and installation cannot proceed.

For Windows Server 2008, it is recommended to set a static IP and use an FQDN as the host name (set the host name before installation, and make sure it does not duplicate any other host name).

Use the vSphere Client to log in to the host and create a new virtual machine with 2 vCPUs, 8 GB RAM, and an 80 GB hard disk (note: place the virtual machine's disk on shared storage; avoid local storage if possible). Install Windows Server 2008, activate it, and install VMware Tools, WinRAR, and other necessary software.

2. Install SQL Server 2008 R2 and the SP1 patch

Installing SQL Server 2008 R2 requires .NET Framework 3.5 SP1 support

Our operating system here is Windows Server 2008 R2, which includes .NET Framework 3.5 SP1 as a built-in feature.

1) Mount the SQL Server 2008 R2 installation image, run the setup file, and the following interface appears. Click "OK"

2) On the SQL Server Installation Center interface, click "Install" on the left, and then select "New installation or add features to an existing installation" in the options on the right.

3) The installation program will scan some information of the local computer to ensure that no abnormalities occur during the installation process. If the scan finds any problems, you will have to fix them before you can rerun the installer to install the program.

The scan result shows that it has passed. Click "OK" to proceed to the next step.

4) Enter the product key. The key is already in the input box, so you don’t need to enter it. Click Next.

5) Select "I accept the license terms" and click "Next"

6) The installer needs to install some required support files on the local machine. Click "Install"

7) Next, you will officially install the SQL Server program.

First, the installer scans the host. This step looks the same as the scan during the preparation stage, again checking the local machine to avoid problems during installation. As the screenshot below shows, however, this scan is more thorough and covers more items.

In this step, be sure not to ignore the "Windows Firewall" warning: when SQL Server is installed on Windows Server 2008, the operating system does not automatically open TCP port 1433 in the firewall. How to open TCP port 1433 is covered later.

The scan result is passed, click "Next" to continue

8) Set the setup role. There are 3 options; select "SQL Server Feature Installation".

9) Check the components you want to install, select the installation path, and click Next.

11) Instance configuration: select the default instance. The system automatically names it MSSQLSERVER.

12) The disk space requirements are displayed, just click Next

13) Service account settings: configure which operating-system account each SQL Server service should run under. For simplicity, select "Use the same account for all SQL Server services".

14) Enter the operating system account name and password, and confirm

15) Authentication mode: select "Windows authentication mode"

Add the current user and specify the operating-system login account as the SQL Server administrator

Click "Next"

16) Keep the defaults and click "Next"

17) Click "Next"

18) After completing the above function selection and configuration, it is time to start the installation.

First, let's confirm our installation options. After confirmation, click the "Install" button to start the installation.

19) The installer displays the installation progress. If it is a new installation, this process will take about half an hour (depending on the speed of the disk).

20) Finally, the following screen appears, the installation is complete, click Close

21) At this point, the SQL Server 2008 R2 database installation is complete.

Then you need to install the SP1 patch, download SQLServer2008R2SP1-KB2528583-x64-CHS, and double-click to install.

The installer will scan the host first. Click Next after the scan is complete.

22) Select "I accept the license terms" and click Next.

23) On the feature-selection screen, click "Select All".

24) Check the files and click Next.

25) Review the features and click Update.

26) Start updating and display the update progress.

27) Installation was successful. Click "Close" to exit

3. Open firewall port 1433

1) Click Start, click Administrative Tools, and then click Windows Firewall with Advanced Security.

2) Right-click Inbound Rules, and then click New Rule.

3) In the New Inbound Rule Wizard dialog box, on the Rule Type page, click Port, and then click Next.

4) On the Protocol and Ports page, click TCP, click Specific local ports, type 1433, and then click Next.

5) On the Action page, click Allow the connection, and then click Next.

6) On the Profile page, do all of the following:

Select the Domain check box.

Select the Private check box.

Clear the Public check box.

Click Next.

7) On the Name page, type a meaningful name for the new inbound rule. Then click Finish.

8) Set the network connection to "Private"

4. Database preparation

1) Confirm that all database services are running normally:

Click "Start" - "All Programs" - "Microsoft SQL Server 2008 R2" - "Configuration Tools" - "SQL Server Configuration Manager", click "SQL Server Services", start all services, and change the startup mode to "Automatic"

If you have installed SQL Report Services and are going to install the database and vCenter on the same server, you need to shut down Report Services because the default port of Report Services is 80, which conflicts with vCenter. You can also change the default port of Report Services.

2) Click "SQL Server Network Configuration" - "Protocols for MSSQLSERVER", right-click "TCP/IP", and click "Properties" in the pop-up menu.

Select the "IP Address" tab, change the "Enabled" in IP2 and IP4 from "No" to "Yes", and click "OK"

3) In the Windows Start menu, open the database management tool

Enter the server name for login, set the authentication to "Windows Authentication", connect to the database, and the following interface will appear

Create a database for vCenter Server: right-click on the "Database" column and create a new database

Enter the database name and click OK.

4) Find the Native Client 10.0 installation package on the SQL Server 2008 R2 CD ("x:\2052_chs_lp\x86\setup\x64\sqlncli.msi") and install it. The installation is simple: just click Next through each step.

5) Open "Data Source (ODBC)" on the vCenter Server (172.16.14.22)

Open the "System DSN" tab and click "Add"

Select SQL Server Native Client 10.0 and click Finish.

Start creating a vCenter Server data source, enter the data source name, description, and database server, and click Next.

Select the verification method, keep the default, and click "Next"

Select "Change the default database to", select the vCenterDB database you just created, and click "Next"

Keep the default settings and click Finish.

Click "Test Data Source". If the test result shows "Test Successful!", it means that the created data source can be used normally. Then click "OK" to complete the data source creation.

5. Grant the account the "Log on as a service" right

Before installing vCenter, the vCenter Server service account must be granted the "Log on as a service" right. Click "Start" - "Run", type gpedit.msc and press Enter to open the Local Group Policy Editor. Expand "Computer Configuration" - "Windows Settings" - "Security Settings" - "Local Policies" - "User Rights Assignment", double-click "Log on as a service" on the right, click "Add User or Group" in the dialog box to add the account, and click "OK" after confirming everything is correct.

After the modification, force a Group Policy refresh with the gpupdate /force command.

6. Install vCenter Server

1) Mount the vCenter Server 6.0 installation disc (VMware-VIMSetup-all-6.0.0-3040890.iso), run the installer, select "vCenter Server for Windows", and click "Install"

2) In the vCenter installation wizard, click Next.

3) Select "I accept the terms of the license agreement" and click "Next"

4) Select "Embedded Deployment" for the deployment type and click "Next"

Starting with vSphere 6.0, vCenter Single Sign-On is included either in an embedded deployment or as part of the Platform Services Controller. The Platform Services Controller contains all the services vSphere components need to communicate, including vCenter Single Sign-On, VMware Certificate Authority, VMware Lookup Service, and the Licensing Service.

Installation Order

vCenter 6.0 currently supports two installation methods: embedded deployment and distributed deployment.

Embedded deployment deploys vCenter Server, vCenter Server service components, and Platform Services Controller on a virtual machine or physical server. This model is suitable for deployments with 8 or fewer instances.

A distributed deployment separates the Platform Services Controller and vCenter Server and installs them on different virtual machines or physical servers. First install the Platform Services Controller, then install vCenter Server and the vCenter Server components on another virtual or physical machine, and connect vCenter Server to the Platform Services Controller. Many vCenter Server instances can connect to one Platform Services Controller. This model is suitable for deployments with more than 8 instances.

1. If you select external deployment, also known as distributed deployment, for the deployment type, you must first install Platform Services Controller and then install vCenter Server.

2. If you select embedded deployment as the deployment type, the correct installation sequence will be automatically executed.

Notice

A Platform Services Controller supports a maximum of eight vCenter instances. If the number exceeds this limit, you need to install an additional Platform Services Controller.

5) After confirming that the system name is correct (if there is no DNS or domain, it is recommended to use vCenter's static IP address; otherwise the Web Client will not open), click "Next"

Note: Make sure the FQDN or IP address provided does not change. The system name cannot be changed after deployment. If the system name changes, you must uninstall and reinstall vCenter Server.

6) Because this is the first installation, select "Create a new vCenter Single Sign-On domain", then enter the administrator password (the password should meet the complexity requirements, include uppercase letters, lowercase letters, special characters and numbers, and be longer than 8 characters), keep the rest as default, and click "Next"

7) Select "Specify User Service Account" here, then enter the account and password with service login privileges, and click "Next"

If the Windows Local System account is chosen to run the service here while also being used for the data source, an error is reported when the data source is set in the next step, and installation cannot continue

8) Select "Use external database", select the available data source created previously in DSN name, and click "Next"

9) All port numbers required for vCenter operation are listed. Keep the default values and click "Next"

10) It is recommended not to modify the installation path, just keep the default path and click "Next"

11) All the parameters set above are listed. After confirming that they are correct, click "Install" to start installing vCenter Server.

12) vCenter Server is being installed. The installation progress bar is displayed. The installation process takes about half an hour.

13) Click "Finish" to complete the installation of vCenter Server.

6. Basic configuration of vCenter through Web Client

Browser configuration requirements

Microsoft Internet Explorer 10 and 11.

Mozilla Firefox: The latest browser version, and the previous version at the time of the vSphere 6.0 release.

Google Chrome: The latest browser version, and the previous version at the time of the release of vSphere 6.0.

Adobe Flash Player 11.9 or later must also be installed.

It is recommended to install Chrome directly, which has good compatibility and does not require the installation of other plug-ins.

The browser that ships with Windows Server 2008 is Internet Explorer 8, so you must upgrade it first. After the upgrade is complete, continue with the following operations.

1. Turn off IE Enhanced Security Configuration (IE ESC)

After a server edition of Windows is installed, IE Enhanced Security Configuration is enabled by default. Every website you visit, and every file you download, triggers a prompt asking you to add the site to the trusted list, which is very troublesome. You can turn this feature off.

Open the "Start Menu", select "Administrative Tools", then open "Server Manager".

Go to the server management interface, then open "Configure IE ESC"

Set both options on the IE ESC configuration interface to disabled and click OK

Open IE again; it will note that IE ESC is not configured. Ignore the message and enter the address of the website you want to browse.

2. Install the vCenter Client Integration Plug-in

Open the browser, enter the vCenter address (https://IP) in the address bar, and click "Log in to vSphere Web Client".

If there is a problem with the security certificate, ignore it and click "Continue to browse this website"

Click "Download Client Integration Plug-in" and continue to install until the installation is complete.

3. Add data center and ESXi hosts

vCenter Server is the core console of the entire vSphere architecture. Functions that must be implemented through it include VM templates, permission control, vMotion, DRS, HA, FT, distributed vSwitches, Host Profiles, etc. Daily maintenance, as well as the subsequent vMotion, DRS, HA, and FT features, are all performed in this graphical interface. vCenter Server is managed through the vSphere Client or Web Client.

Some basic concepts

Data Center: The basic unit of vCenter, generally divided by the location of the computer room, is the highest level division unit of vCenter.

Cluster: Multiple ESXi servers form a cluster, which provides the advanced features. Usually, computers in the same computer room are placed in one DataCenter, and multiple ESXi servers providing the same function are placed in one cluster;

Host: refers to an ESXi host. A host can be added to a Cluster or directly to a DataCenter;

Virtual Machine (VM): can be placed in the HOST or the CLUSTER.

Folder: An abstract unit that can store one or more DataCenters. Multiple folders can also be created under a DataCenter.

1) Open the Web Client, enter the vCenter administrator account and password, click "Login", and log in to vCenter Server (Note: log in with the vsphere.local\administrator account, i.e. administrator@vsphere.local; otherwise you will not have the corresponding permissions after logging in)

2) Click "Hosts and Clusters"

3) Right-click the instance name of the vCenter Server and click "New Data Center" in the pop-up menu.

4) Right-click the data center and click "Add Host" in the pop-up menu.

5) Enter the IP address of the ESXi host

6) Enter the ESXi administrator account root and password, and click "Next"

7) If the error message "The certificate cannot be verified" appears, it will not affect the system. Click "Yes" to continue.

8) Host summary, just click "Next" to continue

9) You can assign the ESXi license key here, but we will not make any settings here for now. We will authorize ESXi and vCenter together later, so keep the default and click "Next"

10) Lockdown mode restricts direct management access to the ESXi host so that it can only be managed through vCenter. It is usually set to "Disabled". Click "Next"

11) Select the virtual machine location. Since this is a new environment and we have only established one data center, keep the default and click "Next".

12) Click Finish to start adding the ESXi host.

13) The ESXi host has been added to the vCenter Server. Repeat the above steps to add all ESXi hosts.

4. Adding redundant network cards

The basis of virtual machine fault tolerance is clustering. To manage and use clusters, you need "management network redundancy" and "at least two shared storage disks". Next, add a redundant network card to the management network of each ESXi host (Note: if the ESXi host is a blade server with only two network cards, redundant network cards cannot be configured), and add network storage disks to the ESXi hosts.

The default vSphere configuration provides a working network, but to protect against network interface failures, you need to create redundant networks.

vSphere networking consists of many layers, the lowest layer being the physical network card. The virtual switch is located above the physical network card layer. The first virtual switch vSwitch0 is installed by default. The usage of virtual switches is similar to that of physical switches in a physical network. This means that several virtual machines can be connected to one switch. An overview of the current configuration can be seen in the Network tab.

In a default installation, only one physical NIC is connected to the virtual switch. To ensure network redundancy, another physical NIC should be added to form a NIC team.

1) In the vSphere Web Client management interface, select a host on the left, select "Management" - "Network" - "Virtual Switch" on the right, select the existing virtual switch in the list, and click the button for managing the physical network adapters connected to the switch (shown as a toolbar icon).

2) In the "Assigned Adapters" dialog box, you can see that there is currently one network card. Click the "+" button to add another network card to the switch.

3) In the pop-up "Add Physical Adapter to Switch" dialog box, select the appropriate "Failover Order Group" according to the plan, and then select the network card to be added from the "Network Adapter" list. Select vmnic1 here, which is the second network card of the ESXi host (if you want to add multiple network cards, you can hold down the Shift key to select). If the physical network has been set up correctly, the required NIC will be in the same IP subnet as the NIC connected to the virtual switch.

4) Return to the "Assigned Adapters" dialog box, where you can see the added network card and the selected switching order.

5) After adding, return to the vSphere Web Client management console and click the refresh button. After refreshing, you can see that the current virtual switch now has two network cards, providing redundancy at the network interface level.

At this point you can test the network redundancy: from another workstation, continuously ping the management interface of the ESXi host, then physically unplug the network cable of one network card. You should see that ping replies continue without interruption, while the disconnected physical network card is marked as failed in the vSphere Client network interface.
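For reference, the same NIC-teaming change can also be made from the ESXi command line over SSH. This is a sketch to be run on the ESXi host itself; the vSwitch and vmnic names follow the example in the text:

```shell
# Add vmnic1 as a second uplink to the default vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Verify that the switch now lists both uplinks
esxcli network vswitch standard list --vswitch-name=vSwitch0
```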

5. Add Storage

The implementation of vSphere's advanced functions must be achieved through multiple physical network cards. However, this is only one aspect. More importantly, vSphere requires independent shared storage.

Why do we need independent storage? Let's look at the following figure. In the figure, servers A and B each have their own operating system installed, and the files are stored on their own hard disks. If any of servers A or B fails, the hard disk data will be lost. Servers C and D only have the operating system installed, and the data is stored in independent storage devices. If either server C or D fails, we can have the other server take over the application, repair the downed server and replace it, and the original data will not be lost. Of course, you can also run the application system on two servers at the same time to perform load balancing.

Common storage types include DAS, NAS, SAN, iSCSI, FC, etc. This deployment uses two types of storage: FC SAN and iSCSI. To connect to iSCSI storage, you need to add a storage adapter first. FC SAN uses a dedicated HBA card for the connection and can be used directly without adding an adapter.

5.1 Adding an iSCSI Dedicated Virtual Switch

Before connecting to iSCSI or NFS storage, you need to add a VMkernel network port to the virtual switch of the ESXi server. You can add VMkernel network ports to an existing virtual switch, or create a new independent virtual switch for iSCSI storage to get better performance. It is recommended to create a dedicated virtual switch for iSCSI storage.

First, add a dedicated iSCSI virtual switch and create a VMkernel port for software iSCSI.

Previously, FC SAN was added using the Web Client. This time, we will use the vSphere Client to add it.

Select the ESXi server that needs to connect to the iSCSI storage in the host list, and then click "Configuration" - "Network" - "Add Network" to open the Network Configuration Wizard

Select "VMkernel" as the connection type, and then click "Next" to continue

Use a separate network adapter as the iSCSI storage connection, select Create a vSphere Standard Switch, and use vmnic1 as the iSCSI storage connection adapter.

Note: The network card connected to the vSwitch must be at least a Gigabit adapter. The IP address can be in the same subnet as the iSCSI storage or in a different subnet, but it must be routable to the iSCSI storage.

Name the newly created virtual switch port/port group, and optionally enable vMotion support for the VMkernel port

Configure the VMkernel connection settings, enter the IP address (must be an IP address that can access the iSCSI storage), network mask, and configure the gateway, then click Next

The configuration information is displayed. After checking that it is correct, click "Finish"
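The steps above can also be scripted from the ESXi shell. A sketch under assumed names: the switch name vSwitch1, port group name iSCSI, interface vmk1, and the IP address are all examples and must match your own plan:

```shell
# Create a dedicated standard vSwitch for iSCSI and attach vmnic1 as uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Create a port group and a VMkernel interface on it
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI

# Assign a static IP that can reach the iSCSI storage (address is an example)
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
```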

5.2 Adding an iSCSI storage adapter

After configuring the network for iSCSI storage, you need to add iSCSI adapters to access the iSCSI storage. vSphere 6 does not have an iSCSI software adapter configured by default and must be added manually.

Select the ESXi server in the host list, then select the "Configuration" tab, select "Storage Adapters" in the configuration list, and click "Add...";

Select "Add Software iSCSI Adapter" and click "OK" to confirm the addition.

vSphere prompts that you need to configure it after adding the adapter to access the iSCSI target, just click "OK"
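The same software iSCSI adapter can be enabled from the ESXi shell — a sketch, run on the host over SSH:

```shell
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Confirm it is enabled and find the vmhba name it was assigned
esxcli iscsi software get
esxcli iscsi adapter list
```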

5.3 Configuring the Software iSCSI Initiator

In an iSCSI system, the iSCSI adapter on the host is essentially an iSCSI initiator, and some necessary configuration is required for it to discover the iSCSI target (usually the iSCSI storage).

To use iSCSI storage, you must first configure an iSCSI target. Select the host, click Configuration - Storage Adapter, select the iSCSI initiator to be configured in the storage adapter list, and then click Properties.

In the "General" tab, click "Configuration"

In the General Properties dialog box that opens, the initiator's status, default name, and alias are displayed. Check "Enabled" and click "OK" to enable the initiator.

There are two ways to discover iSCSI targets, dynamic discovery and static discovery. Dynamic discovery is recommended. Switch to the "Dynamic Discovery" tab and click "Add..."

Enter the host name or IP address of the iSCSI storage server and the listening port. The default iSCSI port is 3260. If you need to modify this port, pay attention to the corresponding settings on the firewall to ensure that the port is available.

The iSCSI server (storage access portal) has been added. You can click the "Add..." button to continue adding more. The added iSCSI server address will be displayed in the target list.

After adding the iSCSI target, vSphere prompts you to scan devices to discover iSCSI storage.

Currently, the storage cannot be discovered because the iSCSI storage itself has not been configured yet. The host's identifier (the WWN for FC, or the IQN for iSCSI) needs to be given to the storage engineer, and the corresponding LUNs need to be mapped to the host (or host group) before they can be discovered by vSphere. The configuration method for the Huawei S5500T and 2600 storage used in this deployment is introduced in the appendix.

After entering the specified target address in dynamic discovery, switch to the "Static Discovery" tab and you will see the newly discovered target name. Select the target to be configured and click "Set..."

Click "CHAP(C)..."

Select "Do not use CHAP" and then click "OK"

Click "Close" to close the two previously opened windows. The system prompts that the adapter bus has changed. Click "Yes (Y)" to rescan.

After the scan is complete, the newly discovered LUNs will be displayed in the device list.
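Dynamic discovery and the rescan can also be performed from the ESXi shell. In this sketch, the adapter name vmhba33 and the storage portal address are placeholders — substitute the vmhba name from your adapter list and your own storage IP:

```shell
# Add a dynamic (SendTargets) discovery address on the software iSCSI adapter
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.100:3260

# Rescan the adapter so newly mapped LUNs are detected, then list devices
esxcli storage core adapter rescan --adapter=vmhba33
esxcli storage core device list
```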

5.4 Create VMFS data storage

Before creating a data store, you must install and configure all adapters required for the storage.

The steps to add a data store are as follows:

Select the host and click "Add Storage"

Since iSCSI storage is added, select "Disk/LUN" as the storage type.

Select the storage to be added, click Next

Displays information about the disk to be added

If the added disk is a blank disk, the current disk layout will automatically display the entire disk space for storage configuration

If the disk is not empty, check the current disk layout in the top panel of the Current Disk Layout page and select the Configure option from the bottom panel

• Use all available partitions: Dedicate the entire disk or LUN to a single VMFS datastore. If this option is selected, all file systems and data currently stored on this disk will be deleted.

• Use free space: Deploy the VMFS datastore in the remaining free disk space.

The disk added here is a blank disk, just click the next step.

Enter the name of the data store you want to add, and click Next.

Specify the size and capacity of the data store.

The disk layout information is displayed. After confirming that it is correct, click "Finish"

Repeat the above steps to add other storage LUNs.

When there are multiple VMware ESXi hosts in the network, you only need to create the shared datastore once on one host. For disks connected to the same storage server, the other hosts will see the datastore automatically after a storage rescan.
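From the ESXi shell, the rescan on the other hosts is one command, and a VMFS5 datastore can even be created without the GUI. A sketch — the datastore label Data01 is an example, and the `naa.` device ID and partition number are placeholders for an existing, correctly partitioned LUN:

```shell
# On the other hosts, a rescan makes the shared datastore appear
esxcli storage core adapter rescan --all

# Create a VMFS5 datastore on partition 1 of a LUN (device ID is a placeholder)
vmkfstools -C vmfs5 -S Data01 /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```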

5.5 Configuring Multipathing for Storage

Before configuring storage multipathing, first ensure that multiple paths can be found when viewing storage device information.

Select the host, then click "Management" - "Storage" - "Storage Device", then select the storage device that needs to configure multipath, and click "Edit Multipath" in the "Properties" column below.

Select the appropriate "Path Selection Strategy" as needed and click OK.

Rules for path selection strategy:

Most recently used (VMware): Under this policy, the host uses the path most recently used; when a path is unavailable, the host chooses to use an alternative path, and when the path is restored to normal, the host does not return to the original path;

Round Robin (VMware): Under this policy, the VMware host automatically uses all active paths in turn according to a certain algorithm to achieve load balancing between different LUN paths;

Fixed (VMware): Under this policy, the VMware host uses the specified preferred path (if specified), otherwise it automatically uses the first iSCSI path found when the system boots;
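These three policies correspond to the NMP path selection plugins VMW_PSP_MRU, VMW_PSP_RR, and VMW_PSP_FIXED, and can also be inspected and changed from the ESXi shell. A sketch; the device ID is a placeholder taken from your own device list:

```shell
# List devices with their current path selection policy
esxcli storage nmp device list

# Switch a device to Round Robin (VMW_PSP_MRU / VMW_PSP_FIXED are the others)
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
```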

6. Configure NTP server for hosts and virtual machines

In a virtual architecture, since services depend on the servers, keeping network time synchronized and consistent is very important. For ESXi hosts, you can set up NTP time synchronization using the vSphere Client.

There are many reasons why you might want to synchronize your ESXi hosts. For example, if the host is integrated with Active Directory, it will take time to synchronize. Time consistency is also required when creating and retrieving snapshots, because snapshots store a real-time image of the server state. Setting up network time synchronization using the vSphere Client is very simple.

vSphere network time synchronization process:

To configure NTP synchronization, select the host, select "Time Configuration" on the "Configuration" list, and you can see the existing time synchronization on the host.

Next, click "Properties" and the "Time Configuration" window will pop up, where you can see the current time of the host. Make sure it does not deviate too much from the actual time: if the host clock is off by more than about 1000 seconds, ntpd will refuse to synchronize, so set the time roughly correct first.

After setting the local time for the host, check "NTP Client Enabled". Then click "Options..." and open "NTP Daemon (ntpd) Options".

Now, you need to select the NTP server with which the VMware ESXi host should synchronize. Click "NTP Settings" to see the current NTP server list; by default it is empty. Click "Add" to add the name or address of the NTP server to be used (it must be reachable by ping from the host). The interface asks for an address, but you can also enter a hostname that resolves through DNS. Multiple NTP servers can be added for one host.

If you don't know which NTP server to use for VMware network time synchronization, Internet NTP servers from the ntp.org pool can also be used. You only need to select one server from this group to add to the NTP server list. If you want to synchronize with an internal or proprietary NTP server, you should specify at least two NTP servers.

After adding the NTP server, check "Restart NTP service to apply the changes" below.

Then click "General", select the appropriate startup strategy on the right, and then click "Start" to start the NTP client service process. Click "OK" to close the NTP time setting window.

At this time, on the configuration screen of the ESXi host, you can now see the NTP Client running, and it can also display the list of NTP servers currently used by the host.
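On the host itself, the result of this GUI configuration can be checked from the ESXi shell — the server list is written to /etc/ntp.conf, and the daemon can be restarted there as well:

```shell
# View the NTP servers configured through the GUI
cat /etc/ntp.conf

# Restart the NTP daemon so a changed server list takes effect
/etc/init.d/ntpd restart
```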

Then you need to enable time synchronization for the virtual machine. The steps are as follows:

Right click on the virtual machine and select "Edit Settings"

Switch to "VMware Tools" in "Options" and check "Synchronize guest time with host time".
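Inside a Linux guest with VMware Tools (or open-vm-tools) installed, the same time-sync switch can be toggled from the guest's shell:

```shell
# Enable host-to-guest time synchronization from inside the guest
vmware-toolbox-cmd timesync enable

# Check the current state (reports enabled/disabled)
vmware-toolbox-cmd timesync status
```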

With the ESXi hosts synchronized to the correct time, all services and events that depend on time can function normally. Most importantly, you won’t waste time and energy fixing misconfigured network events.

7. Assign license keys

1. Return to the vCenter homepage and click the "License" icon in the right column

2. Open the "License" tab and click the add license key button (shown as a toolbar icon)

3. Enter the available ESXi and vCenter license keys and click "Next"

4. You can set a name for the license key to be imported. Here we keep the default and click "Next"

5. After confirmation, click "Finish" to import the license key into the system

6. After the import is complete, you can see the details of the license key in the "License" tab

7. Open the vCenter Server Systems tab in the Assets tab, select a vCenter instance, then click All Actions, and click Assign License in the drop-down menu.

8. Select the newly imported license key and click OK to complete the key assignment for vCenter

9. Follow the above steps (step 8) to complete the license key assignment for the ESXi host

8. Add an authorized administrator

Log in to the vCenter management center, click "Permissions", and click "Add Permission".

The dialog box that pops up will list the current local users. Select the user you want to add and click "Add" - "OK"

If there is no suitable local user, you can create one on the local machine where vCenter is installed and then add it again.

The next step is to assign roles and permissions to the user. vSphere permissions are divided into very detailed categories. Assign the corresponding roles and permissions to the user as needed and click "OK"

Delegation of authority completed

After completion, you can test whether the user can log in normally and has the corresponding permissions.

7. Cluster configuration and HA (High Availability)

High availability (HA) and fault tolerance (FT) are among the most important parts of vSphere. HA stands for High Availability; it is not unique to vSphere and is used for service continuity and data security. HA is a cluster feature based on a group of ESXi servers. Its main purpose is to restart virtual machines on other hosts in time when the host running them fails, avoiding long downtime. FT goes further and keeps the virtual machine from stopping at all: the virtual machine runs on two hosts at the same time, which greatly enhances business continuity.

1. Cluster functions

In the vSphere environment, high availability and hot standby are implemented based on clustering. A cluster is a collection of multiple ESXi hosts (vSphere 6.0 supports up to 64 ESXi hosts in a cluster), and the resources of all ESXi hosts in this collection are pooled. All virtual machines can be moved freely on any host in the pool (note that the hosts in the cluster must have shared storage and all virtual machines and their configuration files must reside on the shared storage). The purpose of a cluster is to distribute the computer's burden to multiple hosts, or when a physical server running a service fails, the virtual machines running on this server are automatically migrated to other available ESXi servers, thereby ensuring uninterrupted business operations.

VMware vSphere cluster functions are divided into five categories:

HA cluster: high availability. No special software needs to be installed in applications or virtual machines; all workloads are protected by vSphere HA. When an unexpected failure of a host in the cluster is detected, the virtual machines previously running on the failed host are automatically restarted on other hosts. When you create a vSphere HA cluster, one host is automatically elected as the master host. The master host communicates with vCenter Server and monitors the status of all protected virtual machines and of the slave hosts. Different types of host failure can occur, and the master host must detect and handle them accordingly: it must be able to distinguish a failed host from one that is in a network partition or isolated from the network. The master host uses network and datastore heartbeats to determine the type of failure.

DRS cluster: Distributed resource scheduling, used to dynamically adjust the ESX host load in the cluster, automatically migrate virtual machines on heavily loaded hosts to lightly loaded hosts through VMotion, and ultimately achieve a balanced consumption of host resources in the entire cluster.

DPM cluster: Distributed power management, used to dynamically "centralize" virtual machines to a small number of hosts in the cluster when the load is light, and then put other ESX/ESXi hosts on standby to save power consumption. When the load is heavy, the previously standby hosts will be reawakened.

EVC: Enhanced vMotion, helps ensure vMotion compatibility of hosts within a cluster. EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts are different. Use EVC to avoid migration failures with vMotion due to CPU incompatibility.

vSAN: Centrally manages the internal disks and flash devices of x86 servers to enable shared storage for virtual machines.

2. Create a new HA cluster

HA can monitor the physical hosts and, when a blue screen or physical failure occurs, automatically restart the affected virtual machines on a healthy host to keep the business running (the in-memory (RAM) state is lost in the process).

vSphere also provides vMotion technology, which enables "hot migration of virtual machines": a running virtual machine can be migrated online between hosts, changing its host location seamlessly. The biggest feature is that "the virtual machine's applications are not interrupted during the entire migration", which means a virtual machine can be moved from one ESXi host to another in the same cluster without shutting it down. This makes it very convenient to maintain an ESXi host without affecting the business.

Follow the steps below to create a new HA cluster

1) Right-click on the data center and click "New Cluster" in the pop-up menu.

2) Enter a name for the cluster, check vSphere HA, configure the HA options according to actual needs, and do not create other advanced features (DRS, EVC, vSAN) for now. Then click "OK" to complete the cluster creation.

3) Right-click the cluster and click "Move Host into Cluster" in the pop-up menu.

4) Select the host to be added to the cluster and click OK.

5) Add the ESXi hosts to their respective clusters. There are two ways to add ESXi hosts to a cluster, one is the wizard method, and the other is the drag-and-drop method. After all the additions are completed, the establishment of the VMware HA cluster is complete.

Appendix 1: Names and IP addresses of commonly used NTP servers

210.72.145.44 (National Time Service Center server IP address)

133.100.11.8 Fukuoka University, Japan

time-a.nist.gov 129.6.15.28 NIST, Gaithersburg, Maryland

time-b.nist.gov 129.6.15.29 NIST, Gaithersburg, Maryland

time-a.timefreq.bldrdoc.gov 132.163.4.101 NIST, Boulder, Colorado

time-b.timefreq.bldrdoc.gov 132.163.4.102 NIST, Boulder, Colorado

time-c.timefreq.bldrdoc.gov 132.163.4.103 NIST, Boulder, Colorado

utcnist.colorado.edu 128.138.140.44 University of Colorado, Boulder

time.nist.gov 192.43.244.18 NCAR, Boulder, Colorado

time-nw.nist.gov 131.107.1.10 Microsoft, Redmond, Washington

nist1.symmetricom.com 69.25.96.13 Symmetricom, San Jose, California

nist1-dc.glassey.com 216.200.93.8 Abovenet, Virginia

nist1-ny.glassey.com 208.184.49.9 Abovenet, New York City

nist1-sj.glassey.com 207.126.98.204 Abovenet, San Jose, California

nist1.aol-ca.truetime.com 207.200.81.113 TrueTime, AOL facility, Sunnyvale, California

nist1.aol-va.truetime.com 64.236.96.53 TrueTime, AOL facility, Virginia

------------------------------------

ntp.sjtu.edu.cn 202.120.2.101 (NTP server address of Shanghai Jiao Tong University Network Center)

s1a.time.edu.cn Beijing University of Posts and Telecommunications

s1b.time.edu.cn Tsinghua University

s1c.time.edu.cn Peking University

s1d.time.edu.cn Southeast University

s1e.time.edu.cn Tsinghua University

s2a.time.edu.cn Tsinghua University

s2b.time.edu.cn Tsinghua University

s2c.time.edu.cn Beijing University of Posts and Telecommunications

s2d.time.edu.cn Southwest China Network Center

s2e.time.edu.cn Northwest Network Center

s2f.time.edu.cn Northeast China Network Center

s2g.time.edu.cn Southeast China Network Center

s2h.time.edu.cn Sichuan University Network Management Center

s2j.time.edu.cn Dalian University of Technology Network Center

s2k.time.edu.cn CERNET Guilin master node

s2m.time.edu.cn Peking University

2.cn.pool.ntp.org

3.asia.pool.ntp.org

2.asia.pool.ntp.org

Appendix 2: Huawei S5500T storage mapping configuration

1 Create LUN

2 Create hosts and host groups

3 Map the LUN to a host or host group

Appendix 3: Solutions to the yellow exclamation mark warnings

1. This host currently has no management network redundancy

After vSphere completes the cluster HA configuration, the host summary prompts "This host currently has no management network redundancy". If no redundant network is available due to deployment environment restrictions (for example, although a blade server has two network cards, they cannot provide vSphere redundancy because vSphere cannot detect external network interruptions), you can only suppress this warning using the following method.

Right-click the cluster and select Edit Settings.

Select vSphere HA and click Advanced Options.

Double-click in "Option" and enter "das.ignoreRedundantNetWarning", double-click in "Value" and enter "true", then click "OK" to exit.

Right-click the host in the Cluster and select Reconfigure vSphere HA. After the reconfiguration is complete, this warning disappears.

2. The number of vSphere HA heartbeat datastores for this host is 0, which is less than the required number: 2

Starting with vSphere 5.0, datastore heartbeating was added to the HA feature. This error occurs when only one datastore is configured in the deployment environment. Here is how to suppress this warning:

• Right-click the cluster and select "Edit Settings".

• Select "vSphere HA" and click "Advanced Options".

• Double-click "Option" and enter "das.ignoreInsufficientHbDatastore"; double-click "Value" and enter "true".

• Click "OK" to exit.

There is no need to reconfigure HA, the warning will disappear automatically.

Note: datastore heartbeating

When the master host in a vSphere HA cluster cannot communicate with a slave host over the management network, the master host uses datastore heartbeats to determine whether the slave host has failed, is in a network partition, or is isolated from the network. If a slave host has stopped heartbeating the datastore, it is considered to have failed and its virtual machines are restarted elsewhere.

3. A deprecated VMFS volume was found on the host

Alert content: A deprecated VMFS volume was found on the host. Please consider upgrading your volume to the latest version

I encountered several of these false alarms and carefully checked all mounted storage: everything was already VMFS5, with no lower version and nothing to upgrade. After checking, this turned out to be a bug in vSphere 6.0, documented in VMware KB 2115558.

In a vSphere 6 environment without VMFS-3 volumes, you experience the following symptoms:

• The ESXi host displays a false positive warning:

A deprecated VMFS volume was found on the host. Please consider upgrading volume(s) to the latest version.

Solution: Restart the management agent. There are two ways.

The first way is to go to the ESXi host console, press F2, log in as root, and select "Restart Management Agents".

The second way is to log in to the ESXi host through SSH. The specific steps are as follows:

• Use the Client to log in to vCenter, find the host, select "Security Profile" under its "Configuration" tab, and click "Properties" next to "Services" on the right.

• Select "SSH" in the list, click "Options", and then click "Start".

• Use an SSH tool to log in to the host as root.

• Run /etc/init.d/hostd restart

• Run /etc/init.d/vpxa restart

If the host shows as disconnected in vCenter, right-click the host and click "Connect".

• Refer to the previous steps to shut down the SSH service again.
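The two restart commands from the steps above can be run in one SSH session; the sketch below assumes you are logged in to the ESXi host as root:

```shell
# Restart the host agent and the vCenter agent
/etc/init.d/hostd restart
/etc/init.d/vpxa restart

# Alternatively, services.sh restart restarts all management services at
# once, but briefly disconnects the host from vCenter
```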

Appendix 4: Using TeamViewer on a Server Operating System

TeamViewer offers free licenses for personal or non-commercial use, which can be used in perpetuity. However, for corporate or commercial use, only a 7-day trial period is provided, and it cannot be used after the trial period ends.

To install TeamViewer on a non-server operating system such as Windows XP, select "Personal/Non-commercial use" in the "Environment" installation step, and check the "I agree to use TeamViewer only for non-commercial and personal purposes" option in the "License Agreement" installation step.

When installing TeamViewer on a Windows 2003 Server, you are not allowed to select "Personal/non-commercial use" and are prompted that "Only business users" can use TeamViewer for server operations. That is to say: if it is installed on a server operating system such as Windows 2003 Server, it is a business user.

To install on a server operating system for "personal/non-commercial use", do the following before installation: right-click the TeamViewer installer, open its Properties, and on the Compatibility tab set the compatibility mode to Windows XP. Then double-click the installer and choose "Personal/Non-commercial use".

If you have already installed TeamViewer on a server operating system such as Windows Server 2003, you will not be able to use it after the 7-day trial period ends, even if you uninstall and reinstall it. This is because TeamViewer generates a fixed ID based on the computer's MAC address and stores it on TeamViewer's cloud servers; reinstalling on the same computer produces the same ID. Once the trial expires, the expiration record is kept on the cloud server, so even after reinstalling TeamViewer, or even the operating system, the software still uses the same ID for online verification.

Overcoming this limitation is actually very simple: just change the machine's MAC address. In the network connection's Properties, click "Configure" on the General tab, and on the Advanced tab set the "Locally Administered Address" (sometimes labeled "Network Address") value. After changing the MAC address, install TeamViewer using the method described earlier in this article.
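When choosing a new value for "Network Address", a common convention is to set the locally-administered bit (0x02) in the first octet of the existing MAC, which keeps the address a valid unicast MAC while making it distinct from the burned-in one. A minimal sketch (the function name and the sample address are illustrative, not from the original article):

```shell
# Sketch (assumption: colon-separated MAC like 00:1a:2b:3c:4d:5e).
# Sets the locally-administered bit (0x02) of the first octet, yielding
# a valid unicast MAC that differs from the hardware address.
to_locally_administered() {
  local mac="$1" first rest
  first="${mac%%:*}"   # first octet, e.g. "00"
  rest="${mac#*:}"     # remaining five octets
  printf '%02x:%s\n' "$(( 0x$first | 0x02 ))" "$rest"
}

to_locally_administered "00:1a:2b:3c:4d:5e"
```

Enter the resulting octets (without colons, on Windows) into the "Network Address" field described above.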

This concludes this article on VMware vSphere 6.0 server virtualization deployment and installation. For more on VMware vSphere 6.0 deployment and installation, please search 123WORDPRESS.COM's previous articles or browse the related articles below. I hope you will continue to support 123WORDPRESS.COM!

You may also be interested in:
  • VMware vsphere 6.5 installation tutorial (picture and text)
  • VMware vSphere 6.7 (ESXI 6.7) graphic installation steps
  • Vmware vSphere 5.0 installation and configuration method graphic tutorial
  • Vmware vSphere Client installation virtual machine graphic tutorial
