Solution to the low writing efficiency of AIX mounted NFS

Services provided by NFS

Mount: Enables mounting by running the /usr/sbin/rpc.mountd daemon on the server and using the mount command on the client. The mountd daemon is an RPC service that responds to clients' mount requests.

Remote file access: Handles client file requests through /usr/sbin/nfsd on the server and /usr/sbin/biod on the client. When a user on the client reads or writes a file on the server, the biod daemon sends the request to the server.

Boot parameters: Provides boot parameters for diskless SunOS clients by enabling the /usr/sbin/rpc.bootparamd daemon on the server.

PC authentication: Provides user authentication for PC-NFS by starting /usr/sbin/rpc.pcnfsd on the server.

NFS is a stateless service; that is, each NFS transmission is atomic, and a single NFS transmission corresponds to a single, complete file operation.
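As a quick aside (not part of the original write-up), which of these RPC services a given server actually registers can be checked from any client with rpcinfo; the hostname below is only a placeholder:

# rpcinfo -p <nfs-server-host>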

Background:

Linux is the NFS server and AIX is the NFS client (in addition, a comparison test with Linux as the client is also included).

1. The underlying device behind the NFS export is a flash card; a local I/O write test reaches about 2GB/s;

2. The server has a Gigabit network card; an FTP transfer test reaches about 100MB/s;

3. AIX mounts the NFS share successfully, but a dd write test reaches only about 10MB/s;

4. Linux mounts the same NFS share successfully, and the same dd write test reaches about 100MB/s;

Note: the above figures mainly reflect differences in order of magnitude; actual tests will show slight deviations.

Specific environment:

  • NFS Server: RHEL 6.8
  • NFS Client: AIX 6.1, RHEL 6.8

The mounting parameters are configured according to the MOS document:

Mount Options for Oracle files for RAC databases and Clusterware when used with NFS on NAS devices (Doc ID 359515.1)

Based on the actual requirements this time, the mount options to configure were narrowed down to:

--MOS Recommendations (AIX):
cio,rw,bg,hard,nointr,rsize=32768,
wsize=32768,proto=tcp,noac,
vers=3,timeo=600

--MOS Recommendations (Linux):
rw,bg,hard,nointr,rsize=32768,
wsize=32768,tcp,actimeo=0,
vers=3,timeo=600

AIX NFS mount parameters:

mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts

Mounting it directly fails with the following error:

# mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts
mount: 1831-008 giving up on:
10.xx.xx.212:/xtts
vmount: Operation not permitted.

Some research confirmed that AIX needs an additional NFS option to be set:

# nfso -p -o nfs_use_reserved_ports=1
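The -p flag makes the change apply both to the running system and after a reboot; the current value can be checked with standard nfso usage:

# nfso -o nfs_use_reserved_ports

For context, the vmount: Operation not permitted error typically appears because the AIX client by default sends NFS requests from a non-reserved port, while a Linux export refuses such requests unless it is exported with the insecure option; nfs_use_reserved_ports=1 makes the client use a reserved port instead.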

Retrying the mount then succeeds:

mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts
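To make this mount persist across reboots on AIX, the same options could be put in an /etc/filesystems stanza. The snippet below is only a sketch of the standard stanza format with the values used above; verify it against existing entries on your system:

/xtts:
        dev       = "/xtts"
        vfs       = nfs
        nodename  = 10.xx.xx.212
        mount     = true
        options   = cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600
        account   = false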

The speed of the dd test is very unsatisfactory, only 10MB/s:

--test performance; AIX NFS
# time dd if=/dev/zero of=/xtts/test-write bs=8192 count=102400
102400+0 records in.
102400+0 records out.

real 0m43.20s
user 0m0.79s
sys 0m5.28s
# time dd if=/xtts/test-write of=/dev/null bs=8192 count=102400
102400+0 records in.
102400+0 records out.

real 0m30.86s
user 0m0.84s
sys 0m5.88s
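The AIX dd output above does not include a throughput line, so for reference: 102400 blocks * 8192 bytes = 838860800 bytes (about 839MB), which works out to roughly 839MB / 43.2s ≈ 19MB/s for the write and 839MB / 30.9s ≈ 27MB/s for the read, well below both the local flash performance and the Gigabit network limit.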

All the parameters were set according to our actual needs and in line with the MOS recommendations, so where was the problem?

  • I tried removing the cio option and re-testing; the results were almost unchanged;
  • I tried removing the hard option and re-testing; the results were almost unchanged;
  • I tried changing the protocol from TCP to UDP; the results were almost unchanged.

We tried almost all the likely parameters, but the results were still not ideal, and we were about to coordinate resources and bring in an AIX host engineer to troubleshoot the problem.

At that moment, inspiration struck and a possibility occurred to me: could NFS on AIX be limiting the I/O throughput of a single process? With this guess, I ran parallel tests:

Open 5 windows and start dd at the same time:

time dd if=/dev/zero of=/xtts/test-write1 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write2 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write3 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write4 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write5 bs=8192 count=102400
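Equivalently, instead of five terminal windows, the same parallel test can be launched from a single shell. This is just a sketch; the file names match the commands above:

# launch the five writes in the background, then wait for them all to finish
for i in 1 2 3 4 5
do
  dd if=/dev/zero of=/xtts/test-write$i bs=8192 count=102400 &
done
wait

Wrapping the whole loop in time (or noting the start and end timestamps) gives the aggregate elapsed time.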

To my pleasant surprise, all five windows finished at roughly the same time, in about 55 seconds. That is 800MB * 5 = 4000MB written in 55 seconds, or roughly 72MB/s in aggregate. This parallel approach already meets the need to improve efficiency.

And it seems that by continuing to open more windows for parallel tests, we can basically reach the 100MB/s network limit (imposed by the Gigabit network card).

P.S. When the same NFS export is mounted on another Linux server, the dd write speed reaches 100MB/s without any parallelization. This was also the factor that had misled my thinking earlier.

Linux NFS mount parameters:

# mount -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts
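For a persistent mount on the Linux client, the same options could also be placed in /etc/fstab; this is just a sketch using the options above:

10.xx.xx.212:/xtts  /xtts  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600  0 0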

Linux NFS test results:

--test performance; Linux NFS
# dd if=/dev/zero of=/xtts/test-write bs=8192 count=102400
102400+0 records in
102400+0 records out
838860800 bytes (839 MB) copied, 6.02451 s, 139 MB/s
# dd if=/xtts/test-write of=/dev/null bs=8192 count=102400
102400+0 records in
102400+0 records out
838860800 bytes (839 MB) copied, 8.55925 s, 98.0 MB/s

I am not very familiar with AIX and did not dig further into the underlying mechanism. The main source of confusion while troubleshooting was that, with Linux as the client, the dd test reached 100MB/s without any parallelization, which trapped me in a fixed way of thinking. The lesson from this incident: sometimes you have to think outside the box in order to make a breakthrough.

Finally, here are the results of a local test on the NFS server, to show my admiration for the I/O capability of the flash card:

# dd if=/dev/zero of=/dev/test-write2 bs=8192 count=1024000
1024000+0 records in
1024000+0 records out
8388608000 bytes (8.4 GB) copied, 4.19912 s, 2.0 GB/s

Summary

The above is the full content of this article. I hope it offers some reference value for your study or work. If you have any questions, feel free to leave a message. Thank you for your support of 123WORDPRESS.COM.
