Solving low NFS write performance on an AIX client

Services provided by NFS

Mount: the server runs the /usr/sbin/rpc.mountd daemon and the client uses the mount command. mountd is an RPC server that answers the client's mount requests.

Remote file access: the server runs /usr/sbin/nfsd and the client runs /usr/sbin/biod to handle file requests. When a user on the client reads or writes a file on the server, the biod daemon forwards the request to the server.

Boot parameters: the /usr/sbin/rpc.bootparamd daemon on the server provides boot parameters for diskless SunOS clients.

PC authentication: /usr/sbin/rpc.pcnfsd on the server provides user authentication for PC-NFS.

The NFS service is stateless: each NFS request is atomic, and a single request corresponds to a single complete file operation.

Background:

Linux is the NFS server and AIX is the NFS client (a comparison test with a Linux client is also included).

1. The underlying device behind the NFS export is a flash card; local dd write tests reach about 2 GB/s;

2. The server has a Gigabit NIC; FTP transfers reach about 100 MB/s;

3. AIX mounts the NFS export successfully, but the dd write test runs at only about 10 MB/s;

4. Linux mounts the same export, and the same dd write test reaches about 100 MB/s;

Note: these figures mainly illustrate order-of-magnitude differences; actual tests will deviate slightly.

Specific environment:

  • NFS Server: RHEL 6.8
  • NFS Client: AIX 6.1, RHEL 6.8

The mounting parameters are configured according to the MOS document:

Mount Options for Oracle files for RAC databases and Clusterware when used with NFS on NAS devices (Doc ID 359515.1)

Trimmed to the needs of this scenario, the parameters to configure are:

--MOS Recommendations (AIX):
cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600

--MOS Recommendations (Linux):
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

AIX NFS mount parameters:

mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts

Mounting directly fails with the following error:

# mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts
mount: 1831-008 giving up on:
10.xx.xx.212:/xtts
vmount: Operation not permitted.

Research confirms that AIX needs an extra NFS tunable: by default the AIX client does not use reserved (privileged, below 1024) source ports, while the Linux NFS server's default `secure` export option rejects requests from non-reserved ports. Setting the tunable with `nfso -p` applies it immediately and also persists it across reboots:

# nfso -p -o nfs_use_reserved_ports=1

Mounting again now succeeds:

mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts

The dd write speed is very unsatisfactory, under 20 MB/s (800 MiB in about 43 seconds):

--test performance; AIX NFS
# time dd if=/dev/zero of=/xtts/test-write bs=8192 count=102400
102400+0 records in.
102400+0 records out.

real 0m43.20s
user 0m0.79s
sys 0m5.28s
# time dd if=/xtts/test-write of=/dev/null bs=8192 count=102400
102400+0 records in.
102400+0 records out.

real 0m30.86s
user 0m0.84s
sys 0m5.88s
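The effective write rate can be derived from dd's own numbers: 102400 blocks of 8192 bytes is 838,860,800 bytes, written in 43.20 s of wall-clock time. A quick check (plain awk arithmetic, nothing AIX-specific):

```shell
# Effective AIX NFS write rate: bytes written / elapsed wall-clock time.
# 102400 blocks x 8192 bytes = 838860800 bytes, written in 43.20 s.
awk 'BEGIN { printf "%.1f MB/s\n", 838860800 / 43.20 / 1000000 }'
# prints "19.4 MB/s"
```

So the single stream runs at roughly 19 MB/s, an order of magnitude below both the link capacity and the storage capability.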

All parameters were set according to the MOS recommendations and the actual requirements, so what could be wrong? Several variations were tried:

  • Removing the cio option: results almost unchanged;
  • Removing the hard option: results almost unchanged;
  • Switching the protocol from TCP to UDP: results almost unchanged.

Having tried nearly every plausible parameter without improvement, we were about to pull in a host engineer to troubleshoot.

At that moment a possibility struck me: could NFS on AIX be limiting the I/O throughput of a single process? To test this guess, run dd in parallel:

Open 5 windows and start dd at the same time:

time dd if=/dev/zero of=/xtts/test-write1 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write2 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write3 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write4 bs=8192 count=102400
time dd if=/dev/zero of=/xtts/test-write5 bs=8192 count=102400
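Instead of five terminal windows, the same test can be scripted with background jobs. A minimal sketch — the output directory and the small count=1024 are placeholders for a quick local run; point OUT_DIR at the NFS mount and use count=102400 to reproduce the real test:

```shell
#!/bin/sh
# Launch several dd writers in parallel and wait for all of them.
OUT_DIR=${OUT_DIR:-/tmp/nfs-par-test}   # placeholder; use /xtts for the real test
JOBS=5
mkdir -p "$OUT_DIR"
i=1
while [ "$i" -le "$JOBS" ]; do
    # each job writes its own file, exactly like the five windows above
    dd if=/dev/zero of="$OUT_DIR/test-write$i" bs=8192 count=1024 2>/dev/null &
    i=$((i + 1))
done
wait    # block until every background writer has finished
ls -l "$OUT_DIR"
```

Timing the script as a whole with `time` then gives the aggregate elapsed time directly.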

To my pleasant surprise, all five jobs finished together in about 55 seconds: 5 × 800 MiB = 4000 MiB in 55 s, roughly 72 MB/s in aggregate. This parallel approach already meets the efficiency requirement.

It appears that with enough parallel streams, throughput would approach the ~100 MB/s practical limit of the Gigabit link.
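The arithmetic behind those two figures — the aggregate of the parallel run, and the raw line rate of a Gigabit NIC:

```shell
# Aggregate of 5 parallel dd jobs (800 MiB each, all done in 55 s),
# and the raw ceiling of a Gigabit link (1000 Mbit/s over 8 bits/byte).
awk 'BEGIN {
    printf "aggregate:    %.1f MiB/s\n", 5 * 800 / 55
    printf "link ceiling: %.0f MB/s\n", 1000 / 8
}'
# prints:
#   aggregate:    72.7 MiB/s
#   link ceiling: 125 MB/s
```

In practice, TCP and NFS protocol overhead keep sustained throughput closer to 100 MB/s than to the 125 MB/s raw ceiling.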

P.S. The same NFS export mounted on another Linux server reaches 100 MB/s with a single dd stream, no parallelism needed. This was the fact that had anchored my thinking earlier.

Linux NFS mount parameters:

# mount -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 10.xx.xx.212:/xtts /xtts

Linux NFS test results:

--test performance; Linux NFS
# dd if=/dev/zero of=/xtts/test-write bs=8192 count=102400
102400+0 records in
102400+0 records out
838860800 bytes (839 MB) copied, 6.02451 s, 139 MB/s
# dd if=/xtts/test-write of=/dev/null bs=8192 count=102400
102400+0 records in
102400+0 records out
838860800 bytes (839 MB) copied, 8.55925 s, 98.0 MB/s

I am not very familiar with AIX and did not dig further into the underlying cause. The main confusion while troubleshooting was that a Linux client reached 100 MB/s without any parallelism, which trapped me in a fixed pattern of thinking. The lesson: sometimes you have to think outside the box to make a breakthrough.

Finally, here is the local test run on the NFS server itself, a tribute to the I/O capability of the flash card:

# dd if=/dev/zero of=/dev/test-write2 bs=8192 count=1024000
1024000+0 records in
1024000+0 records out
8388608000 bytes (8.4 GB) copied, 4.19912 s, 2.0 GB/s

Summary

That is the full content of this article. I hope it offers some reference value for your study or work. If you have any questions, leave a comment. Thank you for your support of 123WORDPRESS.COM.
