Methods for optimizing Oracle database with large memory pages in Linux

Preface

The PC server has come a long way and made great strides in performance. 64-bit CPUs appeared in ordinary home PCs years ago, let alone in higher-end PC servers. Thanks to the efforts of the two processor giants, Intel and AMD, the processing power of x86 CPUs has improved continuously, and advances in manufacturing have steadily raised the memory capacity a PC server can hold: machines with tens of GB of memory are now everywhere. It is this hardware progress that has made PC servers ever more powerful. In terms of stability, the combination of a PC server and the Linux operating system can also satisfy the stability and reliability that important business systems require. As for cost, to quote a netizen who works for an industry software vendor, "If we used minicomputers instead of PC servers, how could we make any money?" Whether in initial purchase price, energy consumption during operation, or maintenance cost, PC servers are far cheaper than minicomputers of equivalent processing power. Driven by these two factors, performance and cost, more and more databases now run on PC servers. Some of the clients I serve even virtualize high-end PC servers into multiple machines and run an Oracle database on each virtual machine; many of these databases carry important production systems.

There is no doubt that the most suitable operating system for running an Oracle database on a PC server is Linux. As an operating system very similar to UNIX, it matches UNIX in stability, reliability, and performance. However, compared with AIX, HP-UX, and other operating systems, Linux has an obvious weakness in its memory paging mechanism. This weakness shows up particularly in Oracle databases that use a large SGA; in severe cases it has a significant negative impact on database performance and may even cause the database to stop responding entirely. This article explains the weakness in detail through a case study and uses large memory pages under Linux to solve the problem.

1. Case Study

One of my customers' systems had serious performance issues. When the problem struck, the system was essentially unusable: all business operations on the application became completely unresponsive. The database was Oracle 10.2.0.4 running on RHEL 5.2 (Red Hat Enterprise Linux Server release 5 (Tikanga)); the server had four quad-core Xeon processors (Intel(R) Xeon(R) CPU E7430 @ 2.13GHz), i.e. 16 logical CPUs, and 32GB of memory. During the failure, the database server's CPU stayed at 100% for a long time. Even after all the application's WebLogic servers were shut down, CPU utilization remained at 100% for several minutes and then fell gradually, taking about 20 minutes to reach the normal idle level; since all applications were shut down at that point, very low CPU utilization was the normal state. According to the system's database maintenance staff, this had happened many times; even after restarting the database, the failure would recur within a day or two. Meanwhile, the system had undergone no major changes recently.

After receiving the fault report, I found that connecting to the database server via SSH was very slow, taking almost a minute. A quick look at the server's performance showed that disk I/O was extremely low, there was still plenty of free memory (at least 1GB), and there was no page in / page out. The most notable symptom was that CPU utilization stayed at 100%, with the SYS portion above 95%, and the operating system run queue stayed above 200. The server's memory usage was as follows:

$ cat /proc/meminfo
MemTotal:     32999792 kB
MemFree:       1438672 kB
Buffers:        112304 kB
Cached:       23471680 kB
SwapCached:       1296 kB
Active:       19571024 kB
Inactive:      6085396 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:     32999792 kB
LowFree:       1438672 kB
SwapTotal:    38371320 kB
SwapFree:     38260796 kB
Dirty:             280 kB
Writeback:           0 kB
AnonPages:     2071192 kB
Mapped:       12455324 kB
Slab:           340140 kB
PageTables:    4749076 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  54871216 kB
Committed_AS: 17226744 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     22016 kB
VmallocChunk: 34359716303 kB

From the perspective of the phenomenon, high SYS CPU is an important clue for analyzing the problem.

After getting a quick picture of the operating system's state, I immediately connected to the database through SQL*Plus to view performance information inside the database:

(Note: The following data has been processed regarding SQL, server name, database name, etc.)

SQL> select sid,serial#,program,machine,sql_id,event from v$session where type='USER' and status='ACTIVE';

SID SERIAL# PROGRAM MACHINE SQL_ID EVENT
---- ------- ------------ --------- ------------- ---------------------------
519 4304 xxx_app1 0gc4uvt2pqvpu latch: cache buffers chains
459 12806 xxx_app1 0gc4uvt2pqvpu latch: cache buffers chains
454 5518 xxx_app1 15hq76k17h4ta latch: cache buffers chains
529 7708 xxx_app1 0gc4uvt2pqvpu latch: cache buffers chains
420 40948 xxx_app1 0gc4uvt2pqvpu latch: cache buffers chains
353 56222 xxx_app1 f7fxxczffp5rx latch: cache buffers chains
243 42611 xxx_app1 2zqg4sbrq7zay latch: cache buffers chains
458 63221 xxxTimer.exe APPSERVER 9t1ujakwt6fnf local write wait
...some rows omitted to save space...
409 4951 xxx_app1 7d4c6m3ytcx87 read by other session
239 51959 xxx_app1 7d4c6m3ytcx87 read by other session
525 3815 xxxTimer.exe APPSERVER 0ftnnr7pfw7r6 enq: RO - fast object reuse
518 7845 xxx_app1 log file sync
473 1972 xxxTimer.exe APPSERVER 5017jsr7kdk3b log file sync
197 37462 xxx_app1 cbvbzbfdxn2w7 db file sequential read
319 4939 xxxTimer.exe APPSERVER 6vmk5uzu1p45m db file sequential read
434 2939 xxx_app1 gw921z764rmkc latch: shared pool
220 50017 xxx_app1 2zqg4sbrq7zay latch: library cache
301 36418 xxx_app1 02dw161xqmrgf latch: library cache
193 25003 oracle@xxx_db1 (J001) xxx_db1 jobq slave wait
368 64846 oracle@xxx_db1 (J000) xxx_db1 jobq slave wait
218 13307 sqlplus@xxx_db1 (TNS V1-V3) xxx_db1 5rby2rfcgs6b7 SQL*Net message to client
435 1883 xxx_app1 fd7369jwkuvty SQL*Net message from client
448 3001 xxxTimer.exe APPSERVER bsk0kpawwztnw SQL*Net message from dblink


SQL> @waitevent

SID EVENT SECONDS_IN_WAIT STATE
---- ---------------------------------------- --------------- ------------------
556 latch: cache buffers chains 35 WAITED KNOWN TIME
464 latch: cache buffers chains 2 WAITING
427 latch: cache buffers chains 34 WAITED SHORT TIME
458 local write wait 63 WAITING
403 write complete waits 40 WAITING
502 write complete waits 41 WAITING
525 enq: RO - fast object reuse 40 WAITING
368 enq: RO - fast object return 23 WAITING
282 db file sequential read 0 WAITING
501 db file sequential read 2 WAITED SHORT TIME
478 db file sequential read 0 WAITING
281 db file sequential read 6 WAITED KNOWN TIME
195 db file sequential read 4 WAITED KNOWN TIME
450 db file sequential read 2 WAITED KNOWN TIME
529 db file sequential read 1 WAITING
310 db file sequential read 0 WAITED KNOWN TIME
316 db file sequential read 89 WAITED SHORT TIME
370 db file sequential read 1 WAITING
380 db file sequential read 1 WAITED SHORT TIME
326 jobq slave wait 122 WAITING
378 jobq slave wait 2 WAITING
425 jobq slave wait 108 WAITING
208 SQL*Net more data from dblink 11 WAITED SHORT TIME
537 Streams AQ: waiting for time management or cleanup tasks 7042 WAITING
549 Streams AQ: qmn coordinator idle wait 1585854 WAITING
507 Streams AQ: qmn slave idle wait 1585854 WAITING
430 latch free 2 WAITED KNOWN TIME
565 latch: cache buffers lru chain 136 WAITED SHORT TIME

Judging from the activity and wait events in the database, nothing looked unusual. It is worth noting that when a database server's CPU stays at 100% for a long time, or physical memory is exhausted with heavy swapping in and out, the performance symptoms inside the database must be diagnosed carefully: a given class of wait events may merely be the result of CPU or memory starvation, or specific activity in the database may itself be the root cause of the CPU or memory exhaustion.

Judging from the data above, there were fewer than 50 active sessions; even adding the background processes, this is far below the run queue of 200+ seen in the operating system. There were three main types of non-idle wait events in the database: I/O-related waits such as db file sequential read, database-link-related waits such as SQL*Net more data from dblink, and latch-related waits. Of these three, usually only the latch waits drive CPU utilization up.

Comparing AWR reports, database activity during the failure period and the normal period showed no particularly obvious differences, but the operating system statistics differed greatly:

Statistic Name               1st Value      2nd Value
---------------------------- -------------- --------------
BUSY_TIME                    3,475,776      1,611,753
IDLE_TIME                    2,266,224      4,065,506
IOWAIT_TIME                  520,453        886,345
LOAD                         -67            -3
NICE_TIME                    0              0
NUM_CPU_SOCKETS              0              0
PHYSICAL_MEMORY_BYTES        0              0
RSRC_MGR_CPU_WAIT_TIME       0              0
SYS_TIME                     1,802,025      205,644
USER_TIME                    1,645,837      1,381,719

The above data compares AWR statistics from a 1-hour window (1st) covering the fault period and a 1-hour window (2nd) during normal operation. For fault analysis, especially when the fault duration is short, a 1-hour AWR report will not precisely reflect performance during the fault itself. However, when troubleshooting, the first task is to pick a direction from the available data. As mentioned earlier, the high SYS portion of CPU utilization is an important clue; since the other performance data inside the database differ little, we can start from the CPU.

2. Analysis of CPU usage in the operating system

So, what do the two different utilizations of SYS and USER represent in the operating system? Or what is the difference between the two?

Simply put, the SYS part of CPU utilization refers to the CPU part used by the operating system kernel (Kernel), that is, the CPU consumed by the code running in kernel state. The most common is the CPU consumed during system calls (SYS CALL). The USER part is the CPU part used by the application software's own code, that is, the CPU consumed by the code running in user mode. For example, when Oracle executes SQL, it needs to initiate a read call to read data from the disk to the db buffer cache. This read call is mainly run by the operating system kernel including the device driver code, so the CPU consumption is calculated into the SYS part. When Oracle parses the data read from the disk, only Oracle's own code is running, so the CPU consumption is calculated into the USER part.

So what operations or system calls will generate the CPU in the SYS part?

1. I/O operations, such as reading and writing files, accessing peripherals, and transferring data over the network. This part of the operation generally does not consume too much CPU, because the main time consumption is on the device for IO operations. For example, when reading a file from disk, most of the time is spent on operations within the disk, and the CPU time consumed accounts for only a small part of the I/O operation response time. The SYS CPU usage may increase only when the concurrent I/O is too high.

2. Memory management, such as application processes requesting memory from the operating system, the operating system maintaining the system's available memory, swapping pages, and so on. Generally, the larger the memory and the more frequent the memory-management operations, the higher the CPU consumption.

3. Process scheduling. The usage of this part of the CPU depends on the length of the run queue in the operating system. The longer the run queue, the more processes need to be scheduled, and the higher the burden on the kernel.

4. Others, including inter-process communication, semaphore processing, some activities within the device driver, etc.

Judging from the performance data at failure time, memory management and process scheduling were the likely causes of the high SYS CPU. However, a run queue as high as 200+ is most likely the result of high CPU utilization rather than its cause, and from the database side the number of active sessions was not especially high. So we should examine whether the high CPU utilization was caused by the system's memory management.

Looking back at the system memory data collected in /proc/meminfo at the beginning of this article, we can find an important data:

PageTables: 4749076 kB

From the data, we can see that the PageTables memory reached 4637MB. PageTables literally means "page table". Simply put, it is a table used by the operating system kernel to maintain the correspondence between the process linear virtual address and the actual physical memory address.

Modern computers usually manage and allocate physical memory in units of pages (Page Frames). On the x86 processor architecture, the page size is 4K. The address space accessible to a process running on an operating system is called a virtual address space, which is related to the number of bits in the processor. For 32-bit x86 processors, the address space accessible to a process is 4GB. Each process running in the operating system has its own independent virtual address space or linear address space, and this address space is also managed by page. In Linux, the page size is usually 4KB. When a process accesses memory, the operating system and hardware work together to convert the process's virtual address into a physical address. The same virtual linear address of two different processes may point to the same physical memory, such as shared memory; or it may be different, such as the private memory of the process.

The following figure is a schematic diagram of the correspondence between virtual addresses and physical memory:

Suppose there are two processes A and B, each of which has a memory pointer pointing to the address 0x12345 (0x denotes hexadecimal). For example, if one process forks or clones another, the two processes will have pointers to the same virtual address. When a process accesses the memory at address 0x12345, the operating system converts this address into a physical address, say 0x23456 for process A and 0x34567 for process B; the two do not affect each other. So when is this physical address assigned? For process-private memory (the most common case), it is assigned when the process requests memory from the operating system. When a process requests a memory allocation, the operating system allocates free physical memory to the process in units of pages, generates a virtual linear address for the process, establishes the mapping between that virtual address and the physical memory address, and returns the virtual address to the process as the result.

Page Table is a data structure used by the operating system to maintain the correspondence between the process virtual address and physical memory. The following figure is a schematic diagram of the Page Table in a relatively simple case:

The following is a brief description of how the operating system converts between the virtual address and the actual physical address of a process in a 32-bit system when the page size is 4K.

1. The directory table is a data structure used to index the page table. Each directory entry occupies 32 bits, or 4 bytes, and stores the location of a page table. The directory table takes up exactly 1 page of memory, or 4KB, and can store 1024 directory entries, which means it can store 1024 page table locations.

2. The size of a page table entry is 4 bytes, which stores the starting address of a physical memory page. Each page table also occupies 4K of memory and can store the starting addresses of 1024 physical memory pages. Since the starting address of the physical memory page is aligned in units of 4KB, only 20 bits are needed to represent the address out of 32 bits, and the other 12 bits are used for other purposes, such as indicating whether the memory page is read-only or writable, etc.

3. 1024 page tables, each page table has 1024 physical memory page start addresses, totaling 1M addresses. The size of the physical memory page pointed to by each address is 4KB, totaling 4GB.

4. When the operating system and hardware map a virtual address to a physical address, the 10 bits 31-22 of the virtual address index one of the 1024 page tables via the directory entries; the 10 bits 21-12 index one of the 1024 page table entries within that page table. The starting address of the physical memory page is read from the indexed page table entry, and the 12 bits 11-0 of the virtual address serve as the offset within the 4KB page. The physical page's starting address plus the offset is the physical memory address the process needs to access.
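The four-step walk above can be sketched as a toy model in Python (an illustrative two-level translation only; real kernels add more levels, permission bits, and a TLB):

```python
def split_vaddr(vaddr):
    """Split a 32-bit virtual address into (directory index,
    page-table index, page offset) for two-level x86 paging with 4 KB pages."""
    dir_idx = (vaddr >> 22) & 0x3FF    # bits 31-22: which page table
    table_idx = (vaddr >> 12) & 0x3FF  # bits 21-12: which entry in that table
    offset = vaddr & 0xFFF             # bits 11-0: offset within the 4 KB page
    return dir_idx, table_idx, offset

def translate(vaddr, page_dir):
    """Walk a toy page directory (a dict of dicts mapping indices to
    physical page-frame start addresses) and return the physical address."""
    d, t, off = split_vaddr(vaddr)
    page_frame = page_dir[d][t]        # start address of the physical page
    return page_frame + off
```

For example, if directory entry 0x48, table entry 0x345 points at physical page 0x23456000, then translate(0x12345678, {0x48: {0x345: 0x23456000}}) yields 0x23456678.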

Let's take a look at how much space the two data structures, directory table and page table, take up. The directory table is fixed at 4KB. And what about the page table? Since there are at most 1024 page tables and each page table occupies 4KB, the page table occupies at most 4MB of memory.

In practice, processes on 32-bit Linux don't have page tables nearly that large. A process cannot use the entire 4GB address space (1GB of it is reserved for the kernel), and Linux does not build the full page table up front: the operating system creates the mapping for an address only when the process allocates and accesses that memory.

Only the simplest paging layout is described here. In reality there are up to four levels of page directories and page tables, and with PAE enabled on 32-bit systems, or on 64-bit systems, the structure is more complicated than the diagram above. In any case, though, the last level, the page table itself, has the same structure.

In a 64-bit system, each page table entry grows from 32 bits to 64 bits. How much does that matter? If a process accesses 1GB of physical memory, i.e. 262,144 pages, the page table requires 262144*4/1024/1024 = 1MB on a 32-bit system, while on a 64-bit system the space doubles to 2MB.

Now let's look at the situation for an Oracle database running on Linux. In this case, the SGA size was 12GB. If an Oracle process touched all of the SGA, its page table alone would be 24MB, an astonishing number. (The PGA is ignored here because the average PGA per process was under 2MB, tiny compared with the SGA.) The AWR report showed about 300 sessions, so if every one of those 300 connections touched the entire SGA their page tables would total 7200MB; in practice not every process touches all of the SGA. The PageTables size seen in meminfo was 4637MB, the combined result of 300 sessions and a 12GB SGA.
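The arithmetic above can be made concrete (a sketch that counts only the last-level, 8-byte page table entries on 64-bit Linux and ignores the upper-level tables):

```python
PAGE_SIZE = 4 * 1024  # 4 KB base pages
PTE_SIZE = 8          # size of one page table entry on 64-bit Linux

def page_table_kb(mapped_bytes):
    """KB of last-level page table entries needed to map mapped_bytes."""
    pages = mapped_bytes // PAGE_SIZE
    return pages * PTE_SIZE // 1024

sga = 12 * 1024**3                            # 12 GB SGA
per_process_mb = page_table_kb(sga) // 1024   # 24 MB per process
all_sessions_mb = 300 * per_process_mb        # 7200 MB if all 300 map the SGA
```

This reproduces the article's figures: 24MB per process and 7200MB across 300 sessions in the worst case.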

Obviously the page table is not the only memory-management data structure in the system; there are others as well. These oversized management structures undoubtedly place a heavy burden on the operating system kernel and consume substantial CPU. When the load changes or memory demand shifts sharply for some other reason, for example many processes requesting large amounts of memory at the same time, CPU usage can spike in a short period and trigger the problem.

3. Use large memory pages to solve the problem

Although there was no conclusive evidence, and collecting enough evidence to prove that the oversized page table caused the problem would have meant enduring several more half-hour outages, it was by far the biggest suspect from the data at hand. We therefore decided to tune the system's memory usage with large memory pages first.

Large memory page is a general term. It is called Large Page in earlier versions of Linux and Huge Page in current mainstream Linux versions. The following uses Huge Page as an example to illustrate the advantages of Huge Page and how to use it.

What are the benefits of using large memory pages:

1. Reduce the size of the page table. Each Huge Page corresponds to 2MB of continuous physical memory, so 12GB of physical memory only requires a 48KB Page Table, which is much less than the original 24MB.

2. Huge page memory is locked in physical memory and can never be swapped out. This avoids the performance impact caused by swapping.

3. Due to the reduction in the number of page tables, the hit rate of the TLB in the CPU (which can be understood as the CPU's CACHE for page tables) is greatly improved.

4. The page table for huge pages can be shared between processes, which further reduces the page table size. This actually highlights the defect in Linux's paging mechanism: other operating systems, such as AIX, share a single page table for shared memory segments, avoiding the problem entirely. For example, in one system I maintain, the connection count is usually above 5,000 and the instance SGA is around 60GB; with ordinary Linux paging, most of the system's memory would be consumed by page tables.
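Benefit 1 can be checked numerically with the same simplified assumptions as before (last-level entries only, 8 bytes each):

```python
def last_level_table_kb(mapped_bytes, page_size, pte_size=8):
    """KB of last-level page table entries needed to map mapped_bytes
    when the page size is page_size bytes."""
    entries = mapped_bytes // page_size
    return entries * pte_size // 1024

sga = 12 * 1024**3                               # 12 GB SGA
small = last_level_table_kb(sga, 4 * 1024)       # 4 KB pages: 24576 KB (24 MB)
huge = last_level_table_kb(sga, 2 * 1024**2)     # 2 MB huge pages: 48 KB
```

With 2MB huge pages, 12GB of SGA needs only 6144 entries, i.e. 48KB of page table instead of 24MB.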

So, how do you enable huge pages for Oracle? Here are the implementation steps. Since the database in this case later had its SGA increased to 18GB, 18GB is used as the example:

1. Check /proc/meminfo to confirm that the system supports HugePage:

HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB

HugePages_Total is the number of huge pages configured in the system. HugePages_Free is the number of huge pages that have not yet been touched; the word "free" here is misleading, as explained below. HugePages_Rsvd is the number of pages that have been allocated but not yet used. Hugepagesize is the huge page size, 2MB here; note that with some kernel configurations it may be 4MB.

For example, suppose HugePages_Total amounts to 11GB, SGA_MAX_SIZE is 10GB, and SGA_TARGET is 8GB. After the database starts, huge pages are reserved according to SGA_MAX_SIZE, i.e. 10GB, so the huge page memory actually left over is 11-10 = 1GB. But since SGA_TARGET is only 8GB, 2GB of the reservation has not been touched, so HugePages_Free shows 2+1 = 3GB and HugePages_Rsvd shows 2GB. Only 1GB can actually be used by other instances; that is, only 1GB is truly free.
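The accounting in this example can be modeled in a few lines (the function is an illustration of the example above, with names chosen to mirror the /proc/meminfo counters; amounts are in GB-equivalents of huge pages):

```python
def hugepage_accounting(total_gb, sga_max_gb, sga_target_gb):
    """Model HugePages_Free/HugePages_Rsvd right after instance startup,
    per the worked example: pages are reserved up to SGA_MAX_SIZE but
    only touched up to SGA_TARGET."""
    rsvd = sga_max_gb - sga_target_gb   # reserved but never touched yet
    free = total_gb - sga_target_gb     # untouched pages, including reserved
    truly_free = free - rsvd            # what another instance could still use
    return free, rsvd, truly_free
```

hugepage_accounting(11, 10, 8) returns (3, 2, 1): Free shows 3GB, Rsvd 2GB, and only 1GB is truly available.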

2. Plan the number of huge pages to set. So far, huge pages can be used only for a few types of memory, such as shared memory segments. Once physical memory is reserved as huge pages, it cannot be used for other purposes such as process-private memory, so we must not set aside too much. Since we normally use huge pages only for the Oracle SGA, the number of huge pages is:

HugePages_Total = ceil(SGA_MAX_SIZE/Hugepagesize)+N

For example, if SGA_MAX_SIZE is set to 18GB for the database, the number of pages can be ceil(18*1024/2)+2=9218.

The added N means the huge page pool needs to be slightly larger than SGA_MAX_SIZE, usually by 1-2 pages. Checking the shared memory segment with the ipcs -m command shows that its size is in fact slightly larger than SGA_MAX_SIZE. If the server runs multiple Oracle instances, the extra portion of each instance's shared memory segment must be accounted for, i.e. N grows accordingly. Also note that an Oracle database either uses huge pages entirely or not at all, so an inappropriate HugePages_Total wastes memory.

In addition to using SGA_MAX_SIZE for calculation, you can also calculate a more accurate HugePages_Total using the shared memory segment size obtained by ipcs -m.

HugePages_Total = sum(ceil(share_segment_size/Hugepagesize))
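Both formulas can be computed with small helpers (the function names are illustrative; a 2048 kB Hugepagesize is assumed, per step 1):

```python
import math

HUGEPAGESIZE_KB = 2048  # from /proc/meminfo's Hugepagesize

def hugepages_from_sga(sga_max_mb, n_extra=2):
    """HugePages_Total from SGA_MAX_SIZE (in MB) plus a small margin N."""
    return math.ceil(sga_max_mb * 1024 / HUGEPAGESIZE_KB) + n_extra

def hugepages_from_segments(segment_sizes_bytes):
    """HugePages_Total summed over shared memory segment sizes (ipcs -m)."""
    huge_bytes = HUGEPAGESIZE_KB * 1024
    return sum(math.ceil(s / huge_bytes) for s in segment_sizes_bytes)
```

For an 18GB SGA_MAX_SIZE, hugepages_from_sga(18 * 1024) reproduces the article's figure of 9218 pages.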

3. Modify the /etc/sysctl.conf file and add the following line:

vm.nr_hugepages=9218

Then execute the sysctl -p command to make the configuration take effect.

Here, the parameter value of vm.nr_hugepages is the number of huge memory pages calculated in step 2. Then check /proc/meminfo. If HugePages_Total is less than the set number, it means that there is not enough continuous physical memory for these large memory pages and the server needs to be restarted.

4. Add the following line to the /etc/security/limits.conf file:

oracle soft memlock 18878464

oracle hard memlock 18878464

Here, set the size of memory that the Oracle user can lock, in KB.

Then reconnect to the database server as the oracle user and use the ulimit -a command to see:

max locked memory (kbytes, -l) 18878464

It is also possible to configure memlock to unlimited here.

5. If the database manages the SGA in MANUAL mode, change it to AUTO mode, i.e. set SGA_TARGET to a value greater than 0. For 11g, since huge pages can only be used for shared memory and not for the PGA, AMM cannot be used, i.e. MEMORY_TARGET cannot be set greater than 0; SGA and PGA must be sized separately, and the SGA can only be managed in AUTO mode.

6. Finally, start the database and check /proc/meminfo to see if HugePages_Free has decreased. If it has decreased, it indicates that HugePage Memory has been used.
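The check in step 6 can be scripted. Below is a sketch that parses the HugePages counters out of /proc/meminfo text (the helper names are illustrative, not a standard API):

```python
def hugepage_counters(meminfo_text):
    """Return the HugePages_* counters found in /proc/meminfo content."""
    counters = {}
    for line in meminfo_text.splitlines():
        key, _, value = line.partition(':')
        if key.startswith('HugePages_'):
            counters[key] = int(value.split()[0])
    return counters

def hugepages_in_use(meminfo_text):
    """True if at least one huge page has been handed out (Free < Total)."""
    c = hugepage_counters(meminfo_text)
    return c['HugePages_Free'] < c['HugePages_Total']
```

For a live check, pass open('/proc/meminfo').read() to hugepage_counters().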

However, when checking /proc/meminfo on the failed database server, no HugePages-related entries were found at all, and sysctl -a showed no vm.nr_hugepages parameter. This was because the HugePage feature was not compiled into the running kernel; we had to boot a different kernel to enable huge pages.

Check /boot/grub/grub.conf:

# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#         all kernel and initrd paths are relative to /boot/, eg.
#         root (hd0,0)
#         kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#         initrd /initrd-version.img
# boot=/dev/cciss/c0d0
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-8.el5xen) with RDAC
    root (hd0,0)
    kernel /xen.gz-2.6.18-8.el5
    module /vmlinuz-2.6.18-8.el5xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
    module /mpp-2.6.18-8.el5xen.img
title Red Hat Enterprise Linux Server (2.6.18-8.el5xen)
    root (hd0,0)
    kernel /xen.gz-2.6.18-8.el5
    module /vmlinuz-2.6.18-8.el5xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
    module /initrd-2.6.18-8.el5xen.img
title Red Hat Enterprise Linux Server-base (2.6.18-8.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
    initrd /initrd-2.6.18-8.el5.img

The kernel this system was running was the Xen one. We modified the file, changing default=0 to default=2 (alternatively, the first two title entries could be commented out with #), and restarted the database server. The newly booted kernel supported huge pages.

After the database enabled large memory pages, the performance problem described in this article never recurred, even after the SGA was enlarged. Observing /proc/meminfo, the memory occupied by PageTables has stayed below 120MB, a reduction of about 4500MB. CPU utilization has also dropped compared with before HugePages were enabled, and the system has run quite stably, with no problems attributable to the use of huge pages.

Tests show that for OLTP systems, enabling HugePage on Linux running Oracle database can improve database processing power and response time to varying degrees, with the highest improvement being more than 10%.

4. Summary

This article uses a case study to introduce the role of large memory pages in improving performance in the Linux operating system, as well as how to set corresponding parameters to enable large memory pages. At the end of this article, the author recommends enabling large memory pages when running Oracle database in Linux operating system to avoid the performance problems encountered in this case or further improve system performance. It can be said that HugePage is one of the few features that can improve performance without any additional cost. It is also worth noting that the new version of the Linux kernel provides Transparent Huge Pages, so that applications running on Linux can use large memory pages more widely and conveniently, rather than just shared memory. Let us wait and see the changes caused by this feature.

Source: "Oracle DBA Notes 3" Linux Large Memory Page Oracle Database Optimization Author: Xiong Jun

Image source: http://2.bp.blogspot.com/-o1ihxahkl0o/VQFhFj2lHwI/AAAAAAAAAV4/egUhLwaYtmc/s1600/oracle_linux.png
