1 Introduction
Thread technology was proposed as early as the 1960s, but multithreading was not really applied in operating systems until the mid-1980s, with Solaris leading the way. Traditional Unix also supports the concept of a thread, but only one thread is allowed per process, so multithreading there really means multiple processes. Today multithreading is supported by many operating systems, including Windows NT and, of course, Linux.
Why introduce threads when we already have processes? What are the benefits of multithreading, and what kind of system should use it? We must answer these questions first.
One reason to use multithreading is that, compared with processes, it is a very "frugal" way of multitasking. Under Linux, starting a new process means allocating it an independent address space and building numerous data tables to maintain its code, stack, and data segments; this is an "expensive" way to multitask. The threads running inside one process, by contrast, use the same address space and share most of their data. The space needed to start a thread is far smaller than that needed to start a process, and switching between threads takes much less time than switching between processes. It is often estimated that a process costs roughly 30 times the overhead of a thread, although this figure varies greatly from system to system.
The second reason is the convenient communication mechanism between threads. Different processes have independent data spaces, so data can only be passed between them through inter-process communication, which is both time-consuming and inconvenient. Threads are different: because threads in the same process share the data space, data produced by one thread can be used directly by the others, which is both fast and convenient. Of course, data sharing also brings its own problems: some variables must not be modified by two threads at the same time, and data declared static inside subroutines can deal a catastrophic blow to a multithreaded program. These are the points that demand the most attention when writing multithreaded code.
Besides the advantages above, a multithreaded program, as a concurrent way of multitasking, has the following advantages over processes:
1) Better application responsiveness. This matters especially for graphical interface programs. When one operation takes a long time, the whole program waits for it and stops responding to keyboard, mouse, or menu input. Putting the time-consuming operation in a new thread avoids this embarrassment.
2) Better use of multi-CPU systems. The operating system will generally schedule different threads on different CPUs when the number of threads does not exceed the number of CPUs.
3) Better program structure. A long, complex task can be divided into several threads, that is, into several independent or semi-independent parts, and such a program is easier to understand and modify.
Let's try to write a simple multithreaded program.
2 Simple multithreaded programming
Multithreading under Linux follows the POSIX threads interface, known as pthreads. To write a multithreaded program under Linux you need the header file pthread.h, and you must link against the pthread library (libpthread, with -lpthread). Incidentally, the pthread implementation on Linux is built on the system call clone(); clone() is a Linux-specific system call whose usage is similar to fork(). Interested readers can consult the relevant documentation for the details of clone(). Below is the simplest multithreaded program, example1.c.
/* example1.c */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
void thread(void)
{
    int i;
    for (i = 0; i < 3; i++)
        printf("This is a pthread.\n");
}
int main(void)
{
    pthread_t id;
    int i, ret;
    ret = pthread_create(&id, NULL, (void *) thread, NULL);
    if (ret != 0)
    {
        printf("Create pthread error!\n");
        exit(1);
    }
    for (i = 0; i < 3; i++)
        printf("This is the main process.\n");
    pthread_join(id, NULL);
    return 0;
}
We compile this program with gcc example1.c -lpthread -o example1. Running example1, we get the following results:
This is the main process.
This is a pthread.
This is the main process.
This is the main process.
This is a pthread.
This is a pthread.
Running again, we may get the following results:
This is a pthread.
This is the main process.
This is a pthread.
This is the main process.
This is a pthread.
This is the main process.
The two results differ: that is the result of the two threads competing for CPU time. In the example above we used two functions, pthread_create and pthread_join, and declared a variable of type pthread_t.
pthread_t is defined in the header file /usr/include/bits/pthreadtypes.h:
typedef unsigned long int pthread_t;
It is the identifier of a thread.
The function pthread_create is used to create a thread. Its prototype is:
extern int pthread_create __P ((pthread_t *__thread, __const pthread_attr_t *__attr, void *(*__start_routine) (void *), void *__arg));
The first parameter is a pointer to the thread identifier, the second sets the thread attributes, the third is the starting address of the function the thread will run, and the last is the argument passed to that function. Here our function thread takes no argument, so the last parameter is a null pointer. We also set the second parameter to a null pointer, which creates a thread with default attributes; setting and modifying thread attributes is explained in the next section. When the thread is created successfully the function returns 0; a non-zero return value means creation failed. Common error codes are EAGAIN and EINVAL: the former means the system limits the creation of new threads (for example, there are already too many), the latter means the attribute value given as the second parameter is invalid. On success, the newly created thread runs the function given by parameters three and four, while the original thread continues with the next line of code.
The function pthread_join is used to wait for a thread to finish. Its prototype is:
extern int pthread_join __P ((pthread_t __th, void **__thread_return));
The first parameter is the identifier of the thread to wait for; the second is a user-supplied pointer that can receive the return value of that thread. This is a blocking call: the thread that calls it waits until the target thread ends, and when the function returns, the resources of the waited-for thread are reclaimed.
A thread can end in two ways. One is as in the example above: when the thread function returns, the thread running it ends. The other is to call pthread_exit, whose prototype is:
extern void pthread_exit __P ((void *__retval)) __attribute__ ((__noreturn__));
Its only argument is the thread's return value, which is passed back through thread_return whenever the second argument to pthread_join is not NULL. Finally, note that a single thread cannot be waited for by several threads at once: the first waiter returns successfully, and the remaining calls to pthread_join return the error code ESRCH.
In this section we wrote the simplest thread and learned the three most commonly used functions: pthread_create, pthread_join, and pthread_exit. Next, let's look at some common thread attributes and how to set them.
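To make the pthread_exit/pthread_join pairing concrete, here is a minimal sketch (the names thread_func and result are illustrative, and error checking is omitted) showing a value returned by one thread being collected by another:
#include <stdio.h>
#include <pthread.h>
void *thread_func(void *arg)
{
    /* The returned object must outlive the thread, so it is not a local variable */
    static int result = 42;
    pthread_exit(&result); /* equivalent to: return &result; */
}
int main(void)
{
    pthread_t id;
    void *retval;
    pthread_create(&id, NULL, thread_func, NULL);
    /* The second argument receives the pointer that was passed to pthread_exit */
    pthread_join(id, &retval);
    printf("The thread returned %d\n", *(int *)retval);
    return 0;
}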
3 Modify thread properties
In the previous section we created a thread with pthread_create using the default attributes, that is, with the second parameter set to NULL. For most programs the default attributes are indeed enough, but it is still worth understanding the relevant thread attributes.
The attribute structure is pthread_attr_t, defined in the header file /usr/include/pthread.h; those who like to get to the bottom of things can look it up themselves. Attribute values cannot be set directly; they must be manipulated through the related functions, and the initialization function pthread_attr_init must be called before pthread_create. The attributes mainly cover whether the thread is bound, whether it is detached, the stack address, the stack size, and the priority. The defaults are unbound, non-detached, a 1 MB stack, and the same priority as the parent process.
Thread binding involves another concept, the light weight process (LWP). A light weight process can be understood as a kernel thread; it sits between the user level and the system level, and the system allocates thread resources and controls threads through light weight processes. One light weight process can control one or more threads. By default the system decides how many light weight processes to start and which of them control which threads; this is the unbound state. In the bound state, as the name implies, a thread is fixed to ("bound" to) one light weight process. A bound thread responds faster, because CPU time slices are scheduled per light weight process: a bound thread is guaranteed to have a light weight process available when it needs one, and by setting the priority and scheduling class of that light weight process, a bound thread can meet requirements such as real-time response.
The function that sets the binding state is pthread_attr_setscope. It takes two parameters: the first is a pointer to the attribute structure, the second is the scope, which has two values, PTHREAD_SCOPE_SYSTEM (bound) and PTHREAD_SCOPE_PROCESS (unbound). The following code creates a bound thread.
#include <pthread.h>
pthread_attr_t attr;
pthread_t tid;
/* Initialize property values, all set to default values */
pthread_attr_init(&attr);
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
pthread_create(&tid, &attr, (void *) my_function, NULL);
The detach state of a thread determines how a thread terminates. In the example above we used the default attribute, the non-detached (joinable) state, in which the original thread waits for the created thread to finish: only when pthread_join() returns is the created thread considered terminated and able to release the system resources it occupies. A detached thread is different: it is not waited for by any other thread; as soon as its own execution completes, the thread terminates and its system resources are released immediately. Programmers should choose the appropriate detach state according to their needs.
The function that sets the detach state is pthread_attr_setdetachstate(pthread_attr_t *attr, int detachstate). The second parameter can be PTHREAD_CREATE_DETACHED (detached thread) or PTHREAD_CREATE_JOINABLE (non-detached thread). One thing to watch out for: if a thread is created detached and runs very fast, it may well terminate before pthread_create returns, and after it terminates its thread ID and system resources may be handed over to another thread, so the thread that called pthread_create would get the wrong thread ID. To avoid this, some synchronization measure can be taken; one of the simplest is to call pthread_cond_timedwait in the created thread to make it wait a while, leaving enough time for pthread_create to return. Setting a wait time like this is a common technique in multithreaded programming, but be careful not to use functions such as wait(), which put the whole process to sleep and cannot solve the problem of thread synchronization.
Another attribute that is often used is the thread priority. It is stored in the structure sched_param and is read and written with the functions pthread_attr_getschedparam and pthread_attr_setschedparam. Generally speaking, we first read the current priority, modify the obtained value, and then store it back. The following is a simple example.
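The sketch below reconstructs the read-modify-write pattern just described; the wrapper name create_prio_thread, the priority value 20, and the thread routine myfunction are illustrative placeholders, and error checking is omitted.
#include <pthread.h>
#include <sched.h>
/* myfunction is an illustrative thread routine assumed to be defined elsewhere */
extern void *myfunction(void *arg);
void create_prio_thread(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    struct sched_param param;
    int newprio = 20; /* illustrative priority value */
    pthread_attr_init(&attr);
    /* read the current scheduling parameters, modify the priority, store it back */
    pthread_attr_getschedparam(&attr, &param);
    param.sched_priority = newprio;
    pthread_attr_setschedparam(&attr, &param);
    /* create the thread with the modified attributes */
    pthread_create(&tid, &attr, myfunction, NULL);
}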
4 Thread data processing
Compared with processes, one of the biggest advantages of threads is data sharing: all the threads of a process share the process's data segment, so they can obtain and modify data conveniently. But this also brings many problems to multithreaded programming. We must be careful when several different threads access the same variable. Many functions are not reentrant, that is, several copies of the function cannot safely be running at the same time (unless they use different data segments). Static variables declared inside functions often cause problems, and function return values can cause problems too: if a function returns the address of statically allocated storage inside it, then while one thread is using the data at that address after calling the function, another thread may call the same function and modify that data. Variables shared within a process should be declared with the keyword volatile, to prevent the compiler from changing the way they are accessed during optimization (for example with gcc's -O options). To protect variables we must use semaphores, mutexes, and similar mechanisms to make sure they are used correctly. Below we introduce, step by step, the relevant knowledge about handling data in threads.
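As a small illustration of the reentrancy problem described above (the function names and the buffer size are purely illustrative), the first function below returns the address of a static buffer that every thread shares, while the second is a reentrant variant in which the caller supplies the storage:
#include <stdio.h>
/* Non-reentrant: every caller gets the address of the same static buffer,
   so two threads calling it at the same time overwrite each other's result. */
char *format_id(int id)
{
    static char buf[32];
    snprintf(buf, sizeof(buf), "id-%d", id);
    return buf;
}
/* Reentrant variant: the caller provides the buffer. */
char *format_id_r(int id, char *buf, size_t len)
{
    snprintf(buf, len, "id-%d", id);
    return buf;
}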
4.1 Thread Data
In a single-threaded program there are two basic kinds of data: global variables and local variables. In a multithreaded program there is a third kind: thread data (TSD: Thread-Specific Data). It behaves much like a global variable: within a thread, every function can access it as if it were global, yet it is invisible to the other threads. The need for such data is obvious. Consider the familiar variable errno, which holds standard error information: it cannot be a local variable, since almost every function should be able to access it, but it cannot be a true global variable either, otherwise the error reported in thread A might well be the error produced by thread B. To implement variables like this we have to use thread data.
For each piece of thread data we create a key that is associated with it. In every thread the same key is used to refer to the thread data, but in different threads the key stands for different data; within one thread it always refers to the same data. Four functions are involved: creating a key, binding thread data to a key, reading the thread data from a key, and deleting a key.
The prototype of the function that creates a key is:
extern int pthread_key_create __P ((pthread_key_t *__key, void (*__destr_function) (void *)));
The first parameter is a pointer to the key, and the second specifies a destructor function. If this parameter is not NULL, the system calls the function when each thread ends, to release the memory bound to the key in that thread. This function is often used together with pthread_once(pthread_once_t *once_control, void (*init_routine)(void)) to make sure the key is created only once. The function pthread_once declares an initialization routine: the first time pthread_once is called it executes that routine, and subsequent calls are ignored.
In the following example we create a key and associate some data with it. We define a function createWindow that creates a graphics window (of data type Fl_Window *, a type from the GUI toolkit FLTK); since each thread calls this function to get a window of its own, we use thread data.
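The example is reconstructed below as a sketch of what the text describes; it mixes pthreads calls with FLTK's C++ interface (new/delete), and the helper names createMyKey and freeWinKey, as well as the window parameters, are illustrative assumptions rather than a definitive implementation.
#include <pthread.h>
#include <FL/Fl_Window.H>
/* The key is declared globally so that every thread can use it */
static pthread_key_t myWinKey;
/* Destructor: automatically called for each thread's window when that thread ends */
void freeWinKey(void *win)
{
    delete (Fl_Window *) win;
}
/* Creates the key and registers the destructor; meant to run exactly once */
void createMyKey(void)
{
    pthread_key_create(&myWinKey, freeWinKey);
}
void createWindow(void)
{
    Fl_Window *win;
    static pthread_once_t once = PTHREAD_ONCE_INIT;
    /* Create the key only once, no matter how many threads call createWindow */
    pthread_once(&once, createMyKey);
    /* win points to a new window private to this thread */
    win = new Fl_Window(0, 0, 100, 100, "MyWindow");
    /* ... any further setup of the window (size, position, title) would go here ... */
    /* Bind the window pointer to the key myWinKey for this thread */
    pthread_setspecific(myWinKey, win);
}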
In this way, each thread that calls the createWindow function obtains a window variable visible only inside that thread; the variable is retrieved with the pthread_getspecific function. In the example above we used pthread_setspecific to bind the thread data to the key. The prototypes of these two functions are:
extern int pthread_setspecific __P ((pthread_key_t __key, __const void *__pointer));
extern void *pthread_getspecific __P ((pthread_key_t __key));
The meaning and usage of their parameters are obvious. Note that when pthread_setspecific is used to bind new thread data to a key, the old thread data must be released by the caller in order to reclaim its space. The function pthread_key_delete is used to delete a key and releases the memory occupied by the key itself, but note again that it releases only the key: it does not release the memory occupied by the thread data associated with the key, nor does it invoke the destructor registered in pthread_key_create. The thread data must therefore be released before the key is deleted.
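A small sketch of the retrieval side, assuming the myWinKey key and the per-thread window set up above (the names getMyWin and destroyMyKey are illustrative):
/* Returns the window that the calling thread stored with pthread_setspecific */
Fl_Window *getMyWin(void)
{
    return (Fl_Window *) pthread_getspecific(myWinKey);
}
/* When the key itself is no longer needed: each thread must first release its
   own window (pthread_key_delete calls no destructor), then the key is deleted */
void destroyMyKey(void)
{
    pthread_key_delete(myWinKey);
}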
4.2 Mutex Locks
A mutex lock is used to ensure that only one thread executes a section of code at a time. The need for it is obvious: imagine what the result would be if several threads all wrote data to the same file at once; it would certainly be catastrophic. Let's look at the following code first. It is a reader/writer program that shares a common buffer, and we assume the buffer can hold only one item, so the buffer has just two states: it either contains an item or it does not.
#include <pthread.h>
#include <time.h>
/* make_new_item() and consume_item() are assumed helpers that produce and
   consume one item; they are not defined in this fragment. */
char make_new_item(void);
void consume_item(char item);
void reader_function(void);
void writer_function(void);
char buffer;
int buffer_has_item = 0;
pthread_mutex_t mutex;
struct timespec delay;
int main(void) {
    pthread_t reader;
    /* Define the delay time */
    delay.tv_sec = 2;
    delay.tv_nsec = 0;
    /* Initialize a mutex object with default attributes */
    pthread_mutex_init(&mutex, NULL);
    pthread_create(&reader, NULL, (void *) reader_function, NULL);
    writer_function();
    return 0;
}
void writer_function(void) {
    while (1) {
        /* Lock the mutex */
        pthread_mutex_lock(&mutex);
        if (buffer_has_item == 0) {
            buffer = make_new_item();
            buffer_has_item = 1;
        }
        /* Unlock the mutex */
        pthread_mutex_unlock(&mutex);
        /* Sleep for a while so the other thread gets a chance to run */
        nanosleep(&delay, NULL);
    }
}
void reader_function(void) {
    while (1) {
        pthread_mutex_lock(&mutex);
        if (buffer_has_item == 1) {
            consume_item(buffer);
            buffer_has_item = 0;
        }
        pthread_mutex_unlock(&mutex);
        /* Sleep for a while so the other thread gets a chance to run */
        nanosleep(&delay, NULL);
    }
}
The mutex variable mutex is declared here; the structure pthread_mutex_t is an opaque data type containing a system-allocated attribute object. The function pthread_mutex_init creates a mutex lock; a NULL second parameter means the default attributes are used. To create a mutex with specific attributes you must call pthread_mutexattr_init, and the functions pthread_mutexattr_setpshared and pthread_mutexattr_settype set the mutex attributes. The former sets the attribute pshared, which has two values: PTHREAD_PROCESS_PRIVATE, for synchronizing threads within the same process, and PTHREAD_PROCESS_SHARED, for synchronizing threads belonging to different processes. In the example above we used the default, PTHREAD_PROCESS_PRIVATE. The latter function sets the mutex type; the available types are PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_ERRORCHECK, PTHREAD_MUTEX_RECURSIVE and PTHREAD_MUTEX_DEFAULT, which define different locking and unlocking behaviours. Generally the default type is chosen.
The call to pthread_mutex_lock marks the start of the locked region: all the code that follows, up to the call to pthread_mutex_unlock, can be executed by only one thread at a time. When a thread reaches pthread_mutex_lock and the lock is already held by another thread, the thread blocks, that is, the program waits there until the other thread releases the mutex. In the example above we use nanosleep to make each thread sleep for a while, simply to keep one thread from monopolizing the lock.
The example itself is very simple and needs no further explanation, but one thing must be pointed out: deadlock can easily occur when using mutex locks. Suppose two threads each need two resources and lock the corresponding mutexes in opposite orders: both threads need mutex 1 and mutex 2, thread a locks mutex 1 first while thread b locks mutex 2 first, then each waits for the lock the other holds, and deadlock occurs. In this situation we can use the function pthread_mutex_trylock, the non-blocking counterpart of pthread_mutex_lock: if the mutex is already locked it returns immediately with an error instead of blocking, so the programmer can release the locks already held and handle the impending deadlock. Different mutex types also handle deadlock differently, but the most important thing is that the programmer pays attention to this point in the program design.
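As a hedged sketch of the back-off idea (the lock names and the helper lock_both are illustrative, not part of the original example), pthread_mutex_trylock can be used to take a second lock without risking the deadlock described above:
#include <pthread.h>
#include <sched.h>
pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;
/* Acquire both locks: if the second one is busy, release the first and retry
   instead of blocking, so two threads can never end up holding one lock each forever. */
void lock_both(void)
{
    for (;;) {
        pthread_mutex_lock(&lock1);
        if (pthread_mutex_trylock(&lock2) == 0)
            return; /* both locks acquired */
        pthread_mutex_unlock(&lock1); /* back off */
        sched_yield(); /* give the other thread a chance to finish */
    }
}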
4.3 Conditional Variables
The previous section described how to use mutex locks to share data and communicate between threads. One obvious drawback of a mutex is that it has only two states, locked and unlocked. Condition variables make up for this by allowing a thread to block and wait for another thread to send it a signal, and they are usually used together with mutexes. When used this way, the condition variable is what blocks the thread: while the condition is not satisfied, the thread unlocks the corresponding mutex and waits for the condition to change; once another thread has changed the condition, it signals the condition variable to wake up one or more threads blocked on it, and these threads re-lock the mutex and re-test whether the condition is satisfied. In short, condition variables are used to synchronize threads.
The condition variable type is pthread_cond_t, and the function pthread_cond_init() initializes one. Its prototype is:
extern int pthread_cond_init __P ((pthread_cond_t *__cond, __const pthread_condattr_t *__cond_attr));
Here cond is a pointer to a pthread_cond_t structure and cond_attr is a pointer to a pthread_condattr_t structure, the attribute structure of the condition variable. As with mutexes, it can be used to choose whether the condition variable is usable within one process or across processes; the default is PTHREAD_PROCESS_PRIVATE, meaning the condition variable is used by the threads of a single process. Note that a condition variable, once initialized, may only be reinitialized or destroyed when it is no longer in use; the function that destroys one is pthread_cond_destroy(pthread_cond_t *cond).
The function pthread_cond_wait() blocks a thread on a condition variable. Its prototype is:
extern int pthread_cond_wait __P ((pthread_cond_t *__cond, pthread_mutex_t *__mutex));
The thread unlocks the lock pointed to by mutex and blocks on the condition variable cond. It can be woken up by pthread_cond_signal or pthread_cond_broadcast, but note that the condition variable only blocks and wakes the thread; the actual condition to test (for example, whether some variable is 0) must still be supplied by the user, as the example below shows. After the thread wakes up, it re-checks the condition; if it is still not satisfied, the thread generally goes back to waiting for the next wakeup. This is usually implemented with a while loop.
Another function that blocks a thread is pthread_cond_timedwait(), whose prototype is:
extern int pthread_cond_timedwait __P ((pthread_cond_t *__cond, pthread_mutex_t *__mutex, __const struct timespec *__abstime));
It has one more parameter than pthread_cond_wait(): once the absolute time abstime is reached, the wait ends even if the condition has not been signaled.
The prototype of pthread_cond_signal() is:
extern int pthread_cond_signal __P ((pthread_cond_t *__cond));
It wakes one thread blocked on the condition variable cond; when several threads are blocked on it, which one is woken is determined by the scheduling policy.
Note that pthread_cond_wait must be called while holding the mutex that protects the condition; otherwise the signal announcing that the condition is satisfied may be delivered between testing the condition and calling pthread_cond_wait, resulting in an indefinite wait. Below is a simple example using the pthread_cond_wait() and pthread_cond_signal() functions.
pthread_mutex_t count_lock;
pthread_cond_t count_nonzero;
unsigned count;
void decrement_count(void)
{
    pthread_mutex_lock(&count_lock);
    /* Re-test the condition in a loop: being woken up does not guarantee count > 0 */
    while (count == 0)
        pthread_cond_wait(&count_nonzero, &count_lock);
    count = count - 1;
    pthread_mutex_unlock(&count_lock);
}
void increment_count(void)
{
    pthread_mutex_lock(&count_lock);
    if (count == 0)
        pthread_cond_signal(&count_nonzero);
    count = count + 1;
    pthread_mutex_unlock(&count_lock);
}
When count is 0, decrement_count blocks at pthread_cond_wait and releases the mutex count_lock. When increment_count is then called, its pthread_cond_signal() call signals the condition variable and tells decrement_count() to stop blocking. Readers can try letting two threads run these two functions separately and see what results appear; a minimal driver for such an experiment is sketched below. The function pthread_cond_broadcast(pthread_cond_t *cond) wakes up all threads blocked on the condition variable cond; after these threads wake up they compete again for the corresponding mutex, so this function must be used with care.
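The following driver is an illustrative assumption, not part of the original article: it reuses count, count_lock, count_nonzero, decrement_count() and increment_count() from above and simply runs one producer and one consumer thread.
#include <stdio.h>
#include <pthread.h>
void *consumer(void *arg)
{
    int i;
    for (i = 0; i < 5; i++) {
        decrement_count(); /* blocks while count == 0 */
        printf("consumed one item\n");
    }
    return NULL;
}
void *producer(void *arg)
{
    int i;
    for (i = 0; i < 5; i++) {
        increment_count(); /* wakes the consumer when count was 0 */
        printf("produced one item\n");
    }
    return NULL;
}
int main(void)
{
    pthread_t c, p;
    pthread_mutex_init(&count_lock, NULL);
    pthread_cond_init(&count_nonzero, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(c, NULL);
    pthread_join(p, NULL);
    return 0;
}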
4.4 Semaphores
A semaphore is essentially a non-negative integer counter used to control access to a common resource. When the common resource becomes more plentiful, sem_post() is called to increase the semaphore; the resource can be used only while the semaphore's value is greater than 0, and sem_wait() decrements the semaphore when the resource is taken. The function sem_trywait() plays the same role as pthread_mutex_trylock(): it is the non-blocking version of sem_wait(). Below we introduce the semaphore-related functions one by one; they are all declared in the header file /usr/include/semaphore.h.
The data type of a semaphore is the structure sem_t, essentially a long integer. The function sem_init() initializes a semaphore. Its prototype is:
extern int sem_init __P ((sem_t *__sem, int __pshared, unsigned int __value));
Here sem is a pointer to the semaphore structure; if pshared is non-zero the semaphore is shared between processes, otherwise it is shared only among the threads of the current process; value gives the semaphore's initial value.
The function sem_post(sem_t *sem) increases the value of the semaphore; if threads are blocked on the semaphore, calling it unblocks one of them, chosen again according to the scheduling policy. The function sem_wait(sem_t *sem) blocks the calling thread until the value of the semaphore sem is greater than 0, then decrements it by one, indicating that the common resource has been consumed. The function sem_trywait(sem_t *sem) is the non-blocking version of sem_wait(): if the semaphore's value is 0 it returns an error immediately instead of blocking, otherwise it decrements the value by one. The function sem_destroy(sem_t *sem) releases the semaphore sem.
Let's look at an example that uses semaphores. In this example there are four threads in total: two of them read data from files into a common buffer, and the other two read data from the buffer and process it in different ways (addition and multiplication).
/* File sem.c */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#define MAXSTACK 100
int stack[MAXSTACK][2];
int size=0; /* shared stack index; note that every thread modifies it without protection */
sem_t sem;
/* Read data from file 1.dat. Each time the data is read, the semaphore increases by one*/
void ReadData1(void){
FILE *fp=fopen("1.dat","r");
while(!feof(fp)){
fscanf(fp,"%d %d",&stack[size][0],&stack[size][1]);
sem_post(&sem);
++size;
}
fclose(fp);
}
/*Read data from file 2.dat*/
void ReadData2(void){
FILE *fp=fopen("2.dat","r");
while(!feof(fp)){
fscanf(fp,"%d %d",&stack[size][0],&stack[size][1]);
sem_post(&sem);
++size;
}
fclose(fp);
}
/*Block and wait for data in the buffer. After reading the data, release the space and continue waiting*/
void HandleData1(void){
while(1){
sem_wait(&sem);
printf("Plus:%d+%d=%d/n",stack[size][0],stack[size][1],
stack[size][0]+stack[size][1]);
--size;
}
}
void HandleData2(void){
while(1){
sem_wait(&sem);
printf("Multiply:%d*%d=%d/n",stack[size][0],stack[size][1],
stack[size][0]*stack[size][1]);
--size;
}
}
int main(void){
pthread_t t1,t2,t3,t4;
sem_init(&sem,0,0);
pthread_create(&t1,NULL,(void *)HandleData1,NULL);
pthread_create(&t2,NULL,(void *)HandleData2,NULL);
pthread_create(&t3,NULL,(void *)ReadData1,NULL);
pthread_create(&t4,NULL,(void *)ReadData2,NULL);
/* Prevent the program from exiting prematurely, let it wait here indefinitely */
pthread_join(t1,NULL);
}
Under Linux we compile with the command gcc sem.c -lpthread -o sem to generate the executable sem. We have prepared the data files 1.dat and 2.dat in advance; assuming their contents are 1 2 3 4 5 6 7 8 9 10 and -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 respectively, running sem gives the following results:
Multiply:-1*-2=2
Plus:-1+-2=-3
Multiply:9*10=90
Plus:-9+-10=-19
Multiply:-7*-8=56
Plus:-5+-6=-11
Multiply:-3*-4=12
Plus:9+10=19
Plus:7+8=15
Plus:5+6=11
From this we can see the competition between the threads. The values are not displayed in the order we originally intended, because the size value is modified arbitrarily by each thread; this is exactly the kind of problem that needs attention in multithreaded programming, and one way to address it is sketched below.
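As a hedged illustration (not part of the original program), the shared index can be protected with a mutex so that reserving a slot and taking one back are atomic; the lock name size_lock is an assumption, and only the changed lines are shown.
pthread_mutex_t size_lock = PTHREAD_MUTEX_INITIALIZER;
/* In the reader threads: fill a slot and advance size under the lock, then post */
pthread_mutex_lock(&size_lock);
fscanf(fp,"%d %d",&stack[size][0],&stack[size][1]);
++size;
pthread_mutex_unlock(&size_lock);
sem_post(&sem);
/* In the handler threads: wait for data, then take the top slot under the lock */
sem_wait(&sem);
pthread_mutex_lock(&size_lock);
--size;
printf("Plus:%d+%d=%d\n",stack[size][0],stack[size][1],
stack[size][0]+stack[size][1]);
pthread_mutex_unlock(&size_lock);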
5 Summary
Multithreaded programming is a very interesting and useful technique. The download tool NetAnts, one of the most widely used, is built on multithreading, and a multithreaded grep can be several times faster than the single-threaded grep; there are many similar examples. I hope everyone can use multithreading to write efficient and practical programs.