How to use multi-core CPU to speed up your Linux commands (GNU Parallel)

Have you ever needed to process a very large amount of data (hundreds of gigabytes)? Or search through it, or run some other operation over it that does not parallelize on its own? Data people, I'm talking to you. You may have a CPU with four or more cores, but the usual tools, such as grep, bzip2, wc, awk, and sed, are single-threaded and can only use one CPU core.

To paraphrase the cartoon character Cartman, “How can I use these cores?”

To make Linux commands use all CPU cores, we turn to GNU Parallel, which lets a single machine run magical map-reduce-style operations across all its cores, with the help of the rarely used --pipe parameter (also called --spreadstdin). With it, your load is spread evenly across the CPUs. Really.

BZIP2

bzip2 is a better compression tool than gzip, but it is slow! Don't worry, we have a way to solve this problem.

Previous practice:

cat bigfile.bin | bzip2 --best > compressedfile.bz2

Now like this:

cat bigfile.bin | parallel --pipe --recend '' -k bzip2 --best > compressedfile.bz2

Especially for bzip2, GNU parallel is super fast on multi-core CPUs. Before you know it, it's done.

GREP

If you have a very large text file, you might have previously done this:

grep pattern bigfile.txt

Now you can do:

cat bigfile.txt | parallel --pipe grep 'pattern'

Or like this:

cat bigfile.txt | parallel --block 10M --pipe grep 'pattern'

The second form adds --block 10M, which hands each job a roughly 10-megabyte chunk of input (megabytes, not rows); use this parameter to tune how much data each CPU core processes at a time.

AWK

Below is an example of using awk command to calculate a very large data file.

General usage:

cat rands20M.txt | awk '{s+=$1} END {print s}'

Now like this:

cat rands20M.txt | parallel --pipe awk \'{s+=\$1} END {print s}\' | awk '{s+=$1} END {print s}'

This one is a bit involved: the --pipe option splits cat's output into blocks and dispatches each block to its own awk invocation, producing many partial sums. Those partial sums are then piped into a second awk, which adds them together and prints the final result. The backslashes in the first awk command escape the quotes and the dollar sign so that they survive the outer shell and reach the awk processes that parallel spawns.

WC

Want to count the number of lines in a file as quickly as possible?

Traditional approach:

wc -l bigfile.txt

Now you should have this:

cat bigfile.txt | parallel --pipe wc -l | awk '{s+=$1} END {print s}'

Very clever: parallel first 'maps' the input into many wc -l sub-computations, then the partial counts are piped to awk, which 'reduces' them into the final total.

SED

Want to use sed command to do a lot of replacement operations in a huge file?

Conventional practice:

sed 's^old^new^g' bigfile.txt

Now you can:

cat bigfile.txt | parallel --pipe sed 's^old^new^g'

…and then redirect the output to a file of your choice.

That's all for this article. I hope you find it helpful.
