Application of Hadoop counters and data cleaning

Data cleaning (ETL)

Before running the core business MapReduce program, it is often necessary to clean the data first to remove records that do not meet the requirements. This cleaning (ETL) process usually needs only a Mapper; no Reducer is required. Hadoop counters are applied here as a lightweight way to record how many records pass and how many fail the filter.
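
For reference, a custom counter is incremented through the task context, identified by a group name and a counter name. A minimal sketch (the group and counter names here are illustrative, not from the example below):

// Inside a Mapper or Reducer: add 1 to the counter "badRecords" in the group "etl"
context.getCounter("etl", "badRecords").increment(1);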

1. Requirement

Remove log lines whose number of fields is less than or equal to 11.

(1) Input data

web.log

(2) Expected output data

Every output line has more than 11 fields.

2. Requirement Analysis

The input data is filtered and cleaned in the Map stage according to the rule above: only lines with more than 11 fields are kept.
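
For illustration, consider a hypothetical Apache-style access-log line (not taken from the actual web.log):

194.237.142.21 - - [18/Sep/2013:06:49:18 +0000] "GET /images/my.jpg HTTP/1.1" 304 0 "-" "Mozilla/4.0 (compatible;)"

Splitting this line on single spaces yields 13 fields, which is greater than 11, so the line would be kept; a line with 11 or fewer fields would be discarded.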

3. Implementation Code

(1) Write the LogMapper class

package com.atguigu.mapreduce.weblog;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LogMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

  Text k = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

    // 1 Get one line of input
    String line = value.toString();

    // 2 Parse the log line
    boolean result = parseLog(line, context);

    // 3 If the line is invalid, skip it
    if (!result) {
      return;
    }

    // 4 Set the output key
    k.set(line);

    // 5 Write out the valid line (no value is needed)
    context.write(k, NullWritable.get());
  }

  // Parse a log line; count valid and invalid lines with custom counters
  private boolean parseLog(String line, Context context) {

    // 1 Split the line on spaces
    String[] fields = line.split(" ");

    // 2 Lines with more than 11 fields are valid
    if (fields.length > 11) {
      // Custom counter: group "map", counter "true"
      context.getCounter("map", "true").increment(1);
      return true;
    } else {
      context.getCounter("map", "false").increment(1);
      return false;
    }
  }
}
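
As a side note, Hadoop counters can also be declared with a Java enum instead of string group and counter names, which avoids typos in the name strings. A minimal sketch of the same counting logic (the LogQuality enum is an assumption, not part of the original code):

// Enum-based counters: the group is named after the enum class,
// and each constant becomes a counter in that group
enum LogQuality { VALID, INVALID }

// inside parseLog(String line, Context context):
if (fields.length > 11) {
  context.getCounter(LogQuality.VALID).increment(1);
  return true;
} else {
  context.getCounter(LogQuality.INVALID).increment(1);
  return false;
}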

(2) Write the LogDriver class

package com.atguigu.mapreduce.weblog;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LogDriver {

  public static void main(String[] args) throws Exception {

    // The input and output paths need to be set according to the actual paths on your machine
    args = new String[] { "e:/input/inputlog", "e:/output1" };

    // 1 Get job information
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf);

    // 2 Set the jar by the driver class
    job.setJarByClass(LogDriver.class);

    // 3 Associate the Mapper
    job.setMapperClass(LogMapper.class);

    // 4 Set the final output key and value types
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);

    // Set the number of reduce tasks to 0: this is a map-only job
    job.setNumReduceTasks(0);

    // 5 Set the input and output paths
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // 6 Submit the job
    job.waitForCompletion(true);
  }
}
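
Because waitForCompletion(true) is called with verbose output, the job prints all counters to the console when it finishes, including the custom "map" group with its "true" and "false" counters, so the number of kept and discarded lines can be read directly from the job output. To run the driver on a cluster instead of locally, remove the hard-coded args line and submit with the standard hadoop jar command (the jar name and HDFS paths below are assumptions):

hadoop jar weblog.jar com.atguigu.mapreduce.weblog.LogDriver /input/inputlog /output1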

Summary

This concludes the article on applying Hadoop counters to data cleaning. I hope it provides useful reference for your study or work. Thank you for your support of 123WORDPRESS.COM.

