MapReduce
Farzad Nozarian
4/11/15 @AUT
Purpose
This document describes how to set up and configure a single-node Hadoop
installation so that you can quickly perform simple operations using Hadoop
MapReduce.
2
Supported Platforms
• GNU/Linux is supported as a development and production platform.
Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
• Windows is also a supported platform, but the following steps are for
Linux only.
3
Required Software
• Java™ must be installed. Recommended Java versions are described at
http://wiki.apache.org/hadoop/HadoopJavaVersions
• ssh must be installed and sshd must be running to use the Hadoop scripts
that manage remote Hadoop daemons.
• To get a Hadoop distribution, download a recent stable release from one
of the Apache Download Mirrors.
$ sudo apt-get install ssh
$ sudo apt-get install rsync
4
Prepare to Start the Hadoop Cluster
• Unpack the downloaded Hadoop distribution. In the distribution, edit the
file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
• Try the following command:
$ bin/hadoop
This will display the usage documentation for the hadoop script.
5
Prepare to Start the Hadoop Cluster (Cont.)
• Now you are ready to start your Hadoop cluster in one of the three
supported modes:
• Local (Standalone) Mode
• By default, Hadoop is configured to run in a non-distributed mode, as a single Java
process. This is useful for debugging (a quick run in this mode is sketched after this slide).
• Pseudo-Distributed Mode
• Hadoop can also be run on a single-node in a pseudo-distributed mode where each
Hadoop daemon runs in a separate Java process.
• Fully-Distributed Mode
6
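A quick way to verify the standalone mode (not part of the original slides; the example jar version depends on your Hadoop release) is to run one of the bundled examples against the unpacked configuration files:
$ mkdir input
$ cp etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
$ cat output/*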
Pseudo-Distributed Configuration
• etc/hadoop/core-site.xml:
• etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
7
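With these two files in place, the usual next steps from the Hadoop single-node setup guide (not shown in the original slides) are to enable passphraseless ssh to localhost, format HDFS, and start the daemons:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
On Hadoop 2.x the NameNode web interface is then typically available at http://localhost:50070/.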
MapReduce Execution Pipeline
8
Main components of the MapReduce
execution pipeline
• Driver:
• The main program that initializes a MapReduce job.
• It defines job-specific configuration, and specifies all of its components:
• input and output formats
• mapper and reducer
• use of a combiner
• use of a custom partitioner
• The driver can also get back the status of the job execution.
9
Main components of the MapReduce
execution pipeline
• Context:
• The driver, mappers, and reducers are executed in different processes, typically
on multiple machines.
• A context object is available at any point of MapReduce execution.
• It provides a convenient mechanism for exchanging required system and
job-wide information.
10
Main components of the MapReduce
execution pipeline
• Input data:
• This is where the data for a MapReduce task is initially stored
• This data can reside in HDFS, HBase, or other storage.
• InputFormat:
• This defines how input data is read and split.
• InputFormat is a class that defines the InputSplits that break input data into
tasks.
• It provides a factory for RecordReader objects that read the file.
• Several InputFormats are provided by Hadoop (a driver-side sketch follows this slide).
11
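For illustration only (these calls are not in the original deck), the InputFormat is selected in the driver; TextInputFormat is the default and delivers each line of the input files as a record, keyed by its byte offset:
// inside the WordCount driver, before job submission (illustrative sketch)
job.setInputFormatClass(TextInputFormat.class);        // the default: plain text, one record per line
FileInputFormat.addInputPath(job, new Path("input"));  // "input" is a placeholder path
// Other stock formats include KeyValueTextInputFormat and SequenceFileInputFormat.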
Main components of the MapReduce
execution pipeline
• InputSplit:
• An InputSplit defines a unit of work for a single map task in a MapReduce
program.
• The InputFormat (invoked directly by a job driver) defines the number of map
tasks that make up the mapping phase.
• Each map task is given a single InputSplit to work on.
12
Main components of the MapReduce
execution pipeline
• RecordReader:
• Although the InputSplit defines a data subset for a map task, it does not
describe how to access the data.
• The RecordReader class actually reads the data from its source, converts it into
key/value pairs suitable for processing by the mapper, and delivers them to the
map method.
• The RecordReader class is defined by the InputFormat.
13
Main components of the MapReduce
execution pipeline
• Mapper:
• Performs the user-defined work of the first phase of the MapReduce program.
• It takes input data in the form of a series of key/value pairs (k1, v1), which are
used for individual map execution.
• The map typically transforms the input pair into an output pair (k2, v2), which is
used as an input for shuffle and sort.
14
Main components of the MapReduce
execution pipeline
• Partition:
• A subset of the intermediate key space (k2, v2) produced by each individual
mapper is assigned to each reducer.
• These subsets (or partitions) are the inputs to the reduce tasks.
• Each map task may emit key/value pairs to any partition.
• The Partitioner class determines which reducer a given key/value pair will go to.
• The default Partitioner computes a hash value for the key and assigns the
partition based on this result (a custom Partitioner is sketched after this slide).
15
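As a minimal sketch (not part of the original WordCount), a custom Partitioner only has to override getPartition(); the default HashPartitioner effectively returns (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks:
//inside WordCount class (illustrative only: route words to reducers by their first character)
public static class FirstLetterPartitioner
    extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    if (key.getLength() == 0) return 0;
    // Text.charAt() returns the Unicode code point at the given position
    return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
  }
}
// registered in the driver with job.setPartitionerClass(FirstLetterPartitioner.class);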
Main components of the MapReduce
execution pipeline
• Shuffle:
• Once at least one map function on a given node has completed and the key
space has been partitioned, the runtime begins moving the intermediate outputs
from the map tasks to where they are required by the reducers.
• This process of moving map outputs to the reducers is known as shuffling.
• Sort:
• The set of intermediate key/value pairs for a given reducer is automatically
sorted by Hadoop to form keys/values (k2, {v2, v2,…}) before they are presented
to the reducer.
16
Main components of the MapReduce
execution pipeline
• Reducer:
• A reducer is responsible for executing the user-provided code for the second
phase of job-specific work.
• For each key assigned to a given reducer, the reducer’s reduce() method is called
once.
• This method receives a key, along with an iterator over all the values associated
with the key.
• The reducer typically transforms the input key/value pairs into
output pairs (k3, v3).
17
Main components of the MapReduce
execution pipeline
• OutputFormat:
• The responsibility of the OutputFormat is to define the location of the output
data and the RecordWriter used for storing the resulting data.
• RecordWriter:
• A RecordWriter defines how individual output records are written (see the driver-side sketch after this slide).
18
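For illustration (not in the original slides), TextOutputFormat is the default; its RecordWriter writes one key<TAB>value line per record into part files under the job's output directory:
// inside the WordCount driver (illustrative sketch)
job.setOutputFormatClass(TextOutputFormat.class);         // the default output format
FileOutputFormat.setOutputPath(job, new Path("output"));  // "output" is a placeholder path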
Let’s try it with a simple example!
Word Count
(the Hello World! for MapReduce, available in Hadoop sources)
We want to count the occurrences of every word
in a text file.
19
Driver
20
public class WordCount {
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
…
Job job = new Job(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
for (int i = 0; i < otherArgs.length - 1; ++i) {
FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
}
FileOutputFormat.setOutputPath(job, new Path(
otherArgs[otherArgs.length - 1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
} }
Mapper class
21
//inside WordCount class
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable>{
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
Reducer class
22
//inside WordCount class
public static class IntSumReducer
extends Reducer<Text,IntWritable,Text,IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
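Putting it together: assuming the three classes above are packaged into a jar (the jar name and HDFS paths below are placeholders, not from the original slides), a run against a pseudo-distributed cluster looks roughly like this:
$ bin/hdfs dfs -mkdir -p /user/you/input
$ bin/hdfs dfs -put some-text-file.txt /user/you/input
$ bin/hadoop jar wordcount.jar WordCount /user/you/input /user/you/output
$ bin/hdfs dfs -cat /user/you/output/part-r-00000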
References:
• hadoop.apache.org
• Professional Hadoop Solutions, by Boris Lublinsky, Kevin T. Smith, and
Alexey Yakubovich (Wiley)
23