Introduction to Hadoop and
MapReduce
Csaba Toth
GDG Fresno Meeting
Date: February 6th, 2014
Location: The Hashtag, Fresno
Agenda
• Big Data
• A little history
• Hadoop
• MapReduce
• Demo: Hadoop with Google Compute Engine and Google Cloud Storage
Big Data
• Wikipedia: “collection of data sets so large and complex that it
becomes difficult to process using on-hand database management
tools or traditional data processing applications”
• Examples: (Wikibon - A Comprehensive List of Big Data Statistics)
– 100 terabytes of data are uploaded to Facebook every day
– Facebook stores, processes, and analyzes more than 30 petabytes of user-generated data
– Twitter generates 12 terabytes of data every day
– LinkedIn processes and mines petabytes of user data to power the "People You May Know" feature
– YouTube users upload 48 hours of new video content every minute of the day
– Decoding the human genome used to take 10 years; now it can be done in 7 days
Big Data characteristics
• Three Vs: Volume, Velocity, Variety
• Sources:
– Science, sensors, social networks, log files
– Public data stores, data warehouse appliances
– Network and in-stream monitoring technologies
– Legacy documents
• Main problems:
– Storage Problem
– Money Problem
– Consuming and processing the data
A Little History
Two seminal papers:
• “The Google File System” - October 2003
http://labs.google.com/papers/gfs.html
– Describes a scalable, distributed, fault-tolerant file system tailored for data-intensive applications that runs on inexpensive commodity hardware and delivers high aggregate performance

• “MapReduce: Simplified Data Processing on Large Clusters”
- April 2004 http://queue.acm.org/detail.cfm?id=988408
– Describes a programming model and an implementation for
processing large data sets.
1. A map function that processes a key/value pair to generate a set of intermediate key/value pairs
2. A reduce function that merges all intermediate values associated with the same intermediate key
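These two functions can be sketched in a few lines of Python. This is a hypothetical single-process toy of the programming model, not Hadoop's distributed implementation; the function names are illustrative:

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """Toy single-process sketch of the MapReduce model."""
    intermediate = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):       # 1. map step
            intermediate[key].append(value)     # shuffle: group values by key
    return {key: reduce_fn(key, values)         # 2. reduce step
            for key, values in intermediate.items()}

# Word count expressed with the two user-supplied functions:
def mapper(line):
    for word in line.split():
        yield word, 1

def reducer(word, counts):
    return sum(counts)

print(map_reduce(["a b a", "b c"], mapper, reducer))
# {'a': 2, 'b': 2, 'c': 1}
```

In the real system, the shuffle (the grouping of intermediate values by key) and the parallel execution of map and reduce tasks are what the framework provides; the user writes only the two functions.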
Hadoop
• Hadoop is an open-source software framework
that supports data-intensive distributed
applications.
• It is written in Java and runs on JVMs
• Named after its creator's (Doug Cutting, then at Yahoo) son's toy elephant
• Hadoop manages a cluster of commodity-hardware computers, composed of a single master node and multiple worker nodes
Hadoop vs RDBMS
                        Hadoop / MapReduce               RDBMS
Size of data            Petabytes                        Gigabytes
Integrity of data       Low                              High (referential, typed)
Data schema             Dynamic                          Static
Access method           Batch                            Interactive and batch
Scaling                 Linear                           Nonlinear (worse than linear)
Data structure          Unstructured                     Structured
Normalization of data   Not required                     Required
Query response time     Has latency (batch processing)   Can be near immediate
MapReduce
• Hadoop leverages the programming model of
map/reduce. It is optimized for processing large data
sets.
• MapReduce is an essential technique to do distributed
computing on clusters of computers/nodes.
• The goal of MapReduce is to break huge data sets into
smaller pieces, distribute those pieces to various
worker nodes, and process the data in parallel.
• Hadoop leverages a distributed file system to store the
data on various nodes.
MapReduce
• It is about two functions: map and reduce
1. Map Step:
– It divides the problem into smaller sub-problems. A master node distributes the work to worker nodes; each worker node performs its piece and returns the result to the master node.

2. Reduce Step:
– Once the master has collected the results from the worker nodes, the reduce step takes over and combines them, forming the answer and, ultimately, the output.
MapReduce – Map step
• There is a master node and many slave nodes.
• The master node takes the input, divides it into smaller sub-problems, and distributes them to the worker (slave) nodes. A worker node may do this again in turn, leading to a multi-level tree structure.
• Each worker/slave node processes its smaller problem and passes the answer back to its master node.
• Each mapping operation is independent of the others, so all maps can be performed in parallel.
MapReduce – Reduce step
• The master node then collects the answers
from the worker or slave nodes. It then
aggregates the answers and creates the
needed output, which is the answer to the
problem it was originally trying to solve.
• Reducers can also perform the reduction phase in parallel. That is how the system can process petabytes in a matter of hours.
Map, Shuffle, and Reduce

https://mm-tom.s3.amazonaws.com/blog/MapReduce.png
Word count

http://blog.jteam.nl/wp-content/uploads/2009/08/MapReduceWordCountOverview1.png
Hadoop architecture
• Job Tracker
• Task Tracker
• Name Node
• Data Node
Figures
• Following: some figures from the book
Hadoop: The Definitive Guide, 3rd Edition
A client reading data from HDFS
A client writing data to HDFS
Network distance in Hadoop
MapReduce data flow with a single
reduce task
MapReduce data flow with multiple
reduce tasks
Hadoop architecture
• Presentation layer: Web browser (JS), data mining (Pegasus, Mahout), index/search (Lucene), DB drivers (Hive driver)
• Advanced query engine: Hive, Pig
• Computing layer: MapReduce
• Storage layer: HDFS
• Data integration layer: Flume (log data), Sqoop (RDBMS)
Demo
• Google Compute Engine + Google Cloud Storage
• Using Ubuntu as a remote control host
• Following the tutorial of:
– https://github.com/GoogleCloudPlatform/solutions-google-compute-engine-cluster-for-hadoop
– Hadoop on Google Compute Engine for Processing Big Data:
https://www.youtube.com/watch?v=se9vV8eIZME

• The example Hadoop job is an advanced version of word count in Perl or Python: the words are sorted by length and then alphabetically
• Also showing the Google Developer Tools web interface
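The demo job's output ordering can be reproduced with a composite sort key. A minimal sketch (not the actual demo script; the function name is illustrative), counting words and ordering the result by word length, then alphabetically:

```python
from collections import Counter

def count_words_sorted(text):
    """Count word occurrences, then order the result by word length
    and alphabetically within each length (as in the demo job)."""
    counts = Counter(text.split())
    return sorted(counts.items(), key=lambda kv: (len(kv[0]), kv[0]))

print(count_words_sorted("big data on big clusters"))
# [('on', 1), ('big', 2), ('data', 1), ('clusters', 1)]
```

In the actual MapReduce job the counting happens in the reducers; the length-then-alphabetical ordering is a post-processing (or sort-key) step over the reducer output.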
References
• Google’s tutorial (see the GitHub and YouTube links in the Demo section)
• Tom White: Hadoop: The Definitive Guide, 3rd Edition,
Yahoo Press
• Lynn Langit’s various presentations and YouTube videos
• Dattatrey Sindol: Big Data Basics - Part 1 - Introduction
to Big Data
• Bruno Terkaly’s presentations (for example Hadoop on
Azure: Introduction)
• Daniel Jebaraj: Ignore HDInsight at Your Own Peril:
Everything You Need to Know
Thanks for your attention!
Editor's Notes

  • #4: Big Data consists of very large volumes of heterogeneous data that is being generated, often at high speed. These data sets cannot be managed and processed using the traditional data management tools and applications at hand; Big Data requires a new set of tools, applications, and frameworks to process and manage the data.
  • #5: However, it is not just about the total size of the data (volume); it is also about the velocity (how rapidly the data is arriving) and the structure (what is it, and does it have variations?). Sources in more detail:
    – Science: scientists are regularly challenged by large data sets in many areas, including meteorology, genomics, connectomics, complex physics simulations, and biological and environmental research.
    – Sensors: data sets grow in size in part because they are increasingly gathered by ubiquitous information-sensing mobile devices, aerial sensory technologies (remote sensing), software logs, cameras, microphones, radio-frequency identification readers, and wireless sensor networks.
    – Social networks: Facebook, LinkedIn, Yahoo, Google.
    – Social influencers: blog comments, Yelp likes, Twitter, Facebook likes, Apple's App Store, Amazon, ZDNet, etc.
    – Log files: computer and mobile device log files, web site tracking information, application logs, and sensor data; also sensors from vehicles, video games, cable boxes and, soon, household appliances.
    – Public data stores: Microsoft Azure Marketplace/DataMarket, The World Bank, SEC/EDGAR, Wikipedia, IMDb.
    – Data warehouse appliances: Teradata, IBM Netezza, EMC Greenplum; these hold internal, transactional data that is already prepared for analysis.
    – Network and in-stream monitoring technologies: packets in TCP/IP, email, etc.
    – Legacy documents: archives of statements, insurance forms, medical records, and customer correspondence.
    The three Vs in more detail:
    – Volume refers to the size of the data we are working with. With the advancement of technology and the invention of social media, the amount of data is growing very rapidly. It is spread across different places, in different formats, in volumes ranging from gigabytes to terabytes, petabytes, and beyond. Today data is generated not only by humans; the amount generated by machines surpasses human-generated data.
    – Velocity refers to the speed at which the data is being generated. Different applications have different latency requirements, and in today's competitive world decision makers want the necessary data and information in the least time possible, generally in near real time (or real time in certain scenarios). Examples include trading and stock exchange data, tweets on Twitter, and status updates, likes, and shares on Facebook.
    – Variety refers to the different formats in which the data is being generated and stored. Apart from traditional flat files, spreadsheets, and relational databases, a lot of unstructured data is stored in the form of images, audio files, video files, web logs, sensor data, and many others. Until the advancements in Big Data technologies, the industry did not have powerful, reliable tools that could work with such voluminous unstructured data. To stay competitive, organizations are forced to consume data generated both inside and outside the enterprise (clickstream data, social media, etc.), not only structured data from enterprise databases and warehouses.
  • #6: "The Google File System": a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. http://research.google.com/archive/gfs.html
    "MapReduce: Simplified Data Processing on Large Clusters": a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. http://research.google.com/archive/mapreduce.html