Mapreduce Algorithms
 O'Reilly Strata Conference,
  London UK, October 1st 2012

             Amund Tveit
   amund@atbrox.com - twitter.com/atveit

 http://atbrox.com/about/ - twitter.com/atbrox
Background

● Been blogging about Mapreduce Algorithms in Academic
  Papers since Oct 2009 (1st Hadoop World)
   1. http://atbrox.com/2009/10/01/mapreduce-and-hadoop-academic-
      papers/
   2. http://atbrox.com/2010/02/12/mapreduce-hadoop-algorithms-in-
      academic-papers-updated/
   3. http://atbrox.com/2010/05/08/mapreduce-hadoop-algorithms-in-
      academic-papers-may-2010-update/
   4. http://atbrox.com/2011/05/16/mapreduce-hadoop-algorithms-in-
      academic-papers-4th-update-may-2011/
   5. http://atbrox.com/2011/11/09/mapreduce-hadoop-algorithms-in-
      academic-papers-5th-update-%E2%80%93-nov-2011/
● Atbrox works on IR-related Hadoop and cloud projects
● My prior experience: Google (software infrastructure and
  mobile news), PhD in Computer Science
TOC

1. Brief introduction to Mapreduce Algorithms

2. Overview of a few Recent Mapreduce Algorithms in Papers

3. In-Depth look at a Mapreduce Algorithm

4. Recommendations for Designing Mapreduce Algorithms

5. Appendix - 6th (partial) list of Mapreduce and Hadoop
Algorithms in Academic papers
1. Brief Introduction
           to
Mapreduce Algorithms
1.1 So What is Mapreduce?

Mapreduce is a concept, method and software for typically batch-based
large-scale parallelization. It is inspired by functional programming's
map() and reduce() functions.

Nice features of mapreduce systems include:
  ● reliably processing jobs even when machines die (vs. MPI, BSP)
  ● parallelization, e.g. thousands of machines for terasort and
    petasort

Mapreduce was invented by Google Fellows Jeff Dean and Sanjay Ghemawat.
1.2 Mapper function

Processes one key and value pair at a time, e.g.

 ● word count
    ○ map(key: uri, value: text):
       ■ for word in tokenize(value)
       ■ emit(word, 1) # found 1 occurrence of word

 ● inverted index
     ○ map(key: uri, value: text):
        ■ for word in tokenize(value)
        ■ emit(word, key) # word and uri pair
1.3 Reducer function

A reducer processes one key and all values that belong to it (as
received and aggregated from the map function), e.g.

 ● word count
    ○ reduce(key: word type, value: list of 1s):
        ■ emit(key, sum(value))

 ● inverted index
     ○ reduce(key: word type, value: list of URIs):
          ■ # perhaps transformation of value, e.g. encoding
          ■ emit(key, value) # e.g. to a distr. hash table
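
A minimal, runnable Python sketch of the word-count mapper and reducer
above, with the shuffle/sort phase simulated in memory (run_mapreduce and
the sample documents are illustrative names, not part of any framework):

    from collections import defaultdict

    def map_wordcount(uri, text):
        # emit (word, 1) for every token in the document
        for word in text.split():
            yield word, 1

    def reduce_wordcount(word, counts):
        # sum all the 1s received for this word
        yield word, sum(counts)

    def run_mapreduce(documents, mapper, reducer):
        # simulate the shuffle: group mapper output by key
        groups = defaultdict(list)
        for uri, text in documents.items():
            for key, value in mapper(uri, text):
                groups[key].append(value)
        # run the reducer once per key, in sorted key order
        for key, values in sorted(groups.items()):
            yield from reducer(key, values)

    docs = {"doc1": "to be or not to be", "doc2": "to do is to be"}
    print(dict(run_mapreduce(docs, map_wordcount, reduce_wordcount)))
    # {'be': 3, 'do': 1, 'is': 1, 'not': 1, 'or': 1, 'to': 4}

The inverted-index job has the same shape: only the emitted value (uri
instead of 1) and the reducer aggregation change.
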
1.4 Mapreduce Pan Patterns
1.4 Pattern 1 - Data Reduction




  ● Word Count
  ● Machine Learning (e.g. training models)
  ● Probably the most common way of using
    mapreduce
1.5 Pattern 2 - Transformation




  ● Sorting (e.g. Terasort and Petasort)
1.6 Pattern 3 - Data Increase




  ● Decompression
  ● Annotation, e.g. traditional indexing pipeline
2. Examples of recently published use
   and development of Mapreduce
             Algorithms
2.1 Machine Learning - ILP

 ● Problem: Automatically find (induce) rules from examples
   and knowledge base

 ● Paper:
    ○ Data and Task Parallelism in ILP using
      Mapreduce (IBM Research India et al.)


This follows Pan Pattern 1 - Data Reduction - output is a set of
rules from a (typically larger) set of examples and knowledge
base
2.1 Machine Learning - ILP - II

  Example Input:




  Example Result:
2.2 Finance - Trading

Problem: Optimize Algorithmic Trading

Paper:
    ○ Optimizing Parameters of Algorithm Trading Strategies
      using Mapreduce (EMC-Greenplum Research China et al.)


This follows Pan Pattern 1 - Data Reduction - output is the set
of best parameter sets for algorithmic trading. Note that during the
map phase there is an increase in data, i.e. the creation of
permutations of possible parameters.
2.3 Software Engineering

Problem: Automatically generate unit test code to increase test
coverage and offload developers

Paper:
    ○ A Parallel Genetic Algorithm Based on Hadoop
       Mapreduce for the Automatic Generation of JUnit Test
       Suites (University of Salerno, Italy)

This (probably) follows Pan Patterns 1, 2 and 3: presumably a fixed
number of chromosomes (i.e. transformation), collections of unit tests
being evolved, and the combined length of the evolved unit tests might
increase or decrease compared to the original input.
2.3 Software Engineering - II
  Figure from "EvoTest: Test Case Generation using
  Genetic Programming and Software Analysis"
3. In-Depth look at a
Mapreduce Algorithm
3.1 The Challenge

● Task:
   ○ Build a low-latency key-value store for disk or SSD

● Features:
   ○ Low startup time
      ■ i.e. no/little pre-loading of (large) caches to memory
   ○ Prefix-search
      ■ i.e. support searching for both all prefixes of a key as
        well as the entire key
   ○ Low-latency
      ■ i.e. reduce the number of disk/SSD seeks, e.g. by
        increasing the probability of disk cache hits
   ○ Static/Immutable data - write once, read many
3.2 A few Possible Ways

 1. Binary Search or
    Interpolation Search
    within a file of sorted keys
    and then look up value
    ~ lg(N) or lg(lg(N)) (sketched after this list)

 2. Prefix-algorithms mapped
    to file, e.g.
      1. Trie,
      2. Ternary search tree
      3. Patricia Tree
     ~ O(k)
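
A toy in-memory illustration of option 1 using Python's bisect module;
a real store would binary-search within a sorted file on disk rather
than a list, and the example keys and values here are assumptions for
illustration only:

    import bisect

    def lookup(sorted_keys, values, key):
        # binary search over the sorted keys (~lg(N) comparisons),
        # then fetch the value stored at the same index
        i = bisect.bisect_left(sorted_keys, key)
        if i < len(sorted_keys) and sorted_keys[i] == key:
            return values[i]
        return None

    def prefix_matches(sorted_keys, prefix):
        # prefix search: binary-search to the first candidate, then
        # scan forward while the prefix still matches
        i = bisect.bisect_left(sorted_keys, prefix)
        while i < len(sorted_keys) and sorted_keys[i].startswith(prefix):
            yield sorted_keys[i]
            i += 1

    keys = ["romane", "romanus", "romulus", "rubens", "ruber", "rubicon"]
    vals = ["one", "two", "three", "four", "five", "six"]
    print(lookup(keys, vals, "romulus"))        # three
    print(list(prefix_matches(keys, "rub")))    # ['rubens', 'ruber', 'rubicon']
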
3.3 Overall Approach

1. Scale - divide key,value data into shards

2. Build patricia tree per shard and store all key, values for
   later

3. Prepare trees to have placeholder (short) value for each key

4. Flatten each patricia tree to a disk-friendly and byte-aligned
   format fit for random access

5. Recalculate file addresses in each patricia tree to be able to
   store the actual values

6. Create final patricia tree with values on disk
3.4 Split data with mapper

 1. Scale - divide key,value data into shards

map(key, value):
 # e.g. simple - hash(first char), or use a classifier
 # personalization etc.
 shard_key = shard_function(key, value)
 out_value = (key,value)
 emit shard_key, out_value
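
One possible shard_function is a plain hash partitioner, sketched below;
NUM_SHARDS and the use of crc32 are illustrative assumptions (the slide
also mentions using a classifier, which is not shown here):

    import zlib

    NUM_SHARDS = 64  # illustrative; chosen to match the desired shard count

    def shard_function(key, value):
        # stable hash of the key (crc32), so every mapper process sends
        # the same key to the same shard; a classifier or personalization
        # signal could be plugged in here instead
        return zlib.crc32(key.encode("utf-8")) % NUM_SHARDS

    def map_split(key, value):
        shard_key = shard_function(key, value)
        yield shard_key, (key, value)
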
3.5 Init and run reduce()

2. Build one patricia tree per (reduce) shard


reduce_init():        # called once per reducer before it starts
  self.patricia = Patricia()
  self.tempkeyvaluestore = TempKeyValueStore()

reducer(shard_key, list of key_value pairs):
  for (key, value) in list of key_value pairs:
    self.tempkeyvaluestore[key] = value
3.6 Reducer cont.

3. Prepare trees to have placeholder values (=key) for each key


reduce_final():       # called once per reducer, after all reduce() calls
  for key, value in self.tempkeyvaluestore:
    self.patricia.add(key, key) # key == value for now
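
The Patricia() class itself is not shown on the slides; below is one
possible toy version, kept as nested dicts so it matches the "dict of
dicts" shape the flattening step in 3.7 refers to (edge handling is
simplified, illustration only):

    from os.path import commonprefix

    class Patricia:
        # toy radix/patricia trie: node = {edge_label: (child_node, value_or_None)}
        def __init__(self):
            self.root = {}

        def add(self, key, value):
            node = self.root
            while True:
                for edge in list(node):
                    shared = commonprefix([edge, key])
                    if not shared:
                        continue
                    child, stored = node[edge]
                    if shared == edge:
                        if shared == key:          # key ends exactly on this edge
                            node[edge] = (child, value)
                            return
                        node, key = child, key[len(shared):]  # descend with suffix
                        break
                    # partial overlap: split the edge at the shared prefix
                    del node[edge]
                    mid = {edge[len(shared):]: (child, stored)}
                    if shared == key:
                        node[shared] = (mid, value)
                    else:
                        mid[key[len(shared):]] = ({}, value)
                        node[shared] = (mid, None)
                    return
                else:
                    node[key] = ({}, value)        # no shared prefix: new leaf edge
                    return

    p = Patricia()
    for k, v in [("romane", "one"), ("romanus", "two"), ("romulus", "three")]:
        p.add(k, v)
    # p.root == {'rom': ({'an': ({'e': ({}, 'one'), 'us': ({}, 'two')}, None),
    #                     'ulus': ({}, 'three')}, None)}
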
3.7 Flatten patricia tree for disk

4. Flatten each patricia tree to a disk-friendly and byte-aligned
format fit for random access

reduce_final():         # continued from 3.
  # num 0s below constrains addressable size of shard file
  self.firstblockaddress = "00000000000000"
  # create mapping from dict of dicts to a linear file
  self.flatten_patricia(self.patricia, parent=self.firstblockaddress)

  # steps 5 and 6: recalculate file addresses, then write the tree with actual values
  self.recalculate_patricia_tree_for_actual_values()
  self.output_patricia_tree_with_actual_values()
3.8 Mapreduce Approach - 5
Generated file format below, and
corresponding patricia tree to the
right



00000000038["", {"r": "00000000038"}]
00000000060["", {"om": "00000000098", "ub": "00000000290"}]
00000000062["", {"ulus": "00000000265", "an": "00000000160"}]
00000000059["", {"e": "00000000219", "us": "00000000242"}]
00000000023["one", ""]
00000000023["two", ""]
00000000025["three", ""]
00000000059["", {"ic": "00000000456", "e": "00000000349"}]
00000000059["", {"r": "00000000432", "ns": "00000000408"}]
00000000024["four", ""]
00000000024["five", ""]
00000000063["", {"on": "00000000519", "undus": "00000000542"}]
00000000023["six", ""]
00000000025["seven", ""]
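
The offsets in the listing suggest each record is an 11-digit length
followed by a JSON pair of (value, children), with records concatenated
back to back. Assuming that layout (inferred from the example, not
stated on the slides), a lookup against the flattened file could look
like this:

    import json

    ADDR_WIDTH = 11   # digits per length/offset field, as in the listing above

    def lookup(path, key):
        # follow edge labels from the root record at offset 0, consuming the
        # key as we go; child pointers are absolute offsets into the same file
        with open(path, "rb") as f:
            offset, remaining = 0, key
            while True:
                f.seek(offset)
                length = int(f.read(ADDR_WIDTH))
                value, children = json.loads(f.read(length - ADDR_WIDTH))
                if not remaining:
                    return value or None           # key fully consumed
                if not isinstance(children, dict):
                    return None                    # hit a leaf before the key ended
                for edge, child_offset in children.items():
                    if remaining.startswith(edge):
                        offset = int(child_offset)
                        remaining = remaining[len(edge):]
                        break
                else:
                    return None                    # no edge matches the key

Each tree level then costs at most one seek, which is the low-latency
property listed in 3.1.
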
4. Recommendations for Designing
      Mapreduce Algorithms
Mapreduce Patterns

Map() and Reduce() methods typically follow patterns; a recommended
way of representing such patterns is:

extracting and generalizing code skeleton fingerprints based on:
 1. loops: e.g. "do-while", "while", "for", "repeat-until" => "loop"
 2. conditions: e.g. "if", "exception" and "switch" => "condition"
 3. emits: e.g. outputs from map() => reduce() or IO => "emit"
 4. emit data types: e.g. string, number, list (if known)

map(key, value):
  loop    # over tokenized value
    emit  # key = word, val = 1 or uri

reduce(key, values):
  emit    # key = word, value = sum(values) or list of URIs
General Mapreduce Advice

Performance
 1. IO/moving data is expensive - use compression and aggr.
 2. Use combiners, i.e. "reducer afterburners" for mappers (see the
    sketch after this list)
 3. Look out for skewness in key distribution, e.g. Zipf's law
 4. Use the right programming language for the task
 5. Balance work between mappers and reducers - http://atbrox.com/2010/02/08/parallel-machine-learning-for-hadoopmapreduce-a-python-example/

Cost, Readability & Maintainability
 6. Mapreduce = right tool? (seq./parallel/iterative/realtime)
 7. E.g. Crunch, Pig, Hive instead of full Mapreduce code?
 8. Split job into sequence of mapreduce jobs, e.g. with
    cascading, mrjob etc.
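
Point 2 above, a combiner for word count, sketched in Python. In Hadoop
the combiner is often just the reducer class reused, since summing is
associative and commutative; the function names here are illustrative:

    from collections import Counter

    def map_wordcount(uri, text):
        # without a combiner, one (word, 1) pair per token is shuffled
        for word in text.split():
            yield word, 1

    def combine(pairs):
        # combiner ("reducer afterburner"): pre-aggregate locally so only one
        # (word, partial_count) per distinct word leaves the mapper machine
        partial = Counter()
        for word, count in pairs:
            partial[word] += count
        yield from partial.items()

    def reduce_wordcount(word, counts):
        # the reducer is unchanged; it just receives fewer, larger counts
        yield word, sum(counts)
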
The End


● Mapreduce Paper Trends (from 2009 => 2012), roughly:
   ○ Increased use of mapreduce jobflows, i.e. more than one
     mapreduce in a sequence and also in various types of
     iterations
       ■ e.g. the Algorithmic Trading earlier
   ○ Increased amount of papers published related to
     semantic web (e.g. RDF) and AI reasoning/inference
   ○ Decreased (relative) amount of IR and Ads papers
APPENDIX
   List of Mapreduce and Hadoop
Algorithms in Academic Papers - 6th
      version (partial subset of
        forthcoming blogpost)
AI: Reasoning & Semantic Web

1. Reasoning with Fuzzy-cL+Ontologies Using Mapreduce
2. WebPIE: A Web-scale parallel inference engine using
   Mapreduce
3. Towards Scalable Reasoning over Annotated RDF Data
   Using Mapreduce
4. Reasoning with Large Scale Ontologies in Fuzzy pD* Using
   Mapreduce
5. Scalable RDF Compression with Mapreduce
6. Towards Parallel Nonmonotonic Reasoning with Billions of
   Facts
Biology & Medicine

1. A Mapreduce-based Algorithm for Motif Search
2. A MapReduce Approach for Ridge Regression in
   Neuroimaging Genetic Studies
3. Fractal Mapreduce decomposition of sequence alignment
4. Cloud-enabling Sequence Alignment with Hadoop
   Mapreduce: A Performance Analysis

AI Misc.

A MapReduce based Ant Colony Optimization approach to
combinatorial optimization problems
Machine Learning

1. An efficient Mapreduce Algorithm for Parallelizing Large-
   Scale Graph Clustering
2. Accelerating Bayesian Network Parameter Learning Using
   Hadoop and Mapreduce
3. The Performance Improvements of SPRINT Algorithm
   Based on the Hadoop Platform

Graphs & Graph Theory
4. Large-Scale Graph Biconnectivity in MapReduce
5. Parallel Tree Reduction on MapReduce
Datacubes & Joins
1. Data Cube Materialization and Mining Over Mapreduce
2. Fuzzy joins using Mapreduce
3. Efficient Distributed Parallel Top-Down Computation of
   ROLAP Data Cube Using Mapreduce
4. V-smart-join: A scalable MapReduce Framework for all-pair
   similarity joins of multisets and vectors
5. Data Cube Materialization and Mining over MapReduce

Finance & Business
6. Optimizing Parameters of Algorithm Trading Strategies
   using Mapreduce
7. Using Mapreduce to scale events correlation discovery for
   business processes mining
8. Computational Finance with Map-Reduce in Scala
Mathematics & Statistics

1. GigaTensor: scaling tensor analysis up by 100 times -
   algorithms and discoveries
2. Fast Parallel Algorithms for Blocked Dense Matrix
   Multiplication on Shared Memory Architectures
3. Mr. LDA: A Flexible Large Scale Topic Modelling Package
   using Variational Inference in MapReduce
4. Matrix chain multiplication via multi-way algorithms in
   MapReduce
