Hadoop and HBase on the Cloud:
A Case Study on Performance and Isolation.
Konstantin V. Shvachko Jagane Sundar
February 26, 2013
Authors
 Founders of AltoStor and AltoScale
 Jagane: WANdisco, CTO and VP of Engineering for Big Data
– Director of Hadoop Performance and Operability at Yahoo!
– Big data, cloud, virtualization, and networking experience
 Konstantin: WANdisco, Chief Architect
– Hadoop and HDFS at Yahoo! and eBay
– Efficient data structures and algorithms for large-scale distributed storage systems
– Giraffa: a file system with distributed metadata and data, built on HDFS and HBase; hosted on Apache Extras
What is Apache Hadoop
A reliable, scalable, high-performance distributed computing system
 The Hadoop Distributed File System (HDFS) – reliable storage layer
– NameNode – namespace and block management
– DataNodes – block replica containers
 MapReduce – distributed computation framework
– Simple computational model
– JobTracker – job scheduling, resource management, lifecycle coordination
– TaskTracker – task execution module
 Analysis and transformation of very large amounts of data using commodity servers
[Diagram: the NameNode and JobTracker coordinate slave nodes, each running a DataNode that stores blocks and a TaskTracker that executes tasks]
What is Apache HBase
A distributed key-value store for real-time access to semi-structured data
 Table: big, sparse, loosely structured
– Collection of rows, sorted by row key
– Rows can have an arbitrary number of columns
 Tables are split horizontally into regions
– Dynamic table partitioning
– RegionServers serve regions to applications
 Columns are grouped into column families
– Vertical partitioning of tables
 Distributed cache: regions are loaded into nodes’ RAM
– Real-time access to data
[Diagram: the HBase Master, NameNode, and JobTracker coordinate slave nodes, each running a DataNode, TaskTracker, and RegionServer]
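The sorted-by-row-key, horizontally partitioned layout above is what makes region lookup cheap. A minimal sketch (not HBase's actual API; region boundaries and server names here are hypothetical) of how a sorted list of region start keys maps a row key to its region and RegionServer:

```python
import bisect

# Hypothetical region layout: each region covers [start_key, next_start_key)
# and is served by one RegionServer.
region_start_keys = ["", "row_d", "row_m", "row_t"]  # 4 regions, sorted
region_servers = ["rs1", "rs2", "rs3", "rs1"]        # region -> serving node

def locate_region(row_key):
    # Find the last region whose start key is <= row_key (binary search,
    # possible only because rows are kept sorted by key).
    i = bisect.bisect_right(region_start_keys, row_key) - 1
    return i, region_servers[i]

print(locate_region("row_k"))  # lands in the region starting at "row_d"
```

Because lookup is a binary search over start keys, splitting a hot region only adds one boundary; clients never scan the whole table to find a row.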
What is MapReduce
A parallel computational model and distributed framework
[Diagram: map tasks count consonants (C) and vowels (V) in the words "dogs", "like", and "cats", emitting pairs such as (C, 3) and (V, 1); the shuffle groups the pairs by key, and the reduce step sums them to (C, 8) and (V, 4)]
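The diagram's consonant/vowel example can be simulated end to end. This is an illustrative single-process sketch of the model, not Hadoop's API: in a real job the mappers, shuffle, and reducers run on different nodes.

```python
from collections import defaultdict

VOWELS = set("aeiou")

def map_phase(word):
    # Each mapper emits (key, count) pairs for its input word.
    c = sum(1 for ch in word if ch not in VOWELS)
    return [("C", c), ("V", len(word) - c)]

def shuffle(pairs):
    # The framework groups all emitted values by key.
    groups = defaultdict(list)
    for key, n in pairs:
        groups[key].append(n)
    return groups

def reduce_phase(groups):
    # Each reducer aggregates the values for one key.
    return {key: sum(vals) for key, vals in groups.items()}

pairs = [p for w in ["dogs", "like", "cats"] for p in map_phase(w)]
result = reduce_phase(shuffle(pairs))
print(result)  # {'C': 8, 'V': 4}
```

The totals match the diagram: "dogs" contributes (C, 3), (V, 1); "like" (C, 2), (V, 2); "cats" (C, 3), (V, 1).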
What is the Problem
Low average CPU utilization on Hadoop clusters
 I/O utilization
– Can run at the speed of the spinning drives
– Examples: DFSIO, Terasort (well tuned)
 Network utilization – optimized by design
– Data locality: tasks execute on the nodes where their input data resides, so there are no massive transfers
– A block replication factor of 3 requires two data transfers per block
– Map tasks write transient output locally
– The shuffle requires cross-node transfers
 CPU utilization
1. I/O-bound workloads preclude using more CPU time
2. Cluster provisioning: a tradeoff between peak-load performance and average utilization
CPU Load
Two quadrillionth (10^15) digit of π is 0
 Computation of π
– A pure CPU workload: no input or output data
– An enormous number of FFTs multiplying very large numbers
– The record π run over-heated the datacenter
 A well-tuned Terasort is CPU intensive
 Compression – marginal utilization gain
 Production clusters run cold
1. I/O-bound workloads
2. Conservative provisioning of cluster resources to meet strict SLAs
Cluster Provisioning Dilemma
Rule of thumb:
 72 GB total RAM per node
– 4 GB – DataNode
– 2 GB – TaskTracker
– 16 GB – RegionServer
– 2 GB – per individual task: 25 task slots (17 maps and 8 reduces)
 Average utilization vs. peak-load performance
– Oversubscription (28 task slots)
– Better average utilization
– MapReduce tasks can starve the HBase RegionServer
 Better isolation of resources → more aggressive resource allocation
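The memory budget above is worth checking arithmetically; this back-of-the-envelope sketch uses only the numbers from the slide:

```python
# Per-node memory budget from the slide (GB).
TOTAL_RAM = 72
datanode, tasktracker, regionserver = 4, 2, 16
per_task = 2

daemons = datanode + tasktracker + regionserver  # 22 GB for long-lived daemons
task_slots = (TOTAL_RAM - daemons) // per_task   # 25 slots (17 maps + 8 reduces)
print(task_slots)

# Oversubscribing to 28 slots overcommits RAM: 22 + 28*2 = 78 GB on a 72 GB node,
# which is why MapReduce tasks can starve the RegionServer under peak load.
oversubscribed = daemons + 28 * per_task
print(oversubscribed - TOTAL_RAM)  # GB of overcommit
```

The 6 GB shortfall under oversubscription is exactly the dilemma: better average utilization at the cost of isolation.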
Increasing IO Rate
With non-spinning storage:
 Goal: eliminate disk I/O contention
 Faster non-volatile storage devices improve I/O performance
– Advantage on random reads
– Similar performance for sequential I/O
 More RAM: HBase caching
What is DFSIO
A standard Hadoop benchmark measuring HDFS performance
 The DFSIO benchmark measures average throughput of I/O operations
– Write
– Read (sequential)
– Append
– Random Read (new)
 Implemented as a MapReduce job
– Map: every mapper performs the same operation (write or read) and measures its throughput
– Single reducer: aggregates the performance results
 Random reads (MAPREDUCE-4651)
– Random Read DFSIO chooses a random offset for each read
– Backward Read DFSIO reads files in reverse order
– Skip Read DFSIO seeks ahead after every portion read
– All three avoid read-ahead buffering
– All three random-read variants produce similar results
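The three random-read variants differ only in how they pick the next offset. A hedged sketch of the access patterns as described above (offsets in units of the single-read size, e.g. 1 MB; function names are ours, not DFSIO's):

```python
import random

def random_read_offsets(n_units, reads, seed=0):
    # Random Read: pick an arbitrary offset for every read.
    rng = random.Random(seed)
    return [rng.randrange(n_units) for _ in range(reads)]

def backward_read_offsets(n_units):
    # Backward Read: walk the file from the last unit to the first.
    return list(range(n_units - 1, -1, -1))

def skip_read_offsets(n_units, skip):
    # Skip Read: read one unit, then seek ahead `skip` units, repeatedly.
    return list(range(0, n_units, skip + 1))

print(backward_read_offsets(4))  # [3, 2, 1, 0]
print(skip_read_offsets(10, 2))  # [0, 3, 6, 9]
```

All three patterns break sequential access, so the OS read-ahead buffer cannot hide seek latency; that is why they expose the random-read gap between disks and flash.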
Benchmarking Environment
DFSIO
 Four-node cluster: Hadoop 1.0.3, HBase 0.92.1
– 1 master node: NameNode, JobTracker
– 3 slave nodes: DataNode, TaskTracker
 Node configuration
– Intel 8-core processor with hyper-threading
– 24 GB RAM
– Four 1 TB 7200 rpm SATA drives
– 1 Gbps network interfaces
 DFSIO dataset
– 72 files of 10 GB each
– Total data read: 7 GB
– Single read size: 1 MB
– Concurrent readers: from 3 to 72
Random Reads
Increasing load with random reads
[Chart: aggregate throughput (MB/sec, 0–1800) vs. number of concurrent readers (3, 12, 24, 48, 72), comparing disks and flash]
What is YCSB
Yahoo! Cloud Serving Benchmark
 YCSB defines a mix of read/write operations and measures their latency and throughput
– Compares different databases: relational and NoSQL
– Data is represented as a table of records with a fixed number of fields
– A unique key identifies each record
 Main operations
– Insert: insert a new record
– Read: read a record
– Update: update a record by replacing the value of one field
– Scan: scan a random number of consecutive records, starting at a random record key
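A workload in YCSB is essentially a probability mix over these operations. This is an illustrative sketch of how such a client could pick its next operation, not YCSB's actual implementation; the workload names and percentages mirror the workload table used in this study:

```python
import random

# Operation mixes from the YCSB workloads table (percent of operations).
WORKLOADS = {
    "data_load": {"insert": 100},
    "reads_with_inserts": {"insert": 55, "read": 45},
    "short_scans_E": {"insert": 5, "scan": 95},
}

def next_op(workload, rng):
    # Sample one operation according to the workload's percentages.
    r = rng.uniform(0, 100)
    acc = 0
    for op, pct in WORKLOADS[workload].items():
        acc += pct
        if r < acc:
            return op
    return op  # numerical edge case: fall back to the last operation

rng = random.Random(42)
ops = [next_op("reads_with_inserts", rng) for _ in range(10_000)]
print(ops.count("insert") / len(ops))  # close to 0.55
```

The client issues this stream against the database while recording the latency of each operation, which is where the throughput and latency curves in the following slides come from.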
Benchmarking Environment
YCSB
 Four-node cluster
– 1 master node: NameNode, JobTracker, HBase Master, ZooKeeper
– 3 slave nodes: DataNode, TaskTracker, RegionServer
– Physical master node
– 2 to 4 VMs per slave node; 12 VMs maximum
 YCSB datasets of two sizes: 10 and 30 million records
– dstat collects system resource metrics: CPU and memory usage, disk and network stats
YCSB Workloads

Workload                         Insert %   Read %   Update %   Scan %
Data Load                        100        –        –          –
Reads with heavy insert load     55         45       –          –
Short range scans (workload E)   5          –        –          95
Average Workloads Throughput
Random reads and scans are substantially faster with flash
[Chart: throughput (% of ops/sec, 0–100) for Data Load, Reads with Inserts, and Short range Scans, comparing disks and flash]
Short range Scans: Throughput
Adding one VM per node increases overall performance by 20% on average
[Chart: throughput (ops/sec, 0–3000) vs. concurrent threads (64, 256, 512, 1024) for physical nodes and for 6, 9, and 12 VMs]
Short range Scans: Latency
Latency grows linearly with the number of threads on physical nodes
[Chart: average latency (ms, 0–1200) vs. concurrent threads (64, 256, 512, 1024) for physical nodes and for 6, 9, and 12 VMs]
CPU Utilization Comparison
A virtualized cluster drastically increases CPU utilization
[Pie charts: physical nodes – 4% user, 3% system, 1% wait, 92% idle; virtualized cluster – 55% user, 23% system, 1% wait, 21% idle]
 The physical-node cluster generates a very light CPU load – 92% idle
 With VMs, CPU utilization can be driven close to 100% at peaks
Reads with Inserts
Latency of reads on a mixed workload: 45% reads and 55% inserts
[Chart: read latency (ms, 0–700) for datasets of 10 million and 30 million records, comparing disks and flash]
Conclusions
VMs let Hadoop exploit the random-read advantage of flash
 HDFS
– Sequential I/O is handled well by disk storage
– Flash substantially outperforms disks on workloads with random reads
 For HBase write-only workloads, flash provides only a marginal improvement
 Multiple VMs per node allow 100% peak utilization of hardware resources
– CPU utilization on physical-node clusters is a fraction of their capacity
 The combination of flash storage and virtualization yields high HBase performance for random reads and for reads mixed with writes
 Virtualization serves two main functions:
– Resource utilization: running more server processes per node
– Resource isolation: designating a percentage of resources to each server so the servers do not starve each other
Thank you
Konstantin V. Shvachko and Jagane Sundar
02.28.13 WANdisco ApacheCon 2013
