HBase Tuning
Performance and Correctness
Lars Hofhansl
Principal Architect, Salesforce (10 years!)
HBase, Phoenix Committer, PMC
Apache Incubator PMC
Apache Foundation Member
http://hadoop-hbase.blogspot.com/
Boring Topic
Experiment with Colorful Slides
Agenda
• HDFS
• HBase – Server
• HBase – Client
• Correctness
• Performance
HDFS
hdfs-site.xml
HDFS - Background
• Stores HBase WAL and HFiles
• No sync-to-disk by default
• Datanode writes tmp file, moves it into place
• Even data written a while ago can be lost on a power outage
HDFS Correctness Settings
• dfs.datanode.synconclose = true
(since Hadoop 1.1)
• mount ext4 with dirsync! Or use XFS
• You must do this!
HDFS Performance Settings
1. Sync behind writes
2. Stale Datanode Detection
3. Short Circuit Reads
4. Miscellaneous Settings
HDFS Sync Behind Writes
• Syncs partial blocks to disk – best effort
(OK, since blocks are immutable)
• Necessary with sync-on-close for performance
• Always enable this
• dfs.datanode.sync.behind.writes = true
(Since Hadoop 1.1)
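For reference, a minimal hdfs-site.xml sketch combining the correctness setting above with sync-behind-writes (values as recommended on these slides):

<configuration>
  <!-- sync file data to disk when a block is closed (correctness) -->
  <property>
    <name>dfs.datanode.synconclose</name>
    <value>true</value>
  </property>
  <!-- best-effort sync of partial blocks behind writes (performance) -->
  <property>
    <name>dfs.datanode.sync.behind.writes</name>
    <value>true</value>
  </property>
</configuration>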
Stale Datanodes - Background
• Datanodes (DNs) send heartbeats (and block reports) to the
Namenode (NN)
• Only after 10min(!) w/o a heartbeat is a DN declared dead
• Until then the NN will still direct reads and writes to that DN
• Bad for recovery: you are already down by 1 DN by definition
(and every 3rd read/write goes to the dead DN)
Stale Datanodes - Detection
Don’t use a DN for read or write when it looks like it is
stale (default off)
• dfs.namenode.avoid.read.stale.datanode = true
• dfs.namenode.avoid.write.stale.datanode = true
• dfs.namenode.stale.datanode.interval = 30000
(default)
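The same settings as a minimal hdfs-site.xml sketch (30000ms is the default interval):

<configuration>
  <property>
    <name>dfs.namenode.avoid.read.stale.datanode</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.namenode.avoid.write.stale.datanode</name>
    <value>true</value>
  </property>
  <!-- mark a DN stale after 30s without a heartbeat (default) -->
  <property>
    <name>dfs.namenode.stale.datanode.interval</name>
    <value>30000</value>
  </property>
</configuration>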
HDFS short circuit reads
Read local blocks directly without DN, when
RegionServers and DNs are co-located.
• dfs.client.read.shortcircuit = true
• dfs.client.read.shortcircuit.buffer.size = 131072
(important: avoids OOM on direct buffers; this is the default on 0.98+)
• hbase.regionserver.checksum.verify = true
(default on 0.98+)
• dfs.domain.socket.path
(local Unix domain socket, not group or world readable)
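A hedged hdfs-site.xml sketch of the HDFS side of short-circuit reads; the socket path below is only an example, use any path that is not group or world readable. hbase.regionserver.checksum.verify goes into hbase-site.xml.

<configuration>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <!-- 128k buffers; avoids OOM on direct buffers -->
  <property>
    <name>dfs.client.read.shortcircuit.buffer.size</name>
    <value>131072</value>
  </property>
  <!-- example path only; readable by the hdfs/hbase users, not group or world -->
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop-hdfs/dn_socket</value>
  </property>
</configuration>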
Misc HDFS tips
Keep DN running with some failed disks
• dfs.datanode.failed.volumes.tolerated = <N>
(tolerate losing this many disks)
Distribute data across disks at a DN
• dfs.datanode.fsdataset.volume.choosing.policy =
AvailableSpaceVolumeChoosingPolicy
(HDFS-1804: prefer drives with more free space for writes; applies when free space
differs by more than 10GB, by default)
Misc HDFS settings
(just trust me on these)
• dfs.block.size = 268435456
(note that WAL is rolled at 95% of this)
• ipc.server.tcpnodelay = true
• ipc.client.tcpnodelay = true
Misc HDFS settings
(just trust me on these, really)
• dfs.datanode.max.xcievers = 8192
• dfs.namenode.handler.count = 64
• dfs.datanode.handler.count = 8
(match number of spindles)
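The "trust me" settings from the last two slides, sketched in config form (the ipc.* properties normally live in core-site.xml; handler counts should match your hardware):

<configuration>
  <!-- 256mb blocks; the WAL is rolled at 95% of this -->
  <property>
    <name>dfs.block.size</name>
    <value>268435456</value>
  </property>
  <property>
    <name>ipc.server.tcpnodelay</name>
    <value>true</value>
  </property>
  <property>
    <name>ipc.client.tcpnodelay</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8192</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>64</value>
  </property>
  <!-- roughly match the number of spindles -->
  <property>
    <name>dfs.datanode.handler.count</name>
    <value>8</value>
  </property>
</configuration>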
HBase
RegionServer Settings
hbase-site.xml
Compactions
Compactions - Background
• Writes are buffered in the memstore
• Memstore contents flushed to disk as HFiles
• Need to limit # HFiles by rewriting small HFiles
into fewer larger ones
• Remove deleted and expired Cells
• Same data written multiple times => Write
Amplification!
Read vs. Write
• Read requires merging HFiles => fewer is
better
• Write throughput better with fewer
compactions => leads to more files
• Optimize for Read or Write, not both
Write Amplification
Vs.
Read Performance
Control the number of HFiles
• hbase.hstore.blockingStoreFiles = 10
(do not allow more flushes when there are more than <N> files)
small for read, large for write; when hit, it will stop flushes and writes
• hbase.hstore.compactionThreshold = 3
(number of files that starts a compaction)
small for read, large for write
• hbase.hregion.memstore.flush.size = 128 (mb)
(max memstore size before a flush, default is good)
larger means fewer compactions (watch RegionServer heap)
Time Based Compactions
• HBase does time based major compactions
• expensive, always at wrong time
• hbase.hregion.majorcompaction = 604800000
(week, default)
• hbase.hregion.majorcompaction.jitter = 0.5 (½
week, default)
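A hedged hbase-site.xml sketch of the compaction knobs from the last two slides (values are the defaults discussed; skew blockingStoreFiles and compactionThreshold toward your read or write profile):

<configuration>
  <!-- stop flushes (and then writes) above this many HFiles per store -->
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>10</value>
  </property>
  <!-- number of HFiles that starts a minor compaction -->
  <property>
    <name>hbase.hstore.compactionThreshold</name>
    <value>3</value>
  </property>
  <!-- 128mb memstore flush size (the value is in bytes) -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
  </property>
  <!-- weekly major compactions, +/- half a week of jitter -->
  <property>
    <name>hbase.hregion.majorcompaction</name>
    <value>604800000</value>
  </property>
  <property>
    <name>hbase.hregion.majorcompaction.jitter</name>
    <value>0.5</value>
  </property>
</configuration>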
Memstore/Cache Sizing
• hbase.hregion.memstore.flush.size = 128
• hbase.hregion.memstore.block.multiplier
(allows a single memstore to grow to this multiple of the flush size; good for heavy,
bursty writes)
• hbase.regionserver.global.memstore.upperLimit (0.98)
hbase.regionserver.global.memstore.size (1.0+)
(percent of heap, default 0.4, decrease for read heavy load)
• hfile.block.cache.size
(percent heap used for the block cache, default 0.4)
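And the global memstore/block-cache split in the same form (on 0.98 the memstore fraction is named hbase.regionserver.global.memstore.upperLimit; the multiplier value below is only an example, the slide does not prescribe one):

<configuration>
  <!-- example: let one memstore grow to 4x the flush size under bursty writes -->
  <property>
    <name>hbase.hregion.memstore.block.multiplier</name>
    <value>4</value>
  </property>
  <!-- fraction of heap for all memstores (decrease for read-heavy load) -->
  <property>
    <name>hbase.regionserver.global.memstore.size</name>
    <value>0.4</value>
  </property>
  <!-- fraction of heap for the block cache -->
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.4</value>
  </property>
</configuration>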
Autotune BlockCache vs. Memstores (1.0+)
HBASE-5349, not well tested, Must Experiment
• hbase.regionserver.global.memstore.size.{max|min}.range
• hfile.block.cache.size.{max|min}.range
• hbase.regionserver.heapmemory.tuner.class
• hbase.regionserver.heapmemory.tuner.period
Data Locality
• Essential for Short Circuit Reads
• hbase.hstore.min.locality.to.skip.major.compact
(major compact even when otherwise unnecessary, to restore locality once it drops below this threshold)
• hbase.master.wait.on.regionservers.timeout
(allow the master to wait a bit upon restart, so not all regions go to the first servers
that sign in; 30-90s is good, default is 4.5s)
• Don’t use the HDFS balancer!
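In config form (the threshold and timeout values are illustrative only; the slide names the properties and suggests 30-90s for the master wait):

<configuration>
  <!-- example: major compact regions whose locality falls below 70% -->
  <property>
    <name>hbase.hstore.min.locality.to.skip.major.compact</name>
    <value>0.7</value>
  </property>
  <!-- example: wait up to 60s on restart so regions spread across all servers -->
  <property>
    <name>hbase.master.wait.on.regionservers.timeout</name>
    <value>60000</value>
  </property>
</configuration>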
HBase
Column Family
Settings
Block Encoding
• NONE, FAST_DIFF, PREFIX, etc
• alter 'test', { NAME => 'cf',
DATA_BLOCK_ENCODING => 'FAST_DIFF' }
• Scan friendly, decodes as you scan
• Not so Get friendly (might need to decode many
previous Cells)
• Currently produces a lot of extra garbage
• Safe to enable, always
Compression
• NONE, GZIP, SNAPPY, etc
• create 'test', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
• Compresses entire blocks, not Scan or Get friendly
• Typically does not achieve much over block encoding
• Blocks cached decompressed, unless
hbase.block.data.cachecompressed = true
(more cache capacity, but every access needs decompression)
• Need to test with your data
HFile Block Size
• Don’t confuse with HDFS block size!
• create 'test', {NAME => 'cf', BLOCKSIZE => '4096'}
• Default 64k good compromise between Scans
and point Gets
• Increase for large Scans
• Decrease for many point gets
• Rarely want to change this, likely never > 1mb
RegionServer - Garbage Collection
(source: http://www.everystockphoto.com)
Weak Generational Hypothesis
Most Allocated Objects Die Young
Garbage Collection - Background
HotSpot manages four generations (CMS collector):
• Eden for all new objects
• Survivor I and II where surviving objects are promoted when
eden is collected
• Tenured space. Objects surviving a few rounds (16 by default)
of eden/survivor collection are promoted into the tenured
space
• Perm gen for classes, interned strings, and other more or less
permanent objects. (gone, finally, in JDK8)
Garbage Collection - HBase
• Garbage from operations is short-lived (single RPC)
• Memstore is relatively long-lived
(allocated in 2mb chunks)
• Blockcache is long-lived
(allocation in 64k blocks)
• Deal with the “operational” garbage efficiently
Garbage Collection (CMS)
-Xmn512m
very small eden space
-XX:+UseParNewGC
collect eden in parallel
-XX:+UseConcMarkSweepGC
use the non-moving CMS collector
-XX:CMSInitiatingOccupancyFraction=70
start collecting when 70% of tenured gen is full, avoid collection under pressure
-XX:+UseCMSInitiatingOccupancyOnly
do not let the JVM adjust the CMS trigger on its own
RegionServer Machine Sizing
RegionServer Machine Sizing
• How much RAM/Heap?
• How many disks?
• What size of disk?
• Network?
• Number of cores?
RegionServer Disk/Java Heap ratio
• Disk/Heap ratio:
RegionSize / MemstoreSize *
ReplicationFactor *
HeapFractionForMemstores * 2
(assuming memstores on average ½ filled)
• 10gb/128mb * 3 * 0.4 * 2 = 192, with default
settings
RegionServer Disk/Java Heap ratio
• Every 192 bytes on disk needs 1 byte of heap
• With 32gb of heap, can barely fill 6T
disk/machine
(32gb * 192 = 6tb)
192?!
W.T.F.
How about 1gb regions?
1gb/128mb * 3 * 0.4 * 2 = 19
(source: http://www.everystockphoto.com)
RegionServer sizing configs
• hbase.hregion.max.filesize (default 10g is good)
• hbase.hregion.memstore.flush.size (default 128mb)
(decrease for read heavy loads)
• hbase.regionserver.maxlogs
(HDFS blocksize * 0.95 * <this> should be larger than
0.4 * JavaHeap)
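Sketched as hbase-site.xml, with maxlogs chosen as an example so that blocksize * 0.95 * maxlogs exceeds 0.4 * heap (256mb blocks and a 32gb heap need maxlogs above roughly 53):

<configuration>
  <!-- 10gb max region size (default, in bytes) -->
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>10737418240</value>
  </property>
  <!-- example: 64 WALs of ~243mb each comfortably covers 0.4 * 32gb of heap -->
  <property>
    <name>hbase.regionserver.maxlogs</name>
    <value>64</value>
  </property>
</configuration>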
RegionServer Hardware
• <= 6T disk space per machine
• Enough heap (~diskspace/200)
• Many cores are good. HBase is CPU intensive.
• Match network and disk throughput
(1ge and 24 disks is not good: 125mb/s vs 2.4gb/s)
(10ge and 24 disks is OK, 1ge and 4 or 6 disks is OK)
• But… For reads with filters more disks are still better.
HBase Client Settings
Client/Server RPC chunk size
• No streaming RPC in HBase
• Can only asymptotically approach the
full network bandwidth
• Typical intra datacenter latency: 0.1ms-1ms
• Transmitting 2mb over 1ge: 150ms
• Transmitting 2mb over 10ge: 15ms
2mb chunks between Client and Server are good
But how should I do that?
Client Chunk Size Settings
Write:
• hbase.client.write.buffer = 2mb (default write buffer, good)
Read
• Scan.setCaching(<n>) (default 100 rows)
(but… how large are the rows? Must guess!)
• hbase.client.scanner.max.result.size = 2mb (default scan
buffer, 0.98.12+ only)
Client
Consider RPC size * hbase.regionserver.handler.count for
server GC
Need to be able to ride over splits and region moves:
hbase.client.pause = 100
hbase.client.retries.number = 35
hbase.ipc.client.tcpnodelay = true
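A client-side hbase-site.xml sketch combining the chunk-size and retry settings from the last two slides (scanner.max.result.size needs 0.98.12+; sizes are in bytes):

<configuration>
  <!-- ~2mb write buffer (the default) -->
  <property>
    <name>hbase.client.write.buffer</name>
    <value>2097152</value>
  </property>
  <!-- ~2mb per scan chunk, 0.98.12+ only -->
  <property>
    <name>hbase.client.scanner.max.result.size</name>
    <value>2097152</value>
  </property>
  <!-- ride over splits and region moves -->
  <property>
    <name>hbase.client.pause</name>
    <value>100</value>
  </property>
  <property>
    <name>hbase.client.retries.number</name>
    <value>35</value>
  </property>
  <property>
    <name>hbase.ipc.client.tcpnodelay</name>
    <value>true</value>
  </property>
</configuration>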
Replication (trust me)
• hbase.zookeeper.useMulti = true (needs ZK 3.4)
this one is important for correctness
Other defaults are good:
• replication.sleep.before.failover = 30000
• replication.source.maxretriesmultiplier = 300
• replication.source.ratio = 0.10
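The one replication setting worth writing down, in hbase-site.xml form (the remaining properties keep their defaults):

<configuration>
  <!-- atomic multi-updates in ZooKeeper; needs ZK 3.4+, important for correctness -->
  <property>
    <name>hbase.zookeeper.useMulti</name>
    <value>true</value>
  </property>
</configuration>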
Linux
• Turn THP (Transparent Huge Pages) OFF
• Set Swappiness to 0
• Set vm.min_free_kbytes to AT LEAST 1GB (8GB on
larger systems) so the kernel can serve allocations immediately
• Set zone_reclaim_mode to 0
(keep the page cache as one pool across NUMA nodes)
• dirsync mount option for EXT4, or use XFS
Not Covered
• Security/Kerberos
• HA NameNode/QJM
• ZK/Disk Layout
• Obscure Configs
• Offheap Caching, G1 GC
(source: http://www.morguefile.com)
TL;DR:
• Enable HDFS Sync on close, Sync behind writes
• Mount EXT4 with dirsync
• Enable Stale Datanode detection
• Tune HBase read vs. write load
• Set HFile block size for your load
• Get RPC Client/Server chunk size right
Thank You!
http://hadoop-hbase.blogspot.com/