Chris Lohfink
Cassandra Metrics
About me
• Software developer at DataStax
• OpsCenter, Metrics & Cassandra interactions
© DataStax, All Rights Reserved. 2
What this talk is
• What the things the metrics report actually mean (da dum tis)
• How metrics evolved in C*
© DataStax, All Rights Reserved. 3
Collecting
Not how, but what and why
Cassandra Metrics
• For the most part metrics do not break backwards compatibility
• Until they do (from deprecation or bugs)
• Deprecated metrics are hard to identify without looking at source
code, so their disappearance may have surprising impacts even if
deprecated for years.
• e.g. Cassandra 2.2’s removal of “Recent Latency” metrics
© DataStax, All Rights Reserved. 5
C* Metrics Pre-1.1
© DataStax, All Rights Reserved. 6
• Classes implemented MBeans and metrics were added in place
• ColumnFamilyStore -> ColumnFamilyStoreMBean
• Semi-adhoc, tightly coupled to code but had a “theme” or common
abstractions
Latency Tracker
• LatencyTracker stores:
• recent histogram
• total histogram
• number of ops
• total latency
• Uses latency/#ops since the last time it was called to compute the “recent” average
latency
• Every time it is queried, it resets the recent latency and histogram.
© DataStax, All Rights Reserved. 7
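A minimal sketch of the read-and-reset “recent average” idea (illustrative only, not Cassandra’s actual LatencyTracker; assumes latencies recorded in microseconds):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative read-and-reset "recent" average, in the spirit of the
// pre-1.1 LatencyTracker described above (not the real class).
class RecentAverage {
    private final AtomicLong totalLatencyMicros = new AtomicLong();
    private final AtomicLong opCount = new AtomicLong();

    void add(long latencyMicros) {
        totalLatencyMicros.addAndGet(latencyMicros);
        opCount.incrementAndGet();
    }

    // Average since the last call; reading resets the state, so two pollers
    // scraping the same MBean would each see only part of the data.
    double recentAverageMicros() {
        long ops = opCount.getAndSet(0);
        long total = totalLatencyMicros.getAndSet(0);
        return ops == 0 ? 0.0 : (double) total / ops;
    }
}
```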
Describing Latencies
© DataStax, All Rights Reserved. 8
[latencies plotted on a number line from 0 to 1000 ms]
• Listing the raw values:
13ms, 14ms, 2ms, 13ms, 90ms, 734ms, 8ms, 23ms, 30ms
• Doesn’t scale well
• Not easy to parse; with larger amounts it can be difficult to spot the high values
Describing Latencies
© DataStax, All Rights Reserved. 9
• Average:
• 103ms
Describing Latencies
© DataStax, All Rights Reserved. 10
• Average:
• 103ms
Describing Latencies
© DataStax, All Rights Reserved. 11
• Average:
• 103ms
• Missing outliers
Describing Latencies
© DataStax, All Rights Reserved. 12
• Average:
• 103ms
• Missing outliers
• Max: 734ms
• Min: 2ms
Describing Latencies
© DataStax, All Rights Reserved. 13
• Average:
• 103ms
• Missing outliers
• Max: 734ms
• Min: 2ms
Latency Tracker
• LatencyTracker stores:
• recent histogram
• total histogram
• number of ops
• total latency
• Uses latency/#ops since the last time it was called to compute the “recent”
average latency
• Every time it is queried, it resets the recent latency and histogram.
© DataStax, All Rights Reserved. 14
Recent Average Latencies
© DataStax, All Rights Reserved. 15
• Reported latency from
• Sum of latencies since last called
• Number of requests since last called
• Average:
• 103ms
• Outliers lost
Histograms
• Describes frequency of data
© DataStax, All Rights Reserved. 16
1, 2, 1, 1, 3, 4, 3, 1
Histograms
• Describes frequency of data
1, 2, 1, 1, 3, 4, 3, 1
[The deck builds the histogram one value at a time over several slides; the finished chart:]
Value: 1 2 3 4
Count: 4 1 2 1
© DataStax, All Rights Reserved. 25
Histograms
• "bin" the range of values
• divide the entire range of values into a series of intervals
• Count how many values fall into each interval
© DataStax, All Rights Reserved. 26
Histograms
• "bin" the range of values—that is, divide the entire range of values
into a series of intervals—and then count how many values fall into
each interval
© DataStax, All Rights Reserved. 27
13, 14, 2, 20, 13, 90, 734, 8, 53, 23, 30
Histograms
• "bin" the range of values—that is, divide the entire range of values
into a series of intervals—and then count how many values fall into
each interval
© DataStax, All Rights Reserved. 28
13, 14, 2, 20, 13, 90, 734, 8, 53, 23, 30
Histograms
• "bin" the range of values—that is, divide the entire range of values
into a series of intervals—and then count how many values fall into
each interval
© DataStax, All Rights Reserved. 29
2, 8, 13, 13, 14, 20, 23, 30, 53, 90, 734
Histograms
• "bin" the range of values—that is, divide the entire range of values
into a series of intervals—and then count how many values fall into
each interval
© DataStax, All Rights Reserved. 30
2, 8, 13, 13, 14, 20, 23, 30, 53, 90, 734
1-10 11-100 101-1000
2 8 1
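A quick sketch of the binning step using the toy buckets above (real Cassandra buckets follow the EstimatedHistogram series shown a few slides later):

```java
// Bin the example latencies into the 1-10, 11-100, 101-1000 buckets above.
long[] upperBounds = {10, 100, 1000};
long[] counts = new long[upperBounds.length];
long[] latencies = {2, 8, 13, 13, 14, 20, 23, 30, 53, 90, 734};
for (long v : latencies) {
    for (int i = 0; i < upperBounds.length; i++) {
        if (v <= upperBounds[i]) { counts[i]++; break; }
    }
}
// counts == {2, 8, 1}
```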
Histograms
Approximations
Max: 1000 (actual 734)
© DataStax, All Rights Reserved. 31
1-10 11-100 101-1000
2 8 1
Histograms
Approximations
Max: 1000 (actual 734)
Min: 10 (actual 2)
© DataStax, All Rights Reserved. 32
1-10 11-100 101-1000
2 8 1
Histograms
Approximations
Max: 1000 (actual 734)
Min: 10 (actual 2)
Average: sum / count, (10*2 + 100*8 + 1000*1) / (2+8+1) = 165 (actual 103)
© DataStax, All Rights Reserved. 33
1-10 11-100 101-1000
2 8 1
Histograms
Approximations
Max: 1000 (actual 734)
Min: 10 (actual 2)
Average: sum / count, (10*2 + 100*8 + 1000*1) / (2+8+1) = 165 (actual 103)
Percentiles: 11 requests, so we know 90 percent of the latencies occurred in the 11-100 bucket or
lower.
90th Percentile: 100
© DataStax, All Rights Reserved. 34
1-10 11-100 101-1000
2 8 1
Histograms
Approximations
Max: 1000 (actual 734)
Min: 10 (actual 2)
Average: sum / count, (10*2 + 100*8 + 1000*1) / (2+8+1) = 165 (actual 103)
Percentiles: 11 requests, so we know 90 percent of the latencies occurred in the 11-100 bucket or
lower.
90th Percentile: 100
© DataStax, All Rights Reserved. 35
1-10 11-100 101-1000
2 8 1
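A sketch of how those approximations fall out of the bucket counts (same toy buckets; only the bucket upper bounds and counts are available):

```java
long[] upperBounds = {10, 100, 1000};
long[] counts = {2, 8, 1};

// Max/average estimates use each bucket's upper bound.
long total = 0, weightedSum = 0, max = 0;
for (int i = 0; i < counts.length; i++) {
    total += counts[i];
    weightedSum += counts[i] * upperBounds[i];
    if (counts[i] > 0) max = upperBounds[i];            // 1000 (actual 734)
}
double average = (double) weightedSum / total;          // ~165 (actual ~103)

// 90th percentile: walk buckets until 90% of the count is covered.
long target = (long) Math.ceil(0.90 * total);           // 10 of 11
long seen = 0, p90 = 0;
for (int i = 0; i < counts.length; i++) {
    seen += counts[i];
    if (seen >= target) { p90 = upperBounds[i]; break; } // 100
}
```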
EstimatedHistogram
The series starts at 1 and grows by roughly a factor of 1.2 each step
1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 17, 20, 24, 29,
…
12108970, 14530764, 17436917, 20924300, 25109160
© DataStax, All Rights Reserved. 36
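The offsets can be generated with a rule along these lines (a sketch of the growth rule rather than Cassandra’s exact EstimatedHistogram code):

```java
// Bucket offsets: start at 1 and grow by ~1.2x per step, always advancing
// by at least 1 so the small buckets stay distinct.
static long[] bucketOffsets(int size) {
    long[] offsets = new long[size];
    long last = 1;
    offsets[0] = last;
    for (int i = 1; i < size; i++) {
        long next = Math.round(last * 1.2);
        if (next == last) next++;        // force progress for small values
        offsets[i] = next;
        last = next;
    }
    return offsets;                      // 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, ...
}
```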
LatencyTracker
Has two histograms
• Recent
• Count of times a latency occurred since last time read for each bin
• Total
• Count of times a latency occurred since Cassandra started for each bin
© DataStax, All Rights Reserved. 37
Total Histogram Deltas
If you keep track of the histogram from the last time you read it, you can take the delta
to determine how many occurrences fell in that interval
Last
Now
© DataStax, All Rights Reserved. 38
1-10 11-100 101-1000
2 8 1
1-10 11-100 101-1000
4 8 2
Total Histogram Deltas
If you keep track of the histogram from the last time you read it, you can take the delta
to determine how many occurrences fell in that interval
Last
Now
Delta
© DataStax, All Rights Reserved.
1-10 11-100 101-1000
2 8 1
1-10 11-100 101-1000
4 8 2
1-10 11-100 101-1000
2 0 1
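The delta itself is just bin-wise subtraction of the two cumulative snapshots, assuming the bins line up (a sketch):

```java
// How many latencies landed in each bin between two reads of the
// cumulative ("total") histogram.
static long[] delta(long[] previous, long[] current) {
    long[] d = new long[current.length];
    for (int i = 0; i < current.length; i++)
        d[i] = current[i] - previous[i];
    return d;
}
// previous = {2, 8, 1}, current = {4, 8, 2}  ->  delta = {2, 0, 1}
```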
Cassandra 1.1
• Yammer/Codahale/Dropwizard Metrics introduced
• Awesome!
• Not so awesome…
© DataStax, All Rights Reserved. 40
Reservoirs
• Maintain a sample of the data that is representative of the entire set.
• Can perform operations on the limited, fixed memory set as if on entire dataset
• Vitter’s Algorithm R
• Offers a 99.9% confidence level & 5% margin of error
• Simple
• Randomly include values in the reservoir, with lower and lower
probability as more values are seen
© DataStax, All Rights Reserved. 41
Reservoirs
• Maintain a sample of the data that is representative of the entire set.
• Can perform operations on the limited, fixed memory set as if on entire dataset
• Vitter’s Algorithm R
• Offers a 99.9% confidence level & 5% margin of error
* When the stream has a normal distribution
© DataStax, All Rights Reserved. 42
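Vitter’s Algorithm R in miniature (a single-threaded sketch; the actual Metrics reservoir is a concurrent variant):

```java
import java.util.Random;

// Keep a fixed-size, uniformly random sample of an unbounded stream.
class AlgorithmR {
    private final long[] reservoir;
    private final Random random = new Random();
    private long seen = 0;

    AlgorithmR(int size) { reservoir = new long[size]; }

    void update(long value) {
        seen++;
        if (seen <= reservoir.length) {
            reservoir[(int) (seen - 1)] = value;              // fill phase
        } else {
            long slot = (long) (random.nextDouble() * seen);  // 0 .. seen-1
            if (slot < reservoir.length)                      // keep with probability k/seen
                reservoir[(int) slot] = value;
        }
    }
}
```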
Metrics Reservoirs
• Random sampling, what can it miss?
– Min
– Max
– Everything in 99th percentile?
– The more rare, the less likely to be included
43
Metrics Reservoirs
• “Good enough” for basic ad hoc viewing but too non-deterministic for many
• Commonly resolved using replacement reservoirs (e.g. HdrHistogram)
44
Metrics Reservoirs
• “Good enough” for basic ad hoc viewing but too non-deterministic for many
• Commonly resolved using replacement reservoirs (e.g. HdrHistogram)
– org.apache.cassandra.metrics.EstimatedHistogramReservoir
45
Cassandra 2.2
• CASSANDRA-5657 – upgrade metrics library (and extend it)
– Replaced reservoir with EH
• Also exposed raw bin counts in values operation
– Deleted deprecated metrics
• Non EH latencies from LatencyTracker
46
Cassandra 2.2
• No recency in histograms
• Currently requires delta’ing the total bin counts, which is beyond
some simple tooling
• CASSANDRA-11752 (fixed 2.2.8, 3.0.9, 3.8)
47
Storage
Storing the data
• We have data, now to store it. Approaches tend to follow:
– Store all data points
• Provide aggregations either pre-computed on ingest, via MapReduce, or at query time
– Round Robin Database
• Only store pre-computed aggregations
• Choice depends heavily on requirements
49
Round Robin Database
• Store state required to generate the aggregations, and only store the
aggregations
– Sum & Count for Average
– Current min, max
– “One pass” or “online” algorithms
• Constant footprint
50
Round Robin Database
• Store state required to generate the aggregations, and only store the aggregations
– Sum & Count for Average
– Current min, max
– “One pass” or “online” algorithms
• Constant footprint
51
60 300 3600
Sum 0 0 0
Count 0 0 0
Min 0 0 0
Max 0 0 0
Round Robin Database
> 10ms @ 00:00
52
60 300 3600
Sum 10 10 10
Count 1 1 1
Min 10 10 10
Max 10 10 10
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
53
60 300 3600
Sum 22 22 22
Count 2 2 2
Min 10 10 10
Max 12 12 12
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
54
60 300 3600
Sum 36 36 36
Count 3 3 3
Min 10 10 10
Max 14 14 14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10
55
60 300 3600
Sum 36 36 36
Count 3 3 3
Min 10 10 10
Max 14 14 14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10
56
60 300 3600
Sum 36 36 36
Count 3 3 3
Min 10 10 10
Max 14 14 14
Average 12
Min 10
Max 14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10
57
60 300 3600
Sum 0 36 36
Count 0 3 3
Min 0 10 10
Max 0 14 14
Round Robin Database
> 10ms @ 00:00
> 12ms @ 00:30
> 14ms @ 00:59
> 13ms @ 01:10
58
60 300 3600
Sum 13 49 49
Count 1 4 4
Min 13 10 10
Max 13 14 14
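A sketch of one rollup bucket from the walkthrough above: constant footprint, updated online, flushed and reset when its interval (60 seconds here) rolls over:

```java
// One rollup interval: online sum/count/min/max, reset on rollover.
class RollupBucket {
    long sum = 0, count = 0;
    long min = Long.MAX_VALUE, max = Long.MIN_VALUE;

    void record(long valueMs) {
        sum += valueMs;
        count++;
        min = Math.min(min, valueMs);
        max = Math.max(max, valueMs);
    }

    // 10, 12, 14 ms  ->  average 12, min 10, max 14
    double average() { return count == 0 ? 0.0 : (double) sum / count; }

    void reset() { sum = 0; count = 0; min = Long.MAX_VALUE; max = Long.MIN_VALUE; }
}
```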
Max is a lie
• The issue with the deprecated LatencyTracker metrics is that the 1 minute interval
does not have a min/max, so we cannot compute a true min/max;
the rollup’s min/max will be the minimum and maximum of the averages
59
Histograms to the rescue (again)
• The histograms of the data do not have this issue, but storage is
more complex. Some options include:
– Store each bin of the histogram as a metric
– Store the percentiles/min/max each as own metric
– Store raw long[90] (possibly compressed)
60
Histogram Storage Size
• Some things to note:
– “Normal” clusters have over 100 tables.
– Each table has at least two histograms we want to record
• Read latency
• Write latency
• Tombstones scanned
• Cells scanned
• Partition cell size
• Partition cell count
61
Histogram Storage
Because we store the extra histograms we have 600 histograms per minute (minimum),
with upper bounds seen at over 24,000 per minute.
• Storing 1 per bin means [54000] metrics (expensive to store, expensive to
read)
• Storing raw histograms is [600] metrics
• Storing min, max, 50th, 90th, 99th is [3000] metrics
– Additional problems with this
• Can’t compute 10th, 95th, 99.99th, etc.
• Aggregations
62
Aggregating Histograms
Averaging the percentiles
[ INSERT DISAPPOINTED GIL TENE PHOTO ]
© DataStax, All Rights Reserved. 63
Aggregating Histograms
• Consider averaging the maximum
If one node has a 10 second GC, but the maximum latency on your other 9 nodes
is 60ms, reporting a “Max 1 second” latency would be misleading.
• Poor at representing hotspots’ effects on your application
One node in a 10 node Raspberry Pi cluster gets 1000 write reqs/sec while the others get 10
reqs/sec. The node under heavy stress has a 90th percentile of 10 seconds; the other
nodes are basically sub-ms, with writes taking 1ms at the 90th percentile. Averaging would
report a 1 second 90th percentile, even though ~10% of our application’s writes are taking
>10 seconds
© DataStax, All Rights Reserved. 64
Aggregating Histograms
Merging histograms from different nodes more accurately can be straightforward:
Node1
Node2
Cluster
© DataStax, All Rights Reserved. 65
1-10 11-100 101-1000
2 8 1
1-10 11-100 101-1000
2 1 5
1-10 11-100 101-1000
4 9 6
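Assuming every node uses the same bucket boundaries, the merge is just bin-wise addition (sketch):

```java
// Merge per-node histograms into a cluster-wide histogram by adding bins.
static long[] merge(long[] node1, long[] node2) {
    long[] cluster = new long[node1.length];
    for (int i = 0; i < node1.length; i++)
        cluster[i] = node1[i] + node2[i];
    return cluster;
}
// {2, 8, 1} + {2, 1, 5}  ->  {4, 9, 6}, as in the table above
```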
Histogram Storage
Because we store the extra histograms we have 600 histograms per minute (minimum),
with upper bounds seen at over 24,000 per minute.
• Storing 1 per bin means [54000] metrics (expensive to store, expensive to
read)
• Storing raw histograms is [600] metrics
• Storing min, max, 50th, 90th, 99th is [3000] metrics
– Additional problems with this
• Can’t compute 10th, 95th, 99.99th, etc.
• Aggregations
66
Raw Histogram storage
• Storing raw histograms (160 longs by default) is a minimum of 1.2KB per rollup,
and a hard sell
– 760KB per minute (600 histograms)
– 7.7GB for the 7 day TTL we want to keep our 1 min rollups at
– ~77GB with 10 nodes
– ~2.3TB on 10 node clusters with 3k tables
– Expired data isn’t immediately purged so disk space can be much worse
67
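A back-of-the-envelope check of those numbers (assuming 160 bins of 8 bytes each and 600 histograms per minute):

```java
long bytesPerHistogram = 160 * 8;                        // 1,280 B ≈ 1.2 KB
long bytesPerMinute    = bytesPerHistogram * 600;        // ≈ 768 KB/min
long bytesPerWeek      = bytesPerMinute * 60 * 24 * 7;   // ≈ 7.7 GB at a 7 day TTL
long tenNodeCluster    = bytesPerWeek * 10;              // ≈ 77 GB
```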
Raw Histogram storage
• Goal: We wanted this to be comparable to other min/max/avg metric
storage (12 bytes each)
– 700MB on the expected 10 node cluster
– 2GB on the extreme 10 node cluster
• Enter compression
68
Compressing Histograms
• Overhead of typical compression makes it a non-starter.
– headers (e.g. 10 bytes for gzip) alone nearly exceed the space used by
existing rollup storage (~12 bytes per metric)
• Instead we opt to leverage known context to reduce the size of the
data, along with some universal encoding.
69
Compressing Histograms
• Instead of storing every bin, only store bins with a value > 0, since most
bins will have no data (e.g. it is very unlikely for a read latency to fall
between 1 and 10 microseconds, the first 10 bins)
• Write the number of offset/count pairs
• Use varints for the bin counts
– To keep the varints as small as possible we sort the offset/count
pairs by count and represent the counts as a delta sequence
70
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
71
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
72
7
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 8:100, 11:9999999, 14:1, 15:127, 16:128 17:129}
73
7
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}
74
7
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}
75
7 4 1
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}
76
7 4 1 14 0
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}
77
7 4 1 14 0 8 99
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}
78
7 4 1 14 0 8 99 15
27
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}
79
7 4 1 14 0 8 99 15
27 16 1 17 1
Compressing Histograms
0 0 0 0 1 0 0 0 100 0 0 9999999 0 0 1 127 128 129 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
{4:1, 14:1, 8:100, 15:127, 16:128, 17:129, 11:9999999}
80
7 4 1 14 0 8 99 15
27 16 1 17 1 11
9999870
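Putting the steps together, a sketch of the encoder (hypothetical code, not OpsCenter’s actual implementation; varints written as protobuf-style 7-bit groups):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Keep only non-empty bins, sort (offset, count) pairs by count, then write
// varint(pairCount) followed by varint(offset), varint(count - previousCount)
// for each pair, so the count varints stay small.
class HistogramCodec {

    static byte[] encode(long[] bins) {
        List<long[]> pairs = new ArrayList<>();             // {offset, count}
        for (int i = 0; i < bins.length; i++)
            if (bins[i] > 0) pairs.add(new long[] {i, bins[i]});
        pairs.sort(Comparator.comparingLong(p -> p[1]));    // ascending by count

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, pairs.size());                     // e.g. 7
        long previousCount = 0;
        for (long[] pair : pairs) {
            writeVarint(out, pair[0]);                      // bin offset
            writeVarint(out, pair[1] - previousCount);      // delta of sorted counts
            previousCount = pair[1];
        }
        return out.toByteArray();
    }

    // Unsigned LEB128-style varint: 7 bits per byte, high bit means "more".
    static void writeVarint(ByteArrayOutputStream out, long value) {
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
    }
}
```

For the example bins above this produces the sequence 7, then 4 1, 14 0, 8 99, 15 27, 16 1, 17 1, 11 9999870.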
Compressing Histograms
Real Life** results of compression:
81
Size in bytes
Median 1
75th 3
95th 15
99th 45
Max** 124
Note on HdrHistogram
• Comes up every couple months
• Very awesome histogram, popular replacement for Metrics reservoir.
– More powerful and general purpose than EH
– Only slightly slower for all it offers
An issue comes up a bit with storage:
• Logged HdrHistograms are ~31KB each (30,000x more than our average use)
• Compressed version: ~1KB each
• Perfect for many people when tracking one or two metrics; gets painful when
tracking hundreds or thousands
82
Questions?
