Let's Aggregate
XXXXTREME JOE BEDWELL
EDITION
@BillSlacum
Accumulo Meetup, Sep 23, 2014
Have you heard of...
… TSAR, Summingbird? (Twitter)
… Mesa? (Google)
… Commutative Replicated Data Types?
Each of these describes a system that
pre-computes aggregations over large datasets
using associative and/or commutative functions.
What's it all about, Alfie?
Imagine we have a dataset that describes
flights between two cities. We'd at some
point run a SQL query similar to SELECT
DISTINCT destination FROM flights WHERE
origin='BWI' to see where everyone is
fleeing from Baltimore.
We can pre-compute this answer, in parallel,
for systems that have too much data to
compute it every time a user queries the DB.
What do we need to pull this off?
We need data structures that can be
combined. Numbers are a trivial example,
as we can combine two numbers using a
function (such as plus or multiply). There are
more advanced data structures such as
matrices, HyperLogLogPlus, StreamSummary
(used for top-k) and Bloom filters that also
have this property!
val partial: T = op(a, b)
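As a minimal sketch (the Semigroup trait and both instances are illustrative, not from the talk), a combinable structure is just a type plus a binary op:

trait Semigroup[T] {
  def op(a: T, b: T): T
}

// Illustrative instances: addition for longs, union for sets
val longSum = new Semigroup[Long] {
  def op(a: Long, b: Long): Long = a + b
}
val setUnion = new Semigroup[Set[String]] {
  def op(a: Set[String], b: Set[String]): Set[String] = a union b
}

val partialSum: Long = longSum.op(40L, 2L) // 42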
What do we need to pull this off?
We need operations that can be performed in
parallel. Associative operations are what
Twitter espouses, but for our case, operations
that are both associative and commutative
have the stronger property that we get correct
results no matter what order we receive the
data in. Common associative operations
(summation, set building) are also
commutative.
op(op(a, b), c) == op(a, op(b, c))
op(a, b) == op(b, a)
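As a quick sanity check (plain Scala, reusing the illustrative longSum from earlier), both laws hold for summation:

val (a, b, c) = (1L, 2L, 3L)
assert(longSum.op(longSum.op(a, b), c) == longSum.op(a, longSum.op(b, c))) // associative
assert(longSum.op(a, b) == longSum.op(b, a)) // commutative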
Wait a minute isn't that...
You caught me! It's a commutative monoid!
From Wolfram:
Monoid: A monoid is a set S that is closed
under an associative binary operation and
has an identity element I in S such that for all
a in S, Ia = aI = a.
Commutative Monoid: A monoid that is
commutative, i.e., a monoid M such that for
every two elements a and b in M, ab = ba.
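In code (again an illustrative sketch, not the talk's API), a commutative monoid just adds an identity element to the Semigroup above; (Long, +, 0) is the canonical example:

trait Monoid[T] extends Semigroup[T] {
  def identity: T // op(identity, a) == op(a, identity) == a
}

val longSumMonoid = new Monoid[Long] {
  def op(a: Long, b: Long): Long = a + b
  def identity: Long = 0L
}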
Put it to work
The example we're about to see uses
MapReduce and Accumulo. The same can be
accomplished using any processing
framework that supports map and reduce
operations, such as Spark or Storm's Trident
interface.
We need two functions...
Map
– Takes an input datum and turns it into some combinable structure
– Like parsing strings to numbers, or creating single-element sets for combining
Reduce
– Combines the merge-able data structures using our associative and commutative function
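A minimal sketch of the pair, using single-element sets and set union (the function names and types are mine, not a framework API):

// Map: turn an input datum into a combinable structure
def map(datum: String): Set[String] = Set(datum)

// Reduce: merge structures with our associative, commutative function
def reduce(values: Iterator[Set[String]]): Set[String] =
  values.foldLeft(Set.empty[String])(_ union _)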
Yup, that's all!
● Map will be called on the input data once in a Mapper instance.
● Reduce will be called in a Combiner, Reducer and an Accumulo Iterator!
● The Accumulo Iterator is configured to run on major compactions, minor compactions, and scans
● That's five places the same piece of code gets run. Talk about modularity!
What does our Accumulo Iterator look like?
● We can re-use Accumulo's Combiner type here:
override def reduce(key: Key, values: Iterator[Value]): Value = {
  // deserialize and combine all intermediate
  // values. This logic should be identical to
  // what is in the mr.Combiner and Reducer
}
● Our function has to be commutative because major compactions will often pick smaller files to combine, which means we only see discrete subsets of the data in an iterator invocation
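For concreteness, here's a hedged sketch of a summing version (the decimal-string long encoding is my assumption, and Accumulo actually ships its own SummingCombiner in iterators.user; this just shows the shape):

import java.util.{Iterator => JIterator}
import org.apache.accumulo.core.data.{Key, Value}
import org.apache.accumulo.core.iterators.Combiner

class SummingIter extends Combiner {
  override def reduce(key: Key, values: JIterator[Value]): Value = {
    var sum = 0L
    while (values.hasNext) {
      // assumes each Value holds a long serialized as a decimal string
      sum += new String(values.next().get()).toLong
    }
    new Value(sum.toString.getBytes)
  }
}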
Counting in practice (pt 1)
We've seen how to aggregate values together. What's
the best way to structure our data and query it?
Twitter's TSAR is a good starting point. It allows users to
declare what they want to aggregate:
Aggregate(
  onKeys(("origin", "destination"))
  producing(Count))
This describes generating an edge between two cities
and calculating a weight for it.
Counting in practice (pt 2)
With that declaration, we can infer that the user wants their
operation to be summing over each instance of a given
pairing, so we can say the base value is 1 (sounds a bit like
word count, huh?). We need a key under which each base
value and partial computation can be reduced. For this simple
pairing we can have a schema like:
<field_1>\0<value_1>\0...<field_n>\0<value_n> count: "" [] <serialized long>
I recently traveled from Baltimore to Denver. Here's what that
trip would look like:
origin\0bwi\0destination\0dia count: "" [] \x01
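A small sketch of producing that key and base value from one flight record (the helper name and the rendering of the \0 separator are mine):

val Sep = "\u0000" // the \0 field separator from the schema above

def countEntry(origin: String, dest: String): (String, Long) = {
  val row = Seq("origin", origin, "destination", dest).mkString(Sep)
  (row, 1L) // written under column family "count"; the value is a serialized long
}

// countEntry("bwi", "dia") yields row origin\0bwi\0destination\0dia with base value 1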
Counting in practice (pt 3)
● Iterator combines all values that are mapped to the same key
● We encoded the aggregation function into the column family of the key
– We can arbitrarily add new aggregate functions by updating a mapping of column family to function and then updating the iterator deployment
Something more than counting
● Everybody counts, but what about something like top-k?
● The key schema isn't flexible enough to show a relationship between two fields
● We want to know the top-k relationship between origin and destination cities
● That column qualifier was looking awfully blank. It'd be a shame if someone were to put data in it...
How you like me now?
● Aggregate(
    onKeys(("origin"))
    producing(TopK("destination")))
● <field1>\0<value1>\0...<fieldN>\0<valueN> <op>: <relation> [] <serialized data structure>
● Let's use my Baltimore->Denver trip as an example:
origin\0BWI topk: destination [] {"DIA": 1}
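The serialized structure could be stream-lib's StreamSummary; as a hedged stand-in, here's a plain destination-to-count map whose merge is associative and commutative, with the top k extracted only at query time:

// Merge two destination -> count maps; merge order never matters
def mergeCounts(a: Map[String, Long], b: Map[String, Long]): Map[String, Long] =
  b.foldLeft(a) { case (acc, (dest, n)) =>
    acc.updated(dest, acc.getOrElse(dest, 0L) + n)
  }

// Take the k heaviest entries when answering a query
def topK(m: Map[String, Long], k: Int): Seq[(String, Long)] =
  m.toSeq.sortBy(-_._2).take(k)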
But how do I query it?
● This schema is really geared towards point queries
● Users would know exactly which dimensions they were querying across to get an answer
– BUENO: “What are the top-k destinations for Bill when he leaves BWI?”
– NO BUENO: “What are all the dimensions and aggregations I have for Bill?”
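A hedged sketch of the BUENO point query (the table name "aggregates" and an already-built Connector are my assumptions; the Scanner calls are standard Accumulo client API):

import scala.collection.JavaConverters._
import org.apache.accumulo.core.client.Connector
import org.apache.accumulo.core.data.Range
import org.apache.accumulo.core.security.Authorizations
import org.apache.hadoop.io.Text

def topKDestinationsFromBWI(connector: Connector): Unit = {
  val scanner = connector.createScanner("aggregates", new Authorizations())
  scanner.setRange(Range.exact("origin\u0000BWI")) // point query on one row
  scanner.fetchColumn(new Text("topk"), new Text("destination"))
  for (entry <- scanner.asScala)
    println(s"${entry.getKey} -> ${entry.getValue}") // deserialize the top-k structure here
  scanner.close()
}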
Some thoughts to think about
● Prepare functions
– Preparing the input to do things like time bucketing and normalization (Jared Winick's Trendulo); a sketch follows this list
● Age off
– Combining down to a single value means that value represents all historical data. Maybe we don't care about that and would like to age off data after a day/week/month/year. Mesa's batch IDs could be of use here.
● Security labels
– Notice how I deftly avoided this topic. We should be able to bucket aggregations based on visibility, but we need a way to express the best way to handle this. Maybe just preserve the input data's security labeling and attach it to the output of our map function?
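A minimal sketch of a prepare function for time bucketing (the hourly granularity and row layout are illustrative assumptions):

// Truncate an epoch-millis timestamp to the start of its hour
def hourBucket(epochMillis: Long): Long =
  epochMillis - (epochMillis % (60L * 60L * 1000L))

// Prefix the row with the bucket so aggregates accrue per hour,
// which also gives age-off a natural unit to drop
def bucketedRow(epochMillis: Long, row: String): String =
  s"${hourBucket(epochMillis)}\u0000$row"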
FIN
(hope this wasn't too hard to read)
Comments, suggestions or inflammatory messages should be sent to @BillSlacum or wslacum@gmail.com