MongoDB, Hadoop & humongous data
Talking about
What is Humongous Data
Humongous Data & You
MongoDB & Data processing
Future of Humongous Data
@spf13

                  AKA
Steve Francia
15+ years building
the internet

  Father, husband,
  skateboarder


Chief Solutions Architect @
responsible for drivers,
integrations, web & docs
What is humongous data?
2000
Google Inc. today announced it has released the largest search engine on the Internet.

Google’s new index, comprising more than 1 billion URLs
2008
Our indexing system for processing
links indicates that
we now count 1 trillion unique URLs

(and the number of individual web
pages out there is growing by
several billion pages per day).
An unprecedented
amount of data is
being created and is
accessible
[Chart: Data Growth — millions of URLs indexed per year, 2000–2008: 1, 4, 10, 24, 55, 120, 250, 500, 1,000]
Truly Exponential
        Growth
Is hard for people to grasp


A BBC reporter recently: "Your current PC
is more powerful than the computer they
had on board the first flight to the moon".
Moore’s Law
Applies to more than just CPUs

Boiled down, it means things double at regular intervals.

It’s exponential growth... and it applies to big data.
How BIG is it?

[Diagram: 2008's data volume shown next to each year from 2001 through 2007]
Why all this
talk about BIG
  Data now?
In the past few
years open source
software emerged
enabling ‘us’ to
handle BIG Data
The Big Data
   Story
Is actually
two stories
Doers & Tellers talking about
      different things
http://www.slideshare.net/siliconangle/trendconnect-big-data-report-september
Tellers
Doers
Doers talk a lot more about
     actual solutions
They know it’s a two-sided story

            Storage




           Processing
Takeaways
MongoDB and Hadoop
MongoDB for storage &
operations
Hadoop for processing &
analytics
MongoDB
& Data Processing
Applications have
    complex needs
MongoDB ideal operational
database
MongoDB ideal for BIG data
Not a data processing engine, but
provides processing functionality
Many options for Processing Data

• Process in MongoDB using Map Reduce

• Process in MongoDB using the Aggregation Framework

• Process outside MongoDB (using Hadoop)
MongoDB Map Reduce

[Diagram: MongoDB data → Map() → emit(k,v) → Group(k) → Sort(k) → Reduce(k, values) → (k,v) → Finalize(k,v) → (k,v).
Map iterates on documents, one at a time per shard; the current document is $this.
Reduce's input matches its output, so it can run multiple times per key.]
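To make the flow above concrete, here is a minimal sketch (not from the deck) of counting hashtags with MongoDB map reduce from Python. It assumes the test.live tweet collection imported in the demo later in this talk and pymongo 2.x, whose Collection.map_reduce() wraps the mapReduce command; the map and reduce bodies are still JavaScript, which is exactly the limitation the next slide calls out.

#!/usr/bin/env python
# A minimal sketch, assuming the test.live tweet collection from the demo
# below and pymongo 2.x (Collection.map_reduce wraps the mapReduce command).
from pymongo import MongoClient
from bson.code import Code

db = MongoClient()['test']

map_js = Code("""
function () {
    // 'this' is the current document, one at a time per shard
    if (!this.entities) return;            // skip non-tweet records
    this.entities.hashtags.forEach(function (tag) {
        emit(tag.text, 1);                 // emit(k, v)
    });
}""")

reduce_js = Code("""
function (key, values) {
    // input matches output, so reduce may run multiple times per key
    return Array.sum(values);
}""")

out = db.live.map_reduce(map_js, reduce_js, out="hashtag_counts")
for doc in out.find().sort("value", -1).limit(10):
    print doc                              # Python 2, like the deck's examples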
MongoDB Map Reduce
MongoDB map reduce is quite capable... but with limits:
- JavaScript is not the best language for processing map reduce
- JavaScript has limited external data processing libraries
- It adds load to the data store
MongoDB Aggregation
Most uses of MongoDB Map Reduce were for aggregation.

The Aggregation Framework is optimized for aggregate queries.

Realtime aggregation, similar to SQL GROUP BY.
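For the SQL-minded, a rough sketch of the GROUP BY parallel from Python (hedged: field names such as user.screen_name are illustrative, and the full hashtag pipeline appears in the demo below):

# -- SQL:  SELECT screen_name, COUNT(*) FROM tweets
# --       GROUP BY screen_name ORDER BY COUNT(*) DESC LIMIT 10
# A minimal pymongo sketch of the same aggregation; assumes the test.live
# tweet collection from the demo below, where each tweet has user.screen_name.
from pymongo import MongoClient

db = MongoClient()['test']
top_tweeters = db.live.aggregate([
    {'$group': {'_id': '$user.screen_name', 'count': {'$sum': 1}}},
    {'$sort':  {'count': -1}},
    {'$limit': 10},
])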
MongoDB & Hadoop

[Diagram: MongoDB (single server or sharded cluster) → InputFormat creates a list of input splits, the same as Mongo's shard chunks (64 MB) → a RecordReader reads each split → Map(k1, v1, ctx), many map operations, one at a time per input split → ctx.write(k2, v2) → Combiner(k2, values2), run on the same thread as the map → (k2, v3) → Partitioner(k2) → Sort(k2) → Reduce(k2, values3) on reducer threads, run once per key → (kf, vf) → OutputFormat → MongoDB]
DEMO
TIME
DEMO
Install Hadoop MongoDB Plugin
Import tweets from twitter
Write mapper in Python using Hadoop
streaming
Write reducer in Python using Hadoop
streaming
Call myself a data scientist
Installing Mongo-hadoop
https://gist.github.com/1887726

hadoop_version='0.23'
hadoop_path="/usr/local/Cellar/hadoop/$hadoop_version.0/libexec/lib"

git clone git://github.com/mongodb/mongo-hadoop.git
cd mongo-hadoop
sed -i '' "s/default/$hadoop_version/g" build.sbt
cd streaming
./build.sh
Grokking Twitter

curl https://stream.twitter.com/1/statuses/sample.json \
  -u<login>:<password> \
  | mongoimport -d test -c live


... let it run for about 2 hours
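A quick sanity check that documents are landing (a hedged sketch; assumes mongod on localhost and the test database used by mongoimport above):

# Assumes mongod on localhost and the test.live collection from mongoimport.
from pymongo import MongoClient

db = MongoClient()['test']
print db.live.count()       # should keep climbing while the stream runs
print db.live.find_one()    # peek at one imported document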
DEMO 1
Map Hashtags in Python
#!/usr/bin/env python

import sys
sys.path.append(".")

from pymongo_hadoop import BSONMapper

def mapper(documents):
    # emit one {_id: hashtag text, count: 1} document per hashtag occurrence
    for doc in documents:
        for hashtag in doc['entities']['hashtags']:
            yield {'_id': hashtag['text'], 'count': 1}

BSONMapper(mapper)
print >> sys.stderr, "Done Mapping."
Reduce hashtags in Python
#!/usr/bin/env python

import sys
sys.path.append(".")

from pymongo_hadoop import BSONReducer

def reducer(key, values):
    print >> sys.stderr, "Hashtag %s" % key.encode('utf8')
    # sum the per-hashtag counts emitted by the mapper
    _count = 0
    for v in values:
        _count += v['count']
    return {'_id': key.encode('utf8'), 'count': _count}

BSONReducer(reducer)
All together

hadoop jar target/mongo-hadoop-streaming-assembly-1.0.0-rc0.jar \
  -mapper examples/twitter/twit_hashtag_map.py \
  -reducer examples/twitter/twit_hashtag_reduce.py \
  -inputURI mongodb://127.0.0.1/test.live \
  -outputURI mongodb://127.0.0.1/test.twit_reduction \
  -file examples/twitter/twit_hashtag_map.py \
  -file examples/twitter/twit_hashtag_reduce.py
Popular Hash Tags
db.twit_hashtags.find().sort({'count' : -1})

{ "_id" : "YouKnowYoureInLoveIf", "count" : 287 }
{ "_id" : "teamfollowback", "count" : 200 }
{ "_id" : "RT", "count" : 150 }
{ "_id" : "Arsenal", "count" : 148 }
{ "_id" : "milars", "count" : 145 }
{ "_id" : "sanremo", "count" : 145 }
{ "_id" : "LoseMyNumberIf", "count" : 139 }
{ "_id" : "RelationshipsShould", "count" : 137 }
{ "_id" : "Bahrain", "count" : 129 }
{ "_id" : "bahrain", "count" : 125 }
{ "_id" : "oomf", "count" : 117 }
{ "_id" : "BabyKillerOcalan", "count" : 106 }
{ "_id" : "TeamFollowBack", "count" : 105 }
{ "_id" : "WhyDoPeopleThink", "count" : 102 }
{ "_id" : "np", "count" : 100 }
DEMO 2
Aggregation in Mongo 2.1
    db.live.aggregate(
    { $unwind : "$entities.hashtags" } ,
    { $match :
       { "entities.hashtags.text" :
           { $exists : true } } } ,
    { $group :
       { _id : "$entities.hashtags.text",
       count : { $sum : 1 } } } ,
    { $sort : { count : -1 } },
    { $limit : 10 }
)
Popular Hash Tags
{
     "result" : [
        { "_id" : "YouKnowYoureInLoveIf", "count" : 287 },
        { "_id" : "teamfollowback", "count" : 200 },
        { "_id" : "RT", "count" : 150 },
        { "_id" : "Arsenal", "count" : 148 },
        { "_id" : "milars", "count" : 145 },
        { "_id" : "sanremo","count" : 145 },
        { "_id" : "LoseMyNumberIf", "count" : 139 },
        { "_id" : "RelationshipsShould", "count" : 137 },
        { "_id" : "Bahrain", "count" : 129 },
        { "_id" : "bahrain", "count" : 125 }
      ],"ok" : 1
}
The Future of humongous data
What is BIG?
  BIG today is
normal tomorrow
[Chart: Data Growth — millions of URLs indexed per year, 2000–2011: 1, 4, 10, 24, 55, 120, 250, 500, 1,000, 2,150, 4,400, 9,000]
2012
Generating over 250 million tweets per day
MongoDB enables us to scale
with the redefinition of BIG.

New processing tools like
Hadoop & Storm are enabling
us to process the new BIG.
Hadoop is our
  first step
MongoDB is
committed to working
 with best data tools
      including
 Hadoop, Storm,
Disco, Spark & more
http://spf13.com
http://github.com/spf13
                  @spf13



Questions?
 download at
 github.com/mongodb/mongo-hadoop