SIKS Big Data Course
Part Two
Prof.dr.ir. Arjen P. de Vries
arjen@acm.org
Enschede, December 7, 2016
Bigdata processing with Spark - part II
Recap Spark
 Data Sharing Crucial for:
- Interactive Analysis
- Iterative machine learning algorithms
 Spark RDDs
- Distributed collections, cached in memory across cluster nodes
 Keep track of Lineage
- To ensure fault-tolerance
- To optimize processing based on knowledge of the data partitioning
RDDs in More Detail
RDDs additionally provide:
- Control over partitioning, which can be used to optimize data placement across queries
- usually more efficient than the sort-based approach of MapReduce
- Control over persistence (e.g. store on disk vs in RAM)
- Fine-grained reads (treat RDD as a big table)
Slide by Matei Zaharia, creator of Spark, http://spark-project.org
Scheduling Process
rdd1.join(rdd2)
 .groupBy(…)
 .filter(…)
- RDD Objects: build the operator DAG
- DAGScheduler: splits the graph into stages of tasks and submits each stage as it becomes ready (agnostic to the operators themselves)
- TaskScheduler: launches tasks via the cluster manager and retries failed or straggling tasks (doesn’t know about stages)
- Worker: threads execute tasks; the Block manager stores and serves blocks
RDD API Example
// Read input file
val input = sc.textFile("input.txt")
val tokenized = input
  .map(line => line.split(" "))
  .filter(words => words.size > 0) // remove empty lines
val counts = tokenized // frequency of log levels
  .map(words => (words(0), 1))
  .reduceByKey((a, b) => a + b, 2)
RDD API Example
// Read input file
val input = sc.textFile("input.txt")
val tokenized = input
  .map(line => line.split(" "))
  .filter(words => words.size > 0) // remove empty lines
val counts = tokenized // frequency of log levels
  .map(words => (words(0), 1))
  .reduceByKey((a, b) => a + b)
Transformations
sc.textFile().map().filter().map().reduceByKey()
DAG View of RDD’s
textFile() map() filter() map() reduceByKey()
[Diagram: the lineage runs from a Hadoop RDD through Mapped and Filtered RDDs (input, tokenized; 3 partitions each) to a Shuffle RDD (counts, 2 partitions).]
Transformations build up a DAG, but don’t “do anything”
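The laziness of transformations can be mimicked in plain Python with generators: nothing runs until an action pulls results through the chain. A sketch, not Spark code (the sample lines and the non-empty check are illustrative):

```python
# Build a "lineage" of lazy transformations; nothing executes until
# the final loop (the "action") consumes the chain.
lines = ["ERROR disk full", "WARN low memory", "", "ERROR timeout"]

tokenized = (line.split(" ") for line in lines)       # map
non_empty = (w for w in tokenized if w[0] != "")      # filter (drop empty lines)
pairs = ((w[0], 1) for w in non_empty)                # map

counts = {}
for key, one in pairs:                                # the "action"
    counts[key] = counts.get(key, 0) + one

print(counts)  # {'ERROR': 2, 'WARN': 1}
```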
How runJob Works
runJob needs to compute an RDD’s parents, its parents’ parents, and so on, all the way back to an RDD with no dependencies (e.g. HadoopRDD).
[Diagram: the lineage Hadoop RDD → Mapped RDD (input) → Filtered RDD → Mapped RDD (tokenized), each with 3 partitions; runJob(counts) triggers the computation.]
Physical Optimizations
1. Certain types of transformations can be pipelined.
2. If dependent RDDs have already been cached (or persisted in a shuffle), the graph can be truncated.
Pipelining and truncation produce a set of stages, where each stage is composed of tasks.
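Pipelining can be sketched in plain Python: instead of materializing one intermediate collection per operator, the per-record functions of narrow transformations are composed and applied in a single pass over each partition. A minimal illustration (not Spark internals):

```python
# Fuse narrow transformations into one pass over a partition by
# composing their per-record functions.
def pipeline(*fns):
    def run(record):
        for fn in fns:
            record = fn(record)
        return record
    return run

partition = [1, 2, 3, 4]
task = pipeline(lambda x: x * 10,   # map
                lambda x: x + 1)    # map
result = [task(r) for r in partition]  # single pass, no intermediates
print(result)  # [11, 21, 31, 41]
```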
Scheduler Optimizations
- Pipelines narrow ops. within a stage
- Picks join algorithms based on partitioning (minimize shuffles)
- Reuses previously cached data
[Diagram: a DAG of RDDs A–G connected by map, union, groupBy, and join, cut into Stages 1–3; previously computed partitions are marked and not recomputed.]
Task Details
Stage boundaries are only at input RDDs or “shuffle” operations
So, each task looks like this:
[Diagram: a task reads its input (from external storage, or by fetching map outputs), applies the pipelined functions f1, f2, …, and writes to a map output file or back to the master.]
How runJob Works
runJob needs to compute an RDD’s parents, its parents’ parents, and so on, all the way back to an RDD with no dependencies (e.g. HadoopRDD).
[Diagram: runJob(counts) walks the full lineage back from the Shuffle RDD (counts, 2 partitions) through the Mapped and Filtered RDDs (tokenized, input; 3 partitions each) to the Hadoop RDD.]
Stage Graph
[Diagram: Stage 1 (Tasks 1–3) feeds Stage 2 (Tasks 1–2): Stage 1 performs the input read and shuffle write, Stage 2 the shuffle read.]
Each Stage 1 task will:
1. Read Hadoop input
2. Perform maps and filters
3. Write partial sums
Each Stage 2 task will:
1. Read partial sums
2. Invoke the user function passed to runJob
Physical Execution Model
 Distinguish between:
- Jobs: the complete work to be done
- Stages: bundles of work that can execute together
- Tasks: units of work; each corresponds to one RDD partition
 Defining stages and tasks should not require deep knowledge of what these actually do
- A goal of Spark is to be extensible, letting users define new RDD operators
RDD Interface
Set of partitions (“splits”)
List of dependencies on parent RDDs
Function to compute a partition given parents
Optional preferred locations
Optional partitioning info (Partitioner)
Captures all current Spark operations!
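The five-method interface above can be sketched as a toy class hierarchy. This is plain Python, not Spark’s actual classes; the names are illustrative:

```python
# A toy version of the RDD contract: partitions, dependencies,
# compute, preferred locations, and an optional partitioner.
class ToyRDD:
    partitioner = None
    def partitions(self):                     raise NotImplementedError
    def dependencies(self):                   return []
    def compute(self, partition):             raise NotImplementedError
    def preferred_locations(self, partition): return []

class ParallelToyRDD(ToyRDD):
    """Leaf RDD over in-memory chunks (like a HadoopRDD over blocks)."""
    def __init__(self, chunks): self.chunks = chunks
    def partitions(self): return range(len(self.chunks))
    def compute(self, p):  return self.chunks[p]

class FilteredToyRDD(ToyRDD):
    """One-to-one dependency on the parent; filters each partition."""
    def __init__(self, parent, pred): self.parent, self.pred = parent, pred
    def partitions(self):   return self.parent.partitions()
    def dependencies(self): return [self.parent]
    def compute(self, p):   return [x for x in self.parent.compute(p) if self.pred(x)]

base = ParallelToyRDD([[1, 2], [3, 4]])
evens = FilteredToyRDD(base, lambda x: x % 2 == 0)
print([evens.compute(p) for p in evens.partitions()])  # [[2], [4]]
```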
Example: HadoopRDD
partitions = one per HDFS block
dependencies = none
compute(partition) = read corresponding block
preferredLocations(part) = HDFS block location
partitioner = none
Example: FilteredRDD
partitions = same as parent RDD
dependencies = “one-to-one” on parent
compute(partition) = compute parent and filter it
preferredLocations(part) = none (ask parent)
partitioner = none
Example: JoinedRDD
partitions = one per reduce task
dependencies = “shuffle” on each parent
compute(partition) = read and join shuffled data
preferredLocations(part) = none
partitioner = HashPartitioner(numTasks)
Spark will now know this data is hashed!
Dependency Types
“Narrow” deps: map, filter, union, join with inputs co-partitioned
“Wide” (shuffle) deps: groupByKey, join with inputs not co-partitioned
Improving Efficiency
 Basic Principle: Avoid Shuffling!
Filter Input Early
Avoid groupByKey on Pair RDDs
 All key-value pairs will be shuffled across the network, to a reducer where the values are collected together
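The difference can be emulated in plain Python: a groupByKey-style shuffle ships every (key, value) pair across the network before aggregating, while a reduceByKey-style combiner pre-aggregates within each partition and ships at most one pair per key per partition. A sketch under those assumptions (data is illustrative):

```python
# Compare shuffle volume: groupByKey ships every pair; reduceByKey
# combines map-side first, shipping one pair per key per partition.
partitions = [[("a", 1), ("a", 1), ("b", 1)],
              [("a", 1), ("b", 1), ("b", 1)]]

# groupByKey-style: every pair crosses the "network"
shuffled_group = [kv for part in partitions for kv in part]

# reduceByKey-style: map-side combine first
def combine(part):
    acc = {}
    for k, v in part:
        acc[k] = acc.get(k, 0) + v
    return list(acc.items())

shuffled_reduce = [kv for part in partitions for kv in combine(part)]
print(len(shuffled_group), len(shuffled_reduce))  # 6 4
```

Both strategies yield the same final counts; only the amount of shuffled data differs.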
aggregateByKey
 Three inputs
- Zero element
- Merging function within a partition
- Merging function across partitions
val initialCount = 0
val addToCounts = (n: Int, v: String) => n + 1
val sumPartitionCounts = (p1: Int, p2: Int) => p1 + p2
val countByKey =
  kv.aggregateByKey(initialCount)(addToCounts, sumPartitionCounts)
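The semantics can be emulated in plain Python to see the three inputs at work (illustrative only, not Spark code):

```python
# Emulate aggregateByKey: fold each partition with the zero element
# and the within-partition function, then merge per-partition results
# with the across-partition function.
def aggregate_by_key(partitions, zero, seq_op, comb_op):
    merged = {}
    for part in partitions:
        local = {}
        for k, v in part:
            local[k] = seq_op(local.get(k, zero), v)   # within partition
        for k, acc in local.items():                   # across partitions
            merged[k] = comb_op(merged[k], acc) if k in merged else acc
    return merged

parts = [[("a", "x"), ("a", "y")], [("a", "z"), ("b", "x")]]
counts = aggregate_by_key(parts, 0,
                          lambda n, v: n + 1,      # addToCounts
                          lambda p1, p2: p1 + p2)  # sumPartitionCounts
print(counts)  # {'a': 3, 'b': 1}
```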
Combiners!
combineByKey
val result = input.combineByKey(
  (v) => (v, 1),
  (acc: (Int, Int), v) => (acc._1 + v, acc._2 + 1),
  (acc1: (Int, Int), acc2: (Int, Int)) =>
    (acc1._1 + acc2._1, acc1._2 + acc2._2) )
  .map{ case (key, value) =>
    (key, value._1 / value._2.toFloat) }
result.collectAsMap().map(println(_))
Control the Degree of Parallelism
 Repartition
- Increase the number of partitions to make better use of the nodes
 Coalesce
- Reduce the number of tasks
Broadcast Values
 In case of a join with a small RHS or LHS, broadcast the
small set to every node in the cluster
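The idea can be sketched in plain Python: instead of shuffling both sides, ship the small side to every task and join map-side. The tables and names here are illustrative:

```python
# Map-side ("broadcast") join: the small table is copied to every
# task, so the large side is never shuffled.
small = {1: "NL", 2: "DE"}             # the broadcast side
large_partition = [(1, "tweet a"), (2, "tweet b"), (1, "tweet c")]

joined = [(uid, text, small[uid])      # inner join against the dict
          for uid, text in large_partition
          if uid in small]
print(joined)  # [(1, 'tweet a', 'NL'), (2, 'tweet b', 'DE'), (1, 'tweet c', 'NL')]
```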
Broadcast Variables
 Create with SparkContext.broadcast(initVal)
 Access with .value inside tasks
 Immutable!
- If you modify the broadcast value after creation, that change is
local to the node
Maintaining Partitioning
 mapValues instead of map
 flatMapValues instead of flatMap
- Good for tokenization!
The best trick of all, however…
Use Higher-Level APIs!
DataFrame APIs for core processing
Works across Scala, Java, Python and R
Spark ML for machine learning
Spark SQL for structured query processing
Higher-Level Libraries
[Diagram: on top of the Spark core sit Spark Streaming (real-time), Spark SQL (structured data), MLlib (machine learning), and GraphX (graph processing).]
Combining Processing Types
// Load data using SQL
points = ctx.sql("select latitude, longitude from tweets")
// Train a machine learning model
model = KMeans.train(points, 10)
// Apply it to a stream
sc.twitterStream(...)
  .map(t => (model.predict(t.location), 1))
  .reduceByWindow("5s", (a, b) => a + b)
Performance of Composition
[Diagram: with separate computing frameworks, each step writes its result to HDFS and the next framework reads it back (repeated HDFS read/write cycles); with Spark, one framework keeps intermediate data in memory, avoiding the repeated HDFS reads and writes.]
Encode Domain Knowledge
 In essence, nothing more than libraries with pre-cooked
code – that still operates over the abstraction of RDDs
 Focus on optimizations that require domain knowledge
Spark MLLib
Data Sets
Challenge: Data Representation
Java objects are often many times larger than the data
class User(name: String, friends: Array[Int])
User("Bobby", Array(1, 2))
[Diagram: the User object holds pointers to a String (which in turn points to a char[] of length 5, "Bobby") and to an int[] of length 2 — several object headers and pointers per logical record.]
DataFrames / Spark SQL
Efficient library for working with structured
data
» Two interfaces: SQL for data analysts and
external apps, DataFrames for complex
programs
» Optimized computation and storage underneath
Spark SQL added in 2014, DataFrames in
2015
Spark SQL Architecture
[Diagram: SQL queries and DataFrame programs are parsed into a Logical Plan; the Optimizer, consulting the Catalog, turns it into a Physical Plan; the Code Generator emits code that runs over RDDs, reading input via the Data Source API.]
DataFrame API
DataFrames hold rows with a known schema and offer relational operations through a DSL
c = HiveContext()
users = c.sql("select * from users")
ma_users = users[users.state == "MA"]
ma_users.count()
ma_users.groupBy("name").avg("age")
ma_users.map(lambda row: row.user.toUpper())
[Diagram label: users.state == "MA" is captured as an expression AST, not evaluated eagerly.]
What DataFrames Enable
1. Compact binary representation
• Columnar, compressed cache; rows for processing
2. Optimization across operators (join reordering, predicate pushdown, etc.)
3. Runtime code generation
Performance
Data Sources
Uniform way to access structured data
» Apps can migrate across Hive, Cassandra, JSON, …
» Rich semantics allows query pushdown into data sources
[Diagram: both users[users.age > 20] (DataFrames) and select * from users (SQL) go through Spark SQL to the underlying data source.]
Examples
JSON (tweets.json):
{
  "text": "hi",
  "user": {
    "name": "bob",
    "id": 15 }
}
select user.id, text from tweets
JDBC:
select age from users where lang = "en"
Together:
select t.text, u.age
from tweets t, users u
where t.user.id = u.id
and u.lang = "en"
[Diagram: Spark SQL pushes select id, age from users where lang = "en" down into the JDBC source and joins the result with the JSON tweets.]
Thanks
 Matei Zaharia, MIT (https://cs.stanford.edu/~matei/)
 Patrick Wendell, Databricks
 http://spark-project.org
Ad

Recommended

Bigdata processing with Spark
Bigdata processing with Spark
Arjen de Vries
 
Hadoop scalability
Hadoop scalability
WANdisco Plc
 
Distributed Computing with Apache Hadoop: Technology Overview
Distributed Computing with Apache Hadoop: Technology Overview
Konstantin V. Shvachko
 
Hadoop, MapReduce and R = RHadoop
Hadoop, MapReduce and R = RHadoop
Victoria López
 
Hadoop
Hadoop
Himanshu Soni
 
Hadoop Overview kdd2011
Hadoop Overview kdd2011
Milind Bhandarkar
 
Hadoop MapReduce Framework
Hadoop MapReduce Framework
Edureka!
 
Hadoop ppt2
Hadoop ppt2
Ankit Gupta
 
02.28.13 WANdisco ApacheCon 2013
02.28.13 WANdisco ApacheCon 2013
WANdisco Plc
 
Seminar_Report_hadoop
Seminar_Report_hadoop
Varun Narang
 
Hadoop
Hadoop
Scott Leberknight
 
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
Cognizant
 
Hadoop Interview Questions and Answers by rohit kapa
Hadoop Interview Questions and Answers by rohit kapa
kapa rohit
 
Overview of Hadoop and HDFS
Overview of Hadoop and HDFS
Brendan Tierney
 
Hadoop architecture meetup
Hadoop architecture meetup
vmoorthy
 
Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)
Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)
Hari Shankar Sreekumar
 
Introduction to Hadoop and MapReduce
Introduction to Hadoop and MapReduce
eakasit_dpu
 
002 Introduction to hadoop v3
002 Introduction to hadoop v3
Dendej Sawarnkatat
 
Big data overview of apache hadoop
Big data overview of apache hadoop
veeracynixit
 
Hadoop training in hyderabad-kellytechnologies
Hadoop training in hyderabad-kellytechnologies
Kelly Technologies
 
Hadoop Seminar Report
Hadoop Seminar Report
Atul Kushwaha
 
Hadoop interview quations1
Hadoop interview quations1
Vemula Ravi
 
Introduction to Hadoop and Hadoop component
Introduction to Hadoop and Hadoop component
rebeccatho
 
Hadoop technology doc
Hadoop technology doc
tipanagiriharika
 
Hadoop ecosystem
Hadoop ecosystem
Mohamed Ali Mahmoud khouder
 
Big Data and Hadoop - An Introduction
Big Data and Hadoop - An Introduction
Nagarjuna Kanamarlapudi
 
Introduction to Big Data & Hadoop
Introduction to Big Data & Hadoop
Edureka!
 
Seminar Presentation Hadoop
Seminar Presentation Hadoop
Varun Narang
 
Secrets of Spark's success - Deenar Toraskar, Think Reactive
Secrets of Spark's success - Deenar Toraskar, Think Reactive
huguk
 
Scala and spark
Scala and spark
Fabio Fumarola
 

More Related Content

What's hot (20)

02.28.13 WANdisco ApacheCon 2013
02.28.13 WANdisco ApacheCon 2013
WANdisco Plc
 
Seminar_Report_hadoop
Seminar_Report_hadoop
Varun Narang
 
Hadoop
Hadoop
Scott Leberknight
 
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
Cognizant
 
Hadoop Interview Questions and Answers by rohit kapa
Hadoop Interview Questions and Answers by rohit kapa
kapa rohit
 
Overview of Hadoop and HDFS
Overview of Hadoop and HDFS
Brendan Tierney
 
Hadoop architecture meetup
Hadoop architecture meetup
vmoorthy
 
Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)
Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)
Hari Shankar Sreekumar
 
Introduction to Hadoop and MapReduce
Introduction to Hadoop and MapReduce
eakasit_dpu
 
002 Introduction to hadoop v3
002 Introduction to hadoop v3
Dendej Sawarnkatat
 
Big data overview of apache hadoop
Big data overview of apache hadoop
veeracynixit
 
Hadoop training in hyderabad-kellytechnologies
Hadoop training in hyderabad-kellytechnologies
Kelly Technologies
 
Hadoop Seminar Report
Hadoop Seminar Report
Atul Kushwaha
 
Hadoop interview quations1
Hadoop interview quations1
Vemula Ravi
 
Introduction to Hadoop and Hadoop component
Introduction to Hadoop and Hadoop component
rebeccatho
 
Hadoop technology doc
Hadoop technology doc
tipanagiriharika
 
Hadoop ecosystem
Hadoop ecosystem
Mohamed Ali Mahmoud khouder
 
Big Data and Hadoop - An Introduction
Big Data and Hadoop - An Introduction
Nagarjuna Kanamarlapudi
 
Introduction to Big Data & Hadoop
Introduction to Big Data & Hadoop
Edureka!
 
Seminar Presentation Hadoop
Seminar Presentation Hadoop
Varun Narang
 
02.28.13 WANdisco ApacheCon 2013
02.28.13 WANdisco ApacheCon 2013
WANdisco Plc
 
Seminar_Report_hadoop
Seminar_Report_hadoop
Varun Narang
 
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
Cognizant
 
Hadoop Interview Questions and Answers by rohit kapa
Hadoop Interview Questions and Answers by rohit kapa
kapa rohit
 
Overview of Hadoop and HDFS
Overview of Hadoop and HDFS
Brendan Tierney
 
Hadoop architecture meetup
Hadoop architecture meetup
vmoorthy
 
Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)
Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)
Hari Shankar Sreekumar
 
Introduction to Hadoop and MapReduce
Introduction to Hadoop and MapReduce
eakasit_dpu
 
Big data overview of apache hadoop
Big data overview of apache hadoop
veeracynixit
 
Hadoop training in hyderabad-kellytechnologies
Hadoop training in hyderabad-kellytechnologies
Kelly Technologies
 
Hadoop Seminar Report
Hadoop Seminar Report
Atul Kushwaha
 
Hadoop interview quations1
Hadoop interview quations1
Vemula Ravi
 
Introduction to Hadoop and Hadoop component
Introduction to Hadoop and Hadoop component
rebeccatho
 
Introduction to Big Data & Hadoop
Introduction to Big Data & Hadoop
Edureka!
 
Seminar Presentation Hadoop
Seminar Presentation Hadoop
Varun Narang
 

Similar to Bigdata processing with Spark - part II (20)

Secrets of Spark's success - Deenar Toraskar, Think Reactive
Secrets of Spark's success - Deenar Toraskar, Think Reactive
huguk
 
Scala and spark
Scala and spark
Fabio Fumarola
 
Dive into spark2
Dive into spark2
Gal Marder
 
Ten tools for ten big data areas 03_Apache Spark
Ten tools for ten big data areas 03_Apache Spark
Will Du
 
Introduction to Apache Spark
Introduction to Apache Spark
Anastasios Skarlatidis
 
Apache Spark: What? Why? When?
Apache Spark: What? Why? When?
Massimo Schenone
 
TriHUG talk on Spark and Shark
TriHUG talk on Spark and Shark
trihug
 
Tuning and Debugging in Apache Spark
Tuning and Debugging in Apache Spark
Databricks
 
Apache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Apache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Sachin Aggarwal
 
Tuning and Debugging in Apache Spark
Tuning and Debugging in Apache Spark
Patrick Wendell
 
Spark after Dark by Chris Fregly of Databricks
Spark after Dark by Chris Fregly of Databricks
Data Con LA
 
Spark After Dark - LA Apache Spark Users Group - Feb 2015
Spark After Dark - LA Apache Spark Users Group - Feb 2015
Chris Fregly
 
Apache Spark™ is a multi-language engine for executing data-S5.ppt
Apache Spark™ is a multi-language engine for executing data-S5.ppt
bhargavi804095
 
Spark Summit East 2015 Advanced Devops Student Slides
Spark Summit East 2015 Advanced Devops Student Slides
Databricks
 
Stefano Baghino - From Big Data to Fast Data: Apache Spark
Stefano Baghino - From Big Data to Fast Data: Apache Spark
Codemotion
 
Spark real world use cases and optimizations
Spark real world use cases and optimizations
Gal Marder
 
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
IT Event
 
Apache Spark
Apache Spark
SugumarSarDurai
 
Simplifying Big Data Analytics with Apache Spark
Simplifying Big Data Analytics with Apache Spark
Databricks
 
Spark Deep Dive
Spark Deep Dive
Corey Nolet
 
Secrets of Spark's success - Deenar Toraskar, Think Reactive
Secrets of Spark's success - Deenar Toraskar, Think Reactive
huguk
 
Dive into spark2
Dive into spark2
Gal Marder
 
Ten tools for ten big data areas 03_Apache Spark
Ten tools for ten big data areas 03_Apache Spark
Will Du
 
Apache Spark: What? Why? When?
Apache Spark: What? Why? When?
Massimo Schenone
 
TriHUG talk on Spark and Shark
TriHUG talk on Spark and Shark
trihug
 
Tuning and Debugging in Apache Spark
Tuning and Debugging in Apache Spark
Databricks
 
Apache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Apache Spark Introduction and Resilient Distributed Dataset basics and deep dive
Sachin Aggarwal
 
Tuning and Debugging in Apache Spark
Tuning and Debugging in Apache Spark
Patrick Wendell
 
Spark after Dark by Chris Fregly of Databricks
Spark after Dark by Chris Fregly of Databricks
Data Con LA
 
Spark After Dark - LA Apache Spark Users Group - Feb 2015
Spark After Dark - LA Apache Spark Users Group - Feb 2015
Chris Fregly
 
Apache Spark™ is a multi-language engine for executing data-S5.ppt
Apache Spark™ is a multi-language engine for executing data-S5.ppt
bhargavi804095
 
Spark Summit East 2015 Advanced Devops Student Slides
Spark Summit East 2015 Advanced Devops Student Slides
Databricks
 
Stefano Baghino - From Big Data to Fast Data: Apache Spark
Stefano Baghino - From Big Data to Fast Data: Apache Spark
Codemotion
 
Spark real world use cases and optimizations
Spark real world use cases and optimizations
Gal Marder
 
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
Volodymyr Lyubinets "Introduction to big data processing with Apache Spark"
IT Event
 
Simplifying Big Data Analytics with Apache Spark
Simplifying Big Data Analytics with Apache Spark
Databricks
 
Ad

More from Arjen de Vries (20)

Doing a PhD @ DOSSIER
Doing a PhD @ DOSSIER
Arjen de Vries
 
Masterclass Big Data (leerlingen)
Masterclass Big Data (leerlingen)
Arjen de Vries
 
Beverwedstrijd Big Data (klas 3/4/5/6)
Beverwedstrijd Big Data (klas 3/4/5/6)
Arjen de Vries
 
Beverwedstrijd Big Data (groep 5/6 en klas 1/2)
Beverwedstrijd Big Data (groep 5/6 en klas 1/2)
Arjen de Vries
 
Web Archives and the dream of the Personal Search Engine
Web Archives and the dream of the Personal Search Engine
Arjen de Vries
 
Information Retrieval and Social Media
Information Retrieval and Social Media
Arjen de Vries
 
Information Retrieval intro TMM
Information Retrieval intro TMM
Arjen de Vries
 
ACM SIGIR 2017 - Opening - PC Chairs
ACM SIGIR 2017 - Opening - PC Chairs
Arjen de Vries
 
Data Science Master Specialisation
Data Science Master Specialisation
Arjen de Vries
 
PUC Masterclass Big Data
PUC Masterclass Big Data
Arjen de Vries
 
TREC 2016: Looking Forward Panel
TREC 2016: Looking Forward Panel
Arjen de Vries
 
The personal search engine
The personal search engine
Arjen de Vries
 
Models for Information Retrieval and Recommendation
Models for Information Retrieval and Recommendation
Arjen de Vries
 
Better Contextual Suggestions by Applying Domain Knowledge
Better Contextual Suggestions by Applying Domain Knowledge
Arjen de Vries
 
Similarity & Recommendation - CWI Scientific Meeting - Sep 27th, 2013
Similarity & Recommendation - CWI Scientific Meeting - Sep 27th, 2013
Arjen de Vries
 
ESSIR 2013 - IR and Social Media
ESSIR 2013 - IR and Social Media
Arjen de Vries
 
Looking beyond plain text for document representation in the enterprise
Looking beyond plain text for document representation in the enterprise
Arjen de Vries
 
Recommendation and Information Retrieval: Two Sides of the Same Coin?
Recommendation and Information Retrieval: Two Sides of the Same Coin?
Arjen de Vries
 
Searching Political Data by Strategy
Searching Political Data by Strategy
Arjen de Vries
 
How to Search Annotated Text by Strategy?
How to Search Annotated Text by Strategy?
Arjen de Vries
 
Masterclass Big Data (leerlingen)
Masterclass Big Data (leerlingen)
Arjen de Vries
 
Beverwedstrijd Big Data (klas 3/4/5/6)
Beverwedstrijd Big Data (klas 3/4/5/6)
Arjen de Vries
 
Beverwedstrijd Big Data (groep 5/6 en klas 1/2)
Beverwedstrijd Big Data (groep 5/6 en klas 1/2)
Arjen de Vries
 
Web Archives and the dream of the Personal Search Engine
Web Archives and the dream of the Personal Search Engine
Arjen de Vries
 
Information Retrieval and Social Media
Information Retrieval and Social Media
Arjen de Vries
 
Information Retrieval intro TMM
Information Retrieval intro TMM
Arjen de Vries
 
ACM SIGIR 2017 - Opening - PC Chairs
ACM SIGIR 2017 - Opening - PC Chairs
Arjen de Vries
 
Data Science Master Specialisation
Data Science Master Specialisation
Arjen de Vries
 
PUC Masterclass Big Data
PUC Masterclass Big Data
Arjen de Vries
 
TREC 2016: Looking Forward Panel
TREC 2016: Looking Forward Panel
Arjen de Vries
 
The personal search engine
The personal search engine
Arjen de Vries
 
Models for Information Retrieval and Recommendation
Models for Information Retrieval and Recommendation
Arjen de Vries
 
Better Contextual Suggestions by Applying Domain Knowledge
Better Contextual Suggestions by Applying Domain Knowledge
Arjen de Vries
 
Similarity & Recommendation - CWI Scientific Meeting - Sep 27th, 2013
Similarity & Recommendation - CWI Scientific Meeting - Sep 27th, 2013
Arjen de Vries
 
ESSIR 2013 - IR and Social Media
ESSIR 2013 - IR and Social Media
Arjen de Vries
 
Looking beyond plain text for document representation in the enterprise
Looking beyond plain text for document representation in the enterprise
Arjen de Vries
 
Recommendation and Information Retrieval: Two Sides of the Same Coin?
Recommendation and Information Retrieval: Two Sides of the Same Coin?
Arjen de Vries
 
Searching Political Data by Strategy
Searching Political Data by Strategy
Arjen de Vries
 
How to Search Annotated Text by Strategy?
How to Search Annotated Text by Strategy?
Arjen de Vries
 
Ad

Recently uploaded (20)

We are Living in a Dangerous Multilingual World!
We are Living in a Dangerous Multilingual World!
Editions La Dondaine
 
History of Nursing and Nursing As A Profession UNIT-3.pptx
History of Nursing and Nursing As A Profession UNIT-3.pptx
madhusrinivas68
 
Science 7 DLL Week 1 Quarter 1 Matatag Curriculum
Science 7 DLL Week 1 Quarter 1 Matatag Curriculum
RONAFAITHLOOC
 
Impact of Network Topologies on Blockchain Performance
Impact of Network Topologies on Blockchain Performance
vschiavoni
 
1-SEAFLOOR-SPREADINGGGGGGGGGGGGGGGGGGGG.pptx
1-SEAFLOOR-SPREADINGGGGGGGGGGGGGGGGGGGG.pptx
JohnCristoffMendoza
 
What is Research Grade 7/ Research I.pptx
What is Research Grade 7/ Research I.pptx
delapenamhey144
 
FYJC .Chapter-14 L-1 Human Nutrition.pdf
FYJC .Chapter-14 L-1 Human Nutrition.pdf
RachanaT6
 
GBSN_Unit 3 - Medical and surgical Asepsis
GBSN_Unit 3 - Medical and surgical Asepsis
Areesha Ahmad
 
MOLD -GENERAL CHARACTERISTICS AND CLASSIFICATION
MOLD -GENERAL CHARACTERISTICS AND CLASSIFICATION
aparnamp966
 
Science 10 1.3 Mountain Belts in the Philippines.pptx
Science 10 1.3 Mountain Belts in the Philippines.pptx
ClaireMangundayao1
 
Herbal Excipients: Natural Colorants & Perfumery Agents
Herbal Excipients: Natural Colorants & Perfumery Agents
Seacom Skills University
 
Gas Exchange in Insects and structures 01
Gas Exchange in Insects and structures 01
PhoebeAkinyi1
 
lysosomes "suicide bags of cell" and hydrolytic enzymes
lysosomes "suicide bags of cell" and hydrolytic enzymes
kchaturvedi070
 
Relazione di laboratorio Idrolisi dell'amido (in inglese)
Relazione di laboratorio Idrolisi dell'amido (in inglese)
paolofvesco
 
SULFUR PEARL OF NAMIBIA - Thiomargarita namibiensis
SULFUR PEARL OF NAMIBIA - Thiomargarita namibiensis
aparnamp966
 
GBSN_ Unit 1 - Introduction to Microbiology
GBSN_ Unit 1 - Introduction to Microbiology
Areesha Ahmad
 
An Analysis Of The Pearl Short Story By John Steinbeck
An Analysis Of The Pearl Short Story By John Steinbeck
BillyDarmawan3
 
Single-Cell Multi-Omics in Neurodegeneration p1.pptx
Single-Cell Multi-Omics in Neurodegeneration p1.pptx
KanakChaudhary10
 
THE CIRCULATORY SYSTEM GRADE 9 SCIENCE.pptx
THE CIRCULATORY SYSTEM GRADE 9 SCIENCE.pptx
roselyncatacutan
 
TISSUE TRANSPLANTATTION and IT'S IMPORTANCE IS DISCUSSED
TISSUE TRANSPLANTATTION and IT'S IMPORTANCE IS DISCUSSED
PhoebeAkinyi1
 
We are Living in a Dangerous Multilingual World!
We are Living in a Dangerous Multilingual World!
Editions La Dondaine
 
History of Nursing and Nursing As A Profession UNIT-3.pptx
History of Nursing and Nursing As A Profession UNIT-3.pptx
madhusrinivas68
 
Science 7 DLL Week 1 Quarter 1 Matatag Curriculum
Science 7 DLL Week 1 Quarter 1 Matatag Curriculum
RONAFAITHLOOC
 
Impact of Network Topologies on Blockchain Performance
Impact of Network Topologies on Blockchain Performance
vschiavoni
 
1-SEAFLOOR-SPREADINGGGGGGGGGGGGGGGGGGGG.pptx
1-SEAFLOOR-SPREADINGGGGGGGGGGGGGGGGGGGG.pptx
JohnCristoffMendoza
 
What is Research Grade 7/ Research I.pptx
What is Research Grade 7/ Research I.pptx
delapenamhey144
 
FYJC .Chapter-14 L-1 Human Nutrition.pdf
FYJC .Chapter-14 L-1 Human Nutrition.pdf
RachanaT6
 
GBSN_Unit 3 - Medical and surgical Asepsis
GBSN_Unit 3 - Medical and surgical Asepsis
Areesha Ahmad
 
MOLD -GENERAL CHARACTERISTICS AND CLASSIFICATION
MOLD -GENERAL CHARACTERISTICS AND CLASSIFICATION
aparnamp966
 
Science 10 1.3 Mountain Belts in the Philippines.pptx
Science 10 1.3 Mountain Belts in the Philippines.pptx
ClaireMangundayao1
 
Herbal Excipients: Natural Colorants & Perfumery Agents
Herbal Excipients: Natural Colorants & Perfumery Agents
Seacom Skills University
 
Gas Exchange in Insects and structures 01
Gas Exchange in Insects and structures 01
PhoebeAkinyi1
 
lysosomes "suicide bags of cell" and hydrolytic enzymes
lysosomes "suicide bags of cell" and hydrolytic enzymes
kchaturvedi070
 
Relazione di laboratorio Idrolisi dell'amido (in inglese)
Relazione di laboratorio Idrolisi dell'amido (in inglese)
paolofvesco
 
SULFUR PEARL OF NAMIBIA - Thiomargarita namibiensis
SULFUR PEARL OF NAMIBIA - Thiomargarita namibiensis
aparnamp966
 
GBSN_ Unit 1 - Introduction to Microbiology
GBSN_ Unit 1 - Introduction to Microbiology
Areesha Ahmad
 
An Analysis Of The Pearl Short Story By John Steinbeck
An Analysis Of The Pearl Short Story By John Steinbeck
BillyDarmawan3
 
Single-Cell Multi-Omics in Neurodegeneration p1.pptx
Single-Cell Multi-Omics in Neurodegeneration p1.pptx
KanakChaudhary10
 
THE CIRCULATORY SYSTEM GRADE 9 SCIENCE.pptx
THE CIRCULATORY SYSTEM GRADE 9 SCIENCE.pptx
roselyncatacutan
 
TISSUE TRANSPLANTATTION and IT'S IMPORTANCE IS DISCUSSED
TISSUE TRANSPLANTATTION and IT'S IMPORTANCE IS DISCUSSED
PhoebeAkinyi1
 

Bigdata processing with Spark - part II

  • 1. SIKS Big Data Course Part Two Prof.dr.ir. Arjen P. de Vries [email protected] Enschede, December 7, 2016
  • 3. Recap Spark  Data Sharing Crucial for: - Interactive Analysis - Iterative machine learning algorithms  Spark RDDs - Distributed collections, cached in memory across cluster nodes  Keep track of Lineage - To ensure fault-tolerance - To optimize processing based on knowledge of the data partitioning
  • 4. RDDs in More Detail RDDs additionally provide: - Control over partitioning, which can be used to optimize data placement across queries. - usually more efficient than the sort-based approach of Map Reduce - Control over persistence (e.g. store on disk vs in RAM) - Fine-grained reads (treat RDD as a big table) Slide by Matei Zaharia, creator Spark, http://spark-project.org
  • 5. Scheduling Process rdd1.join(rdd2) .groupBy(…) .filter(…) RDD Objects build operator DAG agnostic to operators! agnostic to operators! doesn’t know about stages doesn’t know about stages DAGScheduler split graph into stages of tasks submit each stage as ready DAG TaskScheduler TaskSet launch tasks via cluster manager retry failed or straggling tasks Cluster manager Worker execute tasks store and serve blocks Block manager Threads Task stage failed
  • 6. RDD API Example // Read input file val input = sc.textFile("input.txt") val tokenized = input .map(line => line.split(" ")) .filter(words => words.size > 0) // remove empty lines val counts = tokenized // frequency of log levels .map(words => (words(0), 1)). .reduceByKey{ (a, b) => a + b, 2 } 6
  • 7. RDD API Example // Read input file val input = sc.textFile( ) val tokenized = input .map(line => line.split(" ")) .filter(words => words.size > 0) // remove empty lines val counts = tokenized // frequency of log levels .map(words => (words(0), 1)). .reduceByKey{ (a, b) => a + b } 7
  • 9. DAG View of RDD’s textFile() map() filter() map() reduceByKey() 9 Mapped RDD Partition 1 Partition 2 Partition 3 Filtered RDD Partition 1 Partition 2 Partition 3 Mapped RDD Partition 1 Partition 2 Partition 3 Shuffle RDD Partition 1 Partition 2 Hadoop RDD Partition 1 Partition 2 Partition 3 input tokenized counts
  • 10. Transformations build up a DAG, but don’t “do anything” 10
  • 11. How runJob Works Needs to compute my parents, parents, parents, etc all the way back to an RDD with no dependencies (e.g. HadoopRDD). 11 Mapped RDD Partition 1 Partition 2 Partition 3 Filtered RDD Partition 1 Partition 2 Partition 3 Mapped RDD Partition 1 Partition 2 Partition 3 Hadoop RDD Partition 1 Partition 2 Partition 3 input tokenized counts runJob(counts)
  • 12. Physical Optimizations 1. Certain types of transformations can be pipelined. 2. If dependent RDD’s have already been cached (or persisted in a shuffle) the graph can be truncated. Pipelining and truncation produce a set of stages where each stage is composed of tasks 12
  • 13. Scheduler Optimizations Pipelines narrow ops. within a stage Picks join algorithms based on partitioning (minimize shuffles) Reuses previously cached data join union groupBy map Stage 3 Stage 1 Stage 2 A: B: C: D: E: F: G: = previously computed partition Task
  • 14. Task Details: stage boundaries occur only at input RDDs or "shuffle" operations, so each task looks like this: read its input (from external storage, or by fetching map outputs), apply the pipelined functions f1, f2, …, and write its output (to a map output file, back to the master, or to external storage).
  • 15. How runJob Works: the same walk, now with the shuffle included: runJob(counts) computes parents all the way back through ShuffledRDD (counts) > MappedRDD > FilteredRDD (tokenized) > MappedRDD > HadoopRDD (input).
  • 17. Stage Graph: Stage 1 (Tasks 1-3): each task will 1. read Hadoop input, 2. perform maps and filters, 3. write partial sums (shuffle write). Stage 2 (Tasks 1-2): each task will 1. read partial sums (shuffle read), 2. invoke the user function passed to runJob.
  • 18. Physical Execution Model  Distinguish between: - Jobs: complete work to be done - Stages: bundles of work that can execute together - Tasks: unit of work, corresponds to one RDD partition  Defining stages and tasks should not require deep knowledge of what these actually do - Goal of Spark is to be extensible, letting users define new RDD operators
  • 19. RDD Interface: a set of partitions ("splits"); a list of dependencies on parent RDDs; a function to compute a partition given its parents; optional preferred locations; optional partitioning info (Partitioner). This interface captures all current Spark operations!
  • 20. Example: HadoopRDD: partitions = one per HDFS block; dependencies = none; compute(partition) = read the corresponding block; preferredLocations(partition) = HDFS block locations; partitioner = none
  • 21. Example: FilteredRDD: partitions = same as parent RDD; dependencies = "one-to-one" on parent; compute(partition) = compute the parent partition and filter it; preferredLocations(partition) = none (ask parent); partitioner = none
  • 22. Example: JoinedRDD: partitions = one per reduce task; dependencies = "shuffle" on each parent; compute(partition) = read and join the shuffled data; preferredLocations(partition) = none; partitioner = HashPartitioner(numTasks). Spark will now know this data is hashed!
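The five-part interface above can be sketched in plain Python (illustrative class and method names, not Spark's actual implementation): a FilteredRDD reuses its parent's partitions, declares a one-to-one dependency, and computes a partition by filtering the parent's partition.

```python
class RDD:
    """Minimal sketch of the RDD interface from the slide."""
    partitioner = None
    def partitions(self): raise NotImplementedError
    def dependencies(self): return []
    def compute(self, split): raise NotImplementedError
    def preferred_locations(self, split): return []

class ParallelCollectionRDD(RDD):
    """A source RDD with no dependencies, like HadoopRDD."""
    def __init__(self, data, num_partitions):
        self._parts = [data[i::num_partitions] for i in range(num_partitions)]
    def partitions(self): return list(range(len(self._parts)))
    def compute(self, split): return self._parts[split]

class FilteredRDD(RDD):
    def __init__(self, parent, pred):
        self.parent, self.pred = parent, pred
    def partitions(self):                  # same partitions as the parent
        return self.parent.partitions()
    def dependencies(self):                # "one-to-one" on the parent
        return [("one-to-one", self.parent)]
    def compute(self, split):              # compute parent partition, filter it
        return [x for x in self.parent.compute(split) if self.pred(x)]

base = ParallelCollectionRDD(list(range(10)), 2)
evens = FilteredRDD(base, lambda x: x % 2 == 0)
collected = sorted(sum((evens.compute(p) for p in evens.partitions()), []))
```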
  • 23. Dependency Types: "Narrow" dependencies: map, filter; union; join with co-partitioned inputs. "Wide" (shuffle) dependencies: groupByKey; join with inputs not co-partitioned.
  • 24. Improving Efficiency  Basic Principle: Avoid Shuffling!
  • 26. Avoid groupByKey on Pair RDDs: all key-value pairs will be shuffled across the network to a reducer, where the values are collected together. groupByKey is a "wide" (shuffle) dependency.
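A rough plain-Python illustration of why map-side combining matters (counting shuffled records by hand; no Spark involved): with groupByKey every pair crosses the network, while a reduceByKey-style combine ships at most one record per key per partition.

```python
from collections import defaultdict

partitions = [[("a", 1), ("a", 1), ("b", 1)],
              [("a", 1), ("b", 1), ("b", 1)]]

# groupByKey: every key-value pair is shuffled
shuffled_group = sum(len(p) for p in partitions)

# reduceByKey: combine within each partition before shuffling
def combine(part):
    acc = defaultdict(int)
    for k, v in part:
        acc[k] += v
    return list(acc.items())

combined = [combine(p) for p in partitions]
shuffled_reduce = sum(len(p) for p in combined)
```

Here 6 records would be shuffled for groupByKey versus 4 after the per-partition combine; with more duplicate keys per partition the gap widens further.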
  • 27. aggregateByKey takes three inputs: a zero element, a merging function within a partition, and a merging function across partitions. These act as combiners!
val initialCount = 0
val addToCounts = (n: Int, v: String) => n + 1
val sumPartitionCounts = (p1: Int, p2: Int) => p1 + p2
val countByKey = kv.aggregateByKey(initialCount)(addToCounts, sumPartitionCounts)
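The aggregateByKey contract can be simulated in plain Python (the function `aggregate_by_key` is an illustrative stand-in, not a Spark API): a zero element, a seqOp merging a value into the accumulator within a partition, and a combOp merging accumulators across partitions. As in the Scala snippet, we count values per key.

```python
from collections import defaultdict

def aggregate_by_key(partitions, zero, seq_op, comb_op):
    # Phase 1: fold values into an accumulator within each partition
    per_partition = []
    for part in partitions:
        acc = defaultdict(lambda: zero)
        for k, v in part:
            acc[k] = seq_op(acc[k], v)
        per_partition.append(acc)
    # Phase 2: merge the per-partition accumulators (the "shuffle" side)
    result = defaultdict(lambda: zero)
    for acc in per_partition:
        for k, a in acc.items():
            result[k] = comb_op(result[k], a)
    return dict(result)

parts = [[("en", "hi"), ("en", "yo")], [("nl", "hoi"), ("en", "hey")]]
counts = aggregate_by_key(parts,
                          zero=0,
                          seq_op=lambda n, v: n + 1,        # addToCounts
                          comb_op=lambda a, b: a + b)       # sumPartitionCounts
```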
  • 28. combineByKey
val result = input.combineByKey(
  (v) => (v, 1),
  (acc: (Int, Int), v) => (acc._1 + v, acc._2 + 1),
  (acc1: (Int, Int), acc2: (Int, Int)) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
).map{ case (key, value) => (key, value._1 / value._2.toFloat) }
result.collectAsMap().map(println(_))
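The Scala snippet above computes a per-key average; the same three functions (createCombiner, mergeValue, mergeCombiners) can be mirrored in plain Python as a sketch (single partition, so mergeCombiners is shown but not exercised):

```python
def combine_by_key(pairs, create, merge_value, merge_combiners):
    combiners = {}
    for k, v in pairs:
        # First value for a key seeds the combiner; later values are merged in
        combiners[k] = merge_value(combiners[k], v) if k in combiners else create(v)
    # With multiple partitions, merge_combiners would fold the per-partition
    # maps together here.
    return combiners

data = [("a", 2), ("a", 4), ("b", 6)]
sums = combine_by_key(
    data,
    create=lambda v: (v, 1),                                  # (sum, count)
    merge_value=lambda acc, v: (acc[0] + v, acc[1] + 1),
    merge_combiners=lambda a, b: (a[0] + b[0], a[1] + b[1]))
averages = {k: s / c for k, (s, c) in sums.items()}
```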
  • 29. Control the Degree of Parallelism  Repartition - concentrate effort, increase the use of nodes  Coalesce - reduce the number of tasks
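An illustrative plain-Python sketch of what coalesce does to the partition layout (the `coalesce` function here is a stand-in, not Spark's implementation): existing partitions are folded together locally without a shuffle, whereas repartition would rehash every record into a fresh layout.

```python
def coalesce(partitions, n):
    # Fold partition i into target partition i mod n, concatenating locally
    out = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        out[i % n].extend(part)
    return out

parts = [[1], [2], [3], [4], [5], [6]]   # six small partitions
merged = coalesce(parts, 2)              # two larger partitions, no shuffle
```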
  • 30. Broadcast Values  In case of a join with a small RHS or LHS, broadcast the small set to every node in the cluster
  • 34. Broadcast Variables  Create with SparkContext.broadcast(initVal)  Access with .value inside tasks  Immutable! - If you modify the broadcast value after creation, that change is local to the node
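The broadcast-join idea can be sketched in plain Python (no Spark; the data is made up): the small side is turned into a dict that every task can probe locally, so the large side is never shuffled.

```python
# Broadcast value: the small relation, shipped once to every node
small = {1: "NL", 2: "DE"}                       # country-id -> country name

# The large side stays partitioned; each task maps over its own records,
# looking keys up in the local copy of the broadcast dict.
large = [("alice", 1), ("bob", 2), ("carol", 1)]
joined = [(user, small[cid]) for user, cid in large if cid in small]
```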
  • 35. Maintaining Partitioning  mapValues instead of map  flatMapValues instead of flatMap - Good for tokenization!
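Why mapValues can preserve partitioning while map cannot, shown with plain Python lists as a stand-in for pair RDDs: mapValues is guaranteed not to touch the key, so records stay in the partition their key hashes to.

```python
pairs = [("a", "hello world"), ("b", "spark")]

# map may rewrite the key, so any existing hash partitioning is invalidated:
remapped = [(k.upper(), v) for k, v in pairs]

# mapValues only transforms the value; the keys (and thus the partitioner)
# survive, e.g. tokenizing the value per key:
tokenized = [(k, v.split(" ")) for k, v in pairs]
```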
  • 37. The best trick of all, however…
  • 38. Use Higher Level APIs! DataFrame API for core processing; works across Scala, Java, Python and R; Spark ML for machine learning; Spark SQL for structured query processing
  • 40. Combining Processing Types
// Load data using SQL
points = ctx.sql("select latitude, longitude from tweets")
// Train a machine learning model
model = KMeans.train(points, 10)
// Apply it to a stream
sc.twitterStream(...)
  .map(t => (model.predict(t.location), 1))
  .reduceByWindow("5s", (a, b) => a + b)
  • 41. Performance of Composition: with separate computing frameworks, every step ends in an HDFS write followed by an HDFS read by the next framework; Spark keeps intermediate results in memory, so the composed pipeline needs only the initial HDFS read and the final HDFS write.
  • 42. Encode Domain Knowledge  In essence, nothing more than libraries with pre-cooked code – that still operates over the abstraction of RDDs  Focus on optimizations that require domain knowledge
  • 45. Challenge: Data Representation: Java objects are often many times larger than the raw data. For class User(name: String, friends: Array[Int]), the instance User("Bobby", Array(1, 2)) carries object headers and pointers for the User object, the String, its char[], and the int[], far beyond the bytes of actual data.
  • 46. DataFrames / Spark SQL Efficient library for working with structured data » Two interfaces: SQL for data analysts and external apps, DataFrames for complex programs » Optimized computation and storage underneath Spark SQL added in 2014, DataFrames in 2015
  • 48. DataFrame API DataFrames hold rows with a known schema and offer relational operations through a DSL
c = HiveContext()
users = c.sql("select * from users")
ma_users = users[users.state == "MA"]   # builds an expression AST, not an immediate comparison
ma_users.count()
ma_users.groupBy("name").avg("age")
ma_users.map(lambda row: row.user.toUpper())
  • 49. What DataFrames Enable 1. Compact binary representation (columnar, compressed cache; rows for processing) 2. Optimization across operators (join reordering, predicate pushdown, etc.) 3. Runtime code generation
  • 52. Data Sources Uniform way to access structured data » Apps can migrate across Hive, Cassandra, JSON, … » Rich semantics allow query pushdown into data sources [Figure: both the DataFrame expression users[users.age > 20] and the SQL query select * from users go through Spark SQL to the underlying source]
  • 53. Examples JSON (tweets.json): { "text": "hi", "user": { "name": "bob", "id": 15 } }, queried as select user.id, text from tweets. JDBC: select age from users where lang = "en". Together: select t.text, u.age from tweets t, users u where t.user.id = u.id and u.lang = "en", where Spark SQL pushes select id, age from users where lang = "en" down into the JDBC source.
  • 54. Thanks  Matei Zaharia, MIT (https://cs.stanford.edu/~matei/)  Patrick Wendell, Databricks  http://spark-project.org