Improving PySpark Performance
(Spark beyond the JVM)
PyData DC 2016
Who am I?
● My name is Holden Karau
● Preferred pronouns are she/her
● I’m a Principal Software Engineer at IBM’s Spark Technology Center
● previously Alpine, Databricks, Google, Foursquare & Amazon
● co-author of Learning Spark & Fast Data processing with Spark
○ co-author of a new book focused on Spark performance coming this next year*
● @holdenkarau
● Slide share http://www.slideshare.net/hkarau
● Linkedin https://www.linkedin.com/in/holdenkarau
● Github https://github.com/holdenk
● Spark Videos http://bit.ly/holdenSparkVideos
What is going to be covered:
● What I think I might know about you
● A quick background of how PySpark works
● RDD re-use (caching, persistence levels, and checkpointing)
● Working with key/value data
○ Why groupByKey is evil and what we can do about it
● When Spark SQL can be amazing and wonderful
● A brief introduction to Datasets (new in Spark 1.6)
● Calling Scala code from Python with Spark
● How we can make PySpark go fast in the future (vroom vroom)
Torsten Reuschling
Or….
Huang Yun Chung
Who I think you wonderful humans are?
● Nice people - we are at PyData conference :)
● Don’t mind pictures of cats
● Might know some Apache Spark
● Want to scale your Apache Spark jobs
● Don’t overly mind a grab-bag of topics
Lori Erickson
What is Spark?
● General purpose distributed system
○ With a really nice API including Python :)
● Apache project (one of the most active)
● Much faster than Hadoop Map/Reduce
● Good when too big for a single machine
● Built on top of two abstractions for distributed data: RDDs & Datasets
The different pieces of Spark
● Apache Spark core
● SQL & DataFrames
● Streaming
● Language APIs: Scala, Java, Python, & R
● Graph tools: Bagel & GraphX
● Spark ML & MLLib
● Community packages
Jon Ross
SparkContext: entry to the world
● Can be used to create distributed data from many input
sources
○ Native collections, local & remote FS
○ Any Hadoop Data Source
● Also create counters & accumulators
● Automatically created in the shells (called sc)
● Specify master & app name when creating (a minimal sketch follows below)
○ Master can be local[*], spark://…, yarn, etc.
○ app name should be human readable and make sense
● etc.
Petful
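A minimal sketch of creating a SparkContext by hand (when not in a shell); the master URL, app name, and paths here are placeholders, not from the deck:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("local[*]")           # or spark://host:7077, yarn, ...
        .setAppName("pydata-dc-demo"))   # human readable app name
sc = SparkContext(conf=conf)

nums = sc.parallelize(range(100))                 # from a native collection
lines = sc.textFile("hdfs:///some/input/path")    # from any Hadoop data source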
RDDs: Spark’s Primary abstraction
RDD (Resilient Distributed Dataset)
● Distributed collection
● Recomputed on node failure
● Distributes data & work across the cluster
● Lazily evaluated (transformations & actions)
Helen Olney
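A rough illustration of the lazy evaluation bullet above (a sketch, not from the slides) - transformations only record lineage, and nothing runs until an action:

rdd = sc.parallelize(range(1000))

evens = rdd.filter(lambda x: x % 2 == 0)   # transformation: lazy
doubled = evens.map(lambda x: x * 2)       # transformation: lazy

total = doubled.count()                    # action: triggers the actual work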
What’s new for PySpark in 2.0?
● Newer Py4J bridge
● SparkSession now replaces SQLContext & HiveContext
● DataFrame/SQL speedups
● Better filter push downs in SQL
● Much better ML interop
● Streaming DataFrames* (ALPHA)
● WARNING: Slightly Different Persistence Levels
● And a bunch more :)
A detour into PySpark’s internals
Photo by Bill Ward
Spark in Scala, how does PySpark work?
● Py4J + pickling + magic
○ This can be kind of slow sometimes
● RDDs are generally RDDs of pickled objects
● Spark SQL (and DataFrames) avoid some of this
kristin klein
So what does that look like?
(Diagram: the Python driver talks to the JVM driver over py4j; each JVM worker - Worker 1 … Worker K - pipes data to and from a Python worker process.)
So how does that impact PySpark?
● Data from Spark worker serialized and piped to Python
worker
○ Multiple iterator-to-iterator transformations are still pipelined :)
● Double serialization cost makes everything more
expensive
● Python worker startup takes a bit of extra time
● Python memory isn’t controlled by the JVM - easy to go
over container limits if deploying on YARN or similar
● Error messages make ~0 sense
● etc.
Cat photo from http://galato901.deviantart.com/art/Cat-on-Work-Break-173043455
Photo from Cocoa Dream
Let's look at some old standbys:

words = rdd.flatMap(lambda x: x.split(" "))
wordPairs = words.map(lambda w: (w, 1))
grouped = wordPairs.groupByKey()
counts = grouped.mapValues(lambda vals: sum(vals))
counts.saveAsTextFile("counts")

warnings = rdd.filter(lambda x: x.lower().find("warning") != -1).count()
Tomomi
RDD re-use - sadly not magic
● If we know we are going to re-use the RDD what should we do? (see the sketch below)
○ If it fits nicely in memory, cache it in memory
○ persisting at another level
■ MEMORY, MEMORY_AND_DISK
○ checkpointing
○ The options changed in Spark 2.0 (we can’t easily specify serialized anymore since there is no benefit on RDDs - but things get complicated when sharing RDDs or working with DataFrames)
● Noisy clusters
○ _2 & checkpointing can help
● persist first for checkpointing
Richard Gillin
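A minimal sketch of the re-use options above (the paths are placeholders):

from pyspark import StorageLevel

rdd = sc.textFile("hdfs:///some/input")   # placeholder path

# Pick ONE persistence level per RDD - it can't be changed once set:
rdd.cache()                                    # shorthand for MEMORY_ONLY
# rdd.persist(StorageLevel.MEMORY_AND_DISK)    # spill to disk when memory is tight
# rdd.persist(StorageLevel.MEMORY_AND_DISK_2)  # the _2 levels replicate (noisy clusters)

# Checkpointing truncates the lineage; persist first so the data isn't computed twice.
sc.setCheckpointDir("hdfs:///tmp/checkpoints")  # placeholder path
rdd.checkpoint()
rdd.count()   # an action materializes the cache / checkpoint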
What is key skew and why do we care?
● Keys aren’t evenly distributed
○ Sales by zip code, or records by city, etc.
● groupByKey will explode (but it's pretty easy to break)
● We can have really unbalanced partitions
○ If we have enough key skew sortByKey could even fail
○ Stragglers (uneven sharding can make some tasks take much longer)
Mitchell Joyce
groupByKey - just how evil is it?
● Pretty evil
● Groups all of the records with the same key into a single record
○ Even if we immediately reduce it (e.g. sum it or similar)
○ This can be too big to fit in memory, then our job fails
● Unless we are in SQL then happy pandas
PROgeckoam
So what does that look like?
(94110, A, B)
(94110, A, C)
(10003, D, E)
(94110, E, F)
(94110, A, R)
(10003, A, R)
(94110, D, R)
(94110, E, R)
(94110, E, R)
(67843, T, R)
(94110, T, R)
(94110, T, R)
(67843, T, R)(10003, A, R)
(94110, [(A, B), (A, C), (E, F), (A, R), (D, R), (E, R), (E, R), (T, R), (T, R)])
Tomomi
“Normal” Word count w/RDDs

lines = sc.textFile(src)
words = lines.flatMap(lambda x: x.split(" "))
word_count = (words.map(lambda x: (x, 1))
              .reduceByKey(lambda x, y: x + y))
word_count.saveAsTextFile(output)

Slide callouts: no data is read or processed until the saveAsTextFile line - that is the “action” which forces Spark to evaluate the RDD; the map and reduceByKey steps are still pipelined inside of the same Python executor.
Trish Hamme
groupByKey vs. reduceByKey (Spark UI screenshots of the same job)
So what did we do instead?
● reduceByKey
○ Works when the types are the same (e.g. in our summing version)
● aggregateByKey
○ Doesn’t require the types to be the same (e.g. computing stats model or similar)
Allows Spark to pipeline the reduction & skip making the list
We also got a map-side reduction (note the difference in shuffled read) - see the sketch below
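A minimal sketch of both approaches (the per-key mean via aggregateByKey is an illustrative example, not from the deck):

# Hypothetical (key, numeric value) pairs, e.g. (zip_code, sale_amount).
pairs = sc.parallelize([(94110, 3.0), (94110, 1.0), (10003, 2.0)])

# reduceByKey: result type matches the value type (a plain per-key sum).
sums = pairs.reduceByKey(lambda x, y: x + y)

# aggregateByKey: result type can differ - (sum, count) per key, then a mean.
sum_counts = pairs.aggregateByKey(
    (0.0, 0),                                      # zero value
    lambda acc, v: (acc[0] + v, acc[1] + 1),       # merge a value within a partition
    lambda a, b: (a[0] + b[0], a[1] + b[1]))       # merge partials across partitions
means = sum_counts.mapValues(lambda s: s[0] / s[1])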
Can just the shuffle cause problems?
● Sorting by key can put all of the records in the same partition
● We can run into partition size limits (around 2GB)
● Or just get bad performance
● So that we can handle data like the above, we can add some “junk” to our key
(94110, A, B)
(94110, A, C)
(10003, D, E)
(94110, E, F)
(94110, A, R)
(10003, A, R)
(94110, D, R)
(94110, E, R)
(94110, E, R)
(67843, T, R)
(94110, T, R)
(94110, T, R)
PROTodd Klassy
Shuffle explosions :(
(94110, A, B)
(94110, A, C)
(10003, D, E)
(94110, E, F)
(94110, A, R)
(10003, A, R)
(94110, D, R)
(94110, E, R)
(94110, E, R)
(67843, T, R)
(94110, T, R)
(94110, T, R)
(94110, A, B)
(94110, A, C)
(94110, E, F)
(94110, A, R)
(94110, D, R)
(94110, E, R)
(94110, E, R)
(94110, T, R)
(94110, T, R)
(67843, T, R)
(10003, A, R)
(10003, D, E)
javier_artiles
100% less explosions
(94110, A, B)
(94110, A, C)
(10003, D, E)
(94110, E, F)
(94110, A, R)
(10003, A, R)
(94110, D, R)
(94110, E, R)
(94110, E, R)
(67843, T, R)
(94110, T, R)
(94110, T, R)
(94110_A, A, B)
(94110_A, A, C)
(94110_A, A, R)
(94110_D, D, R)
(94110_T, T, R)
(10003_A, A, R)
(10003_D, D, E)
(67843_T, T, R)
(94110_E, E, R)
(94110_E, E, R)
(94110_E, E, F)
(94110_T, T, R)
Jennifer Williams
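A minimal sketch of the “add junk to the key” idea shown above for a summing aggregation; the slide salts with part of the record (e.g. 94110_A), while this sketch uses a random salt, which is a common variant:

import random

NUM_SALTS = 10   # illustrative; tune to the amount of skew
pairs = rdd.map(lambda x: (x[0], 1))   # hypothetical (skewed_key, count) pairs

# Phase 1: add "junk" to the key so one hot key spreads over many partitions.
salted = pairs.map(lambda kv: ((kv[0], random.randint(0, NUM_SALTS - 1)), kv[1]))
partial = salted.reduceByKey(lambda x, y: x + y)

# Phase 2: strip the salt and reduce again for the final per-key totals.
totals = (partial
          .map(lambda kv: (kv[0][0], kv[1]))
          .reduceByKey(lambda x, y: x + y))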
Well there is a bit of magic in the shuffle….
● We can reuse shuffle files
● But it can (and does) explode*
Sculpture by Flaming Lotus Girls
Photo by Zaskoda
Our saviour from serialization: DataFrames
● For the most part keeps data in the JVM
○ Notable exception is UDFs written in Python
● Takes our Python calls and turns them into a query plan
● If we need more than the native operations in Spark’s DataFrames, we end up back in Python land (UDFs / RDDs) and lose much of the benefit
● be wary of Distributed Systems bringing claims of usability….
Andy Blackledge
So what are Spark DataFrames?
● More than SQL tables
● Not Pandas or R DataFrames
● Semi-structured (have schema information)
● tabular
● work on expressions instead of lambdas (see the sketch below)
○ e.g. df.filter(df.col("happy") == true) instead of rdd.filter(lambda x: x.happy == true)
● Not a subset of “Datasets” - since the Dataset API isn’t exposed in Python yet :(
Quinn Dombrowski
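A minimal PySpark sketch of the expression-vs-lambda point above (the column names are made up):

from pyspark.sql import functions as F

# Expression: the optimizer can see inside this and push the filter down.
happy_df = df.filter(F.col("happy") == True).select("zip", "pt")

# Lambda: opaque Python the optimizer can't inspect, and the data has to
# round-trip through a Python worker.
happy_rdd = df.rdd.filter(lambda row: row.happy)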
Why are DataFrames good for performance?
● Space efficient columnar cached representation
● Able to push down operations to the data store
● Reduced serialization/data transfer overhead
● Able to perform some operations on serialized data
● Optimizer is able to look inside of our operations
○ Regular Spark can’t see inside our operations to spot the difference between (min(_, _)) and (append(_, _))
How much faster can it be? (Python)
Andrew Skudder
Loading with sparkSQL & spark-csv

df = (sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("resources/adult.data"))
Jess Johnson
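In Spark 2.0+, where csv support is built in (see the footnote on the next slide), roughly the equivalent would be - a sketch assuming a SparkSession named spark:

df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("resources/adult.data"))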
What about other data formats?
● Built in
○ Parquet
○ JDBC
○ Json (which is amazing!)
○ Orc
○ Hive
● Available as packages
○ csv*
○ Avro, Redshift, Mongo, Cassandra, Cloudant, Couchbase, etc.
○ +34 at
http://spark-packages.org/?q=tags%3A%22Data%20Sources%22
Michael Coghlan
*pre-2.0 package, 2.0+ built in hopefully
Ok so we’ve got our Data, what now?
● We can inspect the Schema
● We can start to apply some transformations (relational)
● We can do some machine learning
● We can jump into an RDD or a Dataset for functional
transformations
● We could wordcount - again!
Getting the schema
● printSchema() for human readable
● schema for machine readable
Sample json record:
{"name":"mission",
 "pandas":[{"id":1, "zip":"94110", "pt":"giant", "happy":true, "attributes":[0.4,0.5]}]}
Xiahong Chen
Resulting schema:
root
|-- name: string (nullable = true)
|-- pandas: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- id: long (nullable = false)
| | |-- zip: string (nullable = true)
| | |-- pt: string (nullable = true)
| | |-- happy: boolean (nullable = false)
| | |-- attributes: array (nullable = true)
| | | |-- element: double (containsNull = false)
Simon Götz
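A small sketch tying the two slides above together - loading the sample record and inspecting its schema (the path is a placeholder):

df = sqlContext.read.json("resources/pandas.json")   # placeholder path

df.printSchema()    # human readable tree, as shown above
schema = df.schema  # machine readable StructType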
Word count w/Dataframes

from pyspark.sql import Row

df = sqlCtx.read.load(src)
# Returns an RDD
words = df.select("text").flatMap(lambda x: x.text.split(" "))
words_df = words.map(lambda x: Row(word=x, cnt=1)).toDF()
word_count = words_df.groupBy("word").sum()
word_count.write.format("parquet").save("wc.parquet")

Still have the double serialization here :( *(Also in 2.0 you have to explicitly switch to the RDD with df.rdd first.)
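For comparison, a sketch that keeps the whole word count in the JVM using DataFrame expressions (Spark 2.0 style; explode and split are built-in functions, the rest is illustrative):

from pyspark.sql import functions as F

df = spark.read.load(src)
word_count = (df.select(F.explode(F.split(df["text"], " ")).alias("word"))
                .groupBy("word")
                .count())
word_count.write.format("parquet").save("wc_expr.parquet")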
Buuuut….
● UDFs / custom maps will be “slow” (e.g. require data
copy from executor and back)
Nick Ellis
Mixing Python & JVM code FTW:
● DataFrames are an example of pushing our processing
to the JVM
● Python UDFS & maps lose this benefit
● But we can write Scala UDFS and call them from
Python
○ py4j error messages can be difficult to understand :(
● Work to make JVM UDFs easier to register in PR #9766
● Trickier with RDDs since they store pickled objects
Exposing functions to be callable from Python:

// functions we want to be callable from python
object functions {
  def kurtosis(e: Column): Column =
    new Column(Kurtosis(EvilSqlTools.getExpr(e)))

  def registerUdfs(sqlCtx: SQLContext): Unit = {
    sqlCtx.udf.register("rowKurtosis", helpers.rowKurtosis _)
  }
}
Fiona Henderson
Calling the functions with py4j*:
● The SparkContext has a reference to the jvm (_jvm)
● Many Python objects which are wrappers of JVM
objects have _j[objtype] to get the JVM object
○ rdd._jrdd
○ df._jdf
○ sc._jsc
● These are private and may change
*The py4j bridge only exists on the driver**
** Not exactly true but close enough
Fiona Henderson
e.g.:

def register_sql_extensions(sql_ctx):
    scala_sql_context = sql_ctx._ssql_ctx
    spark_ctx = sql_ctx._sc
    (spark_ctx._jvm.com.sparklingpandas.functions
        .registerUdfs(scala_sql_context))
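Once registered, the Scala UDF can be used from Python like any other SQL function - a sketch, with a hypothetical table and columns:

register_sql_extensions(sqlContext)

# "rowKurtosis" is the name registered on the Scala side above.
df = sqlContext.sql("SELECT rowKurtosis(a, b, c) AS k FROM measurements")
# or on an existing DataFrame:
df2 = some_df.selectExpr("rowKurtosis(a, b, c) AS k")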
More things to keep in mind with DFs (in Python)
● Schema serialized as json from JVM
● toPandas is essentially collect
● joins can result in the cross product
○ big data x big data =~ out of memory
● Pre 2.0: Use the HiveContext
○ you don’t need a hive install
○ more powerful UDFs, window functions, etc.
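For the pre-2.0 tip above, a minimal sketch - the HiveContext works without a Hive install:

from pyspark.sql import HiveContext

sqlCtx = HiveContext(sc)   # no Hive install needed; enables window functions & richer UDFs
df = sqlCtx.read.json("resources/pandas.json")   # placeholder path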
The “future*”: Awesome UDFs
● Work going on in Scala land to translate simple Scala
into SQL expressions - need the Dataset API
○ Maybe we can try similar approaches with Python?
● Very early work going on to use Jython for simple UDFs
(e.g. 2.7 compat & no native libraries) - SPARK-15369
○ Early benchmarking w/word count 5% slower than native Scala UDF,
close to 2x faster than regular Python
● Willing to share your Python UDFs for benchmarking? -
http://bit.ly/pySparkUDF
*The future may or may not have better performance than today. But bun-bun the bunny has some lettuce so it’s ok!
Want to help with reviewing the code?
● https://github.com/apache/spark/pull/13571
Some open questions:
● Do we want to make the Jython dependency optional?
● If so how do we want people to load it?
● Do we want to fall back automatically on Jython failure?
E-mail me: holden@pigscanfly.ca :)
The “future*”: Faster interchange
● Faster interchange between Python and Spark (e.g.
Tungsten + Apache Arrow)? (SPARK-13391 &
SPARK-13534)
● Willing to share your Python UDFs for benchmarking? -
http://bit.ly/pySparkUDF
● Dask integration?
*The future may or may not have better performance than today. But bun-bun the bunny has some lettuce so it’s ok!
Additional Spark Resources
● Programming guide (along with JavaDoc, PyDoc,
ScalaDoc, etc.)
○ http://spark.apache.org/docs/latest/
● Books
● Videos
● Spark Office Hours
○ Normally in the bay area - will do Google Hangouts ones soon
○ follow me on twitter for future ones - https://twitter.com/holdenkarau
raider of gin
● Learning Spark
● Fast Data Processing with Spark (Out of Date)
● Fast Data Processing with Spark (2nd edition)
● Advanced Analytics with Spark
● Coming soon: Spark in Action
● Coming soon: High Performance Spark
And the next book…..
First five chapters are available in “Early Release”*:
● Buy from O’Reilly - http://bit.ly/highPerfSpark
● Chapter 9(ish) - Going Beyond Scala
Get notified when updated & finished:
● http://www.highperformancespark.com
● https://twitter.com/highperfspark
* Early Release means extra mistakes, but also a chance to help us make a more awesome
book.
Spark Videos
● Apache Spark Youtube Channel
● My Spark videos on YouTube -
○ http://bit.ly/holdenSparkVideos
● Spark Summit 2014 training
● Paco’s Introduction to Apache Spark
k thnx bye!
If you care about Spark testing and
don’t hate surveys:
http://bit.ly/holdenTestingSpark
Will tweet results
“eventually” @holdenkarau
PySpark Users: Have some simple UDFs you wish ran faster that you’re willing to share?:
http://bit.ly/pySparkUDF