Luca Canali, CERN
Apache Spark Performance
Troubleshooting at Scale:
Challenges, Tools and Methods
#EUdev2
About Luca
• Computing engineer and team lead at CERN IT
– Hadoop and Spark service, database services
– Joined CERN in 2005
• 17+ years of experience with database services
– Performance, architecture, tools, internals
– Sharing information: blog, notes, code
• @LucaCanaliDB – http://cern.ch/canali
CERN and the Large Hadron Collider
• Largest and most powerful particle accelerator
Apache Spark @ CERN
• Spark is a popular component for data processing
– Deployed on four production Hadoop/YARN clusters
• Aggregated capacity (2017): ~1500 physical cores, 11 PB
– Adoption is growing. Key projects involving Spark:
• Analytics for accelerator controls and logging
• Monitoring use cases, including the use of Spark Streaming
• Analytics on aggregated logs
• Explorations on the use of Spark for high energy physics
Link: http://cern.ch/canali/docs/BigData_Solutions_at_CERN_KT_Forum_20170929.pdf
Motivations for This Work
• Understanding Spark workloads
– Understanding the technology (where are the bottlenecks, how
well do Spark jobs scale, etc.)
– Capacity planning: benchmark platforms
• Provide our users with a range of monitoring tools
• Measurements and troubleshooting Spark SQL
– Structured data in Parquet for data analytics
– Spark-ROOT (project on using Spark for physics data)
Outline of This Talk
• The topic is vast; I will share some ideas and
lessons learned
• How to approach performance troubleshooting,
benchmarking and relevant methods
• Data sources and tools to measure Spark
workloads, challenges at scale
• Examples and lessons learned with some key tools
Challenges
• Just measuring performance metrics is easy
• Producing actionable insights requires effort and
preparation
– Methods for approaching performance troubleshooting
– How to gather relevant data
• Need to use the right tools, possibly many tools
• Be aware of the limitations of your tools
– Know your product internals: there are many “moving parts”
– Model and understand root causes from effects
Anti-Pattern: The Marketing Benchmark
• The over-simplified
benchmark graph
• Does not tell you why B
is better than A
• To understand, you need
more context and root
cause analysis
[Bar chart: “some metric (higher is better)” for System A vs. System B,
captioned “System B is 5x better than System A !?”]
Benchmark for Speed
• Which one is faster?
• [Three images, labeled 20x, 10x, 1x]
Adapt Answer to Circumstances
• Which one is faster?
• [The same three images, labeled 20x, 10x, 1x]
• Actually, it depends…
Active Benchmarking
• Example: use TPC-DS benchmark as workload generator
– Understand and measure Spark SQL, optimizations, systems performance, etc
[Bar chart: query execution time (latency) in seconds per query.
TPC-DS workload — data set size: 10 TB, query set v1.4, 420 cores, executor
memory per core 5 GB. Bars show MIN, MAX and AVG execution time per query.]
Troubleshooting by Understanding
• Measure the workload
– Use all relevant tools
– Not a “black box”: instrument code where needed
• Be aware of the blind spots
– Missing tools, measurements hard to get, etc
• Make a mental model
– Explain the observed performance and bottlenecks
– Prove it or disprove it with experiment
• Summary:
– Be data driven, no dogma, produce insights
Actionable Measurement Data
• You want to find answers to questions like
– What is my workload doing?
– Where is it spending time?
– What are the bottlenecks (CPU, I/O)?
– Why do I measure the {latency/throughput} that I
measure?
• Why not 10x better?
Measuring Spark
• Distributed system, parallel architecture
– Many components, complexity increases when running at scale
– Optimizing a component does not necessarily optimize the whole
Spark and Monitoring Tools
• Spark instrumentation
– Web UI
– REST API
– Eventlog
– Executor/Task Metrics
– Dropwizard metrics library
• Complement with
– OS tools
– For large clusters, deploy tools that ease working at cluster-level
• https://spark.apache.org/docs/latest/monitoring.html
Web UI
• Info on Jobs, Stages, Executors, Metrics, SQL,..
– Start with: point your web browser to driver_host, port 4040
Execution Plans and DAGs
[Screenshots: SQL execution plans and DAG visualizations from the Web UI]
Web UI Event Timeline
• Event Timeline
– Shows task execution details by activity and time
REST API – Spark Metrics
• History server URL + /api/v1/applications
• http://historyserver:18080/api/v1/applications/application_1507881680808_0002/stages
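The endpoint can also be read programmatically; a minimal sketch in Scala
(host and application id are illustrative placeholders):

import scala.io.Source

// Fetch the stage metrics of one application from the History Server REST API
val url = "http://historyserver:18080/api/v1/applications/" +
  "application_1507881680808_0002/stages"
val stagesJson = Source.fromURL(url).mkString
println(stagesJson.take(500))  // print the first part of the JSON payload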
EventLog – Stores Web UI History
• Config:
– spark.eventLog.enabled=true
– spark.eventLog.dir = <path>
• JSON files store info displayed by Spark History server
– Custom applications can read the JSON files with the Spark task metrics
and history; one example is sparklint.
– You can also read and analyze the event log files using the DataFrame API
with the Spark SQL JSON reader. More details at:
https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Notes
Spark Executor Task Metrics
val df = spark.read.json("/user/spark/applicationHistory/application_...")
df.filter("Event='SparkListenerTaskEnd'").select("Task Metrics.*").printSchema
Task ID: long (nullable = true)
|-- Disk Bytes Spilled: long (nullable = true)
|-- Executor CPU Time: long (nullable = true)
|-- Executor Deserialize CPU Time: long (nullable = true)
|-- Executor Deserialize Time: long (nullable = true)
|-- Executor Run Time: long (nullable = true)
|-- Input Metrics: struct (nullable = true)
| |-- Bytes Read: long (nullable = true)
| |-- Records Read: long (nullable = true)
|-- JVM GC Time: long (nullable = true)
|-- Memory Bytes Spilled: long (nullable = true)
|-- Output Metrics: struct (nullable = true)
| |-- Bytes Written: long (nullable = true)
| |-- Records Written: long (nullable = true)
|-- Result Serialization Time: long (nullable = true)
|-- Result Size: long (nullable = true)
|-- Shuffle Read Metrics: struct (nullable = true)
| |-- Fetch Wait Time: long (nullable = true)
| |-- Local Blocks Fetched: long (nullable = true)
| |-- Local Bytes Read: long (nullable = true)
| |-- Remote Blocks Fetched: long (nullable = true)
| |-- Remote Bytes Read: long (nullable = true)
| |-- Total Records Read: long (nullable = true)
|-- Shuffle Write Metrics: struct (nullable = true)
| |-- Shuffle Bytes Written: long (nullable = true)
| |-- Shuffle Records Written: long (nullable = true)
| |-- Shuffle Write Time: long (nullable = true)
|-- Updated Blocks: array (nullable = true)
. . .
Spark internal task metrics provide info on the executors’ activity:
run time, CPU time used, I/O metrics, JVM garbage collection, shuffle
activity, etc.
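As a quick check, aggregate values can be computed directly from the same
DataFrame; a minimal sketch (column names as in the schema above, the
aliases are illustrative):

// Sum run time (ms) and CPU time (ns) over all tasks in the event log
df.filter("Event = 'SparkListenerTaskEnd'")
  .selectExpr(
    "sum(`Task Metrics`.`Executor Run Time`) as total_run_time_ms",
    "sum(`Task Metrics`.`Executor CPU Time`) as total_cpu_time_ns")
  .show()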
Task Info, Accumulables, SQL Metrics
df.filter("Event='SparkListenerTaskEnd'").select("Task Info.*").printSchema
root
|-- Accumulables: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- ID: long (nullable = true)
| | |-- Name: string (nullable = true)
| | |-- Value: string (nullable = true)
| | | . . .
|-- Attempt: long (nullable = true)
|-- Executor ID: string (nullable = true)
|-- Failed: boolean (nullable = true)
|-- Finish Time: long (nullable = true)
|-- Getting Result Time: long (nullable = true)
|-- Host: string (nullable = true)
|-- Index: long (nullable = true)
|-- Killed: boolean (nullable = true)
|-- Launch Time: long (nullable = true)
|-- Locality: string (nullable = true)
|-- Speculative: boolean (nullable = true)
|-- Task ID: long (nullable = true)
Details about the task: launch time, finish time, host, locality, etc.
Accumulables are used to track metric updates, including SQL metrics.
EventLog Analytics Using Spark SQL
Aggregate stage info metrics by name and display sum(values):
scala> spark.sql("select Name, sum(Value) as value from aggregatedStageMetrics group by Name order by Name").show(40,false)
+---------------------------------------------------+----------------+
|Name |value |
+---------------------------------------------------+----------------+
|aggregate time total (min, med, max) |1230038.0 |
|data size total (min, med, max) |5.6000205E7 |
|duration total (min, med, max) |3202872.0 |
|number of output rows |2.504759806E9 |
|internal.metrics.executorRunTime |857185.0 |
|internal.metrics.executorCpuTime |1.46231111372E11|
|... |... |
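The aggregatedStageMetrics view used above can be built from the event log;
a minimal sketch, assuming df is the event log DataFrame read as shown
earlier (the intermediate names are illustrative):

// Collect all stage-level accumulables (Name, Value) from the event log.
// Values are stored as strings, hence the cast before aggregating.
val stageAccumulables = df
  .filter("Event = 'SparkListenerStageCompleted'")
  .selectExpr("explode(`Stage Info`.`Accumulables`) as acc")
  .selectExpr("acc.Name as Name", "cast(acc.Value as double) as Value")

stageAccumulables.createOrReplaceTempView("aggregatedStageMetrics")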
Drill Down Into Executor Task Metrics
Relevant code in Apache Spark core:
– Example snippets show the instrumentation in Executor.scala
– Note: for SQL metrics, see the instrumentation used with code generation
Read Metrics with sparkMeasure
sparkMeasure is a tool for performance investigations of Apache Spark
workloads: https://github.com/LucaCanali/sparkMeasure
$ bin/spark-shell --packages ch.cern.sparkmeasure:spark-measure_2.11:0.11
scala> val stageMetrics = ch.cern.sparkmeasure.StageMetrics(spark)
scala> stageMetrics.runAndMeasure(spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(1000)").show)
Scheduling mode = FIFO
Spark Context default degree of parallelism = 8
Aggregated Spark stage metrics:
numStages => 3
sum(numTasks) => 17
elapsedTime => 9103 (9 s)
sum(stageDuration) => 9027 (9 s)
sum(executorRunTime) => 69238 (1.2 min)
sum(executorCpuTime) => 68004 (1.1 min)
. . . <more metrics>
Notebooks and sparkMeasure
• Interactive use: suitable for
notebooks and REPL
• Offline use: save metrics for
later analysis
• Metrics granularity: collected per stage, or recording all tasks
• Metrics aggregation: user-defined, e.g. per SQL statement (see the sketch
below)
• Works with Scala and Python
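For example, a user-defined measurement interval around arbitrary code; a
minimal sketch using the begin/end API described in the sparkMeasure
documentation (same package version as above):

val stageMetrics = ch.cern.sparkmeasure.StageMetrics(spark)
stageMetrics.begin()
// run the code to measure, e.g. one SQL statement per interval
spark.sql("select count(*) from range(1000) cross join range(1000)").show()
stageMetrics.end()
stageMetrics.printReport()  // aggregated stage metrics for the interval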
Collecting Info Using Spark Listener
- Spark Listeners are used to send task metrics from the executors to the
driver
- They are the underlying data transport used by the Web UI, sparkMeasure, etc.
- You can write Spark Listeners for your own custom monitoring code
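The sketch below assumes the Spark 2.x listener API (the class name and the
printed fields are illustrative):

import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Print basic metrics as each task finishes
class SimpleTaskListener extends SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    // taskMetrics can be null for failed tasks
    Option(taskEnd.taskMetrics).foreach { m =>
      println(s"stage=${taskEnd.stageId} task=${taskEnd.taskInfo.taskId} " +
        s"runTime=${m.executorRunTime} ms cpuTime=${m.executorCpuTime} ns")
    }
  }
}

spark.sparkContext.addSparkListener(new SimpleTaskListener)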
Examples – Parquet I/O
• An example of how to measure I/O, Spark reading Apache
Parquet files
• This causes a full scan of the table store_sales
spark.sql("select * from store_sales where ss_sales_price=-1.0").collect()
• Test run on a cluster of 12 nodes, with 12 executors, 4 cores each
• Total Time Across All Tasks: 59 min
• Locality Level Summary: Node local: 1675
• Input Size / Records: 185.3 GB / 4319943621
• Duration: 1.3 min
Parquet I/O – Filter Push Down
• Parquet filter push down in action
• This causes a full scan of the table store_sales with a filter
condition pushed down
spark.sql("select * from store_sales where ss_quantity=-1.0").collect()
• Test run on a cluster of 12 nodes, with 12 executors, 4 cores each
• Total Time Across All Tasks: 1.0 min
• Locality Level Summary: Node local: 1675
• Input Size / Records: 16.2 MB / 0
• Duration: 3 s
Parquet I/O – Drill Down
• Parquet filter push down
– I/O is reduced when Parquet pushes down a filter condition, using the
statistics stored with the data (min, max, num values, num nulls)
– Filter push down is not available for the decimal data type
(ss_sales_price)
https://db-blog.web.cern.ch/blog/luca-canali/2017-06-diving-spark-and-parquet-workloads-example
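One way to verify whether a predicate is pushed down is to inspect the
physical plan; a minimal sketch (the exact plan text varies across Spark
versions):

// The scan node of the plan reports the pushed-down predicates,
// e.g. PushedFilters: [IsNotNull(ss_quantity), EqualTo(ss_quantity,-1.0)]
spark.sql("select * from store_sales where ss_quantity=-1.0").explain()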
CPU and I/O Reading Parquet Files
# echo 3 > /proc/sys/vm/drop_caches # drop the filesystem cache
$ bin/spark-shell --master local[1] --driver-memory 16g \
  --packages ch.cern.sparkmeasure:spark-measure_2.11:0.11
val stageMetrics = ch.cern.sparkmeasure.StageMetrics(spark)
stageMetrics.runAndMeasure(
  spark.sql("select * from web_sales where ws_sales_price=-1").collect())
Spark Context default degree of parallelism = 1
Aggregated Spark stage metrics:
numStages => 1
sum(numTasks) => 787
elapsedTime => 465430 (7.8 min)
sum(stageDuration) => 465430 (7.8 min)
sum(executorRunTime) => 463966 (7.7 min)
sum(executorCpuTime) => 325077 (5.4 min)
sum(jvmGCTime) => 3220 (3 s)
CPU time is 70% of run time.
Note: OS tools confirm that the difference between “Run” and “CPU” time is
spent in read calls (verified with a SystemTap script).
Stack Profiling and Flame Graphs
- Use stack profiling to investigate CPU usage
- Flame graph visualization helps identify “hot methods” and their context
(parent stack)
- Use profilers that don’t suffer from Java safepoint bias, e.g.
async-profiler

https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Tools_Spark_Linux_FlameGraph.md
How Does Your Workload Scale?
Measure latency as a function of the number of concurrent tasks.
Example workload: Spark reading Parquet files from memory.

Speedup(p) = R(1)/R(p)

Speedup grows linearly in the ideal case. Saturation effects and
serialization reduce scalability (see also Amdahl’s law, sketched below).
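For reference, Amdahl’s law gives the ideal bound, with s the serial fraction
of the workload and p the degree of parallelism:

Speedup(p) = 1 / (s + (1 - s)/p)

The speedup saturates at 1/s as p grows.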
Are CPUs Processing Instructions or
Stalling for Memory?
• Measure Instructions per Cycle (IPC) and CPU-to-Memory throughput
• Minimizing CPU stalled cycles is key on modern platforms
• Tools to read CPU HW counters: perf and more
https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Tools_Linux_Memory_Perf_Measure.md
Increasing number of stalled cycles at high load
CPU-to-memory throughput close
to saturation for this system
Lessons Learned – Measuring CPU
• Reading Parquet data is CPU-intensive
– Measured throughput for the test system at high load (using all 20 cores)
• about 3 GB/s – max read throughput with lightweight processing of parquet files
– Measured CPU-to-memory traffic at high load ~80 GB/s
• Comments:
– CPU utilization and memory throughput are the bottleneck in this test
• Other systems could have I/O or network bottlenecks at lower throughput
– Room for optimizations in the Parquet reader code?
https://db-blog.web.cern.ch/blog/luca-canali/2017-09-performance-analysis-cpu-intensive-workload-apache-spark
Pitfalls: CPU Utilization at High Load
• Physical cores vs. threads
– CPU utilization grows up to the number of available threads
– Throughput at scale mostly limited by number of available cores
– Pitfall: understanding Hyper-threading on multitenant systems
Example data: CPU-bound workload (reading Parquet files from memory); the
test system has 20 physical cores.

Metric                  | 20 concurrent tasks | 40 concurrent tasks | 60 concurrent tasks
Elapsed time            | 20 s                | 23 s                | 23 s
Executor run time       | 392 s               | 892 s               | 1354 s
Executor CPU time       | 376 s               | 849 s               | 872 s
CPU-memory data volume  | 1.6 TB              | 2.2 TB              | 2.2 TB
CPU-memory throughput   | 85 GB/s             | 90 GB/s             | 90 GB/s
IPC                     | 1.42                | 0.66                | 0.63

Job latency is roughly constant.
20 tasks -> each task gets a core.
40 tasks -> tasks share CPU cores; it is as if the CPUs have become 2 times
slower, with the extra time spent waiting in the CPU runqueue.
Lessons Learned on Garbage Collection and CPU Usage
Measure: reading Parquet Table with “--driver-memory 1g” (default)
sum(executorRunTime) => 468480 (7.8 min)
sum(executorCpuTime) => 304396 (5.1 min)
sum(jvmGCTime) => 163641 (2.7 min)
OS tools: (ps -efo cputime -p <pid_of_SparkSubmit>)
CPU time = 2306 sec
Lessons learned:
• Use OS tools to measure CPU used by JVM
• Garbage Collection is memory hungry (size your executors accordingly)
Run time = CPU time (executor) + JVM GC time (304 s + 164 s ≈ 468 s above).
Many CPU cycles are used by the JVM but are not accounted for in the Spark
metrics: the OS reports 2306 s of CPU time, far more than the executor CPU
time measured by Spark, largely due to garbage collection activity.
Performance at Scale: Keep System Resources Busy
Running tasks in parallel is key for performance.
Efficiency drops significantly when the number of concurrent active tasks is
much smaller than the number of available cores.
Issues With Stragglers
• Slow running tasks - stragglers
– Many causes possible, including
– Tasks running on slow/busy nodes
– Nodes with HW problems
– Skew in data and/or partitioning
• A few “local” slow tasks can wreak havoc on global performance
– It is often the case that one stage needs to finish before the next
one can start
– See also discussion in SPARK-2387 on stage barriers
– Just a few slow tasks can slow everything down
Investigate Stragglers With Analytics
on “Task Info” Data
Example of performance
limited by long tail and
stragglers
Data source: EventLog or
sparkMeasure (from task info:
task launch and finish time)
Data analyzed using Spark
SQL and notebooks
From https://db-blog.web.cern.ch/blog/luca-canali/2017-03-measuring-apache-spark-workload-metrics-performance-troubleshooting
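A minimal sketch of this kind of analysis, assuming df is the event log
DataFrame read as shown earlier (the aliases are illustrative):

import org.apache.spark.sql.functions.{avg, desc, max}

// Task duration per executor, from the task launch and finish timestamps
val taskDurations = df.filter("Event = 'SparkListenerTaskEnd'")
  .selectExpr(
    "`Task Info`.`Executor ID` as executor",
    "`Task Info`.`Finish Time` - `Task Info`.`Launch Time` as duration_ms")

// Executors with outlying max/avg task durations are straggler candidates
taskDurations.groupBy("executor")
  .agg(max("duration_ms").as("max_ms"), avg("duration_ms").as("avg_ms"))
  .orderBy(desc("max_ms"))
  .show()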
Task Stragglers – Drill Down
Drill down on task latency per
executor:
it’s a plot with 3 dimensions
Stragglers due to a few
machines in the cluster:
later identified as slow HW
Lessons learned: identify and
remove/repair non-performing
hardware from the cluster
From https://github.com/LucaCanali/sparkMeasure/blob/master/examples/SparkTaskMetricsAnalysisExample.ipynb
Web UI – Monitor Executors
• The Web UI shows details of executors
– Including number of active tasks (+ per-node info)
All OK: 480 cores allocated and 480 active tasks
Example of Underutilization
• Monitor active tasks with Web UI
Utilization is low at this snapshot:
480 cores allocated and 48 active tasks
Visualize the Number of Active Tasks
• Plot as function of time to identify possible under-utilization
– Grafana visualization of number of active tasks for a benchmark job running on
60 executors, 480 cores
Data source: the gauge /executor/threadpool/activeTasks
Transport: Dropwizard metrics to a Graphite sink
Measure the Number of Active Tasks
With Dropwizard Metrics Library
• The Dropwizard metrics library is integrated with Spark
– Provides configurable data sources and sinks. Details in the documentation
and in the config file “metrics.properties” (a sample sink configuration is
sketched below)
--conf spark.metrics.conf=metrics.properties
• Spark data sources:
– Some are optional, like the JvmSource; others are on by default, like the
executor source
– Notably the gauge: /executor/threadpool/activeTasks
– Note: the executor source also has info on I/O
• Architecture
– Metrics are sent directly by each executor -> no need to pass via the driver
– More details: see the source code in “ExecutorSource.scala”
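A minimal sketch of a Graphite sink configuration in metrics.properties,
following the sink properties documented in Spark’s
metrics.properties.template (host and port are illustrative):

# send the metrics of all instances to a Graphite endpoint
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphitehost
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds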
Limitations and Future Work
• Many important topics not covered here
– Such as investigations and optimization of shuffle operations, SQL plans, etc
– Understanding root causes of stragglers, long tails and issues related to efficient
utilization of available cores/resources can be hard
• Current tools to measure Spark performance are very useful, but:
– Instrumentation does not yet provide a way to directly find bottlenecks
• Identify where time is spent and critical resources for job latency
• See Kay Ousterhout on “Re-Architecting Spark For Performance Understandability”
– Currently difficult to link measurements of OS metrics and Spark metrics
• Difficult to understand time spent for HDFS I/O (see HADOOP-11873)
– Improvements on user-facing tools
• Currently investigating linking Spark executor metrics sources and Dropwizard
sink/Grafana visualization (see SPARK-22190)
Conclusions
• Think clearly about performance
– Approach it as a problem in experimental science
– Measure – build models – test – produce actionable results
• Know your tools
– Experiment with the toolset – active benchmarking to understand
how your application works – know the tools’ limitations
• Measure, build tools and share results!
– Spark performance is a field of great interest
– Many gains to be made + a rapidly developing topic
Acknowledgements and References
• CERN
– Members of the Hadoop and Spark service and of the CERN and HEP
user community
– Special thanks to Zbigniew Baranowski, Prasanth Kothuri,
Viktor Khristenko, Kacper Surdy
– Many lessons learned over the years from the RDBMS
community, notably www.oaktable.net
• Relevant links
– Material by Brendan Gregg (www.brendangregg.com)
– More info: links to blog and notes at http://cern.ch/canali