Deep Learning on Apache®
Spark™: Workflows and Best
Practices
Tim Hunter (Software Engineer)
Jules S. Damji (Spark Community Evangelist)
May 4, 2017
Agenda
• Logistics
• Databricks Overview
• Deep Learning on Apache® Spark™: Workflows and Best
Practices
• Q & A
Logistics
• We can’t hear you…
• Recording will be available...
• Slides and Notebooks will be available...
• Queue up Questions ….
• Orange Button for Tech Support difficulties...
Empower anyone to innovate faster with big data.
Founded by the creators of Apache Spark.
Contributes 75% of the open source code,
10x more than any other company.
VISION
WHO WE ARE
A data processing platform for data scientists, data engineers, and data
analysts that simplifies data integration, real-time
experimentation, machine learning, and deployment of
production pipelines.
PRODUCT
A New Paradigm
FIRST GENERATION
Data warehouses
ETL process is rigid, scaling out is expensive, limited to SQL
+
SECOND GENERATION
Hadoop + data lake
Hard to centralize data and extract value with disparate tools
THE BEST OF BOTH WORLDS
Virtual analytics
• Holistically analyze data from data warehouses, data lakes, and other data stores
• Utilize a single engine for batch, ML, streaming & real-time queries
• Enable enterprise-wide collaboration
CLUSTER TUNING &
MANAGEMENT
INTERACTIVE
WORKSPACE
PRODUCTION
PIPELINE
AUTOMATION
OPTIMIZED DATA
ACCESS
DATABRICKS ENTERPRISE SECURITY
YOUR TEAMS
Data Science
Data Engineering
Many others…
BI Analysts
YOUR DATA
Cloud Storage
Data Warehouses
Data Lake
VIRTUAL ANALYTICS PLATFORM
Deep Learning on Apache®
Spark™: Workflows and Best
Practices
Tim Hunter (Software Engineer)
May 4 , 2017
About Me
• Tim Hunter
• Software engineer @ Databricks
• Ph.D. from UC Berkeley in Machine Learning
• Very early Spark user
• Contributor to MLlib
• Author of TensorFrames and GraphFrames
Deep Learning and Apache Spark
Deep Learning frameworks w/ Spark bindings
• Caffe (CaffeOnSpark)
• Keras (Elephas)
• MXNet
• Paddle
• TensorFlow (TensorFlowOnSpark, TensorFrames)
Extensions to Spark for specialized hardware
• Blaze (UCLA & Falcon Computing Solutions)
• IBM Conductor with Spark
Native Spark
• BigDL
• DeepDist
• DeepLearning4J
• MLlib
• SparkCL
• SparkNet
Deep Learning and Apache Spark
2016: the year of emerging solutions for Spark + Deep Learning
No consensus
• Many approaches for libraries: integrate existing ones with Spark, build on
top of Spark, modify Spark itself
• Official Spark MLlib support is limited (perceptron-like networks)
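For reference, MLlib's built-in support looks roughly like the following minimal sketch; the DataFrame train_df with "features" and "label" columns is a hypothetical stand-in here.

from pyspark.ml.classification import MultilayerPerceptronClassifier

# Feed-forward network: 784 input features, two hidden layers, 10 output classes
mlp = MultilayerPerceptronClassifier(layers=[784, 128, 64, 10], maxIter=100, blockSize=128)
model = mlp.fit(train_df)   # train_df: hypothetical DataFrame with "features" and "label" columns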
One Framework to Rule Them All?
Should we look for The One Deep Learning Framework?
Databricks’ perspective
• Databricks: hosted Spark platform on public cloud
• GPUs for compute-intensive workloads
• Customers use many Deep Learning frameworks: TensorFlow, MXNet, BigDL,
Theano, Caffe, and more
This talk
• Lessons learned from supporting many Deep Learning frameworks
• Multiple ways to integrate Deep Learning & Spark
• Best practices for these integrations
Outline
• Deep Learning in data pipelines
• Recurring patterns in Spark + Deep Learning integrations
• Developer tips
• Monitoring
Outline
• Deep Learning in data pipelines
• Recurring patterns in Spark + Deep Learning integrations
• Developer tips
• Monitoring
ML is a small part of data pipelines.
Hidden Technical Debt in Machine Learning Systems
Sculley et al., NIPS 2015
DL in a data pipeline: Training
Data collection → ETL → Featurization → Deep Learning → Validation → Export, Serving
• ETL and featurization: IO intensive; large cluster, high memory/CPU ratio
• Deep Learning: compute intensive; small cluster, low memory/CPU ratio
• Export, serving: IO intensive
DL in a data pipeline: Transformation
Specialized data transforms: feature extraction & prediction
Input → Output
[Image: example photos classified as "cat" / "dog" / "dog"; photo credit: Saulius Garalevicius, CC BY-SA 3.0]
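As an illustration of prediction as a data transform, here is a minimal sketch that applies a pre-trained model to an RDD of feature vectors; Keras on the executors, the file model.h5, and the `images` RDD are assumptions for illustration only.

import numpy as np

def predict_partition(rows):
    # Load the model once per partition, not once per record
    from keras.models import load_model         # imported on the executor
    model = load_model("model.h5")               # hypothetical pre-trained model available on every worker
    for row in rows:
        features = np.array(row).reshape(1, -1)
        yield float(model.predict(features)[0][0])

predictions = images.mapPartitions(predict_partition)   # images: an RDD of feature vectors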
Outline
• Deep Learning in data pipelines
• Recurring patterns in Spark + Deep Learning integrations
• Developer tips
• Monitoring
Recurring patterns
Spark as a scheduler (see the sketch after this list)
• Data-parallel tasks
• Data stored outside Spark
Embedded Deep Learning transforms
• Data-parallel tasks
• Data stored in DataFrames/RDDs
Cooperative frameworks
• Multiple passes over data
• Heavy and/or specialized communication
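A minimal sketch of the "Spark as a scheduler" pattern referenced above; train_model, the hyperparameter grid, and the returned metric are all hypothetical, and each task is assumed to read its data from storage outside Spark.

def train_model(params):
    learning_rate, batch_size = params
    # Each task loads its data from external storage (e.g. S3) and runs an
    # independent single-node training job with its own DL framework.
    # ... call into TensorFlow / Keras / MXNet here ...
    return {"lr": learning_rate, "batch": batch_size, "val_loss": None}   # placeholder result

param_grid = [(lr, bs) for lr in (0.1, 0.01, 0.001) for bs in (32, 64)]
results = sc.parallelize(param_grid, numSlices=len(param_grid)).map(train_model).collect()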
Streaming data through DL
Primary storage choices:
• Cold layer (HDFS/S3/etc.)
• Local storage: files, Spark’s on-disk persistence layer
• In memory: Spark RDDs or Spark DataFrames
Find out if you are I/O constrained or processor-constrained
• How big is your dataset? MNIST or ImageNet?
If using PySpark:
• All frameworks are heavily optimized for disk I/O
• Use Spark’s broadcast for small datasets that fit in memory (see the sketch below)
• Reading files is fast: use local files when the data does not fit in memory
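A minimal sketch of the broadcast recommendation above; load_small_dataset and configs are hypothetical placeholders for a dataset that fits in memory and a list of training configurations.

small_data = load_small_dataset()          # hypothetical: MNIST-sized, fits comfortably in memory
bc_data = sc.broadcast(small_data)         # shipped once to each executor, not once per task

def train_with_config(config):
    data = bc_data.value                   # local deserialized copy on the executor
    # ... train with `data` under this configuration ...
    return config

sc.parallelize(configs).map(train_with_config).collect()   # configs: hypothetical list of runs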
Cooperative frameworks
• Use Spark for data input
• Examples:
• IBM GPU efforts
• Skymind’s DeepLearning4J
• DistML and other Parameter Server efforts
[Diagram: an input RDD (partitions 1…n) is fed through a black-box DL framework, which produces an output RDD (partitions 1…m)]
Cooperative frameworks
• Bypass Spark for asynchronous / specific communication
patterns across machines
• Lose the benefits of RDDs and DataFrames and of reproducibility/determinism
• But these guarantees are usually not required for deep learning anyway (stochastic gradient descent)
• “Reproducibility is worth a factor of 2” (Leon Bottou, quoted by John Langford)
Outline
• Deep Learning in data pipelines
• Recurring patterns in Spark + Deep Learning integrations
• Developer tips
• Monitoring
The GPU software stack
• Deep Learning commonly used with GPUs
• A lot of work on Spark dependencies:
• Few dependencies on the local machine when compiling Spark
• The build process works well in a large number of configurations (just Scala + Maven)
• GPUs present challenges: CUDA, support libraries, drivers, etc.
• Deep software stack, requires careful construction (hardware + drivers + CUDA + libraries)
• All these are expected by the user
• Turnkey stacks just starting to appear
• Provide a Docker image with all the GPU SDK
• Pre-install GPU drivers on the instance
The GPU software stack
[Diagram, bottom to top: GPU hardware; Linux kernel and NV kernel driver; CUDA and the NV kernel driver (userspace interface); cuBLAS and cuDNN; deep learning libraries (TensorFlow, etc.) and JCUDA; Python / JVM clients. Everything above the kernel runs inside a container such as nvidia-docker or lxc.]
Using GPUs through PySpark
• Popular choice for many independent tasks
• Many DL packages have Python interfaces: TensorFlow,
Theano, Caffe, MXNet, etc.
• Lifetime of Python packages: the process
• Requires some configuration tweaks in Spark
PySpark recommendation
• spark.executor.cores = 1
• Gives the DL framework full access to all the resources on the node
• Important for frameworks that optimize processor pipelines
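A minimal sketch of this configuration in PySpark; the application name is made up, and the same setting can be passed on the spark-submit command line instead.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("dl-on-spark")                    # hypothetical application name
         .config("spark.executor.cores", "1")       # one task slot per executor, so the DL
                                                    # framework manages the node's CPUs/GPUs itself
         .getOrCreate())
sc = spark.sparkContext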
Outline
• Deep Learning in data pipelines
• Recurring patterns in Spark + Deep Learning integrations
• Developer tips
• Monitoring
Monitoring
?
Monitoring
• How do you monitor the progress of your tasks?
• It depends on the granularity
• Around tasks
• Inside (long-running) tasks
Monitoring: Accumulators
• Good to check throughput
or failure rate
• Works for Scala
• Limited use for Python
(for now, SPARK-2868)
• No “real-time” update
batchesAcc = sc.accumulator(0)            # counts processed batches
def processBatch(batch):
    batchesAcc.add(1)                     # updated on the executors, read on the driver
    # Process image batch here
    return batch
images = sc.parallelize(…)                # placeholder for the image batches
images.map(processBatch).collect()
Monitoring: external system
• Plugs into an external system
• Existing solutions: Grafana, Graphite, Prometheus, etc.
• Most flexible, but more complex to deploy
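A minimal sketch of reporting from inside tasks to an external system; the Graphite host and metric name are made up, and the same idea applies to StatsD or a Prometheus push gateway.

import socket, time

GRAPHITE_HOST = "graphite.example.com"     # hypothetical endpoint, Graphite plaintext protocol on port 2003

def report(metric, value):
    line = "%s %s %d\n" % (metric, value, int(time.time()))
    with socket.create_connection((GRAPHITE_HOST, 2003), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

def processBatch(batch):
    # ... run the DL framework on this batch ...
    report("dl.batches_done", 1)           # visible in Grafana while the job is still running
    return batch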
Conclusion
• Distributed deep learning: exciting and fast-moving space
• Most insights are specific to a task, a dataset and an algorithm:
nothing replaces experiments
• Get started with data-parallel jobs
• Move to cooperative frameworks only when your data are too large.
Challenges to address
For Spark developers
• Monitoring long-running tasks
• Presenting and introspecting intermediate results
For DL developers
• What boundary to put between the algorithm and Spark?
• How to integrate with Spark at a low level?
Resources
Recent blog posts — http://databricks.com/blog
• TensorFrames
• GPU acceleration
• Getting started with Deep Learning
• Intel’s BigDL
Docs for Deep Learning on Databricks — http://docs.databricks.com
• Getting started
• Spark integration
SPARK SUMMIT 2017
DATA SCIENCE AND ENGINEERING AT SCALE
JUNE 5 – 7 | MOSCONE CENTER | SAN FRANCISCO
ORGANIZED BY spark-summit.org/2017
Thank You!
Questions?
Happy Sparking & Deep Learning!