Speeding up R
with Parallel Programming in the Cloud
David M Smith
Developer Advocate, Microsoft
@revodavid
Hi. I’m David.
• Lapsed statistician
• Cloud Developer Advocate
at Microsoft
– cda.ms/7f
• Editor, Revolutions blog
– cda.ms/7g
• Twitter: @revodavid
2 @revodavid
What is R?
• Widely used data science software
• Used by millions of data scientists, statisticians and analysts
• Most powerful statistical programming language
• Flexible, extensible and comprehensive for productivity
• Creates beautiful and unique data visualizations
• As seen in New York Times, The Economist and FlowingData
• Thriving open-source community
• Leading edge of Statistics research
3 @revodavid
What aren’t you telling me about R?
• R is single-threaded
• R is an in-memory application
• R is kinda quirky
And yet, major companies use R for production data science on
large databases.
• Examples: blog.revolutionanalytics.com/applications/
4 @revodavid
Secrets to using R in production
• Don’t use R alone
– Know what it’s good for! (And what it’s not good for.)
– Use R as part of a production stack
• Use modern R workflows
– Hire great data scientists
• Use R in conjunction with parallel/distributed data & compute
architectures
5 @revodavid
Speeding up your R code
• Vectorize
• Moar megahertz!
• Moar RAM!
• Moar cores!
• Moar computers!
• Moar cloud!
6 @revodavid
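A toy sketch of the first point (nothing beyond base R assumed): vectorizing pushes the loop into compiled code, which is typically much faster than an explicit R loop.

x <- 1:1e6
# Loop version: squares one element at a time in interpreted R
s <- numeric(length(x))
for (i in seq_along(x)) s[i] <- x[i]^2
# Vectorized version: a single call into compiled code
s <- x^2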
Embarrassingly Parallel Problems
Easy to speed things up when:
• Calculating similar things many times
– Iterations in a loop, chunks of data, …
• Calculations are independent of each other
• Each calculation takes a decent amount of time
Just run multiple calculations at the same time
7 @revodavid
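For instance, a minimal local sketch with base R’s parallel package (the worker count and the placeholder function are illustrative):

library(parallel)
cl <- makeCluster(4)                           # 4 local workers
slow_task <- function(i) { Sys.sleep(1); i^2 } # stand-in for a long, independent calculation
res <- parLapply(cl, 1:8, slow_task)           # 8 independent tasks run ~4 at a time
stopCluster(cl)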
8
Is this Embarrassing?
Embarrassingly Parallel
Group-by Analyses
Reporting
Simulations
Resampling / Bootstrapping
Optimization / Search (somewhat)
Prediction (scoring)
Cross-Validation
Backtesting
Not Embarrassingly Parallel
SQL operations (many)
Matrix inverse
Linear regression (training)
Logistic Regression (training)
Trees (training)
Neural Networks (training)
Time Series (most things)
[Image: train tracks]
The Birthday Paradox
What is the likelihood that there are two people in this room
who share the same birthday?
9 @revodavid
Birthday Problem Simulator
10
pbirthdaysim <- function(n) {
  ntests <- 100000                   # number of simulated rooms
  pop <- 1:365                       # possible birthdays
  anydup <- function(i)              # TRUE if any two of n people share a birthday
    any(duplicated(
      sample(pop, n, replace=TRUE)))
  sum(sapply(seq(ntests), anydup)) / ntests  # fraction of rooms with a shared birthday
}
bdayp <- sapply(1:100, pbirthdaysim)  # about 5 minutes (on this laptop)
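To reproduce the timing yourself, one way (a sketch; the exact figure depends on your hardware):

system.time(bdayp <- sapply(1:100, pbirthdaysim))  # roughly 5 minutes sequentially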
library(foreach)
Looping with the foreach package on CRAN
– x is a list of results
– each entry calculated from RHS of %dopar%
Learn more about foreach: cda.ms/6Q
11 @revodavid
x <- foreach (n=1:100) %dopar% pbirthdaysim(n)
Parallel processing with foreach
• Change how processing is done by registering a backend
– registerDoSEQ() sequential processing (default)
– registerDoParallel() local cluster via library(parallel)
– registerDoAzureParallel() remote cluster in Azure
• Whatever you use, the call to foreach does not change
– Also: no need to worry about data, packages etc. (mostly)
12 @revodavid
library(doParallel)
cl <- makeCluster(2) # local cluster, 2 workers
registerDoParallel(cl)
bdayp <- foreach(n=1:100) %dopar% pbirthdaysim(n)
foreach + doAzureParallel
• doAzureParallel: A simple R package that uses the Azure Batch
cluster service as a parallel-backend for foreach
github.com/Azure/doAzureParallel
Demo: birthday simulation
8-node cluster (compute-optimized D2v2 2-core instances)
• specify VM class in cluster.json
• specify credentials for Azure Batch and Azure Storage in credentials.json
14 @revodavid
library(doAzureParallel)
setCredentials("credentials.json")
cluster <- makeCluster("cluster.json")
registerDoAzureParallel(cluster)
bdayp <- foreach(n=1:100) %dopar% pbirthdaysim(n)
bdayp <- unlist(bdayp)
cluster.json (excerpt):
"name": "davidsmi8caret",
"vmSize": "Standard_D2_v2",
"maxTasksPerNode": 8,
"poolSize": {
"dedicatedNodes": {
"min": 8,
"max": 8
}
45 seconds (more than 6 times faster!)
Scale
• From 1 to 10,000 VMs for a cluster
• From 1 to millions of tasks
• Your selection of hardware:
– General compute VMs (A-Series / D-Series)
– Memory / storage optimized (G-Series)
– Compute optimized (F-Series)
– GPU enabled (N-Series)
• Results from computing the Mandelbrot set when scaling up:
[Chart: Mandelbrot set timings — local machine vs. 5, 10, and 20 parallel workers]
Cross-validation with caret
• Most predictive modeling algorithms have
“tuning parameters”
• Example: Boosted Trees
– Boosting iterations
– Max Tree Depth
– Shrinkage
• Parameters affect model performance
• Try ‘em out: cross-validate
16 @revodavid
grid <- data.frame(
  nrounds = …,
  max_depth = …,
  gamma = …,
  colsample_bytree = …,
  min_child_weight = …,
  subsample = …)
Cross-validation in parallel
• Caret’s train function will automatically
use the registered foreach backend
• Just register your cluster first:
registerDoAzureParallel(cluster)
• Handles sending objects, packages to
nodes
17 @revodavid
mod <- train(
Class ~ .,
data = dat,
method = "xgbTree",
trControl = ctrl,
tuneGrid = grid,
nthread = 1  # keep xgboost single-threaded; parallelism comes from the foreach workers
)
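The ctrl object passed to train() isn’t shown on the slide; a plausible sketch (fold and repeat counts are assumptions) is:

library(caret)
ctrl <- trainControl(
  method = "repeatedcv",   # repeated k-fold cross-validation
  number = 5,              # 5 folds (assumed)
  repeats = 2,             # 2 repeats (assumed)
  allowParallel = TRUE)    # let train() use the registered foreach backend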
caret speed-ups
• Max Kuhn benchmarked various hardware and OS configurations for local parallel processing
– cda.ms/6V
• Let’s see how it works
with doAzureParallel
18 @revodavid
Source: cda.ms/6V
Packages and Containers
• Docker images used to spawn nodes
– Default: rocker/tidyverse:latest
– Lots of R packages pre-installed
• But this cross-validation also needs:
– xgboost, e1071
• Easy fix: add to cluster.json
19 @revodavid
{
"name": "davidsmi8caret",
"vmSize": "Standard_D2_v2",
"maxTasksPerNode": 8,
"poolSize": {
"dedicatedNodes": {
"min": 4,
"max": 4
},
"lowPriorityNodes": {
"min": 4,
"max": 4
},
"autoscaleFormula": "QUEUE"
},
"containerImage":
"rocker/tidyverse:latest",
"rPackages": {
"cran": ["xgboost","e1071"],
"github": [],
"bioconductor": []
},
"commandLine": []
}
20 @revodavid
================================================
Id: job20180126022301
chunkSize: 1
enableCloudCombine: TRUE
packages:
caret;
errorHandling: stop
wait: TRUE
autoDeleteJob: TRUE
================================================
Submitting tasks (1250/1250)
Submitting merge task. . .
Job Preparation Status: Package(s) being install
Waiting for tasks to complete. . .
| Progress: 13.84% (173/1250) | Running: 59 | Qu
MY LAPTOP: 78 minutes
THIS CLUSTER: 16 minutes
(almost 5x faster)
How much does it cost?
• Pay by the minute only for VMs used in cluster
– No additional cost for the Azure Batch cluster service
• Using D2v2 Virtual Machines
– Ubuntu 16, 7 GB RAM, 2-core “compute optimized”
• 17 minutes × 8 VMs @ $0.10 / hour
– about 23 cents (not counting startup)
… but why pay full price?
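A quick sanity check of that arithmetic in R:

8 * (17/60) * 0.10   # 8 VMs × 17 minutes × $0.10/hour ≈ $0.23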
21 @revodavid
Low Priority Nodes
• Low-priority = (very) low-cost VMs from surplus capacity
– up to 80% discount
• Clusters can mix dedicated VMs and low-priority VMs
[Diagram: local R session submitting foreach tasks to an Azure Batch pool that mixes dedicated VMs with low-priority VMs at up to 80% discount]
"poolSize": {
  "dedicatedNodes": {
    "min": 3,
    "max": 3
  },
  "lowPriorityNodes": {
    "min": 9,
    "max": 9
  }
}
TL;DR: Embarrassingly Parallel
• Install the foreach and doAzureParallel packages
• Get Azure Batch and Azure Storage accounts
– Need an account? http://azure.com/free
• Set up Azure keys in credentials.json
• Define your cluster size/type in cluster.json
• Use registerDoAzureParallel to register your cluster as the parallel backend
• Use foreach / %dopar% to loop in parallel
• Worked example with code: cda.ms/7d
24 @revodavid
For when it’s not embarrassingly parallel:
DISTRIBUTED DATA PROCESSING WITH SPARKLYR
25 @revodavid
What is Spark?
• Distributed data processing engine
• Store and analyze massive volumes in a robust, scalable cluster
• Successor to Hadoop
• in-memory engine, up to 100x faster than MapReduce
• Highly extensible, with machine-learning capabilities
• Supports Scala, Java, Python, R …
• Managed cloud services available
– Azure Databricks & HDInsight, AWS EMR, GCP Dataproc
• Among the largest open-source big-data projects
• Apache project with 1000+ contributors
26 @revodavid
R and Spark: Sparklyr
• sparklyr: R interface to Spark
– open-source R package from RStudio
• Move data between R and Spark
• “References” to Spark Data Frames
– Familiar R operations, including dplyr syntax
– Computations offloaded to Spark cluster, and deferred until needed
• CPU/RAM/Disk consumed in cluster, not by R
• Interfaces to Spark ML algorithms
27 @revodavid
spark.rstudio.com
Provisioning clusters for Sparklyr with aztk
• aztk: Command-line interface to provision Spark-ready (and
sparklyr-ready) clusters in Azure Batch
– www.github.com/azure/aztk
• Provision a Spark cluster in about 5 minutes
– Choice of VM instance types
– Use the provided Docker images (or your own)
– Pay only for VM usage, by the minute
– Optionally, use low-priority nodes to save costs
• Tools to manage persistent storage
• Easily connect to RStudio Server and Spark UIs from desktop
28 @revodavid
Launch and connect
Provision a Spark cluster:
aztk spark cluster create --id davidsmispark4 --size 4
Connect to the Spark cluster and map ports:
aztk spark cluster ssh --id davidsmispark4
Launch RStudio
http://localhost:8787
29 @revodavid
dplyr with Sparklyr
30 @revodavid
Connect to the Spark cluster:
library(sparklyr)
cluster_url <- paste0("spark://", system("hostname -i", intern = TRUE), ":7077")
sc <- spark_connect(master = cluster_url)
Load in some data:
library(dplyr)
flights_tbl <- copy_to(sc, nycflights13::flights, "flights")
Munge with dplyr:
delay <- flights_tbl %>%
group_by(tailnum) %>%
summarise(count = n(), dist = mean(distance), delay = mean(arr_delay)) %>%
filter(count > 20, dist < 2000, !is.na(delay)) %>%
collect
Things to note
• All of the computation takes place in the Spark cluster
– Computations are delayed until you need results
– Behind the scenes, Spark SQL statements are written for you (see the show_query() sketch below)
• None of the data comes back to R
– Until you call collect, when it becomes a tbl
– It’s only at this point you have to worry about data size
• This is all ordinary dplyr syntax
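To see that generated SQL for yourself, a sketch (show_query() is a dplyr verb that works on Spark tbls):

flights_tbl %>%
  group_by(tailnum) %>%
  summarise(count = n(), dist = mean(distance), delay = mean(arr_delay)) %>%
  show_query()   # prints the Spark SQL that would run, instead of running it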
31 @revodavid
Speed up R with parallel programming in the Cloud
Machine Learning with sparklyr
33 @revodavid
> m <- ml_linear_regression(delay ~ dist, data=delay_near)
* No rows dropped by 'na.omit' call
> summary(m)
Call: ml_linear_regression(delay ~ dist, data = delay_near)
Deviance Residuals:
Min 1Q Median 3Q Max
-19.9499 -5.8752 -0.7035 5.1867 40.8973
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.6904319 1.0199146 0.677 0.4986
dist 0.0195910 0.0019252 10.176 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R-Squared: 0.09619
Root Mean Squared Error: 8.075
>
sparklyr provides R interfaces to Spark’s distributed machine learning algorithms (MLlib)
Computations happen in the Spark cluster, not in R
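The delay_near table used above isn’t constructed on this slide; a plausible (hypothetical) definition, mirroring the earlier dplyr pipeline but left uncollected so the data stays in Spark for MLlib, is:

delay_near <- flights_tbl %>%
  group_by(tailnum) %>%
  summarise(count = n(), dist = mean(distance), delay = mean(arr_delay)) %>%
  filter(count > 20, dist < 2000, !is.na(delay))   # no collect(): remains a Spark DataFrame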
In summary
• Embarrassingly parallel (small): foreach + local backend
• Embarrassingly parallel (big): foreach + cluster backend
– Create & use clusters in Azure with doAzureParallel
• Big, distributed data: sparklyr
– Create Spark clusters for sparklyr in Azure with aztk
34 @revodavid
Get your links here
• Code for birthday problem simulation: cda.ms/7d
• Using the foreach package: cda.ms/6Q
• Get doAzureParallel: cda.ms/7w
• Get aztk (for sparklyr): cda.ms/7x
• Sparklyr: spark.rstudio.com
• Free Azure account with $200 credit: cda.ms/7v
35 @revodavid
Thank you!
Editor's Notes

  • #4: R is #8 in January 2018 Tiobe language rankings. #6 in IEEE Spectrum 2017 top programming languages.
  • #27: R is #8 in January 2018 Tiobe language rankings. #6 in IEEE Spectrum 2017 top programming languages.