Scio
A Scala API for
Google Cloud Dataflow &
Apache Beam
Neville Li
@sinisa_lyh
About Us
● 100M+ active users, 40M+ paying
● 30M+ songs, 20K new per day
● 2B+ playlists
● 60+ markets
● 2500+ node Hadoop cluster
● 50TB logs per day
● 10K+ jobs per day
Who am I?
● Spotify NYC since 2011
● Formerly Yahoo! Search
● Music recommendations
● Data infrastructure
● Scala since 2013
Origin Story
● Python Luigi, circa 2011
● Scalding, Spark and Storm, circa 2013
● ML, recommendation, analytics
● 100+ Scala users, 500+ unique jobs
Moving to Google Cloud
Early 2015 - Dataflow Scala hack project
What is Dataflow/Beam?
The Evolution of Apache Beam
(Diagram: a lineage of Google internal systems - MapReduce, Colossus, BigTable, Dremel, Flume, Megastore, Spanner, PubSub, Millwheel - leading to Google Cloud Dataflow and Apache Beam)
What is Apache Beam?
1. The Beam Programming Model
2. SDKs for writing Beam pipelines -- starting with Java
3. Runners for existing distributed processing backends
○ Apache Flink (thanks to data Artisans)
○ Apache Spark (thanks to Cloudera and PayPal)
○ Google Cloud Dataflow (fully managed service)
○ Local runner for testing
The Beam Model: Asking the Right Questions
What results are calculated?
Where in event time are results calculated?
When in processing time are results materialized?
How do refinements of results relate?
Customizing What Where When How
(Diagram: four configurations - 1. Classic Batch, 2. Windowed Batch, 3. Streaming, 4. Streaming + Accumulation; see the windowing sketch below)
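To make the Where/When knobs concrete, here is a minimal sketch of going from classic batch to windowed batch in Scio, assuming the windowing helpers timestampBy and withFixedWindows on SCollection; the input path and CSV layout are made up for illustration.

import com.spotify.scio._
import org.joda.time.{Duration, Instant}

val sc = ScioContext()
sc.textFile("gs://bucket/plays.csv")              // hypothetical input: "epochMillis,trackId"
  .map { line =>
    val Array(ts, track) = line.split(",")
    (new Instant(ts.toLong), track)
  }
  .timestampBy(_._1)                              // Where: place each element in event time
  .withFixedWindows(Duration.standardHours(1))    // Windowed batch: fixed one-hour windows
  .map(_._2)
  .countByValue                                   // What: per-window counts
  .saveAsTextFile("gs://bucket/hourly_counts")
sc.close()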
The Apache Beam Vision
1. End users: who want to write
pipelines in a language that’s familiar.
2. SDK writers: who want to make Beam
concepts available in new languages.
3. Runner writers: who have a
distributed processing environment
and want to support Beam pipelines
(Diagram: pipelines constructed with the Beam Java, Beam Python and other language SDKs execute on runners such as Apache Flink, Apache Spark and Cloud Dataflow)
Data model
Spark
● RDD for batch, DStream for streaming
● Two sets of APIs
● Explicit caching semantics
Dataflow / Beam
● PCollection for batch and streaming
● One unified API (see the sketch below)
● Windowed and timestamped values
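A small hedged sketch of that unified API: the same transformation can be applied to an SCollection regardless of whether it comes from a bounded or an unbounded source. The bucket and topic names are made up, and pubsubTopic is assumed to be the Pub/Sub reader on ScioContext in this Scio version.

import com.spotify.scio._
import com.spotify.scio.values.SCollection

def wordCounts(lines: SCollection[String]): SCollection[(String, Long)] =
  lines.flatMap(_.split("[^a-zA-Z']+").filter(_.nonEmpty)).countByValue

val sc = ScioContext()
val batch     = wordCounts(sc.textFile("gs://bucket/shakespeare.txt"))  // bounded
val streaming = wordCounts(sc.pubsubTopic("projects/p/topics/lines"))   // unbounded (assumed reader)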
Execution
Spark
● One driver, n executors
● Dynamic execution from driver
● Transforms and actions
Dataflow / Beam
● No master
● Static execution planning
● Transforms only, no actions
Why Dataflow/Beam?
Scalding on Google Cloud
Pros
● Community - Twitter, Stripe, Etsy, eBay
● Hadoop stable and proven
Cons
● Cluster ops
● Multi-tenancy - resource contention and utilization
● No streaming (Summingbird?)
● Integration with GCP - BigQuery, Bigtable, Datastore, Pubsub
Spark on Google Cloud
Pros
● Batch, streaming, interactive, SQL and MLlib
● Scala, Java, Python and R
● Zeppelin, spark-notebook
Cons
● Cluster lifecycle management
● Hard to tune and scale
● Integration with GCP - BigQuery, Bigtable, Datastore, Pubsub
Why Dataflow with Scala
Dataflow
● Hosted, fully managed, no ops
● GCP ecosystem - BigQuery, Bigtable, Datastore, Pubsub
● Unified batch and streaming model
Scala
● High level DSL
● Functional programming natural fit for data
● Numerical libraries - Breeze, Algebird
(Architecture diagram: the Scio Scala API sits on top of the Dataflow Java SDK, Scala libraries and extra features, serving batch, streaming, interactive and REPL use, with connectors for Cloud Storage, Pub/Sub, Datastore, Bigtable and BigQuery)
Scio
Ecclesiastical Latin IPA: /ˈʃi.o/, [ˈʃiː.o], [ˈʃi.i̯o]
Verb: I can, know, understand, have knowledge.
github.com/spotify/scio
Apache License 2.0
WordCount
val sc = ScioContext()
sc.textFile("shakespeare.txt")
.flatMap { _
.split("[^a-zA-Z']+")
.filter(_.nonEmpty)
}
.countByValue
.saveAsTextFile("wordcount.txt")
sc.close()
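A hedged note on running this: in a real project the snippet would usually live in an object whose main method builds the context from command-line arguments via ContextAndArgs (assumed here to be Scio's usual entry point), and be submitted to the Dataflow service with the standard Dataflow SDK options. The object name, flags and GCS paths below are placeholders.

import com.spotify.scio._

object WordCount {
  def main(cmdlineArgs: Array[String]): Unit = {
    // ContextAndArgs parses --project, --runner, --stagingLocation plus custom args
    val (sc, args) = ContextAndArgs(cmdlineArgs)
    sc.textFile(args("input"))
      .flatMap(_.split("[^a-zA-Z']+").filter(_.nonEmpty))
      .countByValue
      .saveAsTextFile(args("output"))
    sc.close()
  }
}
// e.g. sbt "runMain WordCount --project=my-project --runner=DataflowPipelineRunner
//      --stagingLocation=gs://bucket/staging --input=gs://bucket/shakespeare.txt --output=gs://bucket/wordcount"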
PageRank
def pageRank(in: SCollection[(String, String)]) = {
val links = in.groupByKey()
var ranks = links.mapValues(_ => 1.0)
for (i <- 1 to 10) {
val contribs = links.join(ranks).values
.flatMap { case (urls, rank) =>
val size = urls.size
urls.map((_, rank / size))
}
ranks = contribs.sumByKey.mapValues((1 - 0.85) + 0.85 * _)
}
ranks
}
Why Scio?
Type safe BigQuery
Macro generated case classes, schemas and converters
@BigQueryType.fromQuery("SELECT id, name FROM [users] WHERE ...")
class User // look mom no code!
sc.typedBigQuery[User]().map(u => (u.id, u.name))
@BigQueryType.toTable
case class Score(id: String, score: Double)
data.map(kv => Score(kv._1, kv._2)).saveAsTypedBigQuery("table")
REPL
$ scio-repl
Welcome to
                 _____
  ________________(_)_____
  __  ___/  ___/_  /_  __ \
  _(__  )/ /__ _  /  / /_/ /
  /____/ \___/ /_/   \____/   version 0.2.5
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_11)
Type in expressions to have them evaluated.
Type :help for more information.
Using 'scio-test' as your BigQuery project.
BigQuery client available as 'bq'
Scio context available as 'sc'
scio> _
Available in github.com/spotify/homebrew-public
Future based orchestration
// Job 1
val f: Future[Tap[String]] = data1.saveAsTextFile("output")
sc1.close() // submit job
val t: Tap[String] = Await.result(f, Duration.Inf)
t.value.foreach(println) // Iterator[String]
// Job 2
val sc2 = ScioContext(options)
val data2: SCollection[String] = t.open(sc2)
DistCache
val sw = sc.distCache("gs://bucket/stopwords.txt") { f =>
Source.fromFile(f).getLines().toSet
}
sc.textFile("gs://bucket/shakespeare.txt")
.flatMap { _
.split("[^a-zA-Z']+")
.filter(w => w.nonEmpty && !sw().contains(w))
}
.countByValue
.saveAsTextFile("wordcount.txt")
Other goodies
● DAG visualization & source code mapping
● BigQuery caching, legacy & SQL 2011 support
● HDFS Source/Sink, Protobuf & object file I/O
● Job metrics, e.g. accumulators
○ Programmatic access
○ Persist to file
● Bigtable
○ Multi-table write
○ Cluster scaling for bulk I/O
Demo Time!
Adoption
● At Spotify
○ 20+ teams, 80+ users, 70+ production pipelines
○ Most of them new to Scala and Scio
● Open source model
○ Discussion on Slack, mailing list
○ Issue tracking on public Github
○ Community driven - type safe BigQuery, Bigtable, Datastore, Protobuf
Release Radar
● 50 n1-standard-1 workers
● 1 core 3.75GB RAM
● 130GB in - Avro & Bigtable
● 130GB out x 2 - Bigtable in US+EU
● 110M Bigtable mutations
● 120 LOC
Fan Insights
● Listener stats
[artist|track] ×
[context|geography|demography] ×
[day|week|month]
● BigQuery, GCS, Datastore
● TBs daily
● 150+ Java jobs to < 10 Scio jobs
Master Metadata
● n1-standard-1 workers
● 1 core 3.75GB RAM
● Autoscaling 2-35 workers
● 26 Avro sources - artist, album, track, disc, cover art, ...
● 120GB out, 70M records
● 200 LOC vs original Java 600 LOC
And we broke Google
BigDiffy
● Pairwise field-level statistical diff
● Diff 2 SCollection[T] given keyFn: T => String
● T: Avro, BigQuery, Protobuf
● Field level Δ - numeric, string, vector
● Δ statistics - min, max, μ, σ, etc.
● Non-deterministic fields
○ ignore field
○ treat "repeated" field as unordered list
Part of github.com/spotify/ratatool
Dataset Diff
● Diff stats
○ Global: # of SAME, DIFF, MISSING LHS/RHS
○ Key: key → SAME, DIFF, MISSING LHS/RHS
○ Field: field → min, max, μ, σ, etc.
● Use cases
○ Validating pipeline migration
○ Sanity checking ML models
Pairwise field-level deltas
val lKeyed = lhs.keyBy(keyFn)
val rKeyed = rhs.keyBy(keyFn)
val deltas = (lKeyed outerJoin rKeyed).map { case (k, (lOpt, rOpt)) =>
(lOpt, rOpt) match {
case (Some(l), Some(r)) =>
val ds = diffy(l, r) // Seq[Delta]
val dt = if (ds.isEmpty) SAME else DIFFERENT
(k, (ds, dt))
case (_, _) =>
val dt = if (lOpt.isDefined) MISSING_RHS else MISSING_LHS
(k, (Nil, dt))
}
}
Summing deltas
import com.twitter.algebird._
// convert deltas to map of (field → summable stats)
def deltasToMap(ds: Seq[Delta], dt: DeltaType)
: Map[String, (Long, Option[(DeltaType, Min[Double], Max[Double], Moments)])] = {
// ...
}
deltas
.map { case (_, (ds, dt)) => deltasToMap(ds, dt) }
.sum // Semigroup!
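The .sum above works because Algebird derives a Semigroup for the whole map: Map values merge key-wise, tuples merge component-wise, and Long, Min, Max and Moments each bring their own instance. A tiny hedged illustration, assuming Algebird's single-value Min/Max/Moments constructors and Semigroup.plus; the field name and values are made up.

import com.twitter.algebird._

val a = Map("score" -> (1L, Min(0.1), Max(0.1), Moments(0.1)))
val b = Map("score" -> (1L, Min(0.5), Max(0.5), Moments(0.5)))

// Key-wise, component-wise merge: count 2, min 0.1, max 0.5, pooled moments
val merged = Semigroup.plus(a, b)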
Other uses
● AB testing
○ Statistical analysis with bootstrap
and DimSum
○ BigQuery, Datastore, TBs in/out
● Monetization
○ Ads targeting
○ User conversion analysis
○ BigQuery, TBs in/out
● User understanding
○ Diversity
○ Session analysis
○ Behavior analysis
● Home page ranking
● Audio fingerprint analysis
Finally
Let's talk Scala
Serialization
● Data ser/de
○ Scalding, Spark and Storm use Kryo and Chill
○ Dataflow/Beam requires explicit Coder[T], sometimes inferable via Guava TypeToken
○ ClassTag to the rescue, fallback to Kryo/Chill
● Lambda ser/de
○ ClosureCleaner
○ Serializable and @transient lazy val (sketch below)
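A minimal sketch of the Serializable plus @transient lazy val pattern: the wrapper travels with the closure, while the non-serializable member is rebuilt lazily on each worker. The formatter here is just a stand-in for any heavyweight, non-serializable client.

import org.joda.time.format.DateTimeFormat

class DateFormatter extends Serializable {
  // Not shipped with the closure; re-created lazily on every worker
  @transient private lazy val fmt = DateTimeFormat.forPattern("yyyy-MM-dd")
  def apply(epochMillis: Long): String = fmt.print(epochMillis)
}

val formatDate = new DateFormatter
// e.g. timestamps.map(formatDate(_)) where timestamps: SCollection[Long] (hypothetical)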
REPL
● Spark REPL transports lambda bytecode via HTTP
● Dataflow requires job jar for execution (no master)
● Custom class loader and ILoop
● Interpreted classes → job jar → job submission
● SCollection[T]#closeAndCollect(): Iterator[T] to mimic Spark actions (example below)
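A hedged example of what that looks like in the REPL; the bucket path is made up and take is assumed to be available on SCollection, while closeAndCollect closes the context, runs the job and returns the results as a local Iterator.

scio> sc.textFile("gs://bucket/shakespeare.txt")
     |   .take(5)
     |   .closeAndCollect()
     |   .foreach(println)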
Macros and IntelliJ IDEA
● IntelliJ IDEA does not see macro expanded classes
https://youtrack.jetbrains.com/issue/SCL-8834
● @BigQueryType.{fromTable, fromQuery}
class MyRecord
● Scio IDEA plugin
https://github.com/spotify/scio-idea-plugin
Scio in Apache Zeppelin
Local Zeppelin server, remote managed Dataflow cluster, NO OPS
Experimental
● github.com/nevillelyh/shapeless-datatype
○ Case class ↔ BigQuery TableRow & Datastore Entity
○ Generic mapper between case class types
○ Type and lens based record matcher
● github.com/nevillelyh/protobuf-generic
○ Generic Protobuf manipulation similar to Avro GenericRecord
○ Protobuf type T → JSON schema
○ Bytes ↔ JSON given JSON schema
What's Next?
● Better streaming support [#163]
● Support Beam 0.3.0-incubating
● Support other runners
● Donate to Beam as Scala DSL [BEAM-302]
The End
Thank You
Neville Li
@sinisa_lyh