A Deep Dive into Stateful Stream
Processing in Structured Streaming
Spark + AI Summit Europe 2018
4th October, London
Tathagata “TD” Das
@tathadas
Structured Streaming
stream processing on Spark SQL engine
fast, scalable, fault-tolerant
rich, unified, high level APIs
deal with complex data and complex workloads
rich ecosystem of data sources
integrate with many storage systems
you should not have to reason about streaming
you should write simple queries & Spark should continuously update the answer
Treat Streams as Unbounded Tables
data stream = unbounded input table
new data in the data stream = new rows appended to an unbounded table
Anatomy of a Streaming Query
Example
Read JSON data from Kafka
Parse nested JSON
Store in structured Parquet table
Get end-to-end failure guarantees
ETL
Anatomy of a Streaming Query
spark.readStream.format("kafka")
.option("kafka.boostrap.servers",...)
.option("subscribe", "topic")
.load()
Source
Specify where to read data from
Built-in support for Files / Kafka /
Kinesis*
Can include multiple sources of
different types using join() / union()
*Available only on Databricks Runtime
returns a Spark DataFrame
(common API for batch & streaming data)
Anatomy of a Streaming Query
spark.readStream.format("kafka")
.option("kafka.boostrap.servers",...)
.option("subscribe", "topic")
.load()
Kafka DataFrame
key value topic partition offset timestamp
[binary] [binary] "topic" 0 345 1486087873
[binary] [binary] "topic" 3 2890 1486086721
Anatomy of a Streaming Query
spark.readStream.format("kafka")
.option("kafka.boostrap.servers",...)
.option("subscribe", "topic")
.load()
.selectExpr("cast (value as string) as json")
.select(from_json("json", schema).as("data"))
Transformations
Cast bytes from Kafka records to a
string, parse it as JSON, and
generate nested columns
100s of built-in, optimized SQL
functions like from_json
user-defined functions, lambdas,
function literals with map, flatMap…
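As an illustrative sketch (the isStrong UDF and the data.signal field are hypothetical stand-ins, not part of the original example), user-defined logic can be mixed into the same query:

import org.apache.spark.sql.functions.{col, udf}

// Hypothetical UDF: flag records whose signal strength crosses a threshold
val isStrong = udf((signal: Int) => signal > 15)

// 'parsed' stands for the DataFrame produced by the select(from_json(...)) step above
val flagged = parsed.withColumn("strong", isStrong(col("data.signal")))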
Anatomy of a Streaming Query
Sink
Write transformed output to
external storage systems
Built-in support for Files / Kafka
Use foreach to execute arbitrary
code with the output data
Some sinks are transactional and
exactly once (e.g. files)
spark.readStream.format("kafka")
.option("kafka.boostrap.servers",...)
.option("subscribe", "topic")
.load()
.selectExpr("cast (value as string) as json")
.select(from_json("json", schema).as("data"))
.writeStream
.format("parquet")
.option("path", "/parquetTable/")
Anatomy of a Streaming Query
Processing Details
Trigger: when to process data
- Fixed interval micro-batches
- As fast as possible micro-batches
- Continuously (new in Spark 2.3)
Checkpoint location: for tracking the
progress of the query
spark.readStream.format("kafka")
.option("kafka.boostrap.servers",...)
.option("subscribe", "topic")
.load()
.selectExpr("cast (value as string) as json")
.select(from_json("json", schema).as("data"))
.writeStream
.format("parquet")
.option("path", "/parquetTable/")
.trigger("1 minute")
.option("checkpointLocation", "…")
.start()
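In the actual API the trigger is a Trigger object; the string form above is slide shorthand. A minimal sketch of the trigger styles listed above (intervals are illustrative):

import org.apache.spark.sql.streaming.Trigger

val fixedInterval = Trigger.ProcessingTime("1 minute")   // fixed-interval micro-batches
val continuous    = Trigger.Continuous("1 second")       // continuous processing (Spark 2.3+), 1s checkpoint interval
// As-fast-as-possible micro-batches: simply omit .trigger(...) on the writer

// Passed to the writer as, e.g.:  .writeStream.trigger(fixedInterval) ... .start()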
DataFrames, Datasets, SQL
Logical Plan: Read from Kafka → Project device, signal → Filter signal > 15 → Write to Parquet
Spark automatically streamifies!
Spark SQL converts a batch-like query into a series of incremental
execution plans operating on new micro-batches of data
Optimized Plan: Kafka Source → Optimized Operators (codegen, off-heap, etc.) → Parquet Sink
spark.readStream.format("kafka")
.option("kafka.boostrap.servers",...)
.option("subscribe", "topic")
.load()
.selectExpr("cast (value as string) as json")
.select(from_json("json", schema).as("data"))
.writeStream
.format("parquet")
.option("path", "/parquetTable/")
.trigger("1 minute")
.option("checkpointLocation", "…")
.start()
Series of Incremental Execution Plans
t = 1, t = 2, t = 3, …: each trigger processes the new data that arrived since the last one
Fault-tolerance with Checkpointing
Checkpointing
Saves processed offset info to stable storage
Saved as JSON for forward-compatibility
Allows recovery from any failure
Can resume after limited changes to your
streaming transformations (e.g. adding new
filters to drop corrupted data, etc.)
end-to-end exactly-once guarantees (offsets tracked in a write-ahead log)
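As a minimal sketch (the /checkpoints/etl path and the 'parsedData' name are illustrative), recovery simply means restarting the same writeStream with the same checkpoint location; the engine resumes from the last committed offsets and state:

// Rerunning this after a failure resumes from the offsets and state
// recorded under /checkpoints/etl, instead of reprocessing from scratch
val restarted = parsedData.writeStream
  .format("parquet")
  .option("path", "/parquetTable/")
  .option("checkpointLocation", "/checkpoints/etl")
  .start()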
Anatomy of a Streaming Query
ETL
Raw data from Kafka available
as structured data in seconds,
ready for querying
spark.readStream.format("kafka")
.option("kafka.boostrap.servers",...)
.option("subscribe", "topic")
.load()
.selectExpr("cast (value as string) as json")
.select(from_json("json", schema).as("data"))
.writeStream
.format("parquet")
.option("path", "/parquetTable/")
.trigger("1 minute")
.option("checkpointLocation", "…")
.start()
2x faster, cheaper
Structured Streaming reuses the Spark SQL Optimizer and Tungsten Engine
Performance: Benchmark (40-core throughput, millions of records/s)
Kafka Streams: 700K records/s
Apache Flink: 33M records/s
Structured Streaming: 65M records/s
More details in our blog post
Stateful
Stream Processing
What is Stateless Stream Processing?
Stateless streaming queries (e.g.
ETL) process each record
independent of other records
df.select(from_json("json", schema).as("data"))
.where("data.type = 'typeA')
Spark
stateless
streaming
Every record is parsed into a structured form
and then selected (or not) by the filter
What is Stateful Stream Processing?
Stateful streaming queries
combine information from
multiple records together
.count()
Spark
stateful
streaming
df.select(from_json("json", schema).as("data"))
.where("data.type = 'typeA'")
Count is the streaming state and every
selected record increments the count
State is the information that
is maintained for future use
Stateful Micro-Batch Processing
State is versioned between
micro-batches while streaming
query is running
Each micro-batch reads previous
version state and updates it to
new version
Versions used for fault recovery
t = 1, t = 2, t = 3, …: each micro-batch reads new data from the source, updates the previous version of the state to a new version, and writes to the sink (micro-batch incremental execution)
Distributed, Fault-tolerant State
State data is distributed across executors
State stored in the executor memory
Micro-batch tasks update the state
Changes are checkpointed with version to
given checkpoint location (e.g. HDFS)
Recovery from failure is automatic
Exactly-once fault-tolerance guarantees!
(diagram: driver schedules tasks; executors 1 and 2 each hold state; changes checkpointed to HDFS)
Philosophy
of Stateful Operations
Two types of Stateful Operations
Automatic State Cleanup
For SQL operations with well-known semantics
State cleanup is automatic with watermarking because we precisely know when state data is not needed any more
User-defined State Cleanup
For user-defined, arbitrary stateful operations
No automatic state cleanup
User has to explicitly manage state
Automatic State Cleanup: aggregations, deduplications, joins
User-defined State Cleanup: mapGroupsWithState, flatMapGroupsWithState
Rest of this talk
Explore built-in stateful operations
How to use watermarks to control state size
How to build arbitrary stateful operations
How to monitor and debug stateful queries
Streaming Aggregation
Aggregation by key and/or time windows
Aggregation by key only
Aggregation by event time windows
Aggregation by both
Supports multiple aggregations and
user-defined aggregate functions (UDAFs)
events
.groupBy("key")
.count()
events
.groupBy(window("timestamp","10 mins"))
.avg("value")
events
.groupBy(
col("key"),
window("timestamp","10 mins"))
.agg(avg("value"), corr("value"))
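A runnable sketch of the windowed pattern, assuming 'events' is a streaming DataFrame with a timestamp column; the counts are written to the console sink in update output mode so only changed windows are printed each trigger:

import org.apache.spark.sql.functions.{col, window}

val query = events
  .groupBy(window(col("timestamp"), "10 minutes"))
  .count()
  .writeStream
  .outputMode("update")      // emit only the windows whose counts changed in this trigger
  .format("console")
  .start()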
Automatically handles Late Data
(chart: hourly window counts at successive triggers from 13:00 to 17:00; counts of old windows, e.g. 12:00 - 13:00, keep getting updated as late data arrives; red = state updated with late data)
Keeping state allows late data to update counts of old windows
But size of the state increases
indefinitely if old windows are not
dropped
Watermarking
Watermark - moving threshold of
how late data is expected to be
and when to drop old state
Trails behind max event time
seen by the engine
Watermark delay = trailing gap
(diagram: event-time axis with max event time at 12:30 PM and watermark at 12:20 PM, a trailing gap of 10 mins; data older than the watermark is not expected)
Watermarking
Data newer than watermark may
be late, but allowed to aggregate
Data older than watermark is "too
late" and dropped
Windows older than watermark
automatically deleted to limit state
(diagram: with a watermark delay of 10 mins, late data newer than the watermark is allowed to aggregate; data older than the watermark is too late and dropped)
Watermarking
parsedData
.withWatermark("timestamp", "10 minutes")
.groupBy(window("timestamp","5 minutes"))
.count()
Used only in stateful operations
Ignored in non-stateful streaming
queries and batch queries
Watermarking
(diagram: events plotted by processing time vs. event time across triggers at 12:05, 12:10, 12:15)
The system tracks the max observed event time (e.g. 12:14), so the watermark is updated to 12:14 - 10m = 12:04 for the next trigger, and state for windows before 12:04 is deleted
Data that is late but newer than the watermark (e.g. 12:08) is still considered in the counts
Data that arrives older than the watermark is too late: ignored in counts, and its window state dropped
More details in my blog post
parsedData
.withWatermark("timestamp", "10 minutes")
.groupBy(window("timestamp","5 minutes"))
.count()
Watermarking
Trade off between lateness tolerance and state size
Shorter watermark delay: less late data processed, less memory consumed
Longer watermark delay: more late data processed, more memory consumed
Clean separation of concerns
parsedData
.withWatermark("timestamp", "10 minutes")
.groupBy(window("timestamp","5 minutes"))
.count()
.writeStream
.trigger("10 seconds")
.start()
Query Semantics, separated from Processing Details
Query Semantics: how to group data by time? (same for batch & streaming)
Processing Details: how late can data be? how often to emit updates?
Streaming Deduplication
Streaming Deduplication
Drop duplicate records in a stream
Specify columns which uniquely
identify a record
Spark SQL will store past unique
column values as state and drop
any record that matches the state
userActions
.dropDuplicates("uniqueRecordId")
Streaming Deduplication with Watermark
Using the timestamp as a unique column
along with the watermark allows old
values in state to be dropped
Records older than the watermark delay
will not get any further duplicates
Timestamp must be the same for
duplicated records
userActions
.withWatermark("timestamp")
.dropDuplicates(
"uniqueRecordId",
"timestamp")
Streaming Joins
Streaming Joins
Spark 2.0+ supports joins between streams and static datasets
Spark 2.3+ supports joins between multiple streams
Join
(ad, impression)
(ad, click)
(ad, impression, click)
Join stream of ad impressions
with another stream of their
corresponding user clicks
Example: Ad Monetization
Streaming Joins
Most of the time click events arrive after their impressions
Sometimes, due to delays, impressions can arrive after clicks
Each stream in a join
needs to buffer past
events as state for
matching with future
events of the other stream
Join
(ad, impression)
(ad, click)
(ad, impression, click)
state
state
Join
(ad, impression)
(ad, click)
(ad, impression, click)
Simple Inner Join
Inner join by ad ID column
Need to buffer all past events as
state, since a match can come on the
other stream any time in the future
To allow buffered events to be
dropped, the query needs to provide
additional time constraints
impressions.join(
clicks,
expr("clickAdId = impressionAdId")
)
state buffered on each side grows without bound (∞)
Inner Join + Time constraints + Watermarks
Time constraints
Let's assume
Impressions can be 2 hours late
Clicks can be 3 hours late
A click can occur within 1 hour
after the corresponding
impression
val impressionsWithWatermark = impressions
.withWatermark("impressionTime", "2 hours")
val clicksWithWatermark = clicks
.withWatermark("clickTime", "3 hours")
impressionsWithWatermark.join(
clicksWithWatermark,
expr("""
clickAdId = impressionAdId AND
clickTime >= impressionTime AND
clickTime <= impressionTime + interval 1 hour
"""
))
Join
Range Join
impressionsWithWatermark.join(
clicksWithWatermark,
expr("""
clickAdId = impressionAdId AND
clickTime >= impressionTime AND
clickTime <= impressionTime + interval 1 hour
"""
))
Inner Join + Time constraints + Watermarks
Spark calculates
- impressions need to be
buffered for 4 hours
- clicks need to be
buffered for 2 hours
Join
impressions up to 2 hrs late
clicks up to 3 hrs late
4-hour
state
2-hour
state
3-hour-late click may match with
impression received 4 hours ago
2-hour-late impression may match
with click received 2 hours ago
Spark drops events older
than these thresholds
Join
Outer Join + Time constraints + Watermarks
Left and right outer joins are
allowed only with time constraints
and watermarks
Needed for correctness: Spark must
output nulls when an event cannot
get any future match
Note: null outputs are delayed, as
Spark has to wait for some time to be
sure that there cannot be any match
impressionsWithWatermark.join(
clicksWithWatermark,
expr("""
clickAdId = impressionAdId AND
clickTime >= impressionTime AND
clickTime <= impressionTime + interval 1 hour
"""
),
joinType = "leftOuter"
)
Can be "inner" (default) /"leftOuter"/ "rightOuter"
Arbitrary Stateful
Operations
Arbitrary Stateful Operations
Many use cases require more complicated logic than SQL ops
Example: Tracking user activity on your product
Input: User actions (login, clicks, logout, …)
Output: Latest user status (online, active, inactive, …)
Solution: MapGroupsWithState
General API for per-key user-defined stateful processing
Since Spark 2.2, for Scala and Java only
MapGroupsWithState / FlatMapGroupsWithState
No automatic state cleanup or dropping of late data
Adding a watermark does not automatically handle late data or clean up state
Explicit state cleanup by the user
More powerful + efficient than DStream's mapWithState and
updateStateByKey
MapGroupsWithState / FlatMapGroupsWithState
MapGroupsWithState - How to use?
1. Define the data structures
Input event: UserAction
State data: UserStatus
Output event: UserStatus
(can be different from state)
case class UserAction(
userId: String, action: String)
case class UserStatus(
userId: String, active: Boolean)
MapGroupsWithState
MapGroupsWithState - How to use?
2. Define function to update
state of each grouping
key using the new data
Input
Grouping key: userId
New data: new user actions
Previous state: previous status
of this user
case class UserAction(
userId: String, action: String)
case class UserStatus(
userId: String, active: Boolean)
def updateState(
userId: String,
actions: Iterator[UserAction],
state: GroupState[UserStatus]):UserStatus = {
}
MapGroupsWithState - How to use?
2. Define function to update
state of each grouping key
using the new data
Body
Get previous user status
Update user status with actions
Update state with latest user status
Return the status
def updateState(
userId: String,
actions: Iterator[UserAction],
state: GroupState[UserStatus]):UserStatus = {
}
val prevStatus = state.getOption.getOrElse {
new UserStatus()
}
actions.foreach { action =>
prevStatus.updateWith(action)
}
state.update(prevStatus)
return prevStatus
MapGroupsWithState - How to use?
3. Use the user-defined function
on a grouped Dataset
Works with both batch and
streaming queries
In a batch query, the function is called
only once per group with no prior state
def updateState(
userId: String,
actions: Iterator[UserAction],
state: GroupState[UserStatus]):UserStatus = {
}
// process actions, update and return status
userActions
.groupByKey(_.userId)
.mapGroupsWithState(updateState)
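Putting the pieces together, a minimal end-to-end sketch (assuming spark.implicits._ is in scope, 'userActions' is a streaming Dataset[UserAction], and the login/logout update rule is purely illustrative):

import org.apache.spark.sql.streaming.GroupState

case class UserAction(userId: String, action: String)
case class UserStatus(userId: String, active: Boolean)

def updateState(
    userId: String,
    actions: Iterator[UserAction],
    state: GroupState[UserStatus]): UserStatus = {
  // Start from the previous status, or a default one for a new user
  val prev = state.getOption.getOrElse(UserStatus(userId, active = false))
  // Illustrative rule: a "logout" action marks the user inactive, anything else marks them active
  val updated = actions.foldLeft(prev) { (status, action) =>
    status.copy(active = action.action != "logout")
  }
  state.update(updated)   // save the new status as state for the next micro-batch
  updated                 // and return it as the output event
}

val latestStatuses = userActions
  .groupByKey(_.userId)
  .mapGroupsWithState(updateState _)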
Timeouts
Example: Mark a user as inactive when
there are no actions for 1 hour
Timeouts: when a group does not get any
event for a while, the function is
called for that group with an empty iterator
Must specify a global timeout type, and set
per-group timeout timestamp/duration
Ignored in batch queries
userActions.withWatermark("timestamp")
.groupByKey(_.userId)
.mapGroupsWithState
(timeoutConf)(updateState)
EventTimeTimeout
ProcessingTimeTimeout
NoTimeout (default)
userActions
.withWatermark("timestamp")
.groupByKey(_.userId)
.mapGroupsWithState
( timeoutConf )(updateState)
Event-time Timeout - How to use?
1. Enable EventTimeTimeout in
mapGroupsWithState
2. Enable watermarking
3. Update the mapping function
Every time the function is called, set
the timeout timestamp using the
max seen event timestamp + the
timeout duration
Update state when timeout occurs
def updateState(...): UserStatus = {
if (!state.hasTimedOut) {
// track maxActionTimestamp while
// processing actions and updating state
state.setTimeoutTimestamp(
maxActionTimestamp, "1 hour")
} else { // handle timeout
userStatus.handleTimeout()
state.remove()
}
// return user status
}
Event-time Timeout - When?
Watermark is calculated with max event time across all groups
For a specific group, if there is no event until the watermark exceeds
the timeout timestamp,
Then
Function is called with an empty iterator, and hasTimedOut = true
Else
Function is called with new data, and timeout is disabled
The function needs to explicitly set the timeout timestamp every time it is called
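Putting steps 1-3 together, a minimal sketch (it extends the earlier UserAction with a timestamp field; the 10-minute watermark delay and the logout rule are illustrative, and spark.implicits._ is assumed in scope):

import java.sql.Timestamp
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

case class UserAction(userId: String, action: String, timestamp: Timestamp)
case class UserStatus(userId: String, active: Boolean)

def updateState(
    userId: String,
    actions: Iterator[UserAction],
    state: GroupState[UserStatus]): UserStatus = {
  if (!state.hasTimedOut) {
    var status = state.getOption.getOrElse(UserStatus(userId, active = false))
    var maxActionTimestamp = 0L
    actions.foreach { a =>
      status = status.copy(active = a.action != "logout")                    // illustrative update rule
      maxActionTimestamp = math.max(maxActionTimestamp, a.timestamp.getTime) // track max seen event time
    }
    state.update(status)
    state.setTimeoutTimestamp(maxActionTimestamp, "1 hour")  // time out if no event within 1 hour of max event time
    status
  } else {
    // No event arrived before the watermark passed the timeout timestamp: mark inactive and drop state
    val timedOut = state.getOption.getOrElse(UserStatus(userId, active = false)).copy(active = false)
    state.remove()
    timedOut
  }
}

val statuses = userActions
  .withWatermark("timestamp", "10 minutes")
  .groupByKey(_.userId)
  .mapGroupsWithState(GroupStateTimeout.EventTimeTimeout)(updateState _)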
Processing-time Timeout
Instead of setting timeout
timestamp, function sets
timeout duration (in terms of
wall-clock-time) to wait before
timing out
Independent of watermarks
Note: query downtime will cause
lots of timeouts after recovery
def updateState(...): UserStatus = {
if (!state.hasTimedOut) {
// handle new data
state.setTimeoutDuration("1 hour")
} else {
// handle timeout
}
return userStatus
}
userActions
.groupByKey(_.userId)
.mapGroupsWithState
(ProcessingTimeTimeout)(updateState)
FlatMapGroupsWithState
More general version where the
function can return any number
of events, possibly none at all
Example: instead of returning
user status, want to return
specific actions that are
significant based on the history
def updateState(
userId: String,
actions: Iterator[UserAction],
state: GroupState[UserStatus]):
Iterator[SpecialUserAction] = {
}
userActions
.groupByKey(_.userId)
.flatMapGroupsWithState
(outputMode, timeoutConf)
(updateState)
userActions
.groupByKey(_.userId)
.flatMapGroupsWithState
(outputMode, timeoutConf)
(updateState)
Function Output Mode
Function output mode* gives Spark insights into
the output from this opaque function
Update Mode - Output events are key-value pairs, each
output is updating the value of a key in the result table
Append Mode - Output events are independent rows
that are appended to the result table
Allows Spark SQL planner to correctly compose
flatMapGroupsWithState with other operations
*Not to be confused with output mode of the query
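A minimal sketch of the call, reusing the UserAction/UserStatus classes from earlier (the SpecialUserAction type and the "purchase" significance rule are made up for illustration):

import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

case class SpecialUserAction(userId: String, action: String)

def updateState(
    userId: String,
    actions: Iterator[UserAction],
    state: GroupState[UserStatus]): Iterator[SpecialUserAction] = {
  val status = state.getOption.getOrElse(UserStatus(userId, active = false))
  // Materialize the significant actions eagerly before returning an iterator
  val significant = actions.filter(_.action == "purchase")
    .map(a => SpecialUserAction(a.userId, a.action)).toList
  state.update(status.copy(active = true))
  significant.iterator
}

val specialActions = userActions
  .groupByKey(_.userId)
  .flatMapGroupsWithState(OutputMode.Append, GroupStateTimeout.NoTimeout)(updateState _)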
Managing Stateful
Streaming Queries
Optimizing Query State
Set # shuffle partitions to 1-3 times the number of cores (see the sketch below)
Too low = not all cores will be used → lower throughput
Too high = cost of writing state to HDFS increases → higher latency
Total size of state per worker
Larger state leads to higher overheads of snapshotting, JVM GC pauses, etc.
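As a sketch, the shuffle partition count is a SparkSession configuration (the value 24 is illustrative, e.g. 3x an 8-core cluster); in general it should be chosen before the query is first started, since the state is partitioned by it:

// Illustrative sizing: 8 cores x 3 = 24 shuffle partitions
spark.conf.set("spark.sql.shuffle.partitions", "24")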
Monitoring the state of Query State
Get current state metrics using the
last progress of the query
Total number of rows in state
Total memory consumed (approx.)
Get it asynchronously through
StreamingQueryListener API
val progress = query.lastProgress
print(progress.json)
{
...
"stateOperators" : [ {
"numRowsTotal" : 660000,
"memoryUsedBytes" : 120571087
...
} ],
}
new StreamingQueryListener {
...
def onQueryProgress(
event: QueryProgressEvent)
}
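To flesh out the listener snippet above, a minimal sketch that logs the state metrics from every progress update (the println is a placeholder for real monitoring code):

import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

spark.streams.addListener(new StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit = {}
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = {}
  override def onQueryProgress(event: QueryProgressEvent): Unit = {
    event.progress.stateOperators.foreach { op =>
      println(s"state rows = ${op.numRowsTotal}, state memory = ${op.memoryUsedBytes} bytes")
    }
  }
})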
Monitoring the state of Query State
Databricks Notebooks integrated with Structured Streaming
Shows size of state along with other processing metrics
Data rates, batch durations, # state keys
Debugging Query State
SQL metrics in the Spark UI (SQL
tab, DAG view) expose more
operator-specific stats
Answer questions like
- Is the memory usage skewed?
- Is removing rows slow?
- Is writing checkpoints slow?
Managing Very Large State
State data kept on JVM heap
Can have GC issues with millions of state keys per worker
Limits depend on the size and complexity of state data structures
(chart: latency spikes of > 20s due to GC)
Managing Very Large State with RocksDB
In Databricks Runtime, you can store state locally in RocksDB
Avoids JVM heap, no GC issues with 100 million state keys per worker
Local RocksDB snapshot files automatically checkpointed to HDFS
Same exactly-once fault-tolerant guarantees
(chart: latency capped at 10s)
[More info in Databricks Docs]
New in Apache Spark 2.4
• Lower state memory usage for streaming aggregations
• foreach() in Python
• foreachBatch() in Scala and Python
• Reuse existing batch data sources for streaming output
Example: Write to Cassandra using Cassandra batch data source
• Write streaming output to multiple locations
streamingDF.writeStream.foreachBatch { (batchDF: DataFrame, batchId: Long) =>
batchDF.write.format(...).save(...) // location 1
batchDF.write.format(...).save(...) // location 2
}
More Info
Structured Streaming Docs
http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
https://docs.databricks.com/spark/latest/structured-streaming/index.html
Databricks blog posts for more focused discussions
https://databricks.com/blog/category/engineering/streaming
My previous talk on the basics of Structured Streaming
https://www.slideshare.net/databricks/a-deep-dive-into-structured-streaming
Questions?