Changelog
Stream
Processing with
Apache Flink
Timo Walther
@twalthr
–
Flink Forward 2022
2022-08-03
About me
Open source
● Long-term committer since 2014 (before ASF)
● Member of the project management committee (PMC)
● Top 5 contributor (commits), #1 contributor (additions)
● Among core architects of Flink SQL
Career
● Early software engineer @ DataArtisans
● SDK team @ DataArtisans/Ververica (acquisition by Alibaba)
● SQL team lead @ Ververica
● Co-founder @ Immerok
2
What is Apache Flink?
3
Building Blocks for Stream Processing
4
Time
● Synchronize
● Progress
● Wait
● Timeout
● Fast-forward
● Replay
State
● Store
● Buffer
● Cache
● Model
● Grow
● Expire
Streams
● Pipeline
● Distribute
● Join
● Enrich
● Control
● Replay
Snapshots
● Backup
● Version
● Fork
● A/B test
● Time-travel
● Restore
What is Apache Flink used for?
5
Event data: Transactions, Logs, IoT, Interactions, Events, …
Use cases: Analytics, Event-driven Applications, Data Integration / ETL
Sources: Messaging Systems, Files, Databases, Key/Value Stores
Sinks: Applications, Messaging Systems, Files, Databases, Key/Value Stores
Apache Flink’s APIs
6
API Stack
7
Dataflow Runtime
Low-Level Stream Operator API
DataStream API (with Stateful Functions on top)
Optimizer / Planner (with the Table / SQL API on top)
DataStream API
8
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.STREAMING);
DataStream<Integer> stream = env.fromElements(1, 2, 3);
stream.executeAndCollect().forEachRemaining(System.out::println);
Properties
● Exposes the building blocks for stream processing
● Arbitrary operator topologies using map(), process(), connect(), ...
● Business logic is written in user-defined functions
● Arbitrary user-defined record types flow in-between
● Conceptually always an append-only / insert-only log!
Output:
1
2
3
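To make the properties above concrete, here is a minimal, self-contained sketch (the Transaction type and the filtering threshold are illustrative, not from the slides): a user-defined record type flows through map() and process(), and the result is still an insert-only log.

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

public class DataStreamSketch {

  // user-defined record type flowing between operators (POJO: public fields + default constructor)
  public static class Transaction {
    public String name;
    public int amount;
    public Transaction() {}
    public Transaction(String name, int amount) { this.name = name; this.amount = amount; }
    @Override public String toString() { return name + " -> " + amount; }
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setRuntimeMode(RuntimeExecutionMode.STREAMING);

    DataStream<Transaction> transactions = env.fromElements(
        new Transaction("Alice", 56), new Transaction("Bob", 10), new Transaction("Alice", 89));

    transactions
        // business logic lives in user-defined functions
        .map(t -> new Transaction(t.name.toUpperCase(), t.amount))
        .returns(Transaction.class)
        // arbitrary topologies via process(), connect(), etc.
        .process(new ProcessFunction<Transaction, Transaction>() {
          @Override
          public void processElement(Transaction t, Context ctx, Collector<Transaction> out) {
            if (t.amount > 50) { // forward only large transactions
              out.collect(t);
            }
          }
        })
        .executeAndCollect()
        .forEachRemaining(System.out::println);
  }
}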
Table / SQL API
9
TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
// Programmatic
Table table = env.fromValues(row(1), row(2), row(3));
// SQL
Table table = env.sqlQuery("SELECT * FROM (VALUES (1), (2), (3))");
table.execute().print();
Properties
● Abstracts the building blocks for stream processing
● Operator topology is determined by planner
● Business logic is declared in SQL and/or Table API
● Internal record types flow; Flink’s Row type is exposed in the Table API
● Conceptually a table, but a changelog under the hood!
Output:
+----+-------------+
| op | f0 |
+----+-------------+
| +I | 1 |
| +I | 2 |
| +I | 3 |
+----+-------------+
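The +I rows above come from an insert-only query. A small sketch of an updating query on the same kind of environment (names and amounts are illustrative) shows the changelog nature more clearly: print() emits -U/+U pairs whenever a per-name total changes.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

import static org.apache.flink.table.api.Expressions.row;

public class UpdatingQuerySketch {
  public static void main(String[] args) {
    TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // a small in-memory table of transactions
    Table transactions = env
        .fromValues(row("Alice", 56), row("Bob", 10), row("Alice", 89))
        .as("name", "amount");
    env.createTemporaryView("Transactions", transactions);

    // the aggregate result is updating: expect +I for new names and -U/+U when a total changes
    env.sqlQuery("SELECT name, SUM(amount) AS total FROM Transactions GROUP BY name")
        .execute()
        .print();
  }
}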
DataStream API ↔ Table / SQL API
10
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
// Stream -> Table
DataStream<?> inStream1 = ...
Table appendOnlyTable = tableEnv.fromDataStream(inStream1);
DataStream<Row> inStream2 = ...
Table anyTable = tableEnv.fromChangelogStream(inStream2);
// Table -> Stream
DataStream<T> appendOnlyStream = tableEnv.toDataStream(appendOnlyTable, T.class);
DataStream<Row> changelogStream = tableEnv.toChangelogStream(anyTable);
Mix and match APIs!
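A runnable version of the round trip, following the fromChangelogStream/toChangelogStream pattern above (the hand-written changelog entries are illustrative):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;

public class BridgingSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

    // a hand-written changelog: insert Alice and Bob, then update Alice from 56 to 145
    DataStream<Row> changelog = env.fromElements(
        Row.ofKind(RowKind.INSERT, "Alice", 56),
        Row.ofKind(RowKind.INSERT, "Bob", 10),
        Row.ofKind(RowKind.UPDATE_BEFORE, "Alice", 56),
        Row.ofKind(RowKind.UPDATE_AFTER, "Alice", 145));

    // Stream -> Table: interpret the stream as the changelog of a dynamic table
    Table table = tableEnv.fromChangelogStream(changelog);

    // Table -> Stream: turn the (updating) table back into a changelog stream and print it
    tableEnv.toChangelogStream(table).print();

    env.execute();
  }
}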
Changelog Stream
Processing
11
Change is the law of life and those who look only to
the past or present are certain to miss the future.
John F. Kennedy
Data Processing is a Stream of Changes
12
● Business data is always a stream: bounded or unbounded
● Every record is a changelog entry: insertion as the default
● Batch processing is just a special case in the runtime
(Timeline: past → now → future. A bounded stream has a start and an end of stream; an unbounded stream has a start but no end.)
How do I Work with Streams in Flink SQL?
13
● You don’t. You work with dynamic tables!
● A concept similar to materialized views
CREATE TABLE Revenue
(name STRING, total INT)
WITH (…)
INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name
CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)
Transactions (name, amount): Alice 56 | Bob 10 | Alice 89
Revenue (name, total): Alice 145 | Bob 10
So, is Flink SQL a database? No, bring your own data and systems!
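Because Flink SQL brings no storage of its own, running the example end to end needs external systems behind the WITH (…) clauses. A minimal local sketch using the built-in datagen and print connectors as stand-ins (connector options are illustrative):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DynamicTableSketch {
  public static void main(String[] args) throws Exception {
    TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // source table backed by the datagen test connector
    env.executeSql(
        "CREATE TABLE Transactions (name STRING, amount INT) WITH (" +
        " 'connector' = 'datagen'," +
        " 'rows-per-second' = '2'," +
        " 'fields.name.length' = '1'," +
        " 'fields.amount.min' = '1'," +
        " 'fields.amount.max' = '100')");

    // sink table backed by the print connector
    env.executeSql(
        "CREATE TABLE Revenue (name STRING, total INT) WITH ('connector' = 'print')");

    // continuously maintain the aggregated 'materialized view'
    env.executeSql("INSERT INTO Revenue SELECT name, SUM(amount) FROM Transactions GROUP BY name")
        .await(); // block so the streaming job keeps running
  }
}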
Stream-Table Duality - Basics
14
● A stream is the changelog of a dynamic table
● Sources, operators, and sinks work on changelogs under the hood
● Each component declares the kind of changes it consumes/produces
Changelog content → changelog mode:
only +I → Appending/Insert-only
contains -… → Updating
contains -U → Retracting
never -U but +U → Upserting

Short name | Long name | Description
+I | Insertion | Default for scans + output of bounded results.
-U | Update Before | Retracts a previously emitted result.
+U | Update After | Updates a previously emitted result. Requires a primary key if -U is omitted for idempotent updates.
-D | Delete | Removes the last result.
Stream-Table Duality - Example
15
An applied changelog becomes a real (materialized) table.
Transactions (name, amount): Alice 56 | Bob 10 | Alice 89
Changelog of Transactions: +I[Alice, 56], +I[Bob, 10], +I[Alice, 89]
Changelog of Revenue: +I[Alice, 56], +I[Bob, 10], -U[Alice, 56], +U[Alice, 145]
Materialized Revenue (name, total): Alice 145 | Bob 10
CREATE TABLE Revenue
(name STRING, total INT)
WITH (…)
INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name
CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)
Stream-Table Duality - Example
16
An applied changelog becomes a real (materialized) table.
Transactions (name, amount): Alice 56 | Bob 10 | Alice 89
Changelog of Transactions: +I[Alice, 56], +I[Bob, 10], +I[Alice, 89]
Changelog of Revenue: +I[Alice, 56], +I[Bob, 10], -U[Alice, 56], +U[Alice, 145] (with the primary key, the -U[Alice, 56] can be dropped)
Materialized Revenue (name, total): Alice 145 | Bob 10
CREATE TABLE Revenue
(PRIMARY KEY(name) …)
WITH (…)
INSERT INTO Revenue
SELECT name, SUM(amount)
FROM Transactions
GROUP BY name
CREATE TABLE Transactions
(name STRING, amount INT)
WITH (…)
Save ~50% of traffic if downstream system supports upserting!
Stream-Table Duality - Propagation
17
● Sources declare the set of emitted changes, i.e. the changelog mode
● The optimizer tracks changelog mode and primary key through the pipeline
● Sinks declare the changes they can digest
CREATE TABLE … (changelog modes for sources)
… WITH ('connector'='filesystem') → +I
… WITH ('connector'='kafka') → +I
… WITH ('connector'='upsert-kafka') → +I -D
… WITH ('connector'='jdbc') → +I
… WITH ('connector'='kafka', 'format' = 'debezium-json') → +I -U +U -D
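As a sketch of how a declaration influences the changelog mode: the upsert-kafka connector requires a primary key and consumes/produces keyed changes without -U (topic and broker address below are placeholders):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaSketch {
  public static void main(String[] args) {
    TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // PRIMARY KEY is mandatory for upsert-kafka; changes are keyed and -U is never emitted
    tableEnv.executeSql(
        "CREATE TABLE RevenueUpsert (" +
        "  name STRING," +
        "  total INT," +
        "  PRIMARY KEY (name) NOT ENFORCED" +
        ") WITH (" +
        "  'connector' = 'upsert-kafka'," +
        "  'topic' = 'revenue'," +
        "  'properties.bootstrap.servers' = 'localhost:9092'," +
        "  'key.format' = 'json'," +
        "  'value.format' = 'json')");
  }
}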
Retract vs. Upsert
18
Retract
● No primary key requirements
● Works for almost every external system
● Supports duplicate rows
● In distributed systems often unavoidable
→ most flexible changelog mode
→ default mode
Upsert
● Traffic + computation optimization
● In-place updates (idempotency)
SELECT c, COUNT(*) FROM (
SELECT COUNT(*) AS c
FROM T
GROUP BY user
)
GROUP BY c
(Diagram: the inner COUNT runs on subtask 1 and emits +U[1], then +U[2] as rows arrive; the outer GROUP BY c on subtask 2 receives the +I/+U changes and maintains the counts 1=>1 and 2=>1.)
Changelog Insights – Append-only
19
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …); -- accepts all changes
INSERT INTO Result SELECT * FROM Transaction T JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
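Plans like the one above can be printed programmatically; a sketch assuming placeholder datagen sources for the two tables (ExplainDetail.CHANGELOG_MODE adds the changelogMode=[…] annotations):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.ExplainDetail;
import org.apache.flink.table.api.TableEnvironment;

public class ExplainChangelogSketch {
  public static void main(String[] args) {
    TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // any insert-only source works for inspecting the plan
    tableEnv.executeSql("CREATE TABLE Transaction (tid BIGINT, amount INT) WITH ('connector' = 'datagen')");
    tableEnv.executeSql("CREATE TABLE Payment (tid BIGINT, method STRING) WITH ('connector' = 'datagen')");

    System.out.println(
        tableEnv.explainSql(
            "SELECT * FROM Transaction T JOIN Payment P ON T.tid = P.tid",
            ExplainDetail.CHANGELOG_MODE));
  }
}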
Changelog Insights – Updating
20
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I,UB,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Changelog Insights – Updating with PK
21
CREATE TABLE Transaction (tid BIGINT, amount INT);
CREATE TABLE Payment (tid BIGINT, method STRING);
CREATE TABLE Result (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE], upsertMaterialize=[true])
+- Join(leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey], changelogMode=[I,UB,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
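The upsertMaterialize step keeps extra state to reorder changes per key before the upsert sink. It is controlled by the table.exec.sink.upsert-materialize option (AUTO by default); a sketch of overriding it, assuming Flink 1.15+ and that the changelog is known to be correctly ordered per key:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertMaterializeConfigSketch {
  public static void main(String[] args) {
    TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // AUTO (default) adds the operator when the planner detects possible disorder on the key,
    // NONE disables it, FORCE always adds it
    tableEnv.getConfig().set("table.exec.sink.upsert-materialize", "NONE");
  }
}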
Changelog Insights – Updating with PK
22
CREATE TABLE Transaction (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
CREATE TABLE Payment (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
CREATE TABLE Result (tid BIGINT, …, PRIMARY KEY(tid) NOT ENFORCED);
INSERT INTO Result SELECT * FROM Transaction T LEFT JOIN Payment P ON T.tid = P.tid;
Sink(table=[Result], changelogMode=[NONE])
+- Join(leftInputSpec=[UniqueKey], rightInputSpec=[UniqueKey], changelogMode=[I,UA,D])
:- Exchange(changelogMode=[I])
: +- TableSourceScan(table=[[Transaction]], changelogMode=[I])
+- Exchange(changelogMode=[I])
+- TableSourceScan(table=[[Payment]], changelogMode=[I])
Mode Transitions
23
Append-only → Retracting: through an operation
Updating → Retracting: if an operator/sink requires it (ChangelogNormalize)
Retracting → Updating: if the sink requires it (UpsertMaterialize)
Mode Transitions – Characteristics
24
Append-only (aka "STREAM")
● Event-time column backed by watermarks
● Highly state efficient due to notion of completeness
Updating (aka "TABLE")
● Usually no event-time column
● State usage needs to be kept in mind
● Pure materialized view maintenance
Retracting (aka ?)
Demo
29
Summary
TLDR
● Flink's SQL engine is a powerful changelog processor
● Flexible tool for integrating systems with different semantics
There is more…
● CDC connector ecosystem
→ 2.6k GitHub stars
https://flink-packages.org/packages/cdc-connectors
● Table Store: unified storage engine for dynamic tables
→ native changelog support
→ deep integration into Flink SQL, "like a DB"
https://flink.apache.org/news/2022/05/11/release-table-store-0.1.0.html
30
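For reference, consuming a database table through the CDC connector ecosystem mentioned above is plain DDL; a sketch for the MySQL CDC connector (requires the flink-cdc-connectors dependency; connection values are placeholders and option names follow that project's documentation):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CdcSourceSketch {
  public static void main(String[] args) {
    TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

    // the source emits +I/-U/+U/-D changes captured from the database's binlog
    tableEnv.executeSql(
        "CREATE TABLE Transactions (" +
        "  tid BIGINT," +
        "  name STRING," +
        "  amount INT," +
        "  PRIMARY KEY (tid) NOT ENFORCED" +
        ") WITH (" +
        "  'connector' = 'mysql-cdc'," +
        "  'hostname' = 'localhost'," +
        "  'port' = '3306'," +
        "  'username' = 'flink'," +
        "  'password' = 'secret'," +
        "  'database-name' = 'shop'," +
        "  'table-name' = 'transactions')");

    // an updating query over the CDC table
    tableEnv.executeSql("SELECT name, SUM(amount) AS total FROM Transactions GROUP BY name").print();
  }
}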
Thanks
Timo Walther
@twalthr
mrsql@immerok.com