Abstract
Unbounded, unordered, global-scale datasets are increasingly common in day-to-day business, and consumers of
these datasets have detailed requirements for latency, cost, and completeness. Apache Beam defines a new data
processing programming model that evolved from more than a decade of experience building Big Data infrastructure
within Google, including MapReduce, FlumeJava, MillWheel, and Cloud Dataflow.
Apache Beam handles both batch and streaming use cases, offering a powerful, unified model. It neatly separates
properties of the data from runtime characteristics, allowing pipelines to be portable across multiple runtime
environments, both open source (including Apache Apex, Apache Flink, Apache Gearpump, and Apache Spark) and
proprietary. Finally, Beam's model enables new optimizations, such as dynamic work rebalancing and autoscaling,
resulting in efficient execution.
This talk will cover the basics of Apache Beam, touch on its evolution, and describe the main concepts in its powerful
programming model. We'll show how Beam unifies batch and streaming use cases and enables efficient execution in
real-world scenarios. Finally, we'll demonstrate pipeline portability across Apache Apex, Apache Flink, Apache Spark,
and Google Cloud Dataflow in a live setting.
This session is a Technical (Intermediate) talk in our IoT and Streaming track. It focuses on Apache Flink, Apache
Kafka, Apache Spark, and Cloud, and is geared towards Architect, Data Scientist, and Developer/Engineer audiences.
Unified, Efficient, and
Portable Data Processing
with Apache Beam
Davor Bonaci
PMC Chair, Apache Beam
Software Engineer, Google Inc.
Apache Beam: Open Source data processing APIs
● Expresses data-parallel batch and streaming
algorithms using one unified API
● Cleanly separates data processing logic
from runtime requirements
● Supports execution on multiple distributed
processing runtime environments
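To make "one unified API" concrete, here is a minimal word-count sketch using Beam's Java SDK, closely following the project's public MinimalWordCount example; the input path and output prefix are illustrative placeholders, and the runner is selected purely through the options passed on the command line.

import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
  public static void main(String[] args) {
    // The runner (DirectRunner, FlinkRunner, SparkRunner, ApexRunner, DataflowRunner, ...)
    // comes from --runner=... in args; the pipeline construction below never changes.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply(TextIO.read().from("input.txt"))                                  // placeholder input
     .apply(FlatMapElements.into(TypeDescriptors.strings())
         .via((String line) -> Arrays.asList(line.split("[^\\p{L}]+"))))      // split lines into words
     .apply(Filter.by((String word) -> !word.isEmpty()))
     .apply(Count.perElement())                                               // KV<String, Long> counts
     .apply(MapElements.into(TypeDescriptors.strings())
         .via((KV<String, Long> wc) -> wc.getKey() + ": " + wc.getValue()))
     .apply(TextIO.write().to("wordcounts"));                                 // placeholder output prefix

    p.run().waitUntilFinish();
  }
}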
The evolution of Apache Beam
[Diagram: MapReduce, Flume, MillWheel, and other Google technologies (BigTable, Dremel, Colossus, Megastore, Spanner, PubSub) leading to Cloud Dataflow and Apache Beam]
Agenda
1. Expressing data-parallel pipelines with the Beam model
2. The Beam vision for portability
3. Parallel and portable pipelines in practice
Apache Beam is
a unified programming model
designed to provide
portable data processing pipelines
(efficient too)
Expressing
data-parallel pipelines
with the Beam model
A unified model for batch and
streaming
Processing time vs. event time
The Beam Model: asking the right questions
What results are calculated?
Where in event time are results calculated?
When in processing time are results materialized?
How do refinements of results relate?
PCollection<KV<String, Integer>> scores = input
.apply(Sum.integersPerKey());
The Beam Model: What is being computed?
The Beam Model: What is being computed?
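Each of these snippets reads from an `input` collection of per-user scores keyed by user name. As a hedged sketch of where such a collection might come from -- the record layout "user,score,eventTimeMillis" and the variable rawEvents are assumptions for illustration -- a ParDo can parse each record and attach its event-time timestamp:

// Assumed raw format: "user,score,eventTimeMillis"; rawEvents is a PCollection<String>
// from some source. A production pipeline would usually get timestamps from the source itself.
PCollection<KV<String, Integer>> input = rawEvents
    .apply(ParDo.of(new DoFn<String, KV<String, Integer>>() {
      @ProcessElement
      public void processElement(ProcessContext c) {
        String[] fields = c.element().split(",");
        String user = fields[0];
        int score = Integer.parseInt(fields[1]);
        Instant eventTime = new Instant(Long.parseLong(fields[2]));   // org.joda.time.Instant
        // Attach the event-time timestamp so windows group by when the event happened,
        // not by when the pipeline processed it.
        c.outputWithTimestamp(KV.of(user, score), eventTime);
      }
    }));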
PCollection<KV<String, Integer>> scores = input
  .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2))))
  .apply(Sum.integersPerKey());
The Beam Model: Where in event time?
The Beam Model: Where in event time?
PCollection<KV<String, Integer>> scores = input
  .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2)))
      .triggering(AtWatermark()))
  .apply(Sum.integersPerKey());
The Beam Model: When in processing time?
The Beam Model: When in processing time?
PCollection<KV<String, Integer>> scores = input
  .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2)))
      .triggering(AtWatermark()
          .withEarlyFirings(AtPeriod(Duration.standardMinutes(1)))
          .withLateFirings(AtCount(1)))
      .accumulatingFiredPanes())
  .apply(Sum.integersPerKey());
The Beam Model: How do refinements relate?
The Beam Model: How do refinements relate?
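The trigger names above (AtWatermark, AtPeriod, AtCount) are the slide's shorthand. As a rough sketch of how the same pipeline might be spelled with the concrete trigger classes in the Java SDK -- the one-day allowed-lateness horizon is an arbitrary assumption, though some allowed lateness must be declared once late firings are used:

PCollection<KV<String, Integer>> scores = input
    .apply(Window.<KV<String, Integer>>into(FixedWindows.of(Duration.standardMinutes(2)))
        .triggering(AfterWatermark.pastEndOfWindow()
            .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(1)))                 // early, speculative results
            .withLateFirings(AfterPane.elementCountAtLeast(1)))            // refine on each late element
        .withAllowedLateness(Duration.standardDays(1))                     // assumed lateness horizon
        .accumulatingFiredPanes())   // later panes include earlier data; discardingFiredPanes() emits deltas only
    .apply(Sum.integersPerKey());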
Customizing What / Where / When / How
[Diagram: four example configurations -- 1. Classic Batch, 2. Windowed Batch, 3. Streaming, 4. Streaming + Accumulation]
The Beam vision for portability
“Write once, run anywhere”
Beam Vision: mix and match SDKs and runtimes
● The Beam Model: the abstractions at the core of Apache Beam
● Choice of SDK: Users write their pipelines in a language that’s familiar and integrated with their other tooling
● Choice of Runners: Users choose the right runtime for their current needs -- on-prem / cloud, open source / not, fully managed / not (see the runner-selection sketch below)
● Scalability for Developers: Clean APIs allow developers to contribute modules independently
[Diagram: Language A / B / C SDKs → The Beam Model → Runner 1 / 2 / 3]
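A small sketch of the "Choice of Runners" point: the runner is supplied through pipeline options, so the same construction code can be submitted to any supported runtime. Flag names follow the runners' documented options; project and bucket values are placeholders.

// Examples of what might be passed in args:
//   --runner=DirectRunner
//   --runner=FlinkRunner    --flinkMaster=<host:port>
//   --runner=SparkRunner
//   --runner=DataflowRunner --project=<gcp-project> --tempLocation=gs://<bucket>/tmp
// The pipeline construction code itself does not change across runners.
PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
Pipeline p = Pipeline.create(options);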
● Beam’s Java SDK runs on multiple
runtime environments, including:
○ Apache Apex
○ Apache Spark
○ Apache Flink
○ Google Cloud Dataflow
○ [in development] Apache Gearpump
● Cross-language infrastructure is in
progress.
○ Beam’s Python SDK currently runs
on Google Cloud Dataflow
Beam Vision: as of April 2017
[Diagram: pipelines written in Java or Python are constructed against the Beam Model (Pipeline Construction), then executed through Beam Model Fn Runners on Apache Apex, Apache Flink, Apache Gearpump, Apache Spark, and Cloud Dataflow]
Example Beam Runners
Apache Spark
● Open-source
cluster-computing
framework
● Large ecosystem of
APIs and tools
● Runs on-premises or in
the cloud
Apache Flink
● Open-source
distributed data
processing engine
● High-throughput and
low-latency stream
processing
● Runs on-premises or in
the cloud
Google Cloud Dataflow
● Fully-managed service
for batch and stream
data processing
● Provides dynamic
auto-scaling,
monitoring tools, and
tight integration with
Google Cloud
Platform
How do you build an abstraction layer?
[Diagram: existing runners -- Apache Spark, Cloud Dataflow, Apache Flink -- alongside unknown future runners (????????)]
Beam: the intersection of runner functionality?
Beam: the union of runner functionality?
Beam: the future!
Categorizing Runner Capabilities
https://beam.apache.org/documentation/runners/capability-matrix/
Parallel and portable
pipelines in practice
Demo and Use Case
Demo!
Getting Started with Apache Beam
Quickstarts
● Java SDK
● Python SDK
Example walkthroughs
● Word Count
● Mobile Gaming
Extensive documentation
Related sessions
Hadoop Summit San Jose 2016
● Apache Beam: A Unified Model for Batch and Streaming Data Processing
○ Speaker: Davor Bonaci
Hadoop Summit Melbourne 2016
● Stream/Batch processing portable across on-premise and Cloud with Apache Beam
○ Speaker: Eric Anderson
DataWorks Summit San Jose 2017
● Realizing the promise of portable data processing with Apache Beam
○ Speaker: Davor Bonaci
● Stateful processing of massive out-of-order streams with Apache Beam
○ Speaker: Kenneth Knowles
Apache Beam is
a unified programming model
designed to provide
portable data processing pipelines
(efficient too)
Learn more!
Apache Beam
https://beam.apache.org
Join the Beam mailing lists!
user-subscribe@beam.apache.org
dev-subscribe@beam.apache.org
Follow @ApacheBeam on Twitter
Demo screenshots
because if I make them, I won’t
need to use them