The Evolution of InfluxDB
Paul Dix
paul@influxdata.com
@pauldix
InfluxDB 0.0.1 - 0.8.8
Optimize for Developer Happiness
HTTP API
Schema on the fly
[
  {
    "name": "hd_used",
    "columns": ["value", "host", "mount"],
    "points": [
      [23.2, "serverA", "/mnt"]
    ]
  }
]
select value from response_times
where time > '2013-08-12 23:32:01.232' and time < '2013-08-13';
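For contrast with where the talk ends up, the same query sketched in 2.0-era Flux (the bucket name is an assumption; the 0.8 API had no bucket concept):

from(bucket: "mydb/autogen")
    |> range(start: 2013-08-12T23:32:01.232Z, stop: 2013-08-13T00:00:00Z)
    |> filter(fn: (r) => r._measurement == "response_times" and r._field == "value")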
Columns & Performance
Solving non-DB problems
InfluxDB 0.9.0 - 0.10.3
Line Protocol
cpu,host=serverA,num=1,region=west idle=1.667,system=2342.2 1492214400000000000
Measurement: cpu
Tags: host=serverA, num=1, region=west
Fields: idle=1.667, system=2342.2 (field values can be float64, int64, bool, or string)
Timestamp: nanosecond epoch
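A hypothetical point exercising all four field types (floats are bare, integers take an i suffix, booleans are true/false, strings are double-quoted):

weather,location=us-midwest temperature=82.0,humidity=57i,raining=true,description="light rain" 1492214400000000000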
Common problems
data collector
processing, ETL, monitoring, alerting
UI, visualization, management
InfluxDB 0.11.0 - 1.7.7
TSM
Query Language Limitations
Kapacitor & TICKscript
var queue_size = stream
    |from()
        .database('telegraf')
        .retentionPolicy('default')
        .measurement('influxdb_hh_processor')
        .where(lambda: "host" =~ /tot.*/ OR "host" =~ /prod.*/)
    |groupBy('host', 'cluster_id')
    |window()
        .period(period)
        .every(every)
    |default()
        .field('queueBytes', 0.0)
    |max('queueBytes')
    |eval(lambda: "max" * 1024.0 * 1024.0)
        .as('queue_size_mb')

queue_size
    |alert()
        .id('cloud/{{ .TaskName }}/{{ index .Tags "cluster_id" }}/{{ index .Tags "host" }}')
        .message('Host {{ index .Tags "host" }} (cluster {{ index .Tags "cluster_id" }}) has a hinted-handoff queue size of {{ index .Fields "queue_size_mb" }}MB')
        .details('')
        .warn(lambda: "queue_size_mb" > warn_threshold)
        .crit(lambda: "queue_size_mb" > crit_threshold)
        .stateChangesOnly()
        .slack()
Not Composable
Hard to Debug
Over 20k Kapacitors
Telegraf is huge
2.0
• MIT Licensed
• TSDB (write, query)
• UI & Visualizations, Dashboards
• Pull Metrics (Prometheus & OpenMetrics)
• Tasks (background processing, ETL, monitoring/alerting)
> DB
Unified, Consistent API
UI for everything
Officially Supported Client Libraries
Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
Visualization Libraries
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
    // filter that by the last hour
    |> range(start: -1h)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
The same query is repeated on the following slides to highlight, in turn: comments, named arguments, string literals, buckets (not DBs), duration literals, the pipe-forward operator, and anonymous functions.
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
    // filter to data since November 7, 2018
    |> range(start: 2018-11-07T00:00:00Z)
    // filter further by series with a specific measurement and field
    |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Time Literal
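range also accepts a stop argument, so both ends of the window can be bounded with time literals; a minimal sketch:

    |> range(start: 2018-11-06T00:00:00Z, stop: 2018-11-07T00:00:00Z)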
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
    // filter that by the last hour
    |> range(start: -1h)
    // match either of two measurements, but only on one host
    |> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem")
        and r.host == "serverA")
Predicate Function
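Predicates compose with the rest of the pipeline. A sketch that extends the query above with five-minute averages, using aggregateWindow from the standard library:

from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem")
        and r.host == "serverA")
    // downsample each series to 5-minute means
    |> aggregateWindow(every: 5m, fn: mean)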
• Query planner
• Query optimizer
• Turing complete language, VM, and query engine
• InfluxDB, CLI, REPL, Go library
!=
Flux is multi-language
Flux is more than query
r = http.get(url: "https://foo.com/resource")
data =
    if r.status_code == http.status_ok then
        json.parse(body: r.body, on_err: (err) => ({}))
    else
        {}
option task = {
    name: "email alert digest",
    cron: "0 5 * * 0"
}

import "smtp"

body = ""

from(bucket: "alerts")
    |> range(start: -24h)
    |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
    |> group(columns: ["alert"])
    |> count()
    |> group()
    |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")

smtp.to(
    config: loadSecret(name: "smtp_digest"),
    to: "alerts@influxdata.com",
    title: "Alert digest for {now()}",
    body: body)
The same script is repeated on the following slides to highlight, in turn: tasks, cron scheduling, packages & imports, map, string interpolation, shipping data elsewhere, and storing secrets in a store like Vault.
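Cron isn't the only trigger; the speaker notes mention simpler interval scheduling as well, which as a sketch would look like:

option task = {
    name: "email alert digest",
    every: 1h
}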
Tasks + Flux = Serverless Time Series
+
+ 2.0
User Packages
influx package init
option package = {
    name: "foo", // required, must match the name of the package declaration above
    description: "an example package", // required
    author: "Paul Dix <paul@influxdata.com>", // required
    version: "0.1.0", // required
    license: "MIT", // required
    homepage: "https://foo.com", // optional
    documentation: "https://foo.com/docs", // optional
    repository: "https://github.com/pauldix/foo", // optional
    flux_versions: ["0.*.*"], // optional, versions of flux this works with
    tags: ["example"], // optional
    files: [
        "README.md",
        "stuff.flux",
        "other.flux",
    ], // optional, all code can be contained in package.flux file
    tests: [
        "stuff_test.flux",
    ], // optional
    packages: [], // optional
    dependencies: [
        {package: "nathaniel/bar", version: "1.*.*"}
    ], // only required for other package repo dependencies
}
influx package test
influx package publish
import "/pauldix/foo", "0.1.*"
import "/pauldix/foo", "0.1.*"
// if $FLUX_PATH/pauldix/foo exists
// 1. check package.flux version
// 2. look in _versions/ for matching
// if not, request matching version from repository
import "/pauldix/foo", "0.1.*"
// if $FLUX_PATH/pauldix/foo exists
// 1. check package.flux version
// 2. look in _versions/ for matching
// if not, request matching version from repository
foo.myFunc()
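For illustration only, stuff.flux inside the hypothetical pauldix/foo package might export myFunc like this:

package foo

// exported for callers as foo.myFunc()
myFunc = (x) => x * 2.0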
Not just for Flux
option package = {
    name: "averages",
    description: "an example downsampling task package",
    author: "Paul Dix <paul@influxdata.com>",
    version: "0.1.0",
    license: "MIT",
    type: "task", // could be flux, or application
    files: [
        "README.md",
        "downsample.flux",
    ],
}
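A sketch of what the referenced downsample.flux might contain (bucket names are assumptions):

option task = {name: "averages", every: 1h}

from(bucket: "telegraf/autogen")
    // only look at data written since the last run
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "cpu")
    // average each series into 5-minute buckets
    |> aggregateWindow(every: 5m, fn: mean)
    |> to(bucket: "telegraf_downsampled")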
Flux + UI + InfluxDB
More contribution and community
Optimizes for Developer Happiness
Open Source Alpha
Cloud 2 Beta
Thank you
@pauldix
paul@influxdata.com

Editor's Notes

  • #3: The first period we’ll talk about spans from InfluxDB’s first commit on September 26, 2013, to November 2014. At the time, I felt that many people had to solve problems that involved time series. Specifically, time series was an abstraction that was useful for solving problems in monitoring, analytics, and sensor data.
  • #4: The most important goal when designing InfluxDB was to optimize for developer happiness. People have different interpretations of what this means, so let me just tell you what it means to me. As a developer, I’m happiest when I’m building the thing I want to build. Not doing some yak shave to solve other problems or wrestling with infrastructure or configuration. Basically, optimizing for developer happiness is the same as optimizing for developer productivity.
  • #5: This is the web site in January 2014. No external dependencies was a big selling point.
  • #6: We created a DB with an HTTP API. One endpoint for writing data (POST) and one for querying data (GET).
  • #7: Schema would be created as you write data into the DB.
  • #8: This was the schema of old InfluxDB. Series name, columns, and then a bunch of points with the values. Time and sequence_number would be automatically inserted. Time was in microsecond precision.
  • #9: SQL like query language
  • #10: Columns and performance were confusing. People wanted different ways to efficiently slice and dice their time series data.
  • #11: The database was only the start of what people needed to solve with time series.
  • #19: After building the initial version of InfluxDB in 2013, we realized that we needed to solve a common set of problems. By making those problems easier, we could make users of InfluxDB more productive, thus optimizing for developer happiness.
  • #20: Like collectd, but specific to InfluxDB’s schema. Over 200 plugins to collect from many sources: system metrics, MySQL, Redis, SNMP, and many others.
  • #30: Despite all this, Kapacitor still gets quite a bit of use. It seems to be a place where people can potentially get a ton of value. If you have something that’s hard to use yet people are still working through it, there must be something there.
  • #31: The contribution model works. Over 200 plugins, mostly community contributed
  • #32: So how do we bring those components to InfluxDB 2.0? How do we continue to optimize for developer happiness?
  • #34: It’s more than just a database, but people know InfluxDB, so we’re keeping the name. It does far more than store and query: it can be used as a full monitoring system, and it all comes together as a unified platform.
  • #44: We want to make it trivial for developers to create interactive visualizations in their applications based on data coming from InfluxDB. Tim will touch on these in his talk.
  • #45: Flux is the language that we’ve developed for 2.0
  • #57: Flux is designed to integrate with other systems. This means querying and reading data from other sources like databases or HTTP APIs and sending data out to other places.
  • #58: As you’ll see in the PromQL talk later today, the Flux engine can be used for other languages. We’ve already put significant effort into adding InfluxQL support to the Flux engine and we’re investing in adding PromQL support. Maybe someday SQL and others?
  • #62: Tasks are like continuous queries in InfluxDB 1.x, but they’ll have many more options. One of the goals was to have all the information for the task viewable in a single script, so we added them as language options.
  • #63: We support cron scheduling, but we also support simpler scheduling like running every x minutes or y hours
  • #64: Packages and imports! Right now we only have the standard library. Soon users will be able to create their own pure Flux packages and upload them to a hosted package manager and share them. Define functions and new functionality.
  • #65: Loop over records in tables with map. Modify records, change schema, change values, tags, etc.
  • #66: String interpolation
  • #67: Send data to other APIs, databases, or anything
  • #68: Store secrets in a store like Vault
  • #70: Overcomes dynamic linking and bloated binaries. No more waiting for pull requests: add a plugin by writing it in pure Flux.
  • #71: Telegraf becomes a Flux processor at the edge. Deploy new plugins, Flux processing rules, and more from a central InfluxDB server or the cloud; Telegraf will periodically pick them up.
  • #73: Creates a package.flux file in the current directory with placeholders and example.
  • #77–79: Note the / at the beginning of the import; that’s against the repo, not the stdlib.
  • #81: Files could be of any type. So we can use the package repo as a repo for things like dashboard, task, collector, or entire application templates.
  • #82: Say we add another field called type, which defaults to Flux. But it could be task or application.
  • #83: Tying all these things together, our goal is to give users the ability to create custom applications for working with time series data that they can share with the world, their organization, or whoever.