The Computer Science behind a
Modern Distributed Database
Dan Larkin-York
Chicago / February 20, 2018
www.arangodb.com
Overview
Topics
Resilience and Consensus
Sorting
Log-structured Merge Trees
Hybrid Logical Clocks
Distributed ACID Transactions
Bottom line: You need CompSci to implement a modern data store
Resilience and Consensus
The Problem
A modern data store is distributed because it needs to scale out and/or
be resilient.
Different parts of the system need to agree on things.
Consensus is the art of achieving this as well as possible in software.
This is relatively easy when things go well, but very hard when:
the network has outages,
the network has dropped, delayed or duplicated packets,
disks fail (and come back with corrupt data),
machines fail (and come back with old data),
racks fail (and come back with or without data).
(And we have not even talked about malicious attacks and enemy action.)
Paxos and Raft
Traditionally, one uses the Paxos Consensus Protocol (1989–1998).
More recently, Raft (2013) has been proposed.
Paxos is a challenge to understand and to implement efficiently.
Various variants exist.
Raft is designed to be understandable.
My advice:
First try to understand Paxos for some time (do not implement it!), then
enjoy the beauty of Raft, but do not implement it either!
Use some battle-tested implementation you trust!
But most importantly: DO NOT TRY TO INVENT YOUR OWN!
Raft in a slide
An odd number of servers each keep a persisted log of events.
Everything is replicated to everybody.
They democratically elect a leader with absolute majority.
Only the leader may append to the replicated log.
An append only counts when a majority has persisted and confirmed it.
Very smart logic to ensure a unique leader and automatic recovery from
failure.
It is all a lot of fun to get right, but it is proven to work.
One puts a key/value store on top; the log contains the changes (sketched below).
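
To make that last point concrete, here is a minimal sketch in Python (illustrative only, not taken from ArangoDB or any real Raft implementation) of a key/value store sitting on top of the replicated log: the consensus layer fixes the order and commit index of entries, and every server applies committed entries to its local map in that order, so all servers converge to the same state. The LogEntry and KeyValueStateMachine names are invented for this example.

from dataclasses import dataclass

@dataclass
class LogEntry:
    term: int          # Raft term in which the entry was appended
    key: str           # key/value change carried by the entry
    value: str

class KeyValueStateMachine:
    """Applies committed log entries, in log order, to a plain dict."""
    def __init__(self):
        self.data = {}
        self.last_applied = 0  # index of the last applied entry (1-based)

    def apply_committed(self, log, commit_index):
        # Raft guarantees all servers agree on the entries up to commit_index,
        # so applying them in order yields the same state on every server.
        while self.last_applied < commit_index:
            self.last_applied += 1
            entry = log[self.last_applied - 1]
            self.data[entry.key] = entry.value

# Usage: a leader-appended, majority-confirmed log being applied locally.
log = [LogEntry(1, "x", "1"), LogEntry(1, "y", "2"), LogEntry(2, "x", "3")]
sm = KeyValueStateMachine()
sm.apply_committed(log, commit_index=2)   # only the first two entries are committed
assert sm.data == {"x": "1", "y": "2"}
sm.apply_committed(log, commit_index=3)   # the third entry commits later
assert sm.data == {"x": "3", "y": "2"}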
Raft demo
Demo
http://raft.github.io/raftscope/index.html
(by Diego Ongaro)
Sorting
The Problem
Data stores need indexes. In practice, we need to sort things.
Most published algorithms are rubbish on modern hardware.
The problem is no longer the comparison computations but the data
movement.
Since 1983 and the Apple IIe:
compute power in one core has increased by about ×20000, and now we have 32 cores in some CPUs,
the speed of a single memory access has increased only by about ×40,
which means computation has outpaced memory access by about ×16000 (≈ 20000 × 32 / 40)!
Idea for a parallel sorting algorithm: Merge Sort
[Diagram: several sorted input runs feed a min-heap, which emits the smallest element into the merged output.]
Nearly all comparisons hit the L2 cache!
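
To illustrate the idea (a generic sketch, not the database's actual sort code): merging k already-sorted runs through a min-heap of size k means every comparison touches only the tiny heap, which stays resident in cache, while each element of the runs is read once and written once.

import heapq

def kway_merge(runs):
    """Merge k sorted lists using a min-heap of size k."""
    heap = []
    for run_id, run in enumerate(runs):
        if run:
            # (value, run_id, index-within-run); run_id breaks ties deterministically
            heapq.heappush(heap, (run[0], run_id, 0))
    merged = []
    while heap:
        value, run_id, idx = heapq.heappop(heap)
        merged.append(value)
        nxt = idx + 1
        if nxt < len(runs[run_id]):
            heapq.heappush(heap, (runs[run_id][nxt], run_id, nxt))
    return merged

# Each core can sort its own run in parallel; a k-way merge combines them.
print(kway_merge([[1, 4, 9], [2, 3, 10], [5, 6, 7, 8]]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]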
Log-structured merge trees (LSM-trees)
The Problem
People rightfully expect a data store to:
hold more data than the available RAM,
work well with SSDs and spinning rust,
allow fast bulk inserts into large data sets, and
provide fast reads for a hot set that fits into RAM.
Traditional B-tree-based structures often fail to deliver on the last two.
Log-structured merge trees (LSM-trees)
(Source: http://www.benstopford.com/2015/02/14/log-structured-merge-trees/, Author: Ben Stopford, License: Creative Commons)
Log-structured merge trees (LSM-trees)
LSM-trees — summary
writes first go into memtables,
all files are sorted and immutable,
compaction happens in the background,
efficient merge sort can be used,
all writes use sequential I/O,
Bloom filters or Cuckoo filters for fast negatives,
=⇒ good write throughput and reasonable read performance,
used in ArangoDB, BigTable, Cassandra, FaunaDB, HBase, InfluxDB,
LevelDB, MarkLogic, MongoDB, MySQL, RocksDB, SQLite4,
WiredTiger, etc.
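
A toy model of the write and read paths summarized above (illustrative only; real engines such as RocksDB add write-ahead logs, levels and compaction heuristics, and the class and method names here are invented). A plain Python set stands in for the per-file Bloom filter.

import bisect

class ToyLSM:
    """Tiny in-memory model of an LSM-tree: memtable + sorted immutable segments."""
    def __init__(self, memtable_limit=4):
        self.memtable = {}            # recent writes, mutable, in RAM
        self.segments = []            # list of (sorted_keys, dict), newest last
        self.filters = []             # stand-in for per-segment Bloom filters
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # Sort once, write sequentially, never modify the segment again.
        keys = sorted(self.memtable)
        self.segments.append((keys, dict(self.memtable)))
        self.filters.append(set(keys))      # real systems use a Bloom/Cuckoo filter
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        # Newest segment wins; the filter lets us skip most segments cheaply.
        for (keys, data), flt in zip(reversed(self.segments), reversed(self.filters)):
            if key in flt:
                i = bisect.bisect_left(keys, key)
                if i < len(keys) and keys[i] == key:
                    return data[key]
        return None

db = ToyLSM()
for i in range(10):
    db.put(f"k{i}", i)
assert db.get("k3") == 3 and db.get("missing") is None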
Hybrid Logical Clocks (HLC)
The Problem
Clocks in different nodes of distributed systems are not in sync.
general relativity poses fundamental obstacles to clock synchronization,
in practice, clock skew happens,
Google can use atomic clocks,
even with NTP (network time protocol) we have to live with ≈ 20ms.
Therefore, we cannot compare time stamps from different nodes!
Why would comparable time stamps help?
they establish a “happened after” relationship between events,
e.g. for conflict resolution, log sorting, detecting network delays,
time-to-live (TTL) could be implemented easily.
Hybrid Logical Clocks (HLC)
The Idea
Every computer has a local clock, and we use NTP to synchronize.
If two events on different machines are linked by causality, the cause
should have a smaller time stamp than the effect.
causality ⇐⇒ a message is sent
Send a time stamp with every message. The HLC always returns a value
> max(local clock, largest time stamp ever seen).
Causality is preserved, and physical time can “catch up” with logical time eventually.
http://muratbuffalo.blogspot.com.es/2014/07/hybrid-logical-clocks.html
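
A simplified sketch of the rule above, loosely following the hybrid logical clock described at the link (the class and method names are invented for this example): time stamps are (physical, logical) pairs, the clock never runs backwards, and on receiving a message it jumps strictly past the largest time stamp seen so far.

import time

class HybridLogicalClock:
    """Time stamps are (physical_ms, logical_counter) pairs, compared lexicographically."""
    def __init__(self, wall_clock=lambda: int(time.time() * 1000)):
        self.wall_clock = wall_clock
        self.physical = 0
        self.logical = 0

    def now(self):
        """Local event or message send: strictly after everything issued so far."""
        pt = self.wall_clock()
        if pt > self.physical:
            self.physical, self.logical = pt, 0
        else:
            self.logical += 1          # wall clock has not advanced: bump the counter
        return (self.physical, self.logical)

    def update(self, remote):
        """Message receive: jump past max(local clock, remote time stamp)."""
        pt = self.wall_clock()
        candidates = [(pt, -1), (self.physical, self.logical), remote]
        self.physical, self.logical = max(candidates)
        self.logical += 1              # strictly greater than anything seen
        return (self.physical, self.logical)

# Causality: the receiver's time stamp is greater than the sender's,
# even if the receiver's wall clock is behind.
sender, receiver = HybridLogicalClock(), HybridLogicalClock(lambda: 0)
sent = sender.now()
received = receiver.update(sent)
assert received > sent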
Distributed ACID Transactions
Atomic: either happens in its entirety or not at all
Consistent: reading sees a consistent state, writing preserves consistency
Isolated: concurrent transactions do not see each other
Durable: committed writes are preserved after shutdown and crashes
(All relatively doable when transactions happen one after another!)
Distributed ACID Transactions
The Problem
In a distributed system:
How to make sure that all nodes agree on whether the transaction has
happened? (Atomicity)
How to create a consistent snapshot across nodes? (Consistency)
How to hide ongoing activities until commit? (Isolation)
How to handle lost nodes? (Durability)
We have to take replication, resilience and failover into account.
Distributed ACID Transactions
WITHOUT
Distributed databases without ACID transactions:
ArangoDB, BigTable, Couchbase, Datastax, Dynamo, Elastic, HBase,
MongoDB, RethinkDB, Riak, and lots more . . .
WITH
Distributed databases with ACID transactions:
CockroachDB, FaunaDB, FoundationDB, MarkLogic, Spanner
=⇒ Very few distributed engines promise ACID, because this is hard!
Distributed ACID Transactions
Basic Idea
Use Multi-Version Concurrency Control (MVCC), i.e. multiple
revisions of each data item are kept.
Do writes and replication decentrally and in a distributed fashion, without them
becoming visible to other transactions.
Then have some place where there is a “switch” which decides when
the transaction becomes visible.
These “switches” need to
be persisted somewhere (durability),
scale out (no bottleneck for commit/abort),
be replicated (no single point of failure),
be resilient in case of fail-over (fault-tolerance).
Transaction visibility needs to be implemented (MVCC), so comparing
time stamps plays a crucial role.
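
A minimal sketch of the “switch” idea (illustrative only, not any particular engine's design): writes create new versions tagged with their transaction id, a transaction becomes visible only once its commit time stamp has been recorded in the replicated commit table, and a snapshot read at time stamp T sees exactly the versions of transactions committed at or before T.

class MVCCStore:
    """Each key maps to a list of (txn_id, value) versions; commits flip the switch."""
    def __init__(self):
        self.versions = {}     # key -> [(txn_id, value), ...] in write order
        self.commit_ts = {}    # txn_id -> commit time stamp ("the switch", replicated)

    def write(self, txn_id, key, value):
        # Writes are invisible until the transaction's commit time stamp exists.
        self.versions.setdefault(key, []).append((txn_id, value))

    def commit(self, txn_id, ts):
        # In a real system this record is persisted and replicated via consensus.
        self.commit_ts[txn_id] = ts

    def read(self, key, snapshot_ts):
        """Return the newest version committed at or before snapshot_ts."""
        best = None
        for txn_id, value in self.versions.get(key, []):
            ts = self.commit_ts.get(txn_id)          # None => still uncommitted
            if ts is not None and ts <= snapshot_ts:
                if best is None or ts >= best[0]:
                    best = (ts, value)
        return None if best is None else best[1]

store = MVCCStore()
store.write(txn_id=1, key="x", value="old")
store.commit(txn_id=1, ts=10)
store.write(txn_id=2, key="x", value="new")        # txn 2 not yet committed
assert store.read("x", snapshot_ts=20) == "old"    # uncommitted write is invisible
store.commit(txn_id=2, ts=15)
assert store.read("x", snapshot_ts=20) == "new"    # switch flipped: now visible
assert store.read("x", snapshot_ts=12) == "old"    # older snapshot still sees "old"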
Thank you!
Further questions?
Follow us on twitter: @arangodb
Join our slack: slack.arangodb.com
Download and documentation: https://arangodb.com
Issues and source (Star us!):
https://github.com/arangodb/arangodb
Info and slides:
https://arangodb.com/speakers/daniel-larkin-york
Links
http://the-paper-trail.org/blog/consensus-protocols-paxos
https://raft.github.io
https://en.wikipedia.org/wiki/Merge_sort
http://www.benstopford.com/2015/02/14/log-structured-merge-trees/
http://muratbuffalo.blogspot.com.es/2014/07/hybrid-logical-clocks.html
https://research.google.com/archive/spanner.html
https://www.cockroachlabs.com/docs/cockroachdb-architecture.html
https://www.arangodb.com
http://mesos.apache.org