pg_shardman: PostgreSQL sharding via postgres_fdw, pg_pathman and logical replication.
Arseny Sher, Stas Kelvich
Postgres Professional
Read and write scalability
High availability
ACID transactions
What people typically expect from a cluster
2
CAP theorem: common myths
3
Informal statement: it is impossible to implement a read/write data object that provides
all three properties.
Consistency in CAP means linearizability
wow, so strict
Availability in CAP means that any node must give a non-error answer to every
query.
... but execution can take arbitrarily long
P in CAP means that the system continues operation after a network partition
And in real life, we always want the system to continue operation after a network
partition
CAP theorem: common myths
4
This combination of availability and consistency over the wide area is generally
considered impossible due to the CAP Theorem. We show how Spanner achieves this
combination and why it is consistent with CAP.
Eric Brewer. Spanner, TrueTime & The CAP Theorem. February 14, 2017
CAP theorem: conclusions
5
We aim for
Write (and read) horizontal scalability
Mainly OLTP workload with occasional analytical queries
Decent transactions
pg_shardman is a PG 10 extension, PostgreSQL-licensed, available on GitHub
Some features require a patched Postgres
pg_shardman
6
pg_shardman is a combination of several technologies.
Scalability: hash sharding via partitioning and FDW
HA: logical replication
ACID: 2PC + distributed snapshot manager
pg_shardman foundations
7
Let’s start from partitioning, because it’s like sharding, but inside a single node.
Partitioning benefits
Sequential access to a single (or a few) partitions instead of random access to a huge table
Effective cache usage when the most frequently used data is located in a few partitions
...
Sharding
8
9.6 and below:
Range and list partitioning, complex manual management
Not efficient
New declarative partitioning in 10:
+ Range and list partitioning with handy DDL
- No insertions to foreign partitions, no triggers on parent tables
- Updates moving tuples between partitions are not supported
pg_pathman extension:
Hash and range partitioning
Planning and execution optimizations
FDW support
Partitioning in PostgreSQL
9
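A minimal sketch of the pg_pathman DDL referred to above; the table and column names are invented for illustration:

-- requires shared_preload_libraries = 'pg_pathman'
create extension pg_pathman;
create table accounts (id int not null, balance int);
-- split the table into 4 local partitions by hash of id
select create_hash_partitions('accounts', 'id', 4);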
Partitioning in PostgreSQL
10
FDW (foreign data wrappers) mechanism in PG gives access to external sources of
data. postgres_fdw extension allows querying one PG instance from another.
Going beyond one node: FDW
11
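For reference, a bare-bones postgres_fdw setup looks roughly like this (host, schema and credentials are placeholders; remote.customer matches the query on the next slide):

create extension postgres_fdw;
create server node2 foreign data wrapper postgres_fdw
    options (host 'node2.example.com', dbname 'postgres');
create user mapping for current_user server node2
    options (user 'postgres', password 'secret');
create schema remote;
-- make the remote table queryable as remote.customer
import foreign schema public limit to (customer)
    from server node2 into remote;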
Since 9.6 postgres_fdw can push down joins.
Since 10 postgres_fdw can push down aggregates and more kinds of joins.
explain (analyze, costs off) select count(*)
from remote.customer
group by country_code;
QUERY PLAN
--------------------------------------------------------------
Foreign Scan (actual time=353.786..353.896 rows=100 loops=1)
Relations: Aggregate on (remote.customer)
postgres_fdw optimizations
12
Currently parallel foreign scans are not supported :(
... and limitations
13
partitioning + postgres_fdw => sharding
14
partitioning + postgres_fdw => sharding
15
pg_shardman supports only distribution by hash (see the sketch after this slide)
It splits the load evenly
Currently the number of shards cannot be changed, so it should be chosen wisely beforehand
Too few shards will balance poorly after node addition/removal
Too many shards bring overhead, especially for replication
~10 shards per node looks like an adequate baseline
Another common approach to resharding is consistent hashing
Data distribution schemas
16
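To make hash distribution concrete: each key is hashed and taken modulo the shard count, which is why the load spreads evenly. The exact hash function pg_pathman uses may differ; hashint4 below is only an illustration:

-- roughly even spread of 1,000,000 integer keys over 30 shards (illustrative)
select abs(hashint4(aid)) % 30 as shard_no, count(*)
from generate_series(1, 1000000) as aid
group by shard_no
order by shard_no;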
Possible schemas of replication
per-node, using streaming (physical) replication of PostgreSQL
High availability
17
Taken from the Citus docs
Per-node replication in Citus MX
18
per-node, using streaming (physical) replication of PostgreSQL
Requires 2x nodes, or 2x PG instances per node.
Possible schemas of replication
19
per-node, using streaming (physical) replication of PostgreSQL
Requires 2x nodes, or 2x PG instances per node.
per-shard, using logical replication
Possible schemas of replication
20
Logical replication – new in PostgreSQL 10
21
Logical replication – new in PostgreSQL 10
22
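Under the hood these are plain PG 10 publications and subscriptions, roughly one per shard. pg_shardman creates them automatically; a hand-written equivalent (names and connection strings made up) would be:

-- on the node that owns the shard:
create publication pgbench_accounts_0_pub for table pgbench_accounts_0;

-- on the node that keeps the replica (the table must already exist there):
create subscription pgbench_accounts_0_sub
    connection 'host=node1 port=5432 dbname=postgres'
    publication pgbench_accounts_0_pub;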
Replicas in pg_shardman
23
Synchronous replication:
We don’t lose transactions reported as committed
Writes block if a replica doesn’t respond
Slower
Currently we can reliably fail over only if we have 1 replica per shard
Asynchronous replication:
Last committed transactions might be lost
Writes don’t block
Faster
Synchronous, asynchronous replication and
availability
24
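In stock PostgreSQL the switch between the two modes boils down to synchronous_standby_names on the shard owner (a logical-replication walsender reports the subscription name as its application_name). A simplified sketch, not the exact knobs pg_shardman exposes:

-- synchronous: commit waits until the replica has confirmed the transaction
alter system set synchronous_standby_names = 'pgbench_accounts_0_sub';
select pg_reload_conf();

-- asynchronous: commit returns immediately; recent transactions may be lost
alter system set synchronous_standby_names = '';
select pg_reload_conf();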
Node addition with seamless rebalance
25
Node failover
26
We designate one special node, the ’shardlord’.
It holds tables with metadata.
Metadata can be synchronously replicated somewhere so the shardlord can be replaced in
case of failure.
Currently the shardlord can’t hold regular data itself.
How to manage this zoo
27
select shardman.add_node('port=5433');
select shardman.add_node('port=5434');
Example
28
select shardman.add_node('port=5433');
select shardman.add_node('port=5434');
create table pgbench_accounts (aid int not null, bid int, abalance int,
filler char(84));
select shardman.create_hash_partitions('pgbench_accounts', 'aid', 30, 1);
Example
29
[local]:5432 ars@ars:5434=# table shardman.partitions;
part_name | node_id | relation
---------------------+---------+------------------
pgbench_accounts_0 | 1 | pgbench_accounts
pgbench_accounts_1 | 2 | pgbench_accounts
pgbench_accounts_2 | 3 | pgbench_accounts
...
Example
30
[local]:5432 ars@ars:5434=# table shardman.replicas;
part_name | node_id | relation
---------------------+---------+------------------
pgbench_accounts_0 | 2 | pgbench_accounts
pgbench_accounts_1 | 3 | pgbench_accounts
pgbench_accounts_2 | 1 | pgbench_accounts
...
Example
31
Distributed transactions:
Distributed atomicity
Distributed isolation
Profit! (distributed)
Transactions in shardman
32
All reliable distributed systems are alike; each unreliable one is unreliable in its own way.
Kyle Kingsbury and Leo Tolstoy.
Transactions in shardman
33
Distributed transactions:
Atomicity: 2PC
Isolation: Clock-SI
Transactions in shardman
34
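Atomicity rests on the standard PREPARE TRANSACTION / COMMIT PREPARED pair that a coordinator drives on every participant (requires max_prepared_transactions > 0; the gid is arbitrary):

begin;
update pgbench_accounts_0 set abalance = abalance - 10 where aid = 1;
prepare transaction 'shardman_tx_42';  -- phase 1: participant promises it can commit
-- once every participant has prepared successfully:
commit prepared 'shardman_tx_42';      -- phase 2: make the change durable
-- if any participant failed to prepare:
-- rollback prepared 'shardman_tx_42';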
Transactions in shardman: 2PC
35
Two-phase commit is the anti-availability protocol.
P. Helland. ACM Queue, Vol. 14, Issue 2, March-April 2016.
Transactions in shardman: 2PC
36
Transactions in shardman: 2PC
37
Transactions in shardman: 2PC
38
Transactions in shardman: 2PC
39
Transactions in shardman: 2PC
40
So what can we do about it?
Make 2PC fail-recovery tolerant: X3PC, Paxos Commit
Back up partitions!
Transactions in shardman: 2PC
41
Transactions in shardman: 2PC
42
Spanner mitigates this by having each member be a Paxos group, thus ensuring each
2PC “member” is highly available even if some of its Paxos participants are down.
Eric Brewer.
Transactions in shardman: 2PC
43
Profit? Not yet!
Transactions in shardman: isolation
44
Transactions in shardman: isolation
45
postgres_fdw.use_twophase = on
BEGIN;
UPDATE holders SET horns = horns - 1 WHERE holders.id = $id1;
UPDATE holders SET horns = horns + 1 WHERE holders.id = $id2;
COMMIT;
Meanwhile, a concurrent query may observe the transfer half-applied:
SELECT sum(horns) FROM holders;
-> 1
-> -2
-> 0
Transactions in shardman: isolation
46
MVCC in two sentences:
UPDATE/DELETE create a new tuple version instead of overwriting in place
Each tx gets the current database version at start (xid, csn, timestamp) and is able to see
only the appropriate versions.
acc1
ver 10: {1, 0}
ver 20: {1, 2}
ver 30: {1, 4}
––––– snapshot = 34 –––––
ver 40: {1, 2}
Transactions in shardman: isolation
47
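On a single node you can inspect the snapshot a transaction works with; a distributed snapshot manager has to assemble an equivalent, mutually consistent view across all nodes (the output shown is just an example):

begin isolation level repeatable read;
-- xmin : xmax : list of transactions still in progress
select txid_current_snapshot();   -- e.g. 1000:1005:1001,1003
commit;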
BEGIN
Transactions in shardman: isolation
48
Do some serious stuff
Transactions in shardman: isolation
49
COMMIT
Transactions in shardman: isolation
50
BEGIN
Transactions in shardman: isolation
51
Do some serious web scale stuff
Transactions in shardman: isolation
52
COMMIT
Transactions in shardman: isolation
53
Transactions in shardman: Clock Skew
54
Clock-SI slightly changes visibility rules:
version = timestamp
Visibility’: Waits if the tuple came from the future. (Do not allow time-travel paradoxes!)
Visibility”: Waits if the tuple is already prepared (P) but not yet committed (C).
Commit’: Receives local versions from partitions on Prepare and commits with the
maximal version.
Transactions in shardman: isolation
55
[Plot: TPS (0–50000) vs. number of nodes (0–14). pgbench -N on EC2 c3.2xlarge; the client is oblivious to the key distribution. Series: single node, no shardman; pg_shardman, no replication; pg_shardman, redundancy 1, async replication.]
Some benchmarks
56
pg_shardman with docs is available at github.com/postgrespro/pg_shardman
Report issues on GitHub
Some features require a patched Postgres
github.com/postgrespro/postgres_cluster/tree/pg_shardman
2PC and distributed snapshot manager
COPY FROM to sharded tables additionally needs a patched pg_pathman
We appreciate feedback!
57
