PG-Strom v2.0 Release
Technical Brief
(17-Apr-2018)
PG-Strom Development Team
<pgstrom@heterodb.com>
What is PG-Strom?
GPU off-loading
[For I/O intensive workloads]
 SSD-to-GPU Direct SQL Execution
 In-memory Columnar Cache
 SCAN+JOIN+GROUP BY combined GPU kernel
[For Advanced Analytics workloads]
 CPU+GPU hybrid parallel
 PL/CUDA user defined function
 GPU memory store (Gstore_fdw)
✓ Designed as PostgreSQL extension
✓ Transparent SQL acceleration
✓ Cooperation with fully transactional
database management system
✓ Various comprehensive tools and
applications for PostgreSQL
PG-Strom: an extension module to accelerate analytic SQL workloads using GPU.
[GPU’s characteristics]
✓ Several thousands of processor cores per device
✓ Nearly terabytes per second memory bandwidth
✓ Much higher cost performance ratio
PG-Strom is an open source extension module for PostgreSQL (v9.6 or later) that accelerates analytic queries; it has been developed and incrementally improved since 2012. It provides PostgreSQL with alternative query execution plans for SCAN, JOIN and GROUP BY workloads. Once the optimizer chooses a custom plan that uses the GPU for SQL execution, PG-Strom generates the corresponding GPU code on the fly. This means PostgreSQL uses the GPU-aware execution engine only when it makes sense, so there is no downside for transactional workloads.
PL/CUDA is a derived feature that allows manual optimization through user-defined functions (UDFs) written in CUDA C and executed on the GPU device. PG-Strom v2.0 adds many features on top of this basis, for higher performance and a wider range of use scenarios.
PG-Strom v2.0 features highlight
▌Storage Enhancement
 SSD-to-GPU Direct SQL Execution
 In-memory Columnar Cache
 GPU memory store (gstore_fdw)
▌Advanced SQL Infrastructure
 PostgreSQL v9.6/v10 support – CPU+GPU Hybrid Parallel
 SCAN+JOIN+GROUP BY combined GPU kernel
 Utilization of demand paging of GPU device memory
▌Miscellaneous
 PL/CUDA related enhancement
 New data type support
 Documentation and Packaging
Storage Enhancement
Storage enhancement for each layer
Storage Layer → Related PG-Strom Feature
  GPU device memory (Band: ~900GB/s, Capacity: ~32GB) → GPU memory store (gstore_fdw)
  Host memory (Band: ~128GB/s, Capacity: ~1.5TB) [Hot Storage] → In-memory columnar cache
  NVMe-SSD (Band: ~10GB/s, Capacity: 10TB and more) [Cold Storage] → SSD-to-GPU Direct SQL Execution
In general, the key to performance is not only the number of cores and their clock speed, but also the throughput at which data can be supplied to the processors. Each storage layer has its own characteristics, so PG-Strom provides several options to optimize the data supply.
SSD-to-GPU Direct SQL Execution is a unique and characteristic feature of PG-Strom. It loads PostgreSQL data blocks directly onto the GPU and runs SQL workloads there, reducing the amount of data the CPU has to process before it even arrives. Because the data transfer bypasses the operating system software stack, it can pull out nearly wire-speed performance from the hardware. The in-memory columnar cache keeps data blocks in the optimal format for GPU computation and for transfer over the PCIe bus. The GPU memory store (gstore_fdw) allows data to be preloaded into GPU device memory using standard SQL statements. It is an ideal data location for PL/CUDA functions, because the data set does not have to be carried over for each invocation and it is not bound by the 1GB limit that applies to varlena values.
SSD-to-GPU Direct SQL Execution (1/3)
Pre-processing of SQL workloads to drop unnecessary rows prior to loading the data onto the CPU
GPU code generation
PG-Strom automatically generates CUDA code, so the acceleration is transparent from the standpoint of users.
SQL execution on GPU
PostgreSQL data blocks are loaded onto the GPU using peer-to-peer DMA, then unnecessary data is dropped by parallel SQL execution on the GPU.
SQL execution on CPU
The CPU processes the pre-processed data set, which is much smaller than the original. As a result, it also looks like GPU-accelerated I/O.
[Diagram] Large PostgreSQL tables on NVMe SSD → SSD-to-GPU P2P DMA over the PCIe bus (NVMe-Strom driver) → GPU executes WHERE-clause / JOIN / GROUP BY on the PostgreSQL data blocks.
SSD-to-GPU Direct SQL
Once data blocks are loaded onto the GPU, SQL workloads can be processed using thousands of GPU cores. This reduces the amount of data to be loaded into and processed by the CPU, so it looks like I/O performance acceleration.
Traditional data flow
Only the CPU can determine whether records are necessary or not, so every record, including junk, has to be moved to the CPU.
SELECT cat, count(*), avg(X)
FROM t0 JOIN t1 ON t0.id = t1.id
WHERE YMD >= 20120701
GROUP BY cat;
[Diagram] SQL optimization stage: SQL-to-GPU Program Generator → just-in-time compile → GPU binary, which is then used in the SQL execution stage.
SSD-to-GPU Direct SQL Execution (2/3)
[Chart] Star Schema Benchmark results on NVMe-SSD x3 with md-raid0: query execution throughput [MB/s] for queries Q1-1 through Q4-3 (0 to 8000 MB/s scale), comparing PostgreSQL v10.3 and PG-Strom v2.0.
This chart shows query execution throughput for the 13 queries of the Star Schema Benchmark. The host system has only 192GB of memory while the database to be scanned is 353GB, so the workload is quite I/O intensive.
PG-Strom with SSD-to-GPU Direct SQL Execution sustains 7.2-7.7GB/s, more than 3.5x faster than vanilla PostgreSQL for large-scale batch processing. Throughput is calculated as (database size) / (query response time); for example, 353GB / 46s ≈ 7.7GB/s. The average query response time of PG-Strom is therefore in the high 40-second range, while PostgreSQL takes between 200s and 300s per query.
Benchmark environment:
Server: Supermicro 1019GP-TT, CPU: Intel Xeon Gold 6126T (2.6GHz, 12C), RAM: 192GB, GPU: NVIDIA Tesla P40 (3840C, 24GB),
SSD: Intel DC P4600 (HHHL, 2.0TB) x3, OS: CentOS 7.4, SW: CUDA 9.1, PostgreSQL v10.3, PG-Strom v2.0
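For reference, the benchmark queries have the following shape; this is the commonly used form of SSB Q1.1, with the dimension table named date1 here as an assumption (implementations name it differently), not a query taken from the slides.

-- SSB Q1.1 (schematic): scan the large lineorder fact table, filter, join, aggregate
SELECT sum(lo_extendedprice * lo_discount) AS revenue
  FROM lineorder, date1
 WHERE lo_orderdate = d_datekey
   AND d_year = 1993
   AND lo_discount BETWEEN 1 AND 3
   AND lo_quantity < 25;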
Tesla GPU / NVIDIA CUDA Toolkit
SSD-to-GPU Direct SQL Execution (3/3)
Filesystem (ext4, xfs)
nvme driver (inbox)
nvme_strom kernel module
NVMe SSD drives
commodity x86_64 hardware
NVIDIA GPUDirect RDMA
NVIDIA kernel driver
PostgreSQL / pg_strom extension
read(2) / ioctl(2)
Hardware Layer / Operating System Software Layer / Database Software Layer
Application Software / SQL Interface
I/O path based on normal filesystem
I/O path based on SSD-to-GPU Direct SQL Execution
SSD-to-GPU Direct SQL Execution is a technology built on top of NVIDIA GPUDirect RDMA, which allows P2P DMA between the GPU and third-party PCIe devices.
The nvme_strom kernel module intermediates P2P DMA from NVMe-SSD to Tesla GPU [*1].
Once a GPU-accelerated query execution plan is chosen by the query optimizer, PG-Strom calls ioctl(2) to deliver the SSD-to-GPU Direct SQL Execution commands to nvme_strom in kernel space.
The driver has a small interaction with the filesystem, to convert a file descriptor plus file offset into block numbers on the device, so only a limited set of filesystems (Ext4, XFS) is currently supported.
You can also stripe NVMe-SSDs with md-raid0 for more throughput; however, this is a feature of the commercial subscription. Please contact HeteroDB if you need multi-SSD grade performance.
[*1] NVIDIA GPUDirect RDMA is available only on Tesla or Quadro GPUs, not GeForce.
In-memory Columnar Cache
[Diagram] PostgreSQL heap buffer (row-format) → sequential scan by asynchronous columnar cache builders (background workers) → in-memory columnar cache (column-format). Query execution on the GPU device uses GPU kernels for the column format where the cache exists and GPU kernels for the row format elsewhere; transactional workloads (UPDATE) cause cache invalidation on write.
(Background) Why columnar-format is preferable for GPU
▌Row-format – random memory access
▌Column-format – coalesced memory access
[Diagram] With a 256bit memory transaction width: in row format, only 32bits x 1 = 32bits of the 256bit memory transaction are valid (usage ratio: 12.5%); in column format, 32bits x 8 = 256bits are valid (usage ratio: 100.0%) because the GPU cores read adjacent values.
The memory access pattern strongly affects GPU performance because of the GPU's memory subsystem architecture. GPUs have a relatively wide memory transaction width: if multiple cooperating cores simultaneously read contiguous items of an array, one memory transaction can load eight 32bit values at once (for a 256bit width). Whether this happens depends entirely on the data format, and it is one of the most significant optimization factors.
Row format tends to place the values of the scanned column at scattered, non-contiguous locations, so it is hard to pull out the GPU's maximum performance. Column format represents a table as a set of simple arrays, so when column X is referenced in a scan, all of its values are located close together and tend to fit the coalesced memory access pattern.
GPU memory store (Gstore_fdw)
[Diagram] SQL world: storage → foreign table (gstore_fdw) accessed with INSERT / UPDATE / DELETE / SELECT, with data format conversion, data compression and transaction management. GPU computing world: the data lives in GPU device memory and is referenced by PL/CUDA user defined functions via zero-copy.
Gstore_fdw provides a set of interfaces to read and write GPU device memory using standard SQL; for example, you can load bulk data into the GPU's device memory with an INSERT command. Because all operations are handled inside the PostgreSQL database system, data keeps its binary form (there is no need to dump it to a CSV file and parse it again with a Python script). SQL is one of the most flexible tools for data management, so you can load an arbitrary data set from the master table and apply the pre-processing required by a machine-learning algorithm on the fly.
Another significant feature is data collaboration with external programs such as Python scripts. GPU device memory can be shared with an external program once the identifier of the acquired memory region (a 64-byte token) is exported. This enables PostgreSQL to be used as a powerful data management infrastructure for machine learning. Although Gstore_fdw currently supports only the 'pgstrom' internal format, other internal formats will be supported in future versions.
[Diagram] IPC handles exported from the GPU memory store are passed to user-written scripts and machine-learning frameworks.
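The following is a minimal SQL sketch of this flow. The table name, column list, and the foreign-table options ('pinning' for the target GPU, 'pgstrom' for the internal format) are illustrative assumptions; check the PG-Strom documentation for the exact option names.

-- Foreign table whose storage is GPU device memory, managed by gstore_fdw
CREATE FOREIGN TABLE ft_matrix (
    id  int,
    x   real,
    y   real
) SERVER gstore_fdw
  OPTIONS (pinning '0', format 'pgstrom');   -- assumed options: GPU#0, 'pgstrom' format

-- Bulk-load a pre-processed data set onto the GPU with plain SQL
INSERT INTO ft_matrix
  SELECT id, x, y FROM source_table WHERE x IS NOT NULL;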
Advanced SQL Infrastructure
PostgreSQL v9.6/v10 support – CPU+GPU Hybrid Parallel
CPU Parallel Execution: each worker process runs SeqScan → HashJoin → Part-Agg, combined by Gather → Final-Agg into the result.
CPU + GPU Hybrid Parallel Execution: each worker process runs GpuScan → GpuJoin → GpuPreAgg, combined by Gather → Final-Agg into the result.
CPU parallel execution works per process granularity in both cases; in the hybrid case, each PostgreSQL worker process individually uses the GPU for finer-grained granularity.
 Multi-process execution pulls up GPU usage more efficiently
An epoch-making feature of PostgreSQL v9.6 was parallel query execution based on concurrent multi-processing. It also extended the custom-scan interface so that extensions can support parallel query. PG-Strom v2.0 was re-designed around this new interface set, which enables the GPU-aware custom plans to run in background worker processes.
Heuristically, a single CPU thread cannot supply a sufficient data stream to the GPU; its capacity is usually much narrower than the GPU's computing capability. We can therefore expect CPU parallel execution to help pull up GPU usage with a much higher data supply rate.
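As an illustration (not taken from the slides), hybrid parallelism is driven entirely from standard SQL; the table and column names below are assumptions, and the plan shape in the comment is schematic.

-- Standard PostgreSQL GUC: allow up to 4 background workers per Gather node
SET max_parallel_workers_per_gather = 4;

-- With PG-Strom enabled, the planner may pick a hybrid plan of the schematic shape
--   Final-Agg -> Gather -> (GpuPreAgg -> GpuJoin -> GpuScan) per worker,
-- i.e. each background worker drives its own GPU-aware custom plan.
EXPLAIN (COSTS OFF)
SELECT cat, count(*), avg(x)
  FROM t0 JOIN t1 ON t0.id = t1.id
 WHERE ymd >= 20120701
 GROUP BY cat;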
SCAN + JOIN + GROUP BY combined GPU kernel
The CustomScan interface allows an extension to inject an alternative implementation of a query execution plan node. If its estimated cost is more reasonable than the built-in implementation, PostgreSQL's query optimizer chooses the alternative plan. PG-Strom provides its custom, GPU-accelerated logic for SCAN, JOIN and GROUP BY workloads.
Usually, JOIN consumes the result of SCAN and generates records as its output. The output of JOIN often serves as the input of GROUP BY, which then generates the aggregation.
When GpuScan, GpuJoin and GpuPreAgg are executed back to back, there is an opportunity for further optimization by reducing data transfer over the PCIe bus. The result of GpuScan can serve as GpuJoin's input, and the result of GpuJoin can serve as GpuPreAgg's input. PG-Strom tries to re-use the result buffer of the previous step as the input buffer of the next step, which eliminates data ping-pong over the PCIe bus. Once the SCAN + JOIN + GROUP BY combined GPU kernel is ready, it runs as the most efficient GPU kernel because no data needs to be exchanged between the steps of the query execution plan.
[Diagram] Data blocks from storage → GpuScan kernel → GpuJoin kernel → GpuPreAgg kernel on the GPU, then Agg (PostgreSQL) on the CPU produces the results. The SCAN + JOIN + GROUP BY combined kernel keeps the intermediate results on the GPU, eliminating the data ping-pong through host buffers.
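To make the mapping concrete, here is the example query from the earlier slide annotated with the GPU step that would handle each clause; this is a schematic sketch, and the actual plan depends on the optimizer's cost estimates.

SELECT cat, count(*), avg(x)          -- GpuPreAgg: partial aggregation on GPU, finalized by Agg on CPU
  FROM t0 JOIN t1 ON t0.id = t1.id    -- GpuJoin: join on GPU, re-using GpuScan's result buffer as input
 WHERE ymd >= 20120701                -- GpuScan: filter evaluated while scanning the data blocks
 GROUP BY cat;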
Utilization of demand paging of GPU device memory
Buffer size estimation is not an easy job for SQL workloads. In general, we cannot know the exact number of result rows until the query is actually executed, even though table statistics give us a rough estimate during query planning.
A GPU kernel needs a result buffer for each operation, and the buffer has to be acquired prior to execution. This has been a problematic trade-off: a tight configuration with a small margin often runs out of result buffer, while an allocation with a large margin leaves a non-negligible amount of dead space in GPU device memory.
Recent GPUs (Pascal / Volta) support demand paging of GPU device memory. Physical page frames are assigned on demand, so an unused region consumes no physical device memory; a large-margin configuration therefore wastes no physical device memory.
It also allows the code that estimates the result buffer size and retries GPU kernel invocation with a larger buffer to be simplified. That logic was very complicated and had many potential bugs around the error paths.
PG-Strom v2.0 fully utilizes the demand paging of the GPU device, so the problematic code could be eliminated. This also contributed to the stability of the software.
[Diagram] GpuJoin over t0, t1 and t2, with a data chunk of table t0, hash tables for t1 and t2, and a result buffer:
Kepler / Maxwell → the unused margin of the result buffer becomes dead space (!)
Pascal / Volta → no physical page frames are assigned to the unused region, so it is harmless
Miscellaneous Improvement
PL/CUDA related enhancement
All In-database Analytics: Scan → Pre-Process → Analytics → Post-Process

CREATE FUNCTION
  my_logic( reggstore, text )
RETURNS matrix
AS $$
$$ LANGUAGE 'plcuda';
Custom CUDA C code block (runs on GPU device)
✓ manually optimized analytics & machine-learning algorithms
✓ utilization of a few thousand processor cores
✓ ultra high memory bandwidth
If the “reggstore” type is supplied as an argument of a PL/CUDA function, the user-defined part of the function receives this argument as a pointer to the GPU device memory preserved for gstore_fdw. This allows it to reference the data previously loaded onto the Gstore_fdw.
The #plcuda_include directive is newly supported; it includes the code returned by the specified function. This lets you switch the code to be compiled according to the arguments, instead of creating multiple similar variations, as in the example below.
CREATE FUNCTION my_distance(reggstore, text)
RETURNS text
AS $$
  if ($2 = 'manhattan')
    return '#define dist(X,Y) abs((X)-(Y))';
  if ($2 = 'euclid')
    return '#define dist(X,Y) (((X)-(Y))*((X)-(Y)))';
$$ ...
#plcuda_include my_distance
② Inclusion of another function's result as part of the CUDA C code.
GPU memory store (gstore_fdw)
① reggstore argument as a reference to the gstore_fdw structure
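As a usage sketch, a PL/CUDA function with a reggstore argument is invoked by naming the gstore_fdw foreign table; the table name 'ft_matrix' and the call below are illustrative assumptions.

-- The reggstore cast resolves the foreign table name to the GPU-resident data set,
-- so the preloaded data does not have to be passed for each invocation
SELECT my_logic('ft_matrix'::reggstore, 'euclid');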
New data type support
▌Numeric
 float2
▌Network address
 macaddr, inet, cidr
▌Range data types
 int4range, int8range, tsrange, tstzrange, daterange
▌Miscellaneous
 uuid, reggstore
The “float2” data type is implemented by PG-Strom; it is not a standard built-in data type. It represents half-precision floating point values. People in the machine-learning area often use FP16 as a more compact representation of matrices, for lower device memory consumption and higher computing throughput.
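A small illustration of the type in use; the table and column names are assumptions, and the explicit casts simply go through the type's input function.

-- Half-precision columns halve the memory footprint compared to float4
CREATE TABLE features (
    id  int PRIMARY KEY,
    x   float2,
    y   float2
);
INSERT INTO features VALUES (1, '0.25'::float2, '-1.5'::float2);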
Documentation and Packaging
The documentation was totally rewritten in markdown, which is much easier to keep up to date than the previous raw-HTML-based documentation.
It is now published at http://heterodb.github.io/pg-strom/
RPM packages are also available for RHEL 7.x / CentOS 7.x. PG-Strom and related software are distributed through the HeteroDB Software Distribution Center (SWDC) at https://heterodb.github.io/swdc/ which can be used as a yum repository source.
(Screenshots: PG-Strom official documentation, HeteroDB Software Distribution Center)
Post-v2.0 Development Roadmap
PostgreSQL v11 support (1/2) – Parallel Append & Multi-GPUs
 PG10 restriction: GPUs not close to the SSD must stay idle during a scan across a partitioned table.
 PG11 allows partitioned children to be scanned in parallel, which also makes multiple GPUs active simultaneously.
[Diagram] Parallel Append over partitions t1, t2, t3 and t4, each scanned by a background worker process that contains the GPU-aware custom plans. Each worker chooses the GPU (GPU1 or GPU2) that shares the same PCIe root complex as the NVMe-SSD to be scanned (auto-tuning based on the PCIe topology).
NVIDIA GPUDirect RDMA, the base technology of SSD-to-GPU Direct SQL Execution, requires that the GPU and the SSD share the same PCIe root complex, so the P2P DMA route must not traverse the QPI link. This leads to a restriction on multi-GPU configurations with partitioned tables.
When background workers scan partitioned child tables spread across multiple SSDs, the GPU-aware custom plan needs to choose the neighboring GPU to avoid QPI traversal. In other words, the target table of the scan determines which GPU a background worker attaches to.
In PG10, partitioned child tables are picked up sequentially for scanning, so we cannot activate more than one GPU simultaneously; a secondary GPU would cause a QPI traversal on the usual hardware configuration (1 CPU : 1 GPU + n SSDs).
PG11 supports parallel scans across partitioned child tables, which allows each background worker to activate the GPU nearest to the tables it is scanning. This enables multiple GPUs to be utilized under SSD-to-GPU Direct SQL Execution for larger data processing.
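For context, a partitioned layout like the following is what PG11's Parallel Append can scan with one background worker, and therefore one neighboring GPU, per partition. This is an illustrative sketch; the table, column and tablespace names are assumptions.

-- Partitions placed on tablespaces that live on different NVMe-SSDs,
-- each SSD sharing a PCIe root complex with its neighboring GPU
CREATE TABLE lineorder (
    lo_orderkey   bigint,
    lo_orderdate  int,
    lo_revenue    bigint
) PARTITION BY HASH (lo_orderkey);

CREATE TABLE lineorder_p0 PARTITION OF lineorder
    FOR VALUES WITH (MODULUS 2, REMAINDER 0) TABLESPACE nvme_ssd0;
CREATE TABLE lineorder_p1 PARTITION OF lineorder
    FOR VALUES WITH (MODULUS 2, REMAINDER 1) TABLESPACE nvme_ssd1;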
PostgreSQL v11 support (2/2) – In-box distributed query execution
[Diagram] Host system (x86_64 server) connected over PCIe-over-Ethernet (NIC / HBA) to multiple PCIe I/O expansion boxes. Each box contains NVMe SSDs holding PostgreSQL tables and data blocks, an internal PCIe switch, and a GPU; SSD-to-GPU P2P DMA moves the large data inside the box, the GPU runs the WHERE-clause / JOIN / GROUP BY, and only the pre-processed small data returns to the host, with several GB/s of SQL execution per box. Adding boxes adds performance and capacity, while the whole system performs as just a simple single-node configuration from the standpoint of applications and administrators.
Multi-GPU capability will expand the opportunity of big data processing to even larger data sets, probably close to 100TB.
Multiple vendors provide PCIe I/O expansion box solutions, which allow PCIe devices to be installed in a physically separate box connected to the host system over a fast network. Some of these solutions also have an internal PCIe switch that can route P2P DMA packets inside the I/O box. This means PG-Strom can handle SSD-to-GPU Direct SQL Execution within the I/O box with little interaction with the host system, and run the partial workload of the partitioned table in each box in parallel once multi-GPU capability is supported with PG11.
From the standpoint of applications and administrators, it is just a simple single-node configuration even though many GPUs and SSDs are installed, so there is no need to worry about distributed transactions. This greatly simplifies application design and daily maintenance.
Other significant features
▌cuPy data format support of Gstore_fdw
▌BRIN index support
▌Basic PostGIS support
▌NVMe over Fabric support
▌GPU device function that can return varlena datum
▌Semi- / Anti- Join support
▌MVCC visibility checks on the device side
▌Compression support of in-memory columnar cache
See 003: Development Roadmap for more details.
Run! Beyond the Limitations