Go Faster With Native Compilation
PGDayAsia 2016, 17th March 2016
KUMAR RAJEEV RASTOGI (rajeevrastogi@huawei.com)
PRASANNA VENKATESH (prasanna.venkatesh@huawei.com)
 KUMAR RAJEEV RASTOGI (In Love with PostgreSQL..)
 Senior Technical Leader at Huawei Technologies for almost 8 years
 Active PostgreSQL community member; has contributed many patches
 Has presented many papers at various PostgreSQL conferences (e.g. PGCon
Canada, PGDay Asia, PGDay India)
 One of the organizers of the PGDay Asia conference
 Program committee member for PGDay India
 Holds around 14 patents in various DB technologies
Blog - rajeevrastogi.blogspot.in
LinkedIn - http://in.linkedin.com/in/kumarrajeevrastogi
Who Am I?
1. Background
2. Current Business Trend
3. Native Compilation
4. Cost model
5. What to Compile
6. Schema binding
7. Schema binding Solution
8. Performance Scenario
9. Procedure Compilation
Agenda
Traditional database executors are based on the assumption that “I/O cost dominates
execution”. These executor models are inefficient in terms of CPU instructions.
Now most workloads fit into main memory, a consequence of two broad trends:
1. Growth in the amount of memory (RAM) per node/machine
2. Prevalence of high-speed SSDs
Background
So now the biggest bottleneck is CPU usage efficiency, not I/O. Our problem
statement is to make our database more efficient in terms of CPU instructions,
thereby leveraging the larger memory.
Source: ICDE Conference
The database industry is slowly reaching a point where throughput gains
have become very limited. Quoting from a paper on Hekaton -
The only real hope to increase throughput is to reduce the number of instructions
executed but the reduction needs to be dramatic. To go 10X faster, the engine must
execute 90% fewer instructions and yet still get the work done. To go 100X faster, it
must execute 99% fewer instructions.
Such a drastic reduction in instructions without disturbing functionality is
only possible through code specialization (a.k.a. native compilation, often
associated with LLVM), i.e. generating code specific to an object/query.
Current Business Trend
Many databases are moving to compilation technology to improve
performance by reducing CPU instructions; some of them are:
 Hekaton (SQL Server 2014)
 Oracle
 MemSQL
Current Business Trend Contd…
Hekaton: Comparison of CPU efficiency for lookups
Source: Hekaton Paper
Native compilation is a methodology to reduce CPU instructions by executing only
the instructions specific to a given query/object, unlike interpreted execution. The steps are:
1. Generate C-code specific to objects/query.
2. Compile C-code to generate DLL and load with server executable.
3. Call specialized function instead of generalized function.
Native Compilation
e.g. Expression: Col1 + 100
A traditional executor requires hundreds of instructions to work through all
the possible combinations of the expression before final execution, whereas
vanilla C code can execute it directly in 2-3 instructions.
Source: ICDE Conference
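To make the contrast concrete, here is a minimal sketch (our illustration, not the talk's actual generated code) of the difference between interpreted evaluation and code specialized for Col1 + 100:

#include <stdint.h>

typedef int64_t Datum;              /* simplified stand-in for PostgreSQL's Datum */

/* Interpreted-style evaluation: the expression node is re-inspected
 * on every call before anything is computed. */
typedef struct ExprNode { int op; Datum constval; } ExprNode;

static Datum eval_generic(const ExprNode *node, Datum col1)
{
    switch (node->op)               /* dispatch repeated for every tuple */
    {
        case 0:  return col1 + node->constval;
        case 1:  return col1 - node->constval;
        default: return 0;
    }
}

/* Specialized code generated once for "Col1 + 100": no dispatch,
 * no metadata checks -- just the add. */
static Datum eval_col1_plus_100(Datum col1)
{
    return col1 + 100;
}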
Cost model of specialized code can be expressed as:
cost of execution = generate specialized code
+ compilation
+ execute compiled code
Execution of compiled code is very efficient, but generating the specialized
code and compiling it can be a fairly expensive affair. So in order to drive
down this cost:
1. Generate and compile the code once and use it many times; this
amortizes the constant cost.
2. Improve the performance of generation and compilation
significantly.
Cost model
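A minimal sketch of the first point, with hypothetical names: compile once per relation, cache the resulting function pointer keyed by relation OID, and reuse it on every later call so the one-time generation/compilation cost is spread across many executions.

#include <stddef.h>

typedef unsigned int Oid;
typedef void (*DeformFn)(const char *tuple, long *values);

#define FN_CACHE_SIZE 1024

/* One cache slot per bucket; a collision simply recompiles. */
static struct { Oid relid; DeformFn fn; } fn_cache[FN_CACHE_SIZE];

/* Stub standing in for the expensive one-time step
 * (code generation + compilation). */
static void deform_stub(const char *tuple, long *values) { (void) tuple; (void) values; }
static DeformFn generate_and_compile(Oid relid) { (void) relid; return deform_stub; }

static DeformFn get_deform_fn(Oid relid)
{
    unsigned int slot = relid % FN_CACHE_SIZE;

    if (fn_cache[slot].relid != relid || fn_cache[slot].fn == NULL)
    {
        fn_cache[slot].fn = generate_and_compile(relid);   /* paid once */
        fn_cache[slot].relid = relid;
    }
    return fn_cache[slot].fn;                              /* cheap thereafter */
}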
Any CPU-intensive entity of the database can be natively compiled if it
follows a similar pattern across executions. Some of the most popular ones are:
 Schema (Relation)
 Procedure
 Query
 Algebraic expression
Note: We will target only Schema for this presentation.
What to Native Compile?
Properties of each relation:
1. The number of attributes, their lengths and data types are fixed.
2. Irrespective of the data, it is going to be stored in a similar pattern.
3. Each attribute is accessed in a similar pattern.
Disadvantages of the current approach for each tuple access:
1. Loops over each attribute.
2. Properties of all attributes are checked to take many decisions.
3. Executes many unwanted instructions.
Schema binding
So we can overcome these disadvantages by natively compiling the relation
based on its properties to generate specialized code for each access function
of the schema.
Schema Binding = Native Compilation of Relation
Benefits:
1. Each attribute access gets flattened.
2. All attribute property decisions are taken during code generation.
3. No decision making at run time.
4. Reduced CPU instructions.
Schema binding Contd…
Schema binding Contd…
Flow: CREATE TABLE → automatic code generation → C → DLL → load all functions; SQL queries then use the compiled functions.
Once a CREATE TABLE command is issued, a C file with all the specialized
access functions is generated, which in turn gets compiled and loaded as a
DLL. These loaded functions are used by every SQL query accessing the
compiled table.
Schema binding Contd…
This shows the overall interaction with schema binding. Any query issued
from a client uses either the schema-bound functions or the normal
functions, depending on the underlying table.
Schema:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Fields id1 and id2 are always stored at the same offset and with the same
alignment; nothing changes at run time. Only a variable-length attribute,
and the attributes following it, have variable offsets.
Schema binding: Example
Using the current approach (each line here is a macro which performs
multiple condition checks to decide the action):
if (thisatt->attlen != -1)
{
    offset = att_align_nominal(offset, thisatt->attalign);
    values[1] = fetchatt(thisatt, tp + offset);
    offset = att_addlength_pointer(offset, thisatt->attlen, tp + offset);
}
Access using specialized code (see details in the following slides):
method-1:
values[1] = ((struct tbl_xxx *)tp)->id2;
method-2:
offset = DOUBLEALIGN(offset);
values[1] = *((Datum *)(tp + offset));
offset += 8;
Conclusion: specialized code uses fewer instructions compared to generalized code,
and hence gives better performance.
Schema binding: Example
The solution can be categorized as:
1. Opting for schema bind.
2. Functions to be customized.
3. Customized function generation.
4. Loading of customized functions.
5. Invocation of customized functions.
6. How to generate the dynamic library.
Schema Binding Solution
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } |
UNLOGGED ] TABLE [ IF NOT EXISTS ] table …[ TABLESPACE
tablespace_name ] [SCHEMA_BOUNDED]
SCHEMA_BOUND is a new option to
CREATE TABLE to opt in to code specialization.
Solution: Opting for schema bind tuple
Function Name (xxx → relname_relid) | Purpose
heap_compute_data_size_xxx | Calculate the size of the data part of the tuple
heap_fill_tuple_xxx | Fill the tuple with the data
heap_deform_tuple_xxx | Deform the heap tuple
slot_deform_tuple_xxx | Deform the tuple at the end of a scan to project attributes
nocachegetattr_xxx | Get one attribute value from the tuple (vacuum case)
Solution: Functions to be customized
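For concreteness, a hypothetical generated header for a table tbl with relid 16384 might look as below, assuming the specialized functions mirror the signatures of their generic PostgreSQL 9.x counterparts (the _tbl_16384 suffix follows the relname_relid convention above):

/* Hypothetical declarations; the generic counterparts are declared in
 * src/include/access/htup_details.h and the executor headers. */
#include "postgres.h"
#include "access/htup_details.h"
#include "executor/tuptable.h"

Size  heap_compute_data_size_tbl_16384(TupleDesc tupleDesc,
                                       Datum *values, bool *isnull);
void  heap_fill_tuple_tbl_16384(TupleDesc tupleDesc,
                                Datum *values, bool *isnull,
                                char *data, Size data_size,
                                uint16 *infomask, bits8 *bit);
void  heap_deform_tuple_tbl_16384(HeapTuple tuple, TupleDesc tupleDesc,
                                  Datum *values, bool *isnull);
void  slot_deform_tuple_tbl_16384(TupleTableSlot *slot, int natts);
Datum nocachegetattr_tbl_16384(HeapTuple tuple, int attnum,
                               TupleDesc tupleDesc);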
The customized functions for tuple access of a table can be generated using 3
approaches:
Method-1 → With tuple format change.
Method-2 → Without changing the tuple format.
Method-3 → Re-organize table columns internally so that all
fixed-length attributes come first, followed by the
variable-length attributes, in sequence.
Solution: Function Generation
A structure corresponding to the relation will be created in such a way that
each attribute's value/offset can be directly referenced by typecasting the
data buffer to the structure.
e.g. Consider our earlier example table:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
typedef struct SchemaBindTbl_xxxx
{
    int   id1;
    float id2;
    short id3_offset;
    bool  id4;
    /* Actual data for variable-size columns follows */
} SchemaBindTbl_xxxx;
Structure members id1, id2 and id4 contain the actual column values,
whereas id3_offset stores the offset of column id3, since at CREATE TABLE
time the size of the actual value to be stored is not known. The end of the
structure buffer holds the data for the variable-size columns, which can be
accessed via the corresponding stored offsets.
Solution: Function Generation-Method-1
Solution: Function Generation-Method-1 Contd…
Existing tuple format: all attribute values are stored in sequence.
New tuple format: the values of fixed-length attributes, but the offsets of
variable-length attributes, are stored in sequence. So a structure typecast
yields either the value itself or the offset of the value.
So using this structure, tuple data can be stored as follows.
Fixed size data-type storage:
((SchemaBindTbl_xxxx *)data)->id1 = DatumGetXXX(values[attno]);
Variable size data-type storage:
((SchemaBindTbl_xxxx *)data)->id3_offset = data_offset;
data_length = SIZE((char *)values[attno]);
SET_VARSIZE_SHORT(data + data_offset, data_length);
memcpy(data + data_offset + 1, VARDATA((char *)values[attno]), data_length - 1);
data_offset += data_length;
Using this approach, the heap_fill_tuple function can be generated during
CREATE TABLE.
Solution: Function Generation-Method-1 Contd…
Similarly, each attribute value can be accessed from the tuple.
Fixed size data-type access:
values[attno] = ((SchemaBindTbl_xxxx *)data)->id1;
Variable size data-type access:
data_offset = ((SchemaBindTbl_xxxx *)data)->id3_offset;
values[attno] = PointerGetDatum((char *)tp + data_offset);
Using this approach, all functions related to tuple deformation (i.e.
heap_deform_tuple, slot_deform_tuple and nocachegetattr) can be generated
during CREATE TABLE.
Solution: Function Generation-Method-1 Contd…
Advantages:
1. No dependency on previous attributes.
2. Any attribute value can be accessed directly.
3. Attribute value access is very efficient, as it takes very few
instructions.
Disadvantage:
1. The size of the tuple increases, leading to more memory consumption.
Solution: Function Generation-Method-1 Contd…
This method generates the customized functions without changing the
format of the tuple.
This approach uses slight variations of the existing macros:
 fetch_att
 att_addlength_pointer
 att_align_nominal
 att_align_pointer
These macros take many decisions based on each attribute's data type and
size, which are the same for every tuple of a relation.
So instead of evaluating these macros for each tuple of a relation at run
time, they are used once, during table schema definition itself, to generate
all the customized functions.
Solution: Function Generation-Method-2
As per this mechanism, the code for accessing a float attribute will be as below:
offset = DOUBLEALIGN(offset);           /* alignment check skipped */
values[1] = *((Datum *)(tp + offset));  /* datum size check skipped */
offset += 8;                            /* attribute length check skipped */
Similarly, access code for attributes of all other data types can be generated,
and using combinations of the other macros, customized code for all the
other tuple-access functions can be generated.
Solution: Function Generation-Method-2 Contd…
Advantages:
1. Existing, tested macros are used, so it is very safe.
2. No change in tuple format or size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. Dependency on the previous attribute in case the previous attribute is
variable length.
Solution: Function Generation-Method-2 Contd…
This method is intended to combine the advantages of the previous methods, i.e.:
 Minimize dependencies between attributes
All fixed-length attributes are grouped together to form the initial list of
columns, followed by all variable-length columns. So all fixed-length
attributes can be accessed directly. The change in column order is
done during creation of the table itself.
 No change in tuple size, so tuple access will be very efficient
In order to achieve this, we use Method-2 to generate the specialized
code.
Solution: Function Generation-Method-3
E.g. Consider our earlier example:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Solution: Function Generation-Method-3 Contd…
So in this case, while creating the table, id1, id2 and id4 become the first
3 columns, followed by id3.
Access code can therefore be generated directly during schema definition,
without dependency on any run-time parameter, because all attribute
offsets are fixed except those of the variable-length attributes (see the
sketch below). If there are more variable-length attributes, they are stored
after id3, and for them the lengths of the previous columns must be known
to find the exact offset.
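A hypothetical sketch of what the generated access code could look like after reordering (offsets assume a 4-byte int, 8-byte float8 and 1-byte bool; the layout and the assumption that the short-varlena id3 needs no alignment padding are ours):

#include <stdbool.h>
#include <stdint.h>

typedef intptr_t Datum;

/* Physical order after Method-3 reordering: id1, id2, id4, id3.
 * All fixed-length offsets are compile-time constants; values[] is
 * indexed by the logical attribute number. */
static void deform_tbl_reordered(const char *tp, Datum values[4])
{
    values[0] = (Datum) *(const int32_t *)(tp + 0);   /* id1: offset 0 */
    values[1] = (Datum) *(const int64_t *)(tp + 8);   /* id2: double-aligned offset 8 */
    values[3] = (Datum) *(const bool *)(tp + 16);     /* id4: directly after id2 */
    values[2] = (Datum) (tp + 17);                    /* id3: varlena pointer, stored last */
}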
Advantages:
1. Existing, tested macros are used, so it is very safe.
2. No change in tuple format or size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. There is still a dependency among multiple variable-length
attributes (if any).
Solution: Function Generation-Method-3 Contd…
Once we generate the code corresponding to each access function, it gets
written into a C file, which in turn gets compiled into a dynamically
linked library; this dynamic library then gets loaded with the server
executable. So now any function of the library can be invoked directly
from the server executable.
Solution: Loading of customized functions
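A minimal sketch of this step on a POSIX system (file and symbol names hypothetical; LLVM could replace the gcc invocation, as discussed on the next slide):

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef void (*DeformFn)(const char *tuple, long *values);

static DeformFn load_specialized_deform(void)
{
    /* 1. Compile the generated C file into a shared library (DLL). */
    if (system("gcc -O2 -fPIC -shared -o tbl_16384.so tbl_16384.c") != 0)
        return NULL;

    /* 2. Load the library into the server process. */
    void *handle = dlopen("./tbl_16384.so", RTLD_NOW);
    if (handle == NULL)
    {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return NULL;
    }

    /* 3. Resolve the specialized function for direct invocation. */
    return (DeformFn) dlsym(handle, "slot_deform_tuple_tbl_16384");
}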
The generated C file should be compiled into a dynamic library,
which can be done using:
1. LLVM
Compilation using LLVM will be very fast.
2. GCC
GCC is the standard way of compiling a C file, but it is slow
compared to LLVM.
Solution: How to generate dynamic library
While forming the tuple, the corresponding relation option schema_bound
will be checked to decide whether to call the customized function for this
relation or the standard generalized function. Also, in the tuple flag
t_infomask2, HEAP_SCHEMA_BIND_TUPLE (with value 0x1800) will be
set to mark the schema-bound tuple.
Solution: Invocation of Storage Customized Function
The tuple header's t_infomask2 flag will be checked to see if
HEAP_SCHEMA_BIND_TUPLE is set, to decide whether to call the
customized function for this relation or the standard generalized function.
Solution: Invocation of access customized function
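A hedged sketch of both checks (the flag value is from the slides; the helper names and the simplified header struct are ours):

#include <stdbool.h>
#include <stdint.h>

#define HEAP_SCHEMA_BIND_TUPLE 0x1800           /* value from the slides */

typedef struct { uint16_t t_infomask2; } TupleHeader;   /* simplified */

/* Storage side: the relation option picks the fill function and the
 * tuple is flagged as schema-bound. */
static void fill_tuple_dispatch(bool rel_schema_bound, TupleHeader *hdr)
{
    if (rel_schema_bound)
    {
        hdr->t_infomask2 |= HEAP_SCHEMA_BIND_TUPLE;
        /* heap_fill_tuple_tbl_16384(...);   specialized (hypothetical) */
    }
    else
    {
        /* heap_fill_tuple(...);             standard generalized path */
    }
}

/* Access side: the tuple header flag picks the deform function. */
static bool use_specialized_deform(const TupleHeader *hdr)
{
    return (hdr->t_infomask2 & HEAP_SCHEMA_BIND_TUPLE) == HEAP_SCHEMA_BIND_TUPLE;
}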
Performance (TPC-H):
The system configuration is as below:
SUSE Linux Enterprise Server 11 (x86_64), 2 sockets, 10 cores per socket
TPC-H Configuration: Default
Query-1, 2 and 17 are not shown in the chart to keep it readable.
[Chart: TPC-H execution time (ms) per query, Query-3 through Query-19, comparing Original (ms) vs SchemaBind (ms).]
TPC-H Query Improvement(%)
Query-1 2%
Query-2 36%
Query-3 14%
Query-4 13%
Query-5 2%
Query-6 21%
Query-7 16%
Query-8 5%
Query-9 6%
Query-10 9%
Query-11 3%
Query-12 17%
Query-13 3%
Query-14 20%
Query-15 20%
Query-16 4%
Query-17 25%
Query-18 9%
Query-19 24%
Performance (Hash Join):
[Charts: instruction counts for slot_deform_tuple and overall (SchemaBind vs Original), and latency in ms (SchemaBind vs Original).]
Latency Improvement: 23%
Overall Instruction reduction: 30%
Access method instruction reduction: 89%
Outer table: 10 columns, cardinality 1M
Inner table: 2 columns, cardinality 1K
Query: select sum(tbl.id10) from tbl,tbl2 where tbl.id10=tbl2.id2 group by tbl.id9;
Schema binding mainly depends on code specialization of the table access
functions. The number of instructions per call of slot_deform_tuple is
reduced by more than 70%; hence, when this function forms a good
percentage of the total instructions, e.g. in:
 Aggregate queries
 Grouping
 Joins
 Queries projecting multiple attributes
all of the above cases with huge table sizes, the overall instruction reduction
is also huge, and hence performance is much better.
Performance Scenario:
Procedure Compilation
The first diagram highlights at which step, and how, a procedure is compiled.
Once parsing of the procedure is done, all information about it is available
through the PLpgSQL_function pointer, which we can traverse, much as the
planner does, to generate corresponding C code for each statement.
The second diagram explains how the compiled function is invoked.
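As an illustration of the traversal (a sketch under the assumption that PostgreSQL's plpgsql.h internals are available, not the talk's implementation): PLpgSQL_function->action is the outer statement block, and each statement carries a cmd_type tag that the generator can switch on.

#include "postgres.h"
#include "plpgsql.h"

static void emit_stmt(PLpgSQL_stmt *stmt, FILE *out);

/* Walk a statement block, emitting C code for each statement in turn. */
static void emit_block(PLpgSQL_stmt_block *block, FILE *out)
{
    ListCell *lc;

    foreach(lc, block->body)
        emit_stmt((PLpgSQL_stmt *) lfirst(lc), out);
}

static void emit_stmt(PLpgSQL_stmt *stmt, FILE *out)
{
    switch (stmt->cmd_type)
    {
        case PLPGSQL_STMT_ASSIGN:
            fprintf(out, "/* generated C for an assignment */\n");
            break;
        case PLPGSQL_STMT_IF:
            fprintf(out, "/* generated C for IF, recursing into branches */\n");
            break;
        default:
            fprintf(out, "/* unhandled statement: fall back to interpreter */\n");
            break;
    }
}

/* Entry point, called after plpgsql parsing has populated func. */
static void compile_procedure(PLpgSQL_function *func, FILE *out)
{
    emit_block(func->action, out);
}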
Performance (TPC-C):
The system configuration is as below:
SUSE Linux Enterprise Server 11 (x86_64), 2 sockets, 10 cores per socket
TPC-C Configuration: runMins=0, runTxnsPerTerminal=200000
checkpoint_segments = 100
[Chart: tpmC for New_order and all procedures, Original vs Compiled.]
Reading (tpmC) | New_order | All procedures
Original | 22334 | 49606
Compiled | 28973 | 64580
Improvement | 23% | 23%
With some basic compilation of procedures, we are able to get around a 23%
performance improvement.
Following the industry trend, we have implemented two ways of specialization,
which resulted in up to 36% and 23% performance improvement on the
standard benchmarks TPC-H and TPC-C respectively.
This technology aligns us with the current business trend of tackling the
CPU bottleneck, and could also be one of the hot areas of work on PostgreSQL.
Conclusion
1. Zhang, Rui, Saumya Debray, and Richard T. Snodgrass. "Micro-specialization: dynamic
code specialization of database management systems." Proceedings of the Tenth
International Symposium on Code Generation and Optimization. ACM, 2012.
http://dl.acm.org/citation.cfm?id=2259025
2. Freedman, Craig, Erik Ismert, and Per-Åke Larson. "Compilation in the Microsoft
SQL Server Hekaton Engine." IEEE Data Eng. Bull. 37.1 (2014): 22-30.
http://www.internalrequests.org/showconfirmpage/?url=ftp://131.107.65.22/pub/debull/A14mar/p22.pdf
Reference
[Diagram: the storage hierarchy shifted down one level: TAPE/DISK/DRAM becomes DISK/DRAM/CACHE.]
“Disk is the new tape;
Memory is the new disk.”
-- Jim Gray
PostgreSQL on Big RAM
Source: ICDE Conference