Go Faster With Native Compilation
PGDay Asia 2016, 17th March 2016
KUMAR RAJEEV RASTOGI (rajeevrastogi@huawei.com)
PRASANNA VENKATESH (prasanna.venkatesh@huawei.com)
KUMAR RAJEEV RASTOGI (In Love with PostgreSQL..)
• Senior Technical Leader at Huawei Technology for almost 8 years
• Active PostgreSQL community member; has contributed many patches
• Has presented many papers at various PG-based conferences (e.g. PGCon
Canada, PGDay Asia, PGDay India)
• One of the organizers of the PGDay Asia conference
• Program committee member for PGDay India
• Holds around 14 patents in various DB technologies
Blog - rajeevrastogi.blogspot.in
LinkedIn - http://in.linkedin.com/in/kumarrajeevrastogi
Who Am I?
1 Background
2 Current Business Trend
3 Native Compilation
4 Cost model
5 What to Compile
6 Schema binding
7 Schema binding Solution
8 Performance Scenario
9 Procedure Compilation
Agenda
The traditional database executors are based on the fact that “I/O cost dominates
execution”. These executor models are inefficient in terms of CPU instructions.
Now most workloads fit into main memory, which is a consequence of two
broad trends:
1. Growth in the amount of memory (RAM) per node/machine
2. Prevalence of high-speed SSDs
Background
So now the biggest bottleneck is CPU usage efficiency, not I/O. Our problem
statement is to make our database more efficient in terms of CPU instructions,
thereby leveraging the larger memory.
Source: ICDE Conference
Database industries are slowly reaching a point where throughput gains
have become very limited. Quoting from a paper on Hekaton -
The only real hope to increase throughput is to reduce the number of instructions
executed but the reduction needs to be dramatic. To go 10X faster, the engine must
execute 90% fewer instructions and yet still get the work done. To go 100X faster, it
must execute 99% fewer instructions.
Such a drastic reduction in instructions without
disturbing the overall functionality is only possible through code specialization (a.k.a.
Native Compilation, popularly associated with LLVM), i.e. generating code specific to an
object/query.
Current Business Trend
Many DBs are moving to compilation technology to improve
performance by reducing CPU instructions; some of them are:
• Hekaton (SQL Server 2014)
• Oracle
• MemSQL
Current Business Trend Contd…
Hekaton: Comparison of CPU efficiency for lookups
Source: Hekaton Paper
Native Compilation is a methodology to reduce CPU instructions by executing only
the instructions specific to the given query/object, unlike interpreted execution. Steps are:
1. Generate C code specific to the object/query.
2. Compile the C code into a DLL and load it with the server executable.
3. Call the specialized function instead of the generalized function.
Native Compilation
e.g. Expression: Col1 + 100
A traditional executor requires 100’s of instructions to work through all the
combinations for the expression before final execution, whereas the equivalent
vanilla C code can execute it directly in 2-3 instructions.
Source: ICDE Conference
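To make the contrast concrete, here is a minimal self-contained C sketch (hypothetical, not actual PostgreSQL source) of what specialized code for this expression could look like:

#include <stdint.h>

typedef intptr_t Datum;   /* simplified stand-in for PostgreSQL's Datum */

/* Specialized code for "Col1 + 100": the generic executor walks an
 * expression tree and dispatches on node/type tags for every tuple,
 * whereas the generated code collapses to a load and an add. */
static Datum
eval_col1_plus_100(const Datum *values)
{
    return values[0] + 100;   /* Col1 assumed to be the first attribute */
}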
The cost model of specialized code can be expressed as:
cost of execution = cost to generate specialized code
+ cost of compilation
+ cost to execute compiled code
Execution of compiled code is very efficient, but generating the
specialized code and compiling it can be a fairly expensive affair. So in
order to drive down this cost:
1. Generate and compile the code once and use it many times; this
distributes the constant cost.
2. Improve the performance of generation and compilation
significantly.
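For example (illustrative numbers only): if generating and compiling the specialized code costs 100 ms while each compiled execution saves 1 ms over interpreted execution, the one-time cost is amortized after 100 executions, and every execution beyond that is a net gain.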
Cost model
Any CPU-intensive entity of a database can be natively compiled if it has
a similar pattern across different executions. Some of the most popular ones are:
• Schema (Relation)
• Procedure
• Query
• Algebraic expression
Note: We will target only Schema for this presentation.
What to Native Compile?
Properties of each relation:
1. The number of attributes, their lengths and data-types are fixed.
2. Irrespective of the data, it is going to be stored in a similar pattern.
3. Each attribute is accessed in a similar pattern.
Disadvantages of the current approach for each tuple access:
1. Loops over each attribute.
2. Properties of all attributes are checked to take many decisions.
3. Executes many unwanted instructions.
Schema binding
So we can overcome these disadvantages by natively compiling the relation
based on its properties, generating specialized code for each function of the
schema.
Schema Binding = Native Compilation of Relation
Benefits:
1. Each attribute access gets flattened.
2. All attribute property decisions are taken during code generation.
3. No decision making at run-time.
4. Reduced CPU instructions.
Schema binding Contd…
Schema binding Contd…

[Flow: CREATE TABLE → Automatic code generation → C → DLL → Load all functions; SQL QUERY → Compiled functions]

Once a create table command is issued, a C-file with all the specialized access functions is generated, which in turn gets loaded as a DLL. These loaded functions are used by all SQL queries accessing the compiled table.
Schema binding Contd…

[Diagram: overall interaction with schema binding]

This shows the overall interaction with schema binding. Any query issued from a client can use either the schema-bound functions or the normal functions, depending on the underlying table.
Schema:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Schema binding: Example

Fields id1 and id2 are always stored at the same offset and with the same alignment; there is no change at run time. Only a variable-length attribute, and the attributes that follow it, have a variable offset.
Schema binding: Example

Using the current approach (each line here is a macro which invokes multiple condition checks to decide the action):

if (thisatt->attlen != -1)
{
    offset = att_align_nominal(off, thisatt->attalign);
    values[1] = fetchatt(thisatt, tp + offset);
    offset = att_addlength_pointer(off, thisatt->attlen, tp + off);
}

Access using specialized code (see details in further slides):

method-1:

values[1] = ((struct tbl_xxx*)tp)->id2;

method-2:

offset = DOUBLEALIGN(offset);
values[1] = *((Datum *)(tp + offset));
offset += 8;

Conclusion: Specialized code uses a smaller number of instructions compared to generalized code and hence gives better performance.
The solution can be categorized as:
1 → Opting for schema bind.
2 → Functions to be customized.
3 → Customized function generation.
4 → Loading of customized functions.
5 → Invocation of customized functions.
6 → How to generate the dynamic library.
Schema Binding Solution
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } |
UNLOGGED ] TABLE [ IF NOT EXISTS ] table …[ TABLESPACE
tablespace_name ] [SCHEMA_BOUND]
SCHEMA_BOUND is a new option with
CREATE TABLE to opt for code specialization.
Solution: Opting for schema bind tuple
Function Name (xxx → relname_relid)   Purpose
heap_compute_data_size_xxx            To calculate the size of the data part of the tuple
heap_fill_tuple_xxx                   To fill the tuple with the data
heap_deform_tuple_xxx                 To deform the heap tuple
slot_deform_tuple_xxx                 To deform the tuple at the end of a scan to project attributes
nocachegetattr_xxx                    To get one attribute value from the tuple (vacuum case)
Solution: Functions to be customized
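As a hedged illustration (hypothetical name and layout, not the actual generated code), one such generated function for our example table might fold every fixed-length decision in at CREATE TABLE time:

#include <stddef.h>

/* Hypothetical sketch of a generated heap_compute_data_size_tbl_16384 for
 * tbl (id1 int, id2 float, id3 varchar(10), id4 bool): sizes and alignment
 * of the fixed-length attributes are constants baked in at code-generation
 * time; only the varchar length remains a run-time input. */
static size_t
heap_compute_data_size_tbl_16384(size_t id3_len)
{
    return 4 /* id1 */ + 4 /* align pad */ + 8 /* id2 */
           + id3_len   /* id3, known only at run time */
           + 1 /* id4 */;
}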
Customized functions for tuple access of a table can be generated using 3
approaches:
Method-1 → With tuple format change.
Method-2 → Without changing the tuple format.
Method-3 → Re-organize table columns internally to place all
fixed-length and variable-length attributes in
sequence.
Solution: Function Generation
Solution: Function Generation-Method-1

A structure corresponding to the relation is created in such a way that each attribute's value/offset can be directly referenced by typecasting the data buffer with the structure.

e.g. Consider our earlier example table:

create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);

typedef struct SchemaBindTbl_xxx
{
    int   id1;
    float id2;
    short id3_offset;
    bool  id4;
    /* Actual data for variable-size columns follows */
} SchemaBindTbl_xxx;

Structure member variables id1, id2 and id4 contain the actual column values, whereas id3_offset stores the offset of column id3, since at create table time the size of the actual value to be stored is not known. The end of this structure buffer holds the data for variable-size columns, which can be accessed via the corresponding stored offsets.
Solution: Function Generation-Method-1 Contd…
[Diagram: Existing Tuple Format vs New Tuple Format]

In the existing tuple format, all attribute values are stored in sequence. In the new format, the value of each fixed-length attribute, but only the offset of each variable-length attribute, is stored in sequence; so a structure typecast yields either the value or the offset of the value.
Solution: Function Generation-Method-1 Contd…

So using this structure, tuple data can be stored as shown below. Using this approach, the heap_fill_tuple function can be generated during create table.

Fixed-size data-type storage:

((SchemaBindTbl_xxx*)data)->id1 = DatumGetXXX(values[attno]);

Variable-size data-type storage:

((SchemaBindTbl_xxx*)data)->id3_offset = data_offset;
data_length = SIZE((char*)values[attno]);
SET_VARSIZE_SHORT(data + data_offset, data_length);
memcpy(data + data_offset + 1, VARDATA((char*)values[attno]), data_length - 1);
data_offset += data_length;

Solution: Function Generation-Method-1 Contd…

Similarly, each attribute value can be accessed from the tuple as shown below. Using this approach, all functions related to deformation of the tuple (i.e. heap_deform_tuple, slot_deform_tuple and nocachegetattr) can be generated during create table.

Fixed-size data-type access:

values[attno] = ((SchemaBindTbl_xxx*)data)->id1;

Variable-size data-type access:

data_offset = ((SchemaBindTbl_xxx*)data)->id3_offset;
values[attno] = PointerGetDatum((char *)((char*)tp + data_offset));
Advantages:
1. No dependency on previous attributes.
2. Any attribute value can be accessed directly.
3. Access of an attribute value is very efficient as it takes very few
instructions.
Disadvantage:
1. The size of the tuple will increase, leading to more memory consumption.
This method generates the customized functions without changing the
format of the tuple.
This approach uses slight variations of the existing macros:
• fetch_att
• att_addlength_pointer
• att_align_nominal
• att_align_pointer
These macros take many decisions based on the data-type and size of each
attribute, which are going to be the same for every tuple of a relation.
So instead of using these macros for each tuple of a relation at run-
time, they are used once, during table schema definition itself, to generate all
the customized functions.
Solution: Function Generation-Method-2
So as per this mechanism, the code for accessing a float attribute will be as below:

offset = DOUBLEALIGN(offset);           /* skipped alignment check */
values[1] = *((Datum *)(tp + offset));  /* skipped datum size check */
offset += 8;                            /* skipped attribute length check */

Similarly, access code for attributes of all other data-types can
be generated.
Using combinations of the other macros, customized code
for all the other tuple-access functions can be generated.
Solution: Function Generation-Method-2 Contd…
Advantages:
1. Existing tested macros are used, so it is very safe.
2. No change in tuple format and size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. Dependency on the previous attribute in case the previous attribute is variable
length.
This method is intended to combine the advantages of the previous methods, i.e.
• Minimal attribute dependency
All fixed-length attributes are grouped together to make up the initial list of
columns, followed by all variable-length columns. So all fixed-length
attributes can be accessed directly. The change in column order is
done during creation of the table itself.
• No change in tuple size, so access of the tuple will be very efficient
In order to achieve this, we use Method-2 to generate the specialized
code.
Solution: Function Generation-Method-3
E.g. Consider our earlier example:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Solution: Function Generation-Method-3 Contd…
In this case, while creating the table, id1, id2 and id4 will be the first 3 columns, followed by id3.
So the access code can be generated directly during schema definition, without dependency on any run-time parameter, because all of the attribute offsets are fixed except those of variable-length attributes.
If there are more variable-length attributes, they will be stored after id3, and for them the lengths of the previous columns must be known to find the exact offset.
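A hedged sketch of the internally reordered layout (illustrative only; real tuples carry headers and alignment padding not shown here):

/* Columns as declared: (id1 int, id2 float, id3 varchar(10), id4 bool).
 * Method-3 reorders them internally so fixed-length attributes come first: */
struct tbl_reordered
{
    int   id1;   /* fixed length: offset known at create table time */
    float id2;   /* fixed length: offset known at create table time */
    bool  id4;   /* fixed length: offset known at create table time */
    /* id3 (varchar) data follows the fixed-length region; only such
     * trailing variable-length attributes need run-time offsets. */
};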
Advantages:
1. Existing tested macros are used, so it is very safe.
2. No change in tuple format and size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. There will still be dependency among multiple variable-length
attributes (if any).
Solution: Function Generation-Method-3 Contd…
Once we generate the code corresponding to each access function, it gets written into a C-file, which in turn gets compiled into a dynamically linked library; this dynamic library is then loaded with the server executable. Now any function of the library can be invoked directly from the server executable.
Solution: Loading of customized functions
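A minimal sketch of this loading step using the POSIX dynamic loader (library path, build command and function names are hypothetical; the actual server would use its own loading infrastructure):

#include <dlfcn.h>
#include <stdio.h>

/* Hedged sketch: load the generated library (built beforehand, e.g. with
 * "gcc -shared -fPIC -O2 -o tbl_16384.so tbl_16384.c") and resolve one
 * specialized access function by name. Link with -ldl on Linux. */
static void *
load_schema_bound_fn(const char *libpath, const char *fname)
{
    void *handle = dlopen(libpath, RTLD_NOW);
    if (handle == NULL)
    {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return NULL;
    }
    return dlsym(handle, fname);   /* NULL if the symbol is absent */
}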
The generated C-file should be compiled to produce a dynamic library,
which can be done using:
1. LLVM
Compilation using LLVM will be very fast.
2. GCC
GCC is the standard way of compiling a C file, but it will be slow
compared to LLVM.
Solution: How to generate dynamic library
While forming the tuple, the corresponding relation option schema_bound will be checked to decide whether to call the customized function corresponding to this relation or the standard generalized function. Also, in the tuple flag t_infomask2, HEAP_SCHEMA_BIND_TUPLE (with value 0x1800) will be set to mark the schema-bound tuple.
Solution: Invocation of Storage Customized Function
The tuple header's t_infomask2 flag will be checked to see if HEAP_SCHEMA_BIND_TUPLE is set, to decide whether to call the customized function corresponding to this relation or the standard generalized function.
Solution: Invocation of access customized function
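A hedged C sketch of this dispatch check (the HEAP_SCHEMA_BIND_TUPLE value 0x1800 comes from the slide; the surrounding types and names are simplified stand-ins, not PostgreSQL source):

#include <stdbool.h>
#include <stdint.h>

#define HEAP_SCHEMA_BIND_TUPLE 0x1800   /* proposed flag bits, per the slide */

typedef struct { uint16_t t_infomask2; } TupleHeaderSketch;

/* Route a tuple either to the specialized or the generic access function. */
static bool
is_schema_bound(const TupleHeaderSketch *tup)
{
    return (tup->t_infomask2 & HEAP_SCHEMA_BIND_TUPLE) == HEAP_SCHEMA_BIND_TUPLE;
}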
Performance (TPC-H):
The system configuration is as below:
SUSE Linux Enterprise Server 11 (x86_64), 2 sockets, 10 cores per socket
TPC-H Configuration: Default
Queries 1, 2 and 17 are not shown in the chart to maintain its readability.
[Chart: TPC-H Performance, execution time in ms for Query-3 through Query-16, Query-18 and Query-19, Original vs SchemaBind]
TPC-H Query    Improvement (%)
Query-1 2%
Query-2 36%
Query-3 14%
Query-4 13%
Query-5 2%
Query-6 21%
Query-7 16%
Query-8 5%
Query-9 6%
Query-10 9%
Query-11 3%
Query-12 17%
Query-13 3%
Query-14 20%
Query-15 20%
Query-16 4%
Query-17 25%
Query-18 9%
Query-19 24%
Performance (Hash Join):

[Charts: instruction-count reduction for slot_deform_tuple and overall, and latency in ms, SchemaBind vs Original]

Latency improvement: 23%
Overall instruction reduction: 30%
Access method instruction reduction: 89%

Outer table: 10 columns, cardinality 1M
Inner table: 2 columns, cardinality 1K
Query: select sum(tbl.id10) from tbl,tbl2 where tbl.id10=tbl2.id2 group by tbl.id9;
Schema binding mainly depends on code specialization of the access functions
for a table. The number of instructions per call of slot_deform_tuple is reduced
by more than 70%, so if this function forms a good percentage of the total
instructions, e.g. in
• Aggregate queries
• Grouping
• Joins
• Queries with multiple attributes
in all of the above cases with a huge table size, the overall instruction reduction
will also be huge and hence performance will be much better.
Performance Scenario:
Procedure Compilation
The first diagram highlights at which step and how a procedure will be compiled. Once
the parsing of the procedure is done, we have all the information about the
procedure in the PLpgSQL_function pointer, which we can traverse as the planner
does, generating the corresponding C-code for each statement.
The second diagram explains how the compiled function will be invoked.
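As a hedged illustration (not the actual generated output), a trivial PL/pgSQL loop body such as FOR i IN 1..n LOOP total := total + i; END LOOP; might be emitted as straight C while traversing the PLpgSQL_function statement tree:

#include <stdint.h>

/* Hypothetical generated code: each PL/pgSQL statement node is emitted as a
 * direct C equivalent, eliminating the per-statement interpreter dispatch. */
static int64_t
compiled_proc_body(int64_t n)
{
    int64_t total = 0;
    for (int64_t i = 1; i <= n; i++)
        total += i;
    return total;
}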
Performance (TPC-C):
The system configuration is as below:
SUSE Linux Enterprise Server 11 (x86_64), 2 sockets, 10 cores per socket
TPC-C Configuration: runMins=0, runTxnsPerTerminal=200000
Checkpoint_segment = 100
[Chart: tpmC for New_order and All Procedures, Original vs Compiled]

Reading (tpmC)  New_order  All Procedures
Original        22334      49606
Compiled        28973      64580
Improvement     23%        23%
With some basic compilation of procedures, we are able to get around 23%
performance improvement.
Following the industry trend, we have implemented two ways of specialization,
which resulted in up to 36% and 23% performance improvement on the
standard benchmarks TPC-H and TPC-C respectively.
This technology aligns us with the
current business trend of tackling the CPU bottleneck, and it could also be one
of the hot technologies to work on in PostgreSQL.
Conclusion
1. Zhang, Rui, Saumya Debray, and Richard T. Snodgrass. "Micro-specialization: dynamic
code specialization of database management systems." Proceedings of the Tenth
International Symposium on Code Generation and Optimization. ACM, 2012.
http://dl.acm.org/citation.cfm?id=2259025
2. Freedman, Craig, Erik Ismert, and Per-Åke Larson. "Compilation in the Microsoft
SQL Server Hekaton Engine." IEEE Data Eng. Bull. 37.1 (2014): 22-30.
http://www.internalrequests.org/showconfirmpage/?url=ftp://131.107.65.22/pub
/debull/A14mar/p22.pdf
Reference
PostgreSQL on Big RAM

[Diagram: the storage hierarchy shifts up a level: TAPE/DISK/DRAM becomes DISK/DRAM/CACHE]

“Disk is the new tape;
Memory is the new disk.”
-- Jim Gray

Source: ICDE Conference