Large Scale SQL Server Best Practices
1 - Consider partitioning large fact tables
Consider partitioning fact tables that are 50 to 100 GB or larger.
Partitioning can provide manageability and often performance benefits:
Faster, more granular index maintenance.
More flexible backup / restore options.
Faster data loading and deleting.
Faster queries when restricted to a single partition.
Typically, partition the fact table on the date key.
Enables the sliding window technique.
Enables partition elimination.
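As a minimal sketch, partitioning on an integer date key might look like the following; the pfFactDate, psFactDate, and FactSales names, the monthly boundaries, and the single PRIMARY filegroup are all hypothetical:
-- Partition function with monthly boundaries on an integer date key (yyyymmdd)
CREATE PARTITION FUNCTION pfFactDate (int)
    AS RANGE RIGHT FOR VALUES (20060101, 20060201, 20060301);
-- Partition scheme mapping every partition to a filegroup (all to PRIMARY here for brevity)
CREATE PARTITION SCHEME psFactDate
    AS PARTITION pfFactDate ALL TO ([PRIMARY]);
-- Fact table created on the partition scheme, partitioned by DateKey
CREATE TABLE dbo.FactSales
(
    DateKey     int   NOT NULL,
    ProductKey  int   NOT NULL,
    CustomerKey int   NOT NULL,
    SalesAmount money NOT NULL
) ON psFactDate (DateKey);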
2 - Build clustered index on date key of fact table
This supports efficient queries to populate cubes or retrieve a historical data slice.
If you load data in a batch window, use the options ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF for the clustered index on the fact table. This helps speed up table scan operations at query time and helps avoid excessive locking activity during large updates.
Build nonclustered indexes for each foreign key. This helps "pinpoint queries" extract rows based on a selective dimension predicate.
Use filegroups for administration requirements such as backup / restore, partial database availability, etc.
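For example, a hedged sketch reusing the hypothetical FactSales table and psFactDate scheme above:
-- Clustered index on the date key; row and page locks disabled for batch-window loads
CREATE CLUSTERED INDEX CIX_FactSales_DateKey
    ON dbo.FactSales (DateKey)
    WITH (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF)
    ON psFactDate (DateKey);
-- Nonclustered indexes on each dimension foreign key for pinpoint queries
CREATE NONCLUSTERED INDEX IX_FactSales_ProductKey ON dbo.FactSales (ProductKey);
CREATE NONCLUSTERED INDEX IX_FactSales_CustomerKey ON dbo.FactSales (CustomerKey);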
3 - Choose partition grain carefully
Most customers use month, quarter, or year.
For efficient deletes, you must delete one full partition at a time.
It is faster to load a complete partition at a time.
Daily partitions for daily loads may be an attractive option. However, keep in mind that a table can have a maximum of 1,000 partitions.
Avoid a partition design where only 2 or 3 partitions are touched by frequent queries if you need MAXDOP parallelism (assuming MAXDOP = 4 or larger).
4 - Design dimension tables appropriately
Use integer surrogate keys for all dimensions, other than the Date dimension. Use the
smallest possible integer for the dimension surrogate keys. This helps to keep fact table
narrow.
Use a meaningful date key of integer type derivable from the DATETIME data type (for
example: 20060215).
Don't use a surrogate Key for the Date dimension
Easy to write queries that put a WHERE clause on this column, which will allow partition
elimination of the fact table.

Build a clustered index on the surrogate key for each dimension table, and build a nonclustered index on the Business Key (potentially combined with a row-effective-date) to
support surrogate key lookups during loads.
Build nonclustered indexes on other frequently searched dimension columns.
Avoid partitioning dimension tables.
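A minimal sketch of dimensions designed along these lines; the DimProduct and DimDate tables and their columns are hypothetical:
-- Dimension with the smallest integer surrogate key that fits the expected cardinality
CREATE TABLE dbo.DimProduct
(
    ProductKey  smallint     NOT NULL PRIMARY KEY CLUSTERED,  -- surrogate key
    ProductBK   varchar(20)  NOT NULL,                        -- business key from the source system
    ProductName varchar(100) NOT NULL
);
-- Nonclustered index on the business key to support surrogate key lookups during loads
CREATE NONCLUSTERED INDEX IX_DimProduct_BK ON dbo.DimProduct (ProductBK);
-- Date dimension keyed directly on the meaningful integer date key (no surrogate key)
CREATE TABLE dbo.DimDate
(
    DateKey      int         NOT NULL PRIMARY KEY CLUSTERED,  -- e.g. 20060215
    CalendarDate datetime    NOT NULL,
    CalendarMonth varchar(10) NOT NULL
);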
5 - Write effective queries for partition elimination
Whenever possible, place a query predicate (WHERE condition) directly on the partitioning key (Date dimension key) of the fact table.
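For example, assuming the hypothetical FactSales table above, a predicate directly on DateKey lets the optimizer read only the partitions covering that range:
-- Partition elimination: only the partition(s) covering January 2006 are scanned
SELECT ProductKey, SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
WHERE DateKey >= 20060101 AND DateKey < 20060201
GROUP BY ProductKey;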
6 - Use the Sliding Window technique to maintain data
Maintain a rolling time window for online access to the fact tables: load the newest data, unload the oldest data.
Always keep empty partitions at both ends of the partition range to guarantee that the partition split (before loading new data) and partition merge (after unloading old data) do not incur any data movement.
Avoid splitting or merging populated partitions. Splitting or merging populated partitions can be extremely inefficient, as it may cause as much as 4 times more log generation and can also cause severe locking.
Create the load staging table in the same filegroup as the partition you are loading.
Create the unload staging table in the same filegroup as the partition you are deleting.
It is fastest to load the newest full partition at one time, but this is only possible when the partition size equals the data load frequency (for example, you have one partition per day and you load data once per day).
If the partition size doesn't match the data load frequency, incrementally load the latest partition.
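A sketch of one sliding-window cycle, assuming the hypothetical pfFactDate / psFactDate / FactSales objects above and staging tables (FactSales_LoadStage, FactSales_UnloadStage) that already match the fact table's structure, indexes, and filegroups:
-- 1. Split the empty partition at the new end so the new month gets its own (still empty) partition
ALTER PARTITION SCHEME psFactDate NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfFactDate() SPLIT RANGE (20060401);
-- 2. Switch the loaded staging table into the new partition (partition numbers are illustrative)
ALTER TABLE dbo.FactSales_LoadStage SWITCH TO dbo.FactSales PARTITION 5;
-- 3. Switch the oldest populated partition out to the unload staging table
ALTER TABLE dbo.FactSales SWITCH PARTITION 2 TO dbo.FactSales_UnloadStage;
-- 4. Merge the now-empty boundary at the old end; no data movement occurs
ALTER PARTITION FUNCTION pfFactDate() MERGE RANGE (20060101);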
7 - Efficiently load the initial data
Use the SIMPLE or BULK_LOGGED recovery model during the initial data load.
Create the partitioned fact table with the clustered index.
Create non-indexed staging tables for each partition, and separate source data files for populating each partition.
Build a clustered index on each staging table, then create appropriate CHECK constraints.
SWITCH all partitions into the partitioned table.
Build nonclustered indexes on the partitioned table.
It is possible to load 1 TB in under an hour on a 64-CPU server with a SAN capable of 14 GB/sec throughput (non-indexed table).
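A sketch of the steps for one partition, assuming a hypothetical Stage_200601 staging table with the same structure as FactSales, created on the target partition's filegroup, and a hypothetical source file path:
-- Bulk load one month into a non-indexed staging table (minimally logged under SIMPLE / BULK_LOGGED)
BULK INSERT dbo.Stage_200601
    FROM '\\fileserver\load\sales_200601.dat'
    WITH (TABLOCK);
-- Index the staging table to match the fact table's clustered index
CREATE CLUSTERED INDEX CIX_Stage_200601 ON dbo.Stage_200601 (DateKey)
    WITH (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);
-- CHECK constraint proves the rows fit the target partition's boundaries
ALTER TABLE dbo.Stage_200601 WITH CHECK
    ADD CONSTRAINT CK_Stage_200601_DateKey CHECK (DateKey >= 20060101 AND DateKey < 20060201);
-- Switch the staging table into its partition (partition number is illustrative)
ALTER TABLE dbo.Stage_200601 SWITCH TO dbo.FactSales PARTITION 2;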
8 - Efficiently delete old data
Use partition switching whenever possible.
To delete millions of rows from nonpartitioned, indexed tables:
Avoid DELETE FROM ... WHERE ...
Huge locking and logging issues.
Long rollback if the delete is canceled.
It is usually faster to:
INSERT the records to keep into a non-indexed table.
Create index(es) on the table.
Rename the new table to replace the original.
Another alternative is to update rows to mark them as deleted, then delete them later during a non-critical time.
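A sketch of the keep-and-rename approach for a nonpartitioned table, using hypothetical FactOrders / OrderDateKey names:
-- Copy the rows to keep into a new, non-indexed table (SELECT INTO is minimally logged under SIMPLE / BULK_LOGGED)
SELECT *
INTO dbo.FactOrders_Keep
FROM dbo.FactOrders
WHERE OrderDateKey >= 20060101;
-- Index the new table, then swap names so it replaces the original
CREATE CLUSTERED INDEX CIX_FactOrders_Keep ON dbo.FactOrders_Keep (OrderDateKey);
EXEC sp_rename 'dbo.FactOrders', 'FactOrders_Old';
EXEC sp_rename 'dbo.FactOrders_Keep', 'FactOrders';
-- Drop dbo.FactOrders_Old once the new table has been verified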
9 - Manage statistics manually
Statistics on partitioned tables are maintained for the table as a whole.
Manually update statistics on large fact tables after loading new data.
Manually update statistics after rebuilding an index on a partition.
If you regularly update statistics after periodic loads, you may turn off
autostats on that table.
This is important for optimizing queries that may need to read only the newest
data.
Updating statistics on small dimension tables after incremental loads may also help performance. Use the FULLSCAN option when updating statistics on dimension tables for more accurate query plans.
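For example, a sketch assuming the hypothetical FactSales and DimProduct tables above:
-- Refresh statistics on the large fact table after each periodic load
UPDATE STATISTICS dbo.FactSales;
-- Optionally turn off automatic statistics updates for the fact table
EXEC sp_autostats 'dbo.FactSales', 'OFF';
-- Use a full scan on small dimension tables for more accurate plans
UPDATE STATISTICS dbo.DimProduct WITH FULLSCAN;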
10 - Consider efficient backup strategies
Backing up the entire database may take a significant amount of time for a very large database. For example, backing up a 2 TB database to a 10-spindle RAID-5 disk on a SAN may take about 2 hours (at a rate of 275 MB/sec).
Snapshot backup using SAN technology is a very good option.
Reduce the volume of data to back up regularly:
The filegroups for the historical partitions can be marked as READ ONLY.
Perform a filegroup backup once when a filegroup becomes read-only.
Perform regular backups only on the read / write filegroups.
Note that RESTOREs of the read-only filegroups cannot be performed in parallel.
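A sketch of this approach, using a hypothetical SalesDW database, an FG2005 historical filegroup, and an illustrative backup path:
-- Mark the historical filegroup read-only and back it up once
ALTER DATABASE SalesDW MODIFY FILEGROUP FG2005 READ_ONLY;
BACKUP DATABASE SalesDW FILEGROUP = 'FG2005' TO DISK = 'X:\backup\SalesDW_FG2005.bak';
-- Regular backups then cover only the read / write filegroups
BACKUP DATABASE SalesDW READ_WRITE_FILEGROUPS TO DISK = 'X:\backup\SalesDW_rw.bak';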
Reference
MSDN Blogs
SQLCAT
