Antonios Chatzipavlis
DATA SOLUTIONS CONSULTANT
TRAINER
Data Virtualization
using Polybase
Athens, Jan 30, 2021
SQL Night #44
1988 2000 2010
Antonios Chatzipavlis
Data Solutions Consultant, Trainer
v 6.0 60 + Founder
A community for
professionals who
use the
Microsoft Data
Platform
Articles Webinars Videos Presentations Events Resources News
./c/sqlschool.gr Sqlschool.gr
Group
@antoniosch
@sqlschool
SQLschool.gr
UG & Page
Connect Explore Learn
Please mute
your mic
• Overview
• Installing and Configuring Polybase
• Data Virtualization using Polybase
• DMVs and Polybase
• Performance and Troubleshooting
Presentation
Content
Proliferation of Data Platform technologies
What is Data Virtualization?
What is Polybase?
Data Virtualization using Polybase
Overview
Connect / Explore / Learn
Proliferation of Data Platform technologies
The problem: a massively increasing
amount of data, spread across a growing
number of technologies beyond the classic RDBMS
A modern take on the classic problem of ETL.
Data appears to come from one source system while, under the
covers, links point to where the data really lives.
End user or analyst:
• Can read this data using one SQL dialect.
• Can join structured data sets from different systems without needing to
know the source of each data set.
• Has no dependency on database developers building ETL flows to move
data from one system to the next.
What is
Data Virtualization?
PolyBase has been available since 2010.
Generally available in SQL Server 2016.
PolyBase's original purpose was to integrate SQL Server with Hadoop by allowing us to
run MapReduce jobs against a remote Hadoop cluster and bring the
results back into SQL Server, reducing the computational burden on our
relatively more expensive SQL Server instances.
PolyBase in SQL Server 2019 has grown and adapted to this era of data
virtualization and gives us the ability to integrate with a variety of source
systems: a Hadoop cluster, Azure Blob Storage, other SQL Server instances,
Oracle databases, Teradata, MongoDB, Cosmos DB, an Apache Spark cluster,
Apache Hive tables, and even Microsoft Excel.
The best part is that developers need only T-SQL.
PolyBase is no panacea, and there are trade-offs compared to storing all data
natively in one source system, particularly around performance.
What is Polybase?
Feature Selection
Polybase Configuration
Java Installation
Polybase Services and Accounts
Firewall Rules
Data Virtualization using Polybase
Installing and Configuring Polybase
Feature Selection
Polybase
Configuration
Scale-out group rules
Each machine hosting SQL Server must be part of the same Active Directory domain.
You must use the same Active Directory service account for each installation of the
PolyBase Engine and PolyBase Data Movement services.
Each machine hosting SQL Server must be able to communicate with all other Scale-Out
Group members in close physical proximity and on the same network, avoiding
geographically distributed servers and communications through the Internet.
Each SQL Server instance must be running the same major version of SQL Server.
PolyBase services are machine-level rather than instance-level services.
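After the feature is installed, PolyBase still has to be enabled at the instance level. A minimal sketch (both options are real sp_configure settings; a restart of the SQL Server and PolyBase services is generally required afterward, and the 'hadoop connectivity' value shown is just one of the documented options):

```sql
-- Enable PolyBase on this instance (SQL Server 2019)
EXEC sp_configure @configname = 'polybase enabled', @configvalue = 1;
RECONFIGURE;

-- Optional: set the Hadoop connectivity level when targeting
-- Hadoop or Azure Blob Storage (7 = Hortonworks HDP 2.x / blob storage)
EXEC sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
RECONFIGURE;
```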
Java Installation
Polybase Services
and Accounts
Setup Complete
Firewall Rules
Polybase
Configuration
Connecting to Azure Blob Storage
Connecting to SQL Server
Data Virtualization using Polybase
Connecting to
Azure Blob Storage
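The demo script itself is not included in the deck, but a hedged sketch of what connecting to Azure Blob Storage with PolyBase looks like follows. All names, the container, and the storage account are placeholders, and a database master key is assumed to already exist:

```sql
-- Database-scoped credential holding the storage account key (placeholder values)
CREATE DATABASE SCOPED CREDENTIAL AzureBlobCredential
WITH IDENTITY = 'user', SECRET = '<storage-account-key>';

-- External data source pointing at a blob container
CREATE EXTERNAL DATA SOURCE AzureBlob WITH
(
    TYPE = HADOOP,
    LOCATION = 'wasbs://<container>@<account>.blob.core.windows.net',
    CREDENTIAL = AzureBlobCredential
);

-- File format describing the CSV files in the container
CREATE EXTERNAL FILE FORMAT CsvFileFormat WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', STRING_DELIMITER = '"', USE_TYPE_DEFAULT = TRUE)
);

-- External table over the files; the columns are illustrative
CREATE EXTERNAL TABLE dbo.PersonBlob
(
    PersonID INT NOT NULL,
    FirstName NVARCHAR(50),
    LastName NVARCHAR(50)
)
WITH
(
    LOCATION = '/people/',
    DATA_SOURCE = AzureBlob,
    FILE_FORMAT = CsvFileFormat
);
```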
Connecting to
SQL Server
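A hedged sketch of the SQL Server-to-SQL Server case (PolyBase V2, SQL Server 2019). The instance name SQLCONTROL and the remote table PolyBaseRevealed.dbo.Person come from the speaker notes; the login and column list are placeholders, and a database master key is assumed to already exist:

```sql
-- Credential with a SQL login valid on the remote instance (placeholder values)
CREATE DATABASE SCOPED CREDENTIAL SqlCredential
WITH IDENTITY = 'PolybaseUser', SECRET = '<password>';

-- External data source for another SQL Server instance; no file format is
-- needed for V2 relational sources
CREATE EXTERNAL DATA SOURCE SqlControl WITH
(
    LOCATION = 'sqlserver://SQLCONTROL',
    CREDENTIAL = SqlCredential
);

-- External table mapped to the remote table
CREATE EXTERNAL TABLE dbo.RemotePerson
(
    PersonID INT NOT NULL,
    FirstName NVARCHAR(50),
    LastName NVARCHAR(50)
)
WITH
(
    LOCATION = 'PolyBaseRevealed.dbo.Person',
    DATA_SOURCE = SqlControl
);
```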
Polybase
Vs.
Linked Servers
                     PolyBase External Table           Linked Server
Object scope         Database level, focusing          Instance level
                     on a single table
Operational intent   Read-only                         Read and write
Scale-out            Able to use Scale-Out Groups      No scale-out capabilities
Expected data size   Large tables with analytic        OLTP-style workloads querying
                     workloads                         a small number of rows
Metadata DMVs
Service and Node Resources DMVs
Data Movement Service DMVs
Troubleshooting Queries DMVs
Data Virtualization using Polybase
Polybase DMVs
use PolybaseDemo;
select * from sys.external_data_sources;
select * from sys.external_file_formats;
select * from sys.external_tables;
go
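The speaker notes point out that sys.external_tables carries the external data source and file format IDs, so the three metadata views can be tied together; a sketch:

```sql
-- Tie the three metadata views together via the IDs on sys.external_tables
select t.name  as external_table,
       ds.name as data_source,
       ff.name as file_format
from sys.external_tables t
    join sys.external_data_sources ds
        on t.data_source_id = ds.data_source_id
    left join sys.external_file_formats ff
        on t.file_format_id = ff.file_format_id; -- ID is 0 for PolyBase V2 tables
```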
Metadata DMVs
use master;
select * from sys.dm_exec_compute_nodes;
select * from sys.dm_exec_compute_node_status;
select * from sys.dm_exec_compute_node_errors;
go
Service and
Node Resources
DMVs
use master;
select * from sys.dm_exec_dms_services;
select * from sys.dm_exec_dms_workers;
go
Data Movement
Service DMVs
use PolybaseDemo;
select * from sys.dm_exec_external_work;
select * from sys.dm_exec_external_operations;
select * from sys.dm_exec_distributed_requests;
select * from sys.dm_exec_distributed_request_steps;
select * from sys.dm_exec_distributed_sql_requests;
go
Troubleshooting
Queries DMVs
Polybase DMVs
Statistics on External Tables
Predicate Pushdown
Polybase Log Files
Data Issues
Data Virtualization using Polybase
Performance and Troubleshooting
Statistics on
External Tables
• Fundamentally the same as statistics on regular
tables
• Because the data lives outside of SQL Server:
 We cannot automatically create or maintain statistics against external
tables.
 We can create statistics from 100% of the data (the default) or from a sample of the
data.
 Disk space is needed during statistics creation because all the data from the
external table is streamed into a temporary table.
• Performance impact
 External statistics can make a difference when they help the optimizer
decide whether to push down a predicate or reorder joins to other tables,
not in full scans.
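A minimal sketch of the two creation modes described above (the table and column names are illustrative):

```sql
-- Sampled statistics: cheaper, but the external data is still
-- streamed into a temporary table during creation
CREATE STATISTICS s_RemotePerson_LastName
    ON dbo.RemotePerson (LastName) WITH SAMPLE 25 PERCENT;

-- Full-scan statistics: the default, reads 100% of the external data
CREATE STATISTICS s_RemotePerson_PersonID
    ON dbo.RemotePerson (PersonID) WITH FULLSCAN;
```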
Predicate
Pushdown
• Pushdown computation improves the performance of
queries on external data sources.
• In SQL Server 2019 it is available for Hadoop, Oracle,
Teradata, MongoDB, generic ODBC types, and SQL Server.
• SQL Server allows the following basic expressions and
operators for predicate pushdown:
 Binary comparison operators (<, >, =, !=, <>, >=, <=) for numeric, date,
and time values.
 Arithmetic operators (+, -, *, /, %).
 Logical operators (AND, OR).
 Unary operators (NOT, IS NULL, IS NOT NULL).
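Pushdown can also be steered per query with real T-SQL hints; a sketch against a placeholder external table (FORCE EXTERNALPUSHDOWN applies to Hadoop-backed sources):

```sql
-- Force pushdown of the WHERE clause to the external source
SELECT *
FROM dbo.SomeExternalTable
WHERE PersonID < 1000
OPTION (FORCE EXTERNALPUSHDOWN);

-- Or disable pushdown to compare plans and performance
SELECT *
FROM dbo.SomeExternalTable
WHERE PersonID < 1000
OPTION (DISABLE EXTERNALPUSHDOWN);
```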
Predicate Pushdown
Located at
%PROGRAMFILES%\Microsoft SQL Server\MSSQL##.MSSQLSERVER\MSSQL\Log\Polybase
Polybase Log Files
Data Issues
• Structural
• Unsupported characters
• Date formats
• Limitations
 The maximum possible row size (including the full length of variable-length
columns) can't exceed 32 KB in SQL Server or 1 MB in Azure Synapse Analytics.
 Text-heavy columns might be limited.
Any questions?
Thank you!
Antonios Chatzipavlis
Data Solutions Consultant & Trainer
• @antoniosch
• @sqlschool
twitter
• ./sqlschoolgr
• ./groups/sqlschool
Facebook
• ./c/SqlschoolGr
YouTube
• SQLschool.gr
• Group
LinkedIn
Editor's Notes
  • #6: Hello and welcome to another SQL Night. I am Antonios Chatzipavlis, a Data Solutions Consultant and Trainer. I have been in the Information Technology industry since 1988, an MCT since 2000, and a Microsoft Data Platform MVP since 2010. I have been using SQL Server since version 6.0, which means I have more than 25 years of experience with this product in large-scale environments. I hold more than 60 (sixty) certifications, mostly in Microsoft products. Finally, I am the founder of SQLschool.gr.
  • #8: SQLschool.gr is a community for Greek professionals who use the Microsoft Data Platform. In it you will find articles, webinars, videos, resources, and news about the Microsoft Data Platform. You can join us as a member or follow us on social media to keep up with our community. This year SQLschool.gr turned 10 years old, and I would like to thank you all for your participation and support.
  • #17: There are two components selected aside from Database Engine Services: the PolyBase Query Service for External Data and the Java connector for HDFS data sources. The Java connector for HDFS data sources provides us support for connecting to Hadoop and Azure Blob Storage, which were the two endpoints available with PolyBase in SQL Server 2016 and SQL Server 2017; I refer to this throughout the book as PolyBase V1. SQL Server 2019 also adds the PolyBase Query Service for External Data component, which includes support for services like Oracle, Teradata, MongoDB, Cosmos DB, and even other SQL Server instances. In order to install this component, SQL Server’s installer will also install the Microsoft Visual C++ 2017 Redistributable.
  • #18: Before you begin installation, it is important to know whether you want to install PolyBase as a standalone service or as part of a Scale-Out Group because you will not be able to switch between the two afterward without uninstalling and reinstalling the PolyBase features. If you are using SQL Server on Linux, the only option available to you at this time is to install standalone; SQL Server on Windows allows for both installation methods. All other things equal, a Scale-Out Group is preferable to a standalone installation. The reason for this is that PolyBase is a Massively Parallel Processing (MPP) technology. This means we can scale PolyBase horizontally, improving performance by adding additional servers. But that only works if you incorporate your machine as part of a Scale-Out Group, however; as a standalone installation, your SQL Server instance will not be able to enlist the support of other SQL Server instances when using PolyBase to perform queries. The preceding text makes sense when all other things are equal, but installing PolyBase as part of a Scale-Out Group has some requirements which standalone PolyBase does not. To wit, in order to install PolyBase as part of a Scale-Out Group, all of the following must be true:
  • #19: The first option is to install the Azul Zulu Open JRE. This is a distribution of Oracle’s Open Java Runtime Environment which Azul Systems supports. Your license for SQL Server includes support for this particular distribution of Open JRE, meaning that you could contact Microsoft support for issues related to the JRE. The link on the installation page includes more information on this licensing agreement. If you are already a licensed Oracle Standard Edition (SE) customer, you can of course install the Oracle SE version of the Java Runtime Environment. To do so, select the “Provide the location of a different version that has been installed to on this computer” option and navigate to your already-installed version of the Java Runtime Environment. SQL Server 2016 and 2017 supported JRE version 7 update 51 and later, as well as JRE version 8. SQL Server 2019 supports later versions of the Java Runtime Environment, including version 11. If you are not a licensed Oracle SE customer, you can also install Oracle’s Open JRE. The downside to this is that your support options are limited to public forum access.
  • #23: Configuration.sql
  • #25: PolybaseBlob.sql
  • #26: PolybaseSQL.sql
  • #27: Linked servers are a classic technique database administrators and developers can use to query another server’s data from the local server. On the plus side, there is extensive OLEDB driver support, and linked servers can reach out to technologies like Oracle, Apache Hive, other SQL Server instances, and even Excel. On the minus side, linked servers have an oft-deserved reputation for bringing over too much data from the remote server during queries and a somewhat undeserved reputation for being a security issue. Still, introducing the idea of an alternative for linked servers should excite many a DBA. Here is where I have mixed news for you: PolyBase can be superior to linked servers in some circumstances, but you will not want to replace all of your linked servers with external tables, as there are some cases where linked servers will be superior. Instead, think of these as two complementary technologies with considerable overlap.
Object Scopes: Linked servers are scoped at the instance level, which means that when you create a linked server, any database on that instance has access to the linked server. Furthermore, on the remote side, linked servers allow you to query any table or view on any database where the remote login has rights. The advantage to the linked server model is its flexibility: you can use linked servers for any number of queries across an indefinite number of remote tables or views. The biggest disadvantage of this approach is that it promotes the idea that perhaps you ought to make that cross-server join of two very large tables. By contrast, PolyBase requires more deliberation: a database administrator or developer needs to create the external table link on a table-by-table or view-by-view basis before anybody can use it. This additional effort should make the creator think about whether a cross-server link is really necessary and can provide a bit of extra documentation about which tables the staff intend to use for cross-server queries. The downside to this is, if you have a large number of tables to query, it means writing a large number of external table definitions and also maintaining these definitions across table changes. This makes PolyBase a better choice for more stable data models and linked servers for more dynamic data models.
Operational Intent: Linked servers allow for reads as well as inserts, updates, and deletes. With PolyBase V1, we were able to read and insert but could not update or delete data. For the PolyBase V2 types, we are able to read but the engine prohibits any data modification, including inserts. If you attempt a data modification statement against a PolyBase V2 external table, you will get an error message similar to Msg 46519 – DML Operations are not supported with external tables.
Scale-Out Capabilities: Linked servers offer no ability to scale out. One SQL Server instance may read from one SQL Server instance. If you experience performance problems, there is no way to add additional SQL Server instances to the mix to share the load. PolyBase, meanwhile, offers Scale-Out Groups for cases when three or four servers are better than one. In this regard, PolyBase is strictly superior.
Data Sizes: Tying in with scale-out capabilities, linked servers and PolyBase have different expectations for ideal data size. If you intend to pull back one row or a few rows from a small table, linked servers will generally be a superior option because there are fewer moving parts. As you get more complicated queries with larger data sets, PolyBase tends to do at least as well and often better. Over the rest of this chapter, we will test the performance of PolyBase vs. linked servers in several scenarios to see when PolyBase succeeds and when linked servers come out ahead.
  • #28: There are 13 Dynamic Management Views available in SQL Server 2019 which relate to PolyBase. In this section, we will review each of these at a high level, starting with basic metadata resources, followed by the DMVs which help with service and node setup, and finishing with DMVs for query troubleshooting.
  • #29: External Data Sources Returns one row per external data source External File Formats Shows each of the most important settings for an external file External Tables Inherits several columns from sys.objects. Contains PolyBase-specific columns. External table The useful external table columns include external data source and external file format IDs, allowing us to tie these three tables together. For PolyBase V2 tables, the file format ID will be 0, as we do not use external file formats for these data sources.
  • #30: Compute Nodes returns one row for the head node and one row for each PolyBase compute node, including the server name and port, as well as its IP address. If you have a standalone installation of PolyBase, you will get two rows back: one for the head and one for the local instance’s compute node. If you are using a scale-out cluster, you will get back the two rows in a standalone installation as well as one row for each scale-out compute node you have in the cluster. The sys.dm_exec_compute_node_status DMV connects to each compute node in order to determine if it is available. It retrieves server-level information such as allocated and available memory (in bytes), process and total CPU utilization (in ticks), the last communication time per node, and the latest error to have occurred as well. Figure 10-5 shows an example of some of the columns in this DMV. When it comes to errors, however, we can see the value of all of the columns while on-premises by querying sys.dm_exec_compute_node_errors. This DMV holds a history of error messages and is a good place to look when troubleshooting failures on a system. In addition to its unique ID data type, the data in sys.dm_exec_compute_node_errors will persist even after we restart the SQL Server services. Most Dynamic Management Views—for example, wait stat measures—reset when the database engine restarts, but compute node errors will stick around.
  • #31: The first of these is sys.dm_exec_dms_services. This view returns one row per compute node, including one row for the head instance's compute node, and the status for each of these nodes. Figure 10-7 shows the output of this DMV. We can also see the outputs of data movement service operations using the sys.dm_exec_dms_workers DMV. This gives us one row for each execution ID and execution step, with performance measures such as bytes and rows processed, total elapsed time, CPU time, and more. To clear up potential confusion, the total elapsed time and query time values are in milliseconds, whereas CPU time is in ticks, where 10,000 ticks add up to a millisecond. Therefore, to get a consistent measure across the board, we want to divide the CPU time column by 10,000 to see how much CPU time we are actually using in relation to total elapsed time. In addition to these measures, we are also able to see the source SQL query for these operations, as well as the error ID if an operation fails. Unlike the compute node errors DMV, the DMS workers Dynamic Management View resets every time you restart the PolyBase engine service.
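The tick-to-millisecond conversion can be applied directly in the query; the sketch below divides cpu_time by 10,000 so it can be compared against total_elapsed_time:

```sql
-- cpu_time is reported in ticks (10,000 ticks = 1 ms); convert it so that
-- CPU time and elapsed time share the same unit.
SELECT execution_id, step_index, type, status,
       rows_processed, bytes_processed,
       total_elapsed_time AS elapsed_ms,
       cpu_time / 10000.0  AS cpu_ms
FROM sys.dm_exec_dms_workers
ORDER BY execution_id, step_index;
```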
  • #32: The final five Dynamic Management Views help us learn more about the SQL queries users run on our instances. Like the server and node resource DMVs we just looked at, these are all instance-level DMVs, meaning we will get the same results when running in any database. These views break down into two types, based on their names: external work and distributed requests. The external work results reset each time we restart the PolyBase engine, whereas the distributed requests DMVs persist even after service restarts. First up in our set of views is sys.dm_exec_external_work. This Dynamic Management View returns one row for each of the last 1000 PolyBase queries we have run since the last time the PolyBase engine started, as well as any active queries currently running. This DMV contains information on the current status of each execution, including the latest step for each compute node and Data Movement Service step. We can see the type of operation, which is “File Split” for PolyBase V1 queries and “ODBC Data Split” for PolyBase V2 queries. The input name tells us which file, folder, or table we are reading—for the SQL Server example on the first line, the input name is sqlserver://sqlcontrol/PolyBaseRevealed.dbo.Person. If we are reading from a file, the read_location field gives us the starting offset from 0 bytes. In the three cases in Figure 10-9, we read the file starting from the beginning. We can see the actual ODBC command next in the read_command column, which is a new field for SQL Server 2019. Finally, there are some columns containing top-level metrics, including bytes processed, file length (when reading files), start and end dates, the total elapsed time in milliseconds, and the status of each request. This status will be one of the following values: Pending, Processing, Done, Failed, or Aborted. 
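A quick way to review this history is to query the DMV directly; the sketch below pulls the top-level metrics described above, most recent first:

```sql
-- Inspect the most recent external work items (last 1000 since engine start).
SELECT execution_id, type, input_name, read_location,
       bytes_processed, total_elapsed_time, status
FROM sys.dm_exec_external_work
ORDER BY start_time DESC;
```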
If you perform a predicate pushdown operation against a Hadoop cluster, the sys.dm_exec_external_operations Dynamic Management View will give you a rundown of these pushdown operations. Figure 10-10 shows an example of a pushdown MapReduce job which failed; we can see that the map and reduce progress values are both at 0%. The sys.dm_exec_distributed_requests view returns one row per distributed operation. It provides us with one extremely helpful piece of information: a SQL handle, which we can use to return the query text or an execution plan for our PolyBase queries. Figure 10-11 shows several rows from this view, including QID2260, which failed in the prior figure. The sys.dm_exec_distributed_request_steps view returns one row per execution ID and step. It is particularly useful when you already know an execution ID and want to understand what happened at each step along the way. Figure 10-12 gives us a glimpse at some of the most important columns here. The sys.dm_exec_distributed_sql_requests Dynamic Management View is our final DMV of note. It contains one row per SQL-related step on each compute node and for each distribution. Figure 10-13 shows an example of this for execution ID QID2148. This view makes clear the distributed nature of PolyBase: each distributed request step has eight separate SPIDs running on a single compute node. As with the distributed request steps DMV, we will look at this DMV in greater detail next.
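Because sys.dm_exec_distributed_requests exposes a SQL handle, we can feed it to sys.dm_exec_sql_text to recover the query text. A minimal sketch, using the QID2260 execution from the figure as a placeholder:

```sql
-- Retrieve the query text behind a distributed request via its SQL handle.
SELECT dr.execution_id, dr.status, dr.total_elapsed_time, t.text
FROM sys.dm_exec_distributed_requests dr
    CROSS APPLY sys.dm_exec_sql_text(dr.sql_handle) t
WHERE dr.execution_id = 'QID2260';  -- substitute the execution ID under investigation
```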
  • #33: The first step creates the name of our temp table, TEMP_ID_XX. This appears to be an incrementing value, and the operation runs on the head node. The second step has us create a temporary table named TEMP_ID_XX on each compute node. The shape of this table is the set of columns that we will need for our query: population type, year, and population. The third step adds an extended property named IS_EXTERNAL_STREAMING_TABLE to each of the temp tables, presumably to make it easier to track which temp tables are used for loading external data. Step 4 runs a statistics update, telling SQL Server that we expect the temp table to have 566 rows. Our fifth step (i.e., step index 4) runs on the head node once more and is a MultiStreamOperation. There is no official documentation on this step, but it takes up 848 of the 919 total milliseconds of elapsed time and appears to be the operation which causes our compute nodes to do work. From there, we see a HadoopShuffleOperation on the Data Movement Service. This returns all 13,607 rows in the population table. We can see from the cleaned-up query in Listing 10-3 that this is a simple query of all rows from our population table. While we shuffle data across our compute nodes' Data Movement Services, the next step, a StreamingReturnOperation, runs. We can tell these are running concurrently because the shuffle operation takes 845 milliseconds and the streaming return operation 804 milliseconds, yet our entire query finished in under a second. This streaming query, which again runs on each of the compute nodes, queries TEMP_ID_73 and performs the aggregation we requested. Of interest is the fact that this query does not follow exactly the same shape as the one we sent to the database engine.
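To reproduce a walk-through like this for your own queries, you can pull the steps for a known execution ID; QID2148 below is a placeholder taken from the earlier example:

```sql
-- Walk through each step of a known execution, in order.
SELECT step_index, operation_type, location_type, status,
       total_elapsed_time, row_count, command
FROM sys.dm_exec_distributed_request_steps
WHERE execution_id = 'QID2148'   -- substitute the execution ID you are investigating
ORDER BY step_index;
```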
  • #38: PolyBase log files are located in C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Log\Polybase.
DMS Errors: The DMS error log gives stack traces when an exception occurs in the data movement service. One of the more common errors you might find when reading through this log is System.Data.SqlClient.SqlException: Operation cancelled by user. This exception occurs when a user or application stops a query, such as when a user hits the "Stop" button in Azure Data Studio. You can safely ignore this error. This log tends to give you a high-level view of when errors occur but little information on the root cause or even the specific error. One of the more common errors I tend to see here is Internal Query Processor Error: The query processor encountered an unexpected error during the processing of a remote query phase. That phrase will not help me diagnose the problem, but the log does tend to include information like the query ID and plan ID, which I can use to figure out which queries are failing.
DMS Movement: The data movement service writes a good amount of information to the DMS Movement log, including detailed information on what data moves over from Azure Blob Storage or Hadoop to SQL Server. This includes the SQL queries the PolyBase data movement service generates, configuration settings such as the number of readers the DMS will use to migrate data, and detailed operations at each step. Combined with the DMS error log, we can start to piece together our errors.
DWEngine Errors: Like the DMS error log, the DWEngine error log gives a higher-level overview of when errors occur, as well as stack traces. This file can help you pinpoint when an error occurs, and the errors in it tend to be a bit more descriptive than the ones in the DMS error log.
For example, we can find errors relating to the maximum reject threshold in this file: Query aborted-- the maximum reject threshold (1 rows) was reached while reading from an external source: 2 rows rejected out of total 2 rows processed.
DWEngine Movement: This log provides us with more detail on the queries and errors which the DWEngine error log captures. In some cases, this file has enough information to identify the root cause. In Figure 5-3, we see an example of a clear error message where I defined a column in an ORC file as a string data type but am trying to use an integer data type to access it via PolyBase.
DWEngine Server: The DWEngine Server log contains a few pieces of useful information. One of the most useful is that it contains the create statements for external data sources, file formats, and tables. We can use this log to determine what our external resources looked like at the time of an exception, just in case somebody changed one of them during troubleshooting. This log also contains information on failed external table access attempts. If you have firewall or connection problems, this should be your first log to review. Figure 5-4 shows an example of a common HDFS bridge error whose root cause is insufficient permissions granted to the PolyBase pdw_user account.
DMS PolyBase: The DMS PolyBase log shows us something extremely important: any data translation failure. Figure 5-5 gives us three examples of data translation errors, including column conversion errors, data length errors, and string delimiter errors. We can also find cases where values are NULL but the external table requires a non-nullable field, invalid date conversion attempts, and more.
DWEngine PolyBase: This file is much less interesting than most of the other logs. In my work, I have not seen it stretch to more than a few lines, and the most interesting thing in it is the location of new Hadoop clusters as you create external data sources.
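The reject threshold in that error message comes from the external table definition. A hypothetical sketch (the data source and file format names are assumed to exist already) that fails a query as soon as more than one row is rejected:

```sql
-- Hypothetical external table: queries against it abort once more than
-- REJECT_VALUE rows fail to parse, producing the error shown above.
CREATE EXTERNAL TABLE dbo.PersonExternal
(
    PersonID   INT,
    PersonName NVARCHAR(100)
)
WITH
(
    LOCATION = '/PolyBaseData/Person/',
    DATA_SOURCE = HadoopDataSource,   -- assumed existing external data source
    FILE_FORMAT = CsvFileFormat,      -- assumed existing external file format
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 1
);
```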
  • #39: Structural Mismatch: The first common data problem is structural mismatch, that is, when you define your external table one way but the data does not conform to that structure. For example, you might define an external table as having eight columns, but the underlying data set has seven or nine columns. In that case, the PolyBase engine will reject rows because they do not fit the expected structure. Caution: in production Hadoop systems, developers are liable to change the structure of files and leave old files as is. For example, a report with eight columns might suddenly populate with nine columns on a certain date. The PolyBase engine cannot support multiple data structures for the same external table and will reject at least one of the two structures, which might cause a previously working external table query to fail suddenly and unexpectedly. Aside from column counts, several other mismatch problems can cause queries to fail. For example, text files might have different schemas or delimiters: one type might be comma-delimited and another pipe-delimited. Some text files might use the quotation mark as a string delimiter, and others might use brackets or tildes. Any lack of consistency will cause the PolyBase engine to fail processing. If you do run into this scenario, an easy solution is to create several external tables, one for each distinct file structure, and use a view to combine them as one logical unit.
Unsupported Characters or Formats: PolyBase supports only a limited number of date formats. The safest route is to limit your text file dates to supported formats, which you can find on Microsoft Docs (https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-file-format-transact-sql). PolyBase also struggles with newlines in text fields, so strip those out before trying to load data.
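The several-tables-plus-view workaround might look like the following hypothetical sketch, where the old layout has two columns and the new layout adds a Region column (all object names are illustrative):

```sql
-- Hypothetical: one external table per distinct file structure.
CREATE EXTERNAL TABLE dbo.SalesOld
(
    SaleDate DATE,
    Amount   DECIMAL(10, 2)
)
WITH (LOCATION = '/Sales/old/', DATA_SOURCE = HadoopDataSource, FILE_FORMAT = CsvFileFormat);

CREATE EXTERNAL TABLE dbo.SalesNew
(
    SaleDate DATE,
    Amount   DECIMAL(10, 2),
    Region   NVARCHAR(50)
)
WITH (LOCATION = '/Sales/new/', DATA_SOURCE = HadoopDataSource, FILE_FORMAT = CsvFileFormat);
GO

-- A view presents the two structures as one logical unit.
CREATE VIEW dbo.Sales AS
    SELECT SaleDate, Amount, CAST(NULL AS NVARCHAR(50)) AS Region FROM dbo.SalesOld
    UNION ALL
    SELECT SaleDate, Amount, Region FROM dbo.SalesNew;
```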
Even within a quoted delimiter, newlines will cause the PolyBase engine to think it is starting a new record.
PolyBase Data Limitations: PolyBase also has limits to what data it can support. From Microsoft Docs (https://docs.microsoft.com/en-us/sql/relational-databases/polybase/polybase-versioned-feature-summary), the maximum possible row size, which includes the full length of variable-length columns, cannot exceed 32 KB in SQL Server or 1 MB in Azure Synapse Analytics. In addition, when data is exported into an ORC file format from SQL Server or Azure Synapse Analytics, text-heavy columns might be limited to as few as 50 columns because of Java out-of-memory error messages; to work around this issue, export only a subset of the columns, or keep text-heavy files as delimited files rather than ORC files. PolyBase can't connect to a Hortonworks instance if Knox is enabled. And if you use Hive tables with transactional = true, PolyBase can't access the data in the Hive table's directory.