Bridging Oracle Database and Hadoop
Alex Gorbachev, Pythian
Oracle OpenWorld IOUG Forum, October 2015
Alex Gorbachev
• Chief Technology Officer at Pythian
• Blogger
• Cloudera Champion of Big Data
• OakTable Network member
• Oracle ACE Director
• Founder of BattleAgainstAnyGuess.com
• Founder of Sydney Oracle Meetup
• EVP, IOUG
What is Big Data?
and why Big Data today?
Why Big Data boom now?
• Advances in communication – anyone can now transfer
large amounts of data economically from virtually
anywhere
• Commodity hardware – high performance and high
capacity are available at low prices
• Commodity software – the open-source phenomenon
has made advanced software affordable to anyone
• New data sources – mobile, sensors, social media
• What was once possible only at very high cost can
now be done by any business, small or large
Big Data = Affordable at Scale
Not everyone is
Facebook, Google, or Yahoo
These guys had to
push the envelope
because traditional
technology didn’t
scale
Mere mortals’ challenge
is cost and agility
System capability per $
Big Data technology
may be expensive at low
scale due to high
engineering effort.
Traditional technology
becomes too complex
and expensive to scale.
(Chart: capabilities vs. investment in $, comparing the
traditional and Big Data curves.)
What is Hadoop?
Hadoop Design Principle #1
Scalable, Affordable, Reliable Data Store
HDFS – Hadoop Distributed Filesystem
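A quick sketch of what "scalable, affordable, reliable data store" looks like in day-to-day use — the paths and file names below are made up for illustration, and the commands assume a running cluster:

```shell
# Copy a local file into HDFS; it is split into blocks and each
# block is replicated (3x by default) across the data nodes.
hdfs dfs -mkdir -p /data/weblogs
hdfs dfs -put access_log.txt /data/weblogs/

# List the files; the second column shows the replication factor.
hdfs dfs -ls /data/weblogs
```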
Hadoop Design Principle #2
Bring Code to Data
(Diagram: the code is shipped to the nodes where the data lives.)
Why is Hadoop so affordable?
• Cheap hardware
• Resiliency through software
• Horizontal scalability
• Open-source software
How much does it cost?
Oracle Big Data Appliance
X5-2 rack - $525K list price
• 18 data nodes
• 648 CPU cores
• 2.3 TB RAM
• 216 x 4TB disks
• 864TB of raw disk capacity
• 288TB usable (triple
mirror)
• 40G InfiniBand + 10GbE
networking
• Cloudera Enterprise
Hadoop is very flexible
• Rich ecosystem of tools
• Can handle any data format
– Relational
– Text
– Audio, video
– Streaming data
– Logs
– Non-relational structured data (JSON, XML, binary
formats)
– Graph data
• Not limited to relational data processing
Challenges with Hadoop
for those of us used to Oracle
• New data access tools
– Relational and non-relational data
• Non-Oracle (and non-ANSI) Hive SQL
– Java-based UDFs and UDAFs
• Security features are not there out-of-the-box
• May be slow for “small data”
Tables in Hadoop
using Hadoop with relational data abstractions
Apache Hive
• Apache Hive provides a SQL layer over Hadoop
– data in HDFS (structured or unstructured via SerDe)
– using one of the distributed processing frameworks:
MapReduce, Spark, or Tez
• Presents data from HDFS as tables and columns
– Hive metastore (aka data dictionary)
• SQL language access (HiveQL)
– Parses SQL and creates execution plans in MR, Spark or
Tez
• JDBC and ODBC drivers
– Access from ETL and BI tools
– Custom apps
– Development tools
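A minimal HiveQL sketch of the ideas above — the table name, columns, and HDFS path are invented for illustration:

```sql
-- Schema-on-read: the table is just metadata in the Hive
-- metastore; the data stays as plain files in HDFS.
CREATE EXTERNAL TABLE page_views (
  view_time  TIMESTAMP,
  user_id    BIGINT,
  url        STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/page_views';

-- HiveQL is parsed into a MapReduce, Spark, or Tez plan.
SELECT url, COUNT(*) AS views
FROM page_views
GROUP BY url
ORDER BY views DESC
LIMIT 10;
```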
Native Hadoop tools
• Demo
• HUE
– HDFS files
– Hive
– Impala
Access Hive using SQL Developer
• Demo
• Use Cloudera JDBC drivers
• Query data & browse metadata
• Run DDL from SQL tab
• Create Hive table definitions inside Oracle DB
Hadoop and OBIEE 11g
• OBIEE 11.1.1.7 can query Hive/Hadoop as a
data source
– Hive ODBC drivers
– Apache Hive Physical Layer database type
• Limited features
– OBIEE 11.1.1.7 ships only HiveServer1 ODBC
drivers
– HiveQL is only a subset of ANSI SQL
• Hive query response times are too slow for
speed-of-thought analysis
ODI 12c
• ODI – data transformation tool
– ELT approach pushes transformations down to
Hadoop, leveraging the power of the cluster
– Hive, HBase, Sqoop and OLH/ODCH KMs provide
native Hadoop loading / transformation
• Upcoming support for Pig and Spark
• Workflow orchestration
• Metadata and model-driven
• GUI workflow design
• Transformation audit & data quality
Moving Data to Hadoop using ODI
• Interface with Apache Sqoop using the IKM SQL to
Hive-HBase-File knowledge module
– Hadoop ecosystem tool
– Able to run in parallel
– Optimized Sqoop JDBC driver integration for Oracle
– Bi-directional, in and out of Hadoop to RDBMS
– Data is moved directly between the Hadoop cluster
and the database
• Export RDBMS data to a file and load it using IKM
File to Hive
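Under the covers, the knowledge module drives Apache Sqoop; a hand-run equivalent might look like the following (the connection string, schema, and table names are placeholders):

```shell
# Pull an Oracle table into Hive in parallel (8 map tasks);
# Sqoop generates the MapReduce job that performs the copy.
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username scott -P \
  --table SCOTT.ORDERS \
  --hive-import --hive-table orders \
  --num-mappers 8
```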
Integrating Hadoop with Oracle
Database
Oracle Big Data Connectors
• Oracle Loader for Hadoop
– Offloads some pre-processing to Hadoop MR jobs (data
type conversion, partitioning, sorting).
– Direct load into the database (online method)
– Data Pump binary files in HDFS (offline method)
• These can then be accessed as external tables on
HDFS
• Oracle Direct Connector for HDFS
– Creates external tables over files in HDFS
– Text files or Data Pump binary files
– WARNING: lots of data movement! Best suited to
archiving infrequently accessed data to HDFS
Oracle Big Data SQL
Source: http://www.slideshare.net/gwenshap/data-wrangling-and-oracle-connectors-for-hadoop
Oracle Big Data SQL
• Transparent access from Oracle DB
to Hadoop
– Oracle SQL dialect
– Oracle DB security model
– Join data from Hadoop and Oracle
• SmartScan - pushing code to data
– Same software base as on Exadata
Storage Cells
– Minimize data transfer from Hadoop to
Oracle
• Requires BDA and Exadata
• Licensed per Hadoop disk spindle
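As a hedged sketch (table and column names are illustrative, and the exact access parameters depend on the Big Data SQL release), a Hive table can be exposed to Oracle via the ORACLE_HIVE external-table driver and then joined with ordinary Oracle tables:

```sql
-- External table in Oracle pointing at a Hive table on the BDA.
CREATE TABLE web_logs_ext (
  log_time    TIMESTAMP,
  customer_id NUMBER,
  url         VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=web_logs)
);

-- The join runs in Oracle; SmartScan pushes filtering down to
-- the Hadoop nodes to minimize data transfer.
SELECT c.name, COUNT(*)
FROM   web_logs_ext l JOIN customers c ON c.id = l.customer_id
WHERE  l.log_time > SYSDATE - 7
GROUP  BY c.name;
```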
Big Data SQL Demo
Big Data SQL in Oracle tools
• Transparent to any app
• SQL Developer
• ODI
• OBIEE
Hadoop as Data Warehouse
Traditional Needs of Data Warehouses
• Speed-of-thought end-user analytics experience
– BI tools coupled with DW databases
• Scalable data platform
– DW database
• Versatile and scalable data transformation
engine
– ETL tools sometimes coupled with DW databases
• Data quality control and audit
– ETL tools
What drives Hadoop adoption for
Data Warehousing?
1. Cost efficiency
2. Agility needs
Why is Hadoop Cost Efficient?
Hadoop leverages two main trends in the IT
industry
• Commodity hardware – high performance and
high capacity are available at low prices
• Commodity software – the open-source
phenomenon has made advanced software
affordable to anyone
How Does Hadoop Enable Agility?
• Load first, structure later
– No need to spend months restructuring the DW to
add new types of data that may or may not prove
valuable to end users
– Quick and easy to verify a hypothesis – a perfect
data exploration platform
• All data in one place is very powerful
– Much easier to test new theories
• Natural fit for “unstructured” data
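"Load first, structure later" in practice: raw JSON files can be dropped into HDFS as-is and described afterwards. The SerDe class below is the one shipped with Hive's HCatalog; the field names and path are invented:

```sql
-- No upfront modelling: the files were loaded untouched, and
-- this definition imposes structure only at query time.
CREATE EXTERNAL TABLE raw_events (
  event_type STRING,
  device_id  STRING,
  payload    MAP<STRING, STRING>
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/landing/events';

SELECT event_type, COUNT(*)
FROM raw_events
GROUP BY event_type;
```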
Traditional needs of DW & Hadoop
• Speed-of-thought end-user analytics experience?
– Very recent features – Impala, Presto, Drill, Hadapt, etc.
– BI tools embracing Hadoop as DW
– Totally new products become available
• Scalable data platform?
– Yes
• Versatile and scalable data transformation engine?
– Yes but needs a lot of DIY
– ETL vendors embraced Hadoop
• Data quality control and audit?
– Hadoop makes this harder because of the flexibility
it brings
– Still a lot of DIY, but ETL vendors are getting better
at supporting Hadoop, and new products keep appearing
Unique Hadoop Challenges
• Still a “young” technology
– requires a lot of high-quality engineering talent
• Security doesn’t come out of the box
– Capabilities are there but very tedious to implement
and somewhat fragile
• Challenge of selecting the right tool for the job
– Hadoop ecosystem is huge
• Hadoop breaks IT silos
• Requires commoditization of IT operations
– Large footprint with agile deployments
Typical Hadoop adoption in modern
Enterprise IT
(Diagram: Hadoop alongside the Data Warehouse, both feeding BI tools.)
Bring the world into your data center
Rare historical report
Find a needle in a haystack
Will Hadoop displace traditional DW
platforms?
(Diagram: BI tools running directly on Hadoop.)
Example pure Hadoop DW stack
• Storage: HDFS
• Batch processing: Hive / Pig
• Interactive SQL: Impala
• Ingestion: Sqoop, Flume, plus DIY for other
data sources
• Security: Kerberos
• Orchestration: Oozie + DIY
Do you have a Big Data
problem?
Your Data
is NOT
as BIG
as you think
is NOT a Big Data problem
Using 8-year-old hardware…
is NOT a Big Data problem
Misconfigured infrastructure…
is NOT a Big Data problem
Lack of purging policy…
is NOT a Big Data problem
Bad data model design…
is NOT a Big Data problem
Bad SQL…
Your Data
is NOT
as BIG
as you think
Controversy…
Thanks and Q&A
Contact info
gorbachev@pythian.com
+1-877-PYTHIAN
To follow us
pythian.com/blog
@alexgorbachev
@pythian
linkedin.com/company/pythian


Editor's Notes

  • #26: WHERE clause evaluation; column projection; Bloom filters for better join performance; JSON parsing; data mining model evaluation
  • #40: There is a lot of interesting data that is not generated by your company: listings of businesses in specific locations, connections in social media. The data may be unstructured, semi-structured or even structured – but it isn't structured the way your DWH expects and needs. We need a landing pad for cleanup, pre-processing, aggregating, filtering and structuring, and Hadoop is perfect for this: mappers can scrape data from websites efficiently, MapReduce jobs clean up and process the data, and the results are then loaded into the DWH.
  • #41: We want the top 3 items bought by left-handed women between the ages of 41 and 43, on November 15, 1998. How long will it take you to answer this question? For one of my customers, the answer is 25 minutes. As data grows older it usually becomes less valuable to the business, so it gets aggregated and shelved off to tape or other cheap storage. This means that for many organizations, answering detailed questions about events more than a few months old is impossible, or at least very challenging. The business learned never to ask those questions, because the answer is “you can't”. Hadoop combines cheap storage and massive processing power, which lets us store the detailed history of the business and report on it. Once the answer to questions about history is “you will have your data in 25 minutes” instead of “impossible”, those questions turn out to be less rare than we assumed.
  • #42: 7 petabytes of log file data; 3 lines point to the security hole that allowed a break-in last week. Your DWH has aggregated information from the logs – maybe. Hadoop is very cost-effective at storing data: lots of cheap disks, and it is easy to throw data in without pre-processing and search it when you need it.
  • #46: Bad schema design is not big data. Using 8-year-old hardware is not big data. Not having a purging policy is not big data. Not configuring your database and operating system correctly is not big data. Poor data filtering is not big data either. Keep the data you need and use, in a way that you can actually use it. If doing this requires cutting-edge technology, excellent! But don't tell me you need NoSQL because you don't purge data and have un-optimized PL/SQL running on 10-year-old hardware.