Big Data Processing
Using Cloudera Quickstart
with a Docker Container
July 2016
Dr.Thanachart Numnonda
IMC Institute
thanachart@imcinstitute.com
Modified from an original version by Danairat T.
Certified Java Programmer, TOGAF – Silver
danairat@gmail.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Outline
● Launch an AWS EC2 instance
● Install Docker on Ubuntu
● Pull Cloudera QuickStart into the Docker container
● HDFS
● HBase
● MapReduce
● Hive
● Pig
● Impala
● Sqoop
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Cloudera VM
This lab will use an EC2 virtual server on AWS to install
Cloudera. However, you can also use the Cloudera QuickStart VM,
which can be downloaded from:
http://www.cloudera.com/content/www/en-us/downloads.html
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Launch a virtual server
on Amazon Web Services EC2
(Note: you can skip this section if you use your own
computer or another cloud service)
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Virtual Server
This lab will use an EC2 virtual server to install a
Cloudera cluster with the following features:
Ubuntu Server 14.04 LTS
Four m3.xlarge instances: 4 vCPU, 15 GB memory, 80 GB SSD
Security group: default
Keypair: imchadoop
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Select the EC2 service and click on Launch Instance
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Select an Amazon Machine Image (AMI):
Ubuntu Server 14.04 LTS (PV)
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Choose the m3.xlarge instance type
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Add Storage: 80 GB
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Name the instance
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Select an existing security group > default
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Click Launch and choose imchadoop as a key pair
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Review the instance and rename it as master; click
Connect for instructions on how to connect to the instance
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Connect to an instance from Mac/Linux
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
You can also view details of the instance, such as the Public
IP and Private IP
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Connect to an instance from Windows using Putty
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Connect to the instance
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Installing Cloudera
Quickstart on Docker Container
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Installation Steps
● Update OS
● Install Docker
● Pull Cloudera Quickstart
● Run Cloudera Quickstart
● Run Cloudera Manager
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Update OS (Ubuntu)
● Command: sudo apt-get update
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Docker Installation
● Command: sudo apt-get install docker.io
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Pull Cloudera Quickstart
● Command: sudo docker pull cloudera/quickstart:latest
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Show docker images
● Command: sudo docker images
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Run Cloudera Quickstart
● Command: sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i [OPTIONS] [IMAGE] /usr/bin/docker-quickstart
Example: sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888:8888 cloudera/quickstart /usr/bin/docker-quickstart
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Finding the EC2 instance's DNS
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Login to Hue
http://ec2-54-173-154-79.compute-1.amazonaws.com:8888
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hadoop File System (HDFS)
Dr.Thanachart Numnonda
IMC Institute
thanachart@imcinstitute.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
HDFS
● Default storage for the Hadoop cluster
● Data is distributed and replicated over multiple machines
● Designed to handle very large files with streaming data access patterns
● NameNode/DataNode
● Master/slave architecture (1 master, 'n' slaves)
● Designed for large files (64 MB default block size, but configurable) across all the nodes
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
HDFS Architecture
Source Hadoop: Shashwat Shriparv
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Data Replication in HDFS
Source Hadoop: Shashwat Shriparv
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
How does HDFS work?
Source Introduction to Apache Hadoop-Pig: PrashantKommireddi
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Importing/Exporting
Data to HDFS
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Review files in Hadoop HDFS using
File Browser
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create new directories named: input & output
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Upload a local file to HDFS
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Connect to a master node
via SSH
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
SSH Login to a master node
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hadoop syntax for HDFS
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Install wget
● Command: yum install wget
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Download an example text file
Make your own directory on the master node to avoid mixing with others
$mkdir guest1
$cd guest1
$wget https://s3.amazonaws.com/imcbucket/input/pg2600.txt
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Upload Data to Hadoop
$hadoop fs -ls /user/cloudera/input
$hadoop fs -rm /user/cloudera/input/*
$hadoop fs -put pg2600.txt /user/cloudera/input/
$hadoop fs -ls /user/cloudera/input
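To read the file back out of HDFS (the export direction of this hands-on), the standard hadoop fs commands can be used; the local copy name below is only an illustration:
$hadoop fs -cat /user/cloudera/input/pg2600.txt | head
$hadoop fs -get /user/cloudera/input/pg2600.txt pg2600_copy.txt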
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Lecture
Understanding HBase
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
An open source, non-relational, distributed database
HBase is an open source, non-relational, distributed database
modeled after Google's BigTable and is written in Java. It is
developed as part of Apache Software Foundation's Apache
Hadoop project and runs on top of HDFS, providing
BigTable-like capabilities for Hadoop. That is, it provides a
fault-tolerant way of storing large quantities of sparse data.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
HBase Features
● Hadoop database modelled after Google's Bigtable
● Column-oriented data store, known as the Hadoop Database
● Supports random realtime CRUD operations (unlike HDFS)
● NoSQL database
● Open source, written in Java
● Runs on a cluster of commodity hardware
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
When to use HBase?
● When you need high volume data to be stored
● Un-structured data
● Sparse data
● Column-oriented data
● Versioned data (same data template, captured at various times; time-lapse data)
● When you need high scalability
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Which one to use?
● HDFS
  – Only append dataset (no random write)
  – Read the whole dataset (no random read)
● HBase
  – Need random write and/or read
  – Thousands of operations per second on TB+ of data
● RDBMS
  – Data fits on one big node
  – Need full transaction support
  – Need real-time query capabilities
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
HBase Components
● Region
  – Rows of a table are stored
● Region Server
  – Hosts the tables
● Master
  – Coordinates the Region Servers
● ZooKeeper
● HDFS
● API
  – The Java Client API
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
HBase Shell Commands
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Running HBase
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
HBase shell
$hbase shell
hbase(main):001:0> create 'employee', 'personal data',
'professional data'
hbase(main):002:0> list
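A minimal sketch of inserting and reading a row in the table just created; the row key and column values below are made-up examples:
hbase(main):003:0> put 'employee', 'row1', 'personal data:name', 'Somchai'
hbase(main):004:0> put 'employee', 'row1', 'professional data:position', 'Engineer'
hbase(main):005:0> get 'employee', 'row1'
hbase(main):006:0> scan 'employee'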
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create Data
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running HBase Browser
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Viewing Employee Table
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create a table in HBase
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Insert a new row in a table
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Add field into a new row
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Lecture: Understanding Map Reduce
Processing
[Diagram: a Client submits MapReduce jobs to the NameNode/JobTracker, which distributes tasks to DataNode/TaskTracker nodes]
thanachart@imcinstitute.com70
Hadoop Ecosystem
thanachart@imcinstitute.com71
Source: The evolution and future of Hadoop storage: Cloudera
thanachart@imcinstitute.com72
Before MapReduce…
● Large scale data processing was difficult!
  – Managing hundreds or thousands of processors
  – Managing parallelization and distribution
  – I/O scheduling
  – Status and monitoring
  – Fault/crash tolerance
● MapReduce provides all of these, easily!
Source: http://labs.google.com/papers/mapreduce-osdi04-slides/index-auto-0002.html
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
MapReduce Framework
Source: www.bigdatauniversity.com
thanachart@imcinstitute.com74
How Map and Reduce Work Together
● Map returns information
● Reduce accepts information
● Reduce applies a user defined function to reduce the amount of data
thanachart@imcinstitute.com75
Map Abstraction
● Inputs a key/value pair
  – Key is a reference to the input value
  – Value is the data set on which to operate
● Evaluation
  – Function defined by user
  – Applies to every value in the value input
● Might need to parse input
● Produces a new list of key/value pairs
  – Can be a different type from the input pair
thanachart@imcinstitute.com76
Reduce Abstraction
● Starts with intermediate key/value pairs
● Ends with finalized key/value pairs
● Starting pairs are sorted by key
● Iterator supplies the values for a given key to the Reduce function
thanachart@imcinstitute.com77
Reduce Abstraction
● Typically a function that:
  – Starts with a large number of key/value pairs
    ● One key/value for each word in all files being grepped (including multiple entries for the same word)
  – Ends with very few key/value pairs
    ● One key/value for each unique word across all the files, with the number of instances summed into this entry
● Broken up so a given worker works with input of the same key
thanachart@imcinstitute.com78
Why is this approach better?
● Creates an abstraction for dealing with complex overhead
  – The computations are simple, the overhead is messy
● Removing the overhead makes programs much smaller and thus easier to use
  – Less testing is required as well. The MapReduce libraries can be assumed to work properly, so only user code needs to be tested
● Division of labor is also handled by the MapReduce libraries, so programmers only need to focus on the actual computation
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Writing your own
MapReduce Program
thanachart@imcinstitute.com80
Example MapReduce: WordCount
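The WordCount source is shown only as a screenshot in the original slides; the following is a minimal sketch of a mapper/reducer pair in the classic org.myorg.WordCount style (the jar actually used in the hands-on may differ in detail):

package org.myorg;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the input line
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));     // e.g. /user/cloudera/input/*
    FileOutputFormat.setOutputPath(job, new Path(args[1]));   // e.g. /user/cloudera/output/wordcount
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}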
thanachart@imcinstitute.com81
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running Map Reduce Program
$cd /root/guest1
$wget https://dl.dropboxusercontent.com/u/12655380/wordcount.jar
$hadoop jar wordcount.jar org.myorg.WordCount
/user/cloudera/input/* /user/cloudera/output/wordcount
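Once the job finishes, the output can also be inspected from the command line (in addition to Hue); the part-file name follows the usual MapReduce output convention:
$hadoop fs -ls /user/cloudera/output/wordcount
$hadoop fs -cat /user/cloudera/output/wordcount/part-* | head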
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Reviewing MapReduce Job in Hue
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Reviewing MapReduce Job in Hue
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Reviewing MapReduce Output Result
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Reviewing MapReduce Output Result
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Lecture
Understanding Oozie
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
Workflow scheduler for Hadoop
Oozie is a workflow scheduler system to manage Apache
Hadoop jobs. Oozie is integrated with the rest of the Hadoop
stack supporting several types of Hadoop jobs out of the box
(such as Java map-reduce, Streaming map-reduce, Pig, Hive,
Sqoop and Distcp) as well as system specific jobs (such as
Java programs and shell scripts).
thanachart@imcinstitute.com89
What is Oozie?
● Workflow scheduler for Hadoop
● Manages Hadoop jobs
● Integrated with many Hadoop apps, e.g. Pig, Hive
● Scalable
● Schedules jobs
● A workflow is a collection of actions
● A workflow is
  – Arranged as a DAG (directed acyclic graph)
  – Stored as hPDL (XML process definition); see the sketch below
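As a rough illustration of the hPDL format, a minimal workflow with a single Java action (such as the WordCount job used later in this workshop) could look like the sketch below; Hue's workflow editor generates this XML for you, and element details may vary by Oozie version:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="wordcount-wf">
  <start to="wordcount-java"/>
  <action name="wordcount-java">
    <java>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <main-class>org.myorg.WordCount</main-class>
      <arg>/user/cloudera/input/*</arg>
      <arg>/user/cloudera/output/wordcount</arg>
    </java>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Java action failed</message>
  </kill>
  <end name="end"/>
</workflow-app>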
thanachart@imcinstitute.com90
Oozie Architecture
Source: info@semtech-solutions.co.nz
thanachart@imcinstitute.com91
Oozie Server
Source: Oozie – Now and Beyond, Yahoo, 2013
thanachart@imcinstitute.com92
Layer of Abstraction in Oozie
Source: Oozie – Now and Beyond, Yahoo, 2013
thanachart@imcinstitute.com93
Workflow Example: Data Analytics
● Logs => fact table(s)
● Database backup => dimension tables
● Complete rollups/cubes
● Load data into a low-latency storage (e.g. HBase, HDFS)
● Dashboard & BI tools
Source: Workflow Engines for Hadoop, Joe Crobak, 2013
thanachart@imcinstitute.com94
Workflow Example: Data Analytics
Source: Workflow Engines for Hadoop, Joe Crobak, 2013
thanachart@imcinstitute.com95
Workflow Example: Data Analytics
● What happens if there is a failure?
  – Rebuild the failed day
  – .. and any downstream datasets
● With a Hadoop workflow
  – Possibly OK to skip a day
  – Workflows tend to be self-contained, so you do not need to run downstream
  – Sanity check your data before pushing to production
Source: Workflow Engines for Hadoop, Joe Crobak, 2013
thanachart@imcinstitute.com96
Oozie Workflow
Source: Oozie – Now and Beyond, Yahoo, 2013
thanachart@imcinstitute.com97
Oozie Use Cases
● Time Triggers
  – Execute your workflow every 15 minutes
● Time and Data Triggers
  – Materialize your workflow every hour, but only run it when the input data is ready (that is, loaded to the grid every hour)
● Rolling Window
  – Access 15-minute datasets and roll them up into hourly datasets
Source: Oozie – Now and Beyond, Yahoo, 2013
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Running Map Reduce
using Oozie workflow
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Using Hue: select Workflows >> Editors >>
Workflows
thanachart@imcinstitute.com100
Create a new workflow
● Click the Create button; the following screen will be displayed
● Name the workflow WordCountWorkflow
thanachart@imcinstitute.com101
thanachart@imcinstitute.com102
Select a Java job for the workflow
● From the Oozie editor, drag Java Program and drop it between start and end
thanachart@imcinstitute.com103
Edit the Java Job
● Assign the following values:
  – Jar name: wordcount.jar (select …, choose upload from local machine)
  – Main Class: org.myorg.WordCount
  – Arguments: /user/cloudera/input/* /user/cloudera/output/wordcount
thanachart@imcinstitute.com104
Submit the workflow
● Click Done, followed by Save
● Then click Submit
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
A Petabyte-Scale Data Warehouse Using Hadoop
Hive was developed by Facebook and is designed to enable easy data summarization, ad-hoc querying and analysis of large volumes of data. It provides a simple query language called HiveQL, which is based on SQL.
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
What Hive is NOT
Hive is not designed for online transaction processing and
does not offer real-time queries or row-level updates. It is
best used for batch jobs over large sets of immutable data
(such as web logs).
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Sample HiveQL
The Query compiler uses the information stored in the metastore to
convert SQL queries into a sequence of map/reduce jobs, e.g. the
following query
SELECT * FROM t where t.c = 'xyz'
SELECT t1.c2 FROM t1 JOIN t2 ON (t1.c1 = t2.c1)
SELECT t1.c1, count(1) from t1 group by t1.c1
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
System Architecture and Components
Metastore: To store the meta data.
Query compiler and execution engine: To convert SQL queries to a
sequence of map/reduce jobs that are then executed on Hadoop.
SerDe and ObjectInspectors: Programmable interfaces and
implementations of common data formats and types.
A SerDe is a combination of a Serializer and a Deserializer (hence, Ser-De). The Deserializer interface takes a string or
binary representation of a record, and translates it into a Java object that Hive can manipulate. The Serializer, however,
will take a Java object that Hive has been working with, and turn it into something that Hive can write to HDFS or
another supported system.
UDF and UDAF: Programmable interfaces and implementations for
user defined functions (scalar and aggregate functions).
Clients: Command line client similar to Mysql command line.
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Architecture Overview
[Diagram: Hive architecture – the Hive CLI and Mgmt. Web UI send queries, browsing and DDL requests to Hive (HiveQL parser, planner, execution engine, SerDe for Thrift/Jute/JSON, Thrift API, Metastore), which runs MapReduce jobs over HDFS]
Hive.apache.org
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Hive Metastore
Hive Metastore is a repository that keeps all Hive
metadata: table and partition definitions.
By default, Hive stores its metadata in Derby DB.
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Hive Built in Functions
BIGINT round(double a): returns the rounded BIGINT value of the double
BIGINT floor(double a): returns the maximum BIGINT value that is equal to or less than the double
BIGINT ceil(double a): returns the minimum BIGINT value that is equal to or greater than the double
double rand(), rand(int seed): returns a random number (that changes from row to row); specifying the seed makes the generated random number sequence deterministic
string concat(string A, string B, ...): returns the string resulting from concatenating B after A, e.g. concat('foo', 'bar') results in 'foobar'; accepts an arbitrary number of arguments
string substr(string A, int start): returns the substring of A from the start position to the end of A, e.g. substr('foobar', 4) results in 'bar'
string substr(string A, int start, int length): returns the substring of A from the start position with the given length, e.g. substr('foobar', 4, 2) results in 'ba'
string upper(string A): converts all characters of A to upper case, e.g. upper('fOoBaR') results in 'FOOBAR'
string ucase(string A): same as upper
string lower(string A): converts all characters of A to lower case, e.g. lower('fOoBaR') results in 'foobar'
string lcase(string A): same as lower
string trim(string A): trims spaces from both ends of A, e.g. trim(' foobar ') results in 'foobar'
string ltrim(string A): trims spaces from the beginning (left-hand side) of A, e.g. ltrim(' foobar ') results in 'foobar '
string rtrim(string A): trims spaces from the end (right-hand side) of A, e.g. rtrim(' foobar ') results in ' foobar'
string regexp_replace(string A, string B, string C): replaces all substrings in A that match the Java regular expression B with C, e.g. regexp_replace('foobar', 'oo|ar', '') returns 'fb'
string from_unixtime(int unixtime): converts the number of seconds from the Unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone, in the format "1970-01-01 00:00:00"
string to_date(string timestamp): returns the date part of a timestamp string: to_date("1970-01-01 00:00:00") = "1970-01-01"
int year(string date): returns the year part of a date or timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970
int month(string date): returns the month part of a date or timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11
int day(string date): returns the day part of a date or timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1
string get_json_object(string json_string, string path): extracts a JSON object from a JSON string based on the specified JSON path and returns the JSON string of the extracted object; returns null if the input JSON string is invalid
hive.apache.org
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Hive Aggregate Functions
BIGINT count(*), count(expr), count(DISTINCT expr[, expr_.]): count(*) returns the total number of retrieved rows, including rows containing NULL values; count(expr) returns the number of rows for which the supplied expression is non-NULL; count(DISTINCT expr[, expr]) returns the number of rows for which the supplied expression(s) are unique and non-NULL
DOUBLE sum(col), sum(DISTINCT col): returns the sum of the elements in the group or the sum of the distinct values of the column in the group
DOUBLE avg(col), avg(DISTINCT col): returns the average of the elements in the group or the average of the distinct values of the column in the group
DOUBLE min(col): returns the minimum value of the column in the group
DOUBLE max(col): returns the maximum value of the column in the group
hive.apache.org
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Running Hive
Hive Shell
Interactive: hive
Script: hive -f myscript
Inline: hive -e 'SELECT * FROM mytable'
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Hive Commands
Source: hortonworks.com
: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Hive Tables
● Managed: CREATE TABLE
  – LOAD: file moved into Hive's data warehouse directory
  – DROP: both data and metadata are deleted
● External: CREATE EXTERNAL TABLE
  – LOAD: no file moved
  – DROP: only metadata deleted
  – Use when sharing data between Hive and Hadoop applications, or when you want to use multiple schemas on the same data
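A short illustration of the two table types; the table and column names below are made up for this example:
hive> CREATE TABLE managed_tbl (id INT, country STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
hive> CREATE EXTERNAL TABLE external_tbl (id INT, country STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      LOCATION '/user/cloudera/input/';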
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Hive External Table
Dropping an External Table using Hive:
– Hive will delete the metadata from the metastore
– Hive will NOT delete the HDFS file
– You need to manually delete the HDFS file
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
Java JDBC for Hive
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;
 
public class HiveJdbcClient {
  private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";
 
  public static void main(String[] args) throws SQLException {
    try {
      Class.forName(driverName);
    } catch (ClassNotFoundException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
      System.exit(1);
    }
    Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
    Statement stmt = con.createStatement();
    String tableName = "testHiveDriverTable";
    stmt.executeQuery("drop table " + tableName);
    ResultSet res = stmt.executeQuery("create table " + tableName + " (key int, value string)");
    // show tables
    String sql = "show tables '" + tableName + "'";
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    if (res.next()) {
      System.out.println(res.getString(1));
    }
    // describe table
    sql = "describe " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1) + "\t" + res.getString(2));
    }
  }
}
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
HiveQL and MySQL Comparison
Source: hortonworks.com
Danairat T., danairat@gmail.com: Thanachart N., thanachart@imcinstitute.com April 2015Big Data Hadoop Workshop
HiveQL and MySQL Query Comparison
Source: hortonworks.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Loading Data using Hive
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Start Hive
Quit from Hive:
hive> quit;
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
See also: https://cwiki.apache.org/Hive/languagemanual-ddl.html
Create Hive Table
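The exact statement used in the workshop is shown only as a screenshot; a plausible version, consistent with the test_tbl columns used on the following slides, would be:
hive> CREATE TABLE test_tbl (id INT, country STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      STORED AS TEXTFILE;
hive> describe test_tbl;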
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Reviewing Hive Table in HDFS
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Alter and Drop Hive Table
hive> alter table test_tbl add columns (remarks STRING);
hive> describe test_tbl;
OK
id int
country string
remarks string
Time taken: 0.077 seconds
hive> drop table test_tbl;
OK
Time taken: 0.9 seconds
See also: https://cwiki.apache.org/Hive/adminmanual-metastoreadmin.html
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Preparing Large Dataset
http://grouplens.org/datasets/movielens/
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
MovieLens Dataset
1) Type command > wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
2) Type command > yum install unzip
3) Type command > unzip ml-100k.zip
4) Type command > more ml-100k/u.user
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Moving dataset to HDFS
1)Type command > cd ml-100k
2)Type command > hadoop fs -mkdir /user/cloudera/movielens
3)Type command > hadoop fs -put u.user /user/cloudera/movielens
4)Type command > hadoop fs -ls /user/cloudera/movielens
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
CREATE & SELECT Table
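The CREATE and SELECT statements appear only as a screenshot; a sketch along these lines would work for the u.user file, assuming its usual pipe-delimited layout (user id | age | gender | occupation | zip code):
hive> CREATE EXTERNAL TABLE users (userid INT, age INT, gender STRING, occupation STRING, zipcode STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
      STORED AS TEXTFILE
      LOCATION '/user/cloudera/movielens';
hive> SELECT occupation, COUNT(*) FROM users GROUP BY occupation;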
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Bay Area Bike Share (BABS)
http://www.bayareabikeshare.com/open-data
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Preparing a bike data
$wget https://s3.amazonaws.com/babs-open-data/babs_open_data_year_1.zip
$unzip babs_open_data_year_1.zip
$cd 201402_babs_open_data/
$hadoop fs -put 201402_trip_data.csv /user/cloudera
$hadoop fs -ls /user/cloudera
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Importing CSV Data with the Metastore App
The BABS data set contains 4 CSV files with data for
stations, trips, rebalancing (availability), and weather. We will
import the trips dataset using Metastore Tables.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Select: Create a new table from a file
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Name a table and select a file
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Choose Delimiter
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Define Column Types
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create Table : Done
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Starting Hive Editor
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Find the top 10 most popular start stations
based on the trip data
SELECT startterminal, startstation, COUNT(1) AS count FROM trip
GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
A high-level platform for creating MapReduce programs using Hadoop
Pig is a platform for analyzing large data sets that consists of
a high-level language for expressing data analysis programs,
coupled with infrastructure for evaluating these programs.
The salient property of Pig programs is that their structure is
amenable to substantial parallelization, which in turn enables
them to handle very large data sets.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Pig Components
● Two Components
  – Language (Pig Latin)
  – Compiler
● Two Execution Environments
  – Local: pig -x local
  – Distributed: pig -x mapreduce
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running Pig
● Script: pig myscript
● Command line (Grunt): pig
● Embedded: writing a Java program
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Pig Latin
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Pig Execution Stages
Source: Introduction to Apache Hadoop-Pig, Prashant Kommireddi
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Why Pig?
● Makes writing Hadoop jobs easier
  – 5% of the code, 5% of the time
  – You don't need to be a programmer to write Pig scripts
● Provides major functionality required for data warehousing and analytics
  – Load, Filter, Join, Group By, Order, Transform
● Users can write custom UDFs (User Defined Functions)
Source: Introduction to Apache Hadoop-Pig, Prashant Kommireddi
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Pig vs. Hive
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Running a Pig script
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Starting Pig Command Line
$ pig -x mapreduce
2013-08-01 10:29:00,027 [main] INFO org.apache.pig.Main - Apache Pig
version 0.11.1 (r1459641) compiled Mar 22 2013, 02:13:53
2013-08-01 10:29:00,027 [main] INFO org.apache.pig.Main - Logging error
messages to: /home/hdadmin/pig_1375327740024.log
2013-08-01 10:29:00,066 [main] INFO org.apache.pig.impl.util.Utils -
Default bootup file /home/hdadmin/.pigbootup not found
2013-08-01 10:29:00,212 [main] INFO
org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting
to hadoop file system at: file:///
grunt>
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Writing a Pig Script for wordcount
A = load '/user/cloudera/input/*';
B = foreach A generate flatten(TOKENIZE((chararray)$0)) as word;
C = group B by word;
D = foreach C generate COUNT(B), group;
store D into '/user/cloudera/output/wordcountPig';
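These statements can be typed one by one at the grunt> prompt, or saved to a file (for example wordcount.pig, a name chosen here only for illustration) and run as a batch job:
$ pig -x mapreduce wordcount.pig
$ hadoop fs -cat /user/cloudera/output/wordcountPig/part-* | head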
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Impala
Dr.Thanachart Numnonda
IMC Institute
thanachart@imcinstitute.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
open source massively parallel processing (MPP) SQL query engine
Cloudera Impala is a query engine that runs on Apache
Hadoop. Impala brings scalable parallel database technology
to Hadoop, enabling users to issue low-latency SQL queries to
data stored in HDFS and Apache HBase without requiring data
movement or transformation.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
What is Impala?
General-purpose SQL engine
Real-time queries in Apache Hadoop
Open source under the Apache License
Runs directly within Hadoop
High performance
– C++ instead of Java
– Runtime code generator
– Roughly 4-100x the speed of Hive
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Impala Overview
Impala daemons run on HDFS nodes
Statestore (for cluster metadata) vs. Metastore (for database metadata)
Queries run on "relevant" nodes
Supports common HDFS file formats
Submit queries via Hue/Beeswax
Not fault tolerant
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Impala Architecture
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Start Impala Query Editor
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Update the list of tables/metadata by executing
the command: invalidate metadata
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Restart Impala Query Editor and refresh the
table list
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Find the top 10 most popular start stations
based on the trip data: Using Impala
SELECT startterminal, startstation, COUNT(1) AS count FROM trip
GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Find the total number of trips and average duration
(in minutes) of those trips, grouped by hour
SELECT
  hour,
  COUNT(1) AS trips,
  ROUND(AVG(duration) / 60) AS avg_duration
FROM (
  SELECT
    CAST(SPLIT(SPLIT(t.startdate, ' ')[1], ':')[0] AS INT) AS hour,
    t.duration AS duration
  FROM `bikeshare`.`trips` t
  WHERE
    t.startterminal = 70
    AND t.duration IS NOT NULL
) r
GROUP BY hour
ORDER BY hour ASC;
Apache Sqoop
Dr.Thanachart Numnonda
IMC Institute
thanachart@imcinstitute.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
Sqoop (“SQL-to-Hadoop”) is a straightforward command-line tool with the following capabilities:
● Imports individual tables or entire databases to files in HDFS
● Generates Java classes to allow you to interact with your imported data
● Provides the ability to import from SQL databases straight into your Hive data warehouse
See also: http://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Architecture Overview
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Sqoop Benefit
Leverages RDBMS metadata to get the column data
types
It is simple to script and uses SQL
It can be used to handle change data capture by
importing daily transactional data to Hadoop
It uses MapReduce for export and import that enables
parallel and efficient data movement
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Sqoop Mode
Sqoop import: Data moves from RDBMS to Hadoop
Sqoop export: Data moves from Hadoop to RDBMS
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Use Case #1: ETL for Data Warehouse
Source: Mastering Apache Sqoop, David Yahalom, 2016
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Use Case #2: ELT
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Use Case #3: Data Analysis
Source: Mastering Apache Sqoop, David Yahalom, 2016
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Use Case #4: Data Archival
Source: Mastering Apache Sqoop, David Yahalom, 2016
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Use Case #5: Data Consolidation
Source: Mastering Apache Sqoop, David Yahalom, 2016
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Use Case #6: Move reports to Hadoop
Source: Mastering Apache Sqoop, David Yahalom, 2016
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Import Commands
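The command reference on this slide is a screenshot; the general shape of an import, using placeholder connection details, is:
sqoop import --connect jdbc:mysql://<host>/<database> --username <user> --password <password> --table <table> --target-dir /user/cloudera/<dir> -m 1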
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Architecture of the import process
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Incremental import
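The incremental options are also shown as a screenshot; a typical append-mode incremental import (the check column and last value below are illustrative) looks like:
sqoop import --connect jdbc:mysql://<host>/<database> --username <user> --password <password> --table <table> --target-dir /user/cloudera/<dir> --incremental append --check-column id --last-value 100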
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Export Commands
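Similarly, an export from HDFS back into an RDBMS table generally takes this form (paths and names are placeholders):
sqoop export --connect jdbc:mysql://<host>/<database> --username <user> --password <password> --table <table> --export-dir /user/cloudera/<dir> -m 1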
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Loading Data from RDBMS
to Hadoop
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running MySQL Docker
● Command: sudo docker pull mysql
● Command: sudo docker run --name imcMysql -e MYSQL_ROOT_PASSWORD=imcinstitute -p 3306:3306 -d mysql
● Command: sudo docker exec -it imcMysql bash
root@f1922a70e09c:/# mysql -uroot -p"imcinstitute"
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Prepare a test database table
mysql> CREATE DATABASE imc_db;
mysql> USE imc_db;
mysql> CREATE TABLE country_tbl(id INT NOT NULL, country
VARCHAR(50), PRIMARY KEY (id));
mysql> INSERT INTO country_tbl VALUES(1, 'USA');
mysql> INSERT INTO country_tbl VALUES(2, 'CANADA');
mysql> INSERT INTO country_tbl VALUES(3, 'Mexico');
mysql> INSERT INTO country_tbl VALUES(4, 'Brazil');
mysql> INSERT INTO country_tbl VALUES(61, 'Japan');
mysql> INSERT INTO country_tbl VALUES(65, 'Singapore');
mysql> INSERT INTO country_tbl VALUES(66, 'Thailand');
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
View data in the table
mysql> SELECT * FROM country_tbl;
mysql> exit;
Then exit from the container by pressing Ctrl-P & Ctrl-Q
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Restart the Cloudera docker, linking it to the MySQL Docker
● Command: sudo docker run --hostname=quickstart.cloudera --privileged=true --link imcMysql:mysqldb -t -i -p 8888:8888 cloudera/quickstart /usr/bin/docker-quickstart
If both of these Dockers are up and running, you can find out the internal IP address of each of them by running this command. This gets the IP for imcMysql.
● Command: sudo docker inspect imcMysql | grep IPAddress
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Check a MySQL driver for Sqoop
$ cd /var/lib/sqoop
$ ls
Note: If you do not see the driver file, you need to install
one by using the following command
$ wget https://s3.amazonaws.com/imcbucket/apps/mysql-connector-java-5.1.23-bin.jar
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Importing data from MySQL to HDFS
$sqoop import --connect jdbc:mysql://172.17.0.7/imc_db
--username root --password imcinstitute --table country_tbl
--target-dir /user/cloudera/testtable -m 1
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Importing data from MySQL to Hive Table
$sqoop import --connect jdbc:mysql://172.17.0.7/imc_db
--username root --password imcinstitute --table country_tbl
--hive-import --hive-table country -m 1
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Reviewing data from Hive Table
[root@quickstart /]# hive
hive> show tables;
hive> select * from country;
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running from Hue: Beeswax
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Importing data from MySQL to HBase
$sqoop import --connect jdbc:mysql://172.17.0.7/imc_db
--username root --password imcinstitute --table country_tbl
--hbase-table country --column-family hbase_country_cf --hbase-row-key
id --hbase-create-table -m 1
Start HBase
$hbase shell
hbase(main):001:0> list
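After the import, the rows can be checked from the HBase shell; the scan below is a sketch of what to expect:
hbase(main):002:0> scan 'country'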
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Viewing Hbase data
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Viewing data from Hbase browser
Apache Flume
Dr.Thanachart Numnonda
IMC Institute
thanachart@imcinstitute.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
Apache Flume is:
● A distributed data transport and aggregation system for event- or log-structured data
● Principally designed for continuous data ingestion into Hadoop… but more flexible than that
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
What is Flume?
● Apache Flume is a continuous data ingestion system that is...
  – open-source,
  – reliable,
  – scalable,
  – manageable,
  – customizable,
  – and designed for the Big Data ecosystem
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Architecture Overview
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Flume Agent
Source: Using Flume, Hari Shreedharan, 2014
● A source writes events to one or more channels.
● A channel is the holding area as events are passed from a source to a sink.
● A sink receives events from one channel only.
● An agent can have many channels.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Sources
● Different Source types:
  – Require at least one channel to function
  – Specialized sources for integrating with well-known systems. Example: Spooling Files, Syslog, Netcat, JMS
  – Auto-generating sources: Exec, SEQ
  – IPC sources for agent-to-agent communication: Avro, Thrift
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Channel
● Different Channels offer different levels of persistence:
  – Memory Channel
  – File Channel: eventually, when the agent comes back, data can be accessed
● Channels are fully transactional
● Provide weak ordering guarantees
● Can work with any number of Sources and Sinks
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Sink
● Different types of Sinks:
  – Terminal sinks that deposit events to their final destination, for example: HDFS, HBase, Morphline-Solr, Elastic Search
  – Sinks support serialization to the user's preferred formats
  – The HDFS sink supports time-based and arbitrary bucketing of data while writing to HDFS
  – IPC sinks for agent-to-agent communication: Avro, Thrift
● Require exactly one channel to function
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Flume Process
Source: Using Flume, Hari Shreedharan, 2014
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Flume Process
Source: Using Flume, Hari Shreedharan, 2014
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Flow
Source: Using Flume, Hari Shreedharan, 2014
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Flume terminology
● A source writes events to one or more channels.
● A channel is the holding area as events are passed from a source to a sink.
● A sink receives events from one channel only.
● An agent can have many channels.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Flume Agent Configuration : Example
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Flume Agent Configuration : Example
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Stream Processing Architecture
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-On: Loading Twitter Data to
Hadoop HDFS
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Exercise Overview
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Installing a pre-built version of Flume
$ wget http://files.cloudera.com/samples/flume-sources-1.0-SNAPSHOT.jar
$ sudo cp flume-sources-1.0-SNAPSHOT.jar /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/flume-ng/lib/
$ sudo cp /etc/flume-ng/conf/flume-env.sh.template /etc/flume-ng/conf/flume-env.sh
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create a new Twitter App
Login to your Twitter @ twitter.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create a new Twitter App (cont.)
Create a new Twitter App @ apps.twitter.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create a new Twitter App (cont.)
Enter all the details in the application:
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create a new Twitter App (cont.)
Your application will be created:
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create a new Twitter App (cont.)
Click on Keys and Access Tokens:
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Create a new Twitter App (cont.)
Your Access token got created:
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Add classpath in Cloudera Manager
"Services" -> "flume1" -> "Configuration" -> -> "Advanced"
-> "Java Configuration Options for Flume Agent", add:
--classpath /opt/cloudera/parcels/CDH-5.5.1-
1.cdh5.5.1.p0.11/lib/flume-ng/lib/flume-sources-1.0-
SNAPSHOT.jar
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Change the Flume Agent Name
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Configuring the Flume Agent
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Agent Configuration
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = MjpswndxVj27ylnpOoSBrnfLX
TwitterAgent.sources.Twitter.consumerSecret = QYmuBO1smD5Yc3zE0ZF9ByCgeEQxnxUmhRVCisAvPFudYVjC4a
TwitterAgent.sources.Twitter.accessToken = 921172807-EfMXJj6as2dFECDH1vDe5goyTHcxPrF1RIJozqgx
TwitterAgent.sources.Twitter.accessTokenSecret = HbpZEVip3D5j80GP21a37HxA4y10dH9BHcgEFXUNcA9xy
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Agent Configuration
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://xx.xx.xx.xx:8020/user/flume/tweets/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
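Note on the sink settings above: hdfs.rollSize = 0 disables size-based rolling, so the sink rolls a new file after every 10,000 events (hdfs.rollCount). The HDFS sink also rolls by time (hdfs.rollInterval, 30 seconds by default) unless that is explicitly set to 0, so you may still see many small files.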
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Restart Flume
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
View an agent log file
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
View an agent log file
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
View a result using Hue
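You can also confirm from the command line that events are reaching the configured HDFS path (a quick check; FlumeData is the HDFS sink's default file prefix and may differ if you changed it):
$ hadoop fs -ls /user/flume/tweets/
$ hadoop fs -cat /user/flume/tweets/FlumeData.* | head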
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Stop the agent
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
7. Analyse data using Hive
Get a SerDe JAR file for parsing JSON:
$ wget http://files.cloudera.com/samples/hive-serdes-1.0-SNAPSHOT.jar
$ mv hive-serdes-1.0-SNAPSHOT.jar /usr/local/apache-hive-1.1.0-bin/lib/
Register the JAR file:
$ hive
hive> ADD JAR /usr/local/apache-hive-1.1.0-bin/lib/hive-serdes-1.0-SNAPSHOT.jar;
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Analyse data using Hive (cont.)
Run the following Hive command, as described in:
http://www.thecloudavenue.com/2013/03/analyse-tweets-using-flume-hadoop-and.html
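The command itself appears as a screenshot in the original slide. A minimal sketch of the external table over the Flume output, assuming the JSONSerDe class bundled in hive-serdes-1.0-SNAPSHOT.jar and listing only the fields queried on the next slide (see the linked tutorial for the full column list):
hive> CREATE EXTERNAL TABLE tweets (
        id BIGINT,
        created_at STRING,
        text STRING,
        `user` STRUCT<screen_name:STRING, name:STRING, followers_count:INT>
      )
      ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
      LOCATION '/user/flume/tweets';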
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Analyse data using Hive (cont.)
Finding the user with the most followers:
hive> select user.screen_name, user.followers_count c from tweets order by c desc;
thanachart@imcinstitute.com
Apache Kafka
Dr.Thanachart Numnonda
IMC Institute
thanachart@imcinstitute.com
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Introduction
Open-source message broker project
An open-source message broker project developed by the Apache Software Foundation and written in Scala. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. It is, in essence, a "massively scalable pub/sub message queue architected as a distributed transaction log", making it highly valuable for enterprise infrastructures.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
What is Kafka?
An Apache project initially developed at LinkedIn
Distributed publish-subscribe messaging system
Designed for processing real-time activity stream data, e.g. logs, metrics collection
Written in Scala
Does not follow the JMS standards, nor use the JMS APIs
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Kafka: Features
Persistent messaging
High-throughput
Supports both queue and topic semantics
Uses ZooKeeper for forming a cluster of nodes (producer/consumer/broker)
and many more…
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Why Kafka?
Built with speed and scalability in mind.
Enabled near real-time access to any data source
Empowered Hadoop jobs
Allowed us to build real-time analytics
Vastly improved our site monitoring and alerting capability
Enabled us to visualize and track our call graphs.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Messaging System Concept: Queue
Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Messaging System Concept: Topic
Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Terminology
Kafka maintains feeds of messages in categories called
topics.
Processes that publish messages to a Kafka topic are
called producers.
Processes that subscribe to topics and process the feed
of published messages are called consumers.
Kafka is run as a cluster comprising one or more servers, each of which is called a broker.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Kafka
Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Topics
Topic: feed name to which messages are published
Source: Apache Kafka with Spark Streaming - Real Time Analytics Redefined
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Topics
Source: Apache Kafka with Spark Streaming - Real Time Analytics Redefined
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Topics
A topic consists of partitions.
Partition: an ordered, immutable sequence of messages that is continually appended to
Source: Apache Kafka with Spark Streaming - Real Time Analytics Redefined
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Kafka Architecture
Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Hands-on
Spark Streaming with Kafka
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Install & Start Kafka Server
# wget http://www-us.apache.org/dist/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz
# tar xzf kafka_2.10-0.9.0.1.tgz
# cd kafka_2.10-0.9.0.1
# bin/kafka-server-start.sh config/server.properties &
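By default Kafka auto-creates a topic the first time a producer or consumer uses it, so the console producer below will work as-is. If you prefer to create topics explicitly, for example the spark-topic used later with Spark Streaming, a sketch:
# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic spark-topic
# bin/kafka-topics.sh --list --zookeeper localhost:2181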
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running Kafka Producer
# bin/kafka-console-producer.sh --topic test --broker-list localhost:9092
Type some random messages, followed by Ctrl-D to finish.
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running Kafka Consumer
# bin/kafka-console-consumer.sh --topic test --zookeeper localhost:2181 --from-beginning
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Start Spark-shell with extra memory
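The exact spark-shell invocation is shown as a screenshot. A typical command looks like the sketch below; the driver memory size and the Kafka assembly jar name/path are assumptions and depend on your Spark and Kafka versions:
$ spark-shell --driver-memory 1g --jars spark-streaming-kafka-assembly_2.10-1.6.0.jar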
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Spark Streaming with Kafka
scala> :paste
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel
import StorageLevel._
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.kafka.KafkaUtils

val ssc = new StreamingContext(sc, Seconds(2))
val kafkaStream = KafkaUtils.createStream(ssc, "localhost:2181", "spark-streaming-consumer-group", Map("spark-topic" -> 5))
kafkaStream.print()
ssc.start
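Press Ctrl-D to leave :paste mode and run the snippet. Every two seconds the stream should print a batch of any messages published to spark-topic; to stop it without closing the shell you can call ssc.stop(false), which stops the StreamingContext but keeps the SparkContext alive.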
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Running Kafka Producer on another terminal
# docker ps
# docker exec -i -t c77e4dc1ed9b /bin/bash
[root@quickstart ~]# cd /root/kafka_2.10-0.9.0.1
[root@quickstart kafka_2.10-0.9.0.1]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic spark-topic
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Test & View the result
Result from another terminal
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Thanachart Numnonda, thanachart@imcinstitute.com July 2016Hadoop Workshop using Cloudera on Amazon EC2
Thank you
www.imcinstitute.com
www.facebook.com/imcinstitute

More Related Content

PDF
Best Practices for Running PostgreSQL on AWS
PPTX
FIWARE Orion Context Broker コンテキスト情報管理 (Orion 3.7.0対応)
PPTX
コンテナネットワーキング(CNI)最前線
PDF
KubeVirt 201 How to Using the GPU
PDF
Babelfish Compatibility
PDF
単なるキャッシュじゃないよ!?infinispanの紹介
PDF
Kubernetesを使う上で抑えておくべきAWSの基礎概念
PDF
Oci object storage deep dive 20190329 ss
Best Practices for Running PostgreSQL on AWS
FIWARE Orion Context Broker コンテキスト情報管理 (Orion 3.7.0対応)
コンテナネットワーキング(CNI)最前線
KubeVirt 201 How to Using the GPU
Babelfish Compatibility
単なるキャッシュじゃないよ!?infinispanの紹介
Kubernetesを使う上で抑えておくべきAWSの基礎概念
Oci object storage deep dive 20190329 ss

What's hot (20)

PDF
20191127 AWS Black Belt Online Seminar Amazon CloudWatch Container Insights で...
PPTX
AWSメンテナンス ElastiCache編
KEY
大規模環境でRailsと4年間付き合ってきて@ クックパッド * 食べログ合同勉強会
PDF
20190806 AWS Black Belt Online Seminar AWS Glue
PDF
ECS to EKS 마이그레이션 경험기 - 유용환(Superb AI) :: AWS Community Day Online 2021
PDF
Oracle Cloud Platform:IDCSを使ったアイデンティティ・ドメイン管理者ガイド
PDF
20190130 AWS Black Belt Online Seminar AWS Identity and Access Management (AW...
PDF
AWS Elastic Beanstalk Tutorial | AWS Certification | AWS Tutorial | Edureka
PPTX
Docker 101 - High level introduction to docker
PPTX
What you need to know about ceph
PDF
AWS Summit Seoul 2023 | 실시간 CDC 데이터 처리! Modern Transactional Data Lake 구축하기
PPTX
Best Practices for Enterprise User Management in Hadoop Environment
PPTX
ビッグデータだけじゃない Amazon DynamoDBの活用事例
PDF
Ansible AWXを導入してみた
PPTX
AWS Well Architected Framework - Walk Through
PDF
FlutterでGraphQLを扱う
PPTX
Ss systemdのwslディストロを作る kernelvm探検隊online part 3
PDF
Hadoopのシステム設計・運用のポイント
PDF
AWS 初心者向けWebinar 利用者が実施するAWS上でのセキュリティ対策
PDF
[AWS Migration Workshop] AWS 클라우드로의 안전하고 신속한 마이그레이션 방안
20191127 AWS Black Belt Online Seminar Amazon CloudWatch Container Insights で...
AWSメンテナンス ElastiCache編
大規模環境でRailsと4年間付き合ってきて@ クックパッド * 食べログ合同勉強会
20190806 AWS Black Belt Online Seminar AWS Glue
ECS to EKS 마이그레이션 경험기 - 유용환(Superb AI) :: AWS Community Day Online 2021
Oracle Cloud Platform:IDCSを使ったアイデンティティ・ドメイン管理者ガイド
20190130 AWS Black Belt Online Seminar AWS Identity and Access Management (AW...
AWS Elastic Beanstalk Tutorial | AWS Certification | AWS Tutorial | Edureka
Docker 101 - High level introduction to docker
What you need to know about ceph
AWS Summit Seoul 2023 | 실시간 CDC 데이터 처리! Modern Transactional Data Lake 구축하기
Best Practices for Enterprise User Management in Hadoop Environment
ビッグデータだけじゃない Amazon DynamoDBの活用事例
Ansible AWXを導入してみた
AWS Well Architected Framework - Walk Through
FlutterでGraphQLを扱う
Ss systemdのwslディストロを作る kernelvm探検隊online part 3
Hadoopのシステム設計・運用のポイント
AWS 初心者向けWebinar 利用者が実施するAWS上でのセキュリティ対策
[AWS Migration Workshop] AWS 클라우드로의 안전하고 신속한 마이그레이션 방안
Ad

Viewers also liked (15)

PDF
Mobile User and App Analytics in China
PDF
Big data: Loading your data with flume and sqoop
PDF
New Data Transfer Tools for Hadoop: Sqoop 2
PDF
Apache Sqoop: A Data Transfer Tool for Hadoop
PDF
Big Data Analytics using Mahout
PDF
Introduction to Apache Sqoop
PDF
Thai Software & Software Market Survey 2015
PDF
สมุดกิจกรรม Code for Kids
PPT
ITSS Overview
PPTX
Apache sqoop with an use case
PPTX
Advanced Sqoop
PDF
Install Apache Hadoop for Development/Production
PDF
Machine Learning using Apache Spark MLlib
PDF
Kanban boards step by step
PPTX
Flume vs. kafka
Mobile User and App Analytics in China
Big data: Loading your data with flume and sqoop
New Data Transfer Tools for Hadoop: Sqoop 2
Apache Sqoop: A Data Transfer Tool for Hadoop
Big Data Analytics using Mahout
Introduction to Apache Sqoop
Thai Software & Software Market Survey 2015
สมุดกิจกรรม Code for Kids
ITSS Overview
Apache sqoop with an use case
Advanced Sqoop
Install Apache Hadoop for Development/Production
Machine Learning using Apache Spark MLlib
Kanban boards step by step
Flume vs. kafka
Ad

Similar to Big data processing using Hadoop with Cloudera Quickstart (20)

PDF
Big data processing using Cloudera Quickstart
PDF
Hadoop Workshop using Cloudera on Amazon EC2
PDF
Hadoop Workshop on EC2 : March 2015
PPTX
Cloudera amazon-ec2
PDF
Apache Spark & Hadoop : Train-the-trainer
PPTX
Big data journey to the cloud 5.30.18 asher bartch
PDF
Set up Hadoop Cluster on Amazon EC2
PDF
Apache Spark in Action
PPTX
Edge to AI: Analytics from Edge to Cloud with Efficient Movement of Machine ...
PDF
Cluster management and automation with cloudera manager
PDF
Edge to ai analytics from edge to cloud with efficient movement of machine data
PDF
Big Data Hadoop using Amazon Elastic MapReduce: Hands-On Labs
PPTX
Cloudera Director: Unlock the Full Potential of Hadoop in the Cloud
PPTX
Cloudera Analytics and Machine Learning Platform - Optimized for Cloud
PDF
Introducing Cloudera Director at Big Data Bash
PPTX
Hadoop Essentials -- The What, Why and How to Meet Agency Objectives
PDF
Webinar: Productionizing Hadoop: Lessons Learned - 20101208
PDF
Hadoop Hand-on Lab: Installing Hadoop 2
PDF
Big Data Programming Using Hadoop Workshop
PPTX
Pa cloudera manager-api's_extensibility_v2
Big data processing using Cloudera Quickstart
Hadoop Workshop using Cloudera on Amazon EC2
Hadoop Workshop on EC2 : March 2015
Cloudera amazon-ec2
Apache Spark & Hadoop : Train-the-trainer
Big data journey to the cloud 5.30.18 asher bartch
Set up Hadoop Cluster on Amazon EC2
Apache Spark in Action
Edge to AI: Analytics from Edge to Cloud with Efficient Movement of Machine ...
Cluster management and automation with cloudera manager
Edge to ai analytics from edge to cloud with efficient movement of machine data
Big Data Hadoop using Amazon Elastic MapReduce: Hands-On Labs
Cloudera Director: Unlock the Full Potential of Hadoop in the Cloud
Cloudera Analytics and Machine Learning Platform - Optimized for Cloud
Introducing Cloudera Director at Big Data Bash
Hadoop Essentials -- The What, Why and How to Meet Agency Objectives
Webinar: Productionizing Hadoop: Lessons Learned - 20101208
Hadoop Hand-on Lab: Installing Hadoop 2
Big Data Programming Using Hadoop Workshop
Pa cloudera manager-api's_extensibility_v2

More from IMC Institute (20)

PDF
นิตยสาร Digital Trends ฉบับที่ 14
PDF
Digital trends Vol 4 No. 13 Sep-Dec 2019
PDF
บทความ The evolution of AI
PDF
IT Trends eMagazine Vol 4. No.12
PDF
เพราะเหตุใด Digitization ไม่ตอบโจทย์ Digital Transformation
PDF
IT Trends 2019: Putting Digital Transformation to Work
PDF
มูลค่าตลาดดิจิทัลไทย 3 อุตสาหกรรม
PDF
IT Trends eMagazine Vol 4. No.11
PDF
แนวทางการทำ Digital transformation
PDF
บทความ The New Silicon Valley
PDF
นิตยสาร IT Trends ของ IMC Institute ฉบับที่ 10
PDF
แนวทางการทำ Digital transformation
PDF
The Power of Big Data for a new economy (Sample)
PDF
บทความ Robotics แนวโน้มใหม่สู่บริการเฉพาะทาง
PDF
IT Trends eMagazine Vol 3. No.9
PDF
Thailand software & software market survey 2016
PPTX
Developing Business Blockchain Applications on Hyperledger
PDF
Digital transformation @thanachart.org
PDF
บทความ Big Data จากบล็อก thanachart.org
PDF
กลยุทธ์ 5 ด้านกับการทำ Digital Transformation
นิตยสาร Digital Trends ฉบับที่ 14
Digital trends Vol 4 No. 13 Sep-Dec 2019
บทความ The evolution of AI
IT Trends eMagazine Vol 4. No.12
เพราะเหตุใด Digitization ไม่ตอบโจทย์ Digital Transformation
IT Trends 2019: Putting Digital Transformation to Work
มูลค่าตลาดดิจิทัลไทย 3 อุตสาหกรรม
IT Trends eMagazine Vol 4. No.11
แนวทางการทำ Digital transformation
บทความ The New Silicon Valley
นิตยสาร IT Trends ของ IMC Institute ฉบับที่ 10
แนวทางการทำ Digital transformation
The Power of Big Data for a new economy (Sample)
บทความ Robotics แนวโน้มใหม่สู่บริการเฉพาะทาง
IT Trends eMagazine Vol 3. No.9
Thailand software & software market survey 2016
Developing Business Blockchain Applications on Hyperledger
Digital transformation @thanachart.org
บทความ Big Data จากบล็อก thanachart.org
กลยุทธ์ 5 ด้านกับการทำ Digital Transformation

Recently uploaded (20)

PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Chapter 3 Spatial Domain Image Processing.pdf
PDF
Empathic Computing: Creating Shared Understanding
PDF
Chapter 2 Digital Image Fundamentals.pdf
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
HCSP-Presales-Campus Network Planning and Design V1.0 Training Material-Witho...
PDF
GDG Cloud Iasi [PUBLIC] Florian Blaga - Unveiling the Evolution of Cybersecur...
PDF
Advanced methodologies resolving dimensionality complications for autism neur...
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PDF
Advanced Soft Computing BINUS July 2025.pdf
PDF
NewMind AI Monthly Chronicles - July 2025
PPTX
Big Data Technologies - Introduction.pptx
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PPTX
Cloud computing and distributed systems.
PPTX
Telecom Fraud Prevention Guide | Hyperlink InfoSystem
PDF
Advanced IT Governance
PDF
Transforming Manufacturing operations through Intelligent Integrations
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Chapter 3 Spatial Domain Image Processing.pdf
Empathic Computing: Creating Shared Understanding
Chapter 2 Digital Image Fundamentals.pdf
20250228 LYD VKU AI Blended-Learning.pptx
HCSP-Presales-Campus Network Planning and Design V1.0 Training Material-Witho...
GDG Cloud Iasi [PUBLIC] Florian Blaga - Unveiling the Evolution of Cybersecur...
Advanced methodologies resolving dimensionality complications for autism neur...
“AI and Expert System Decision Support & Business Intelligence Systems”
Reach Out and Touch Someone: Haptics and Empathic Computing
Advanced Soft Computing BINUS July 2025.pdf
NewMind AI Monthly Chronicles - July 2025
Big Data Technologies - Introduction.pptx
Dropbox Q2 2025 Financial Results & Investor Presentation
Cloud computing and distributed systems.
Telecom Fraud Prevention Guide | Hyperlink InfoSystem
Advanced IT Governance
Transforming Manufacturing operations through Intelligent Integrations
How UI/UX Design Impacts User Retention in Mobile Apps.pdf

Big data processing using Hadoop with Cloudera Quickstart

  • 1. [email protected] Big Data Processing Using Cloudera Quickstart with a Docker Container July 2016 Dr.Thanachart Numnonda IMC Institute [email protected] Modifiy from Original Version by Danairat T. Certified Java Programmer, TOGAF – Silver [email protected]
  • 2. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Outline ● Launch AWS EC2 Instance ● Install Docker on Ubuntu ● Pull Cloudera QuickStart to the docker ● HDFS ● HBase ● MapReduce ● Hive ● Pig ● Impala ● Sqoop Hive.apache.org
  • 3. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Cloudera VM This lab will use a EC2 virtual server on AWS to install Cloudera, However, you can also use Cloudera QuickStart VM which can be downloaded from: http://www.cloudera.com/content/www/en-us/downloads.html
  • 4. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Launch a virtual server on EC2 Amazon Web Services (Note: You can skip this session if you use your own computer or another cloud service)
  • 5. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 6. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 7. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Virtual Server This lab will use a EC2 virtual server to install a Cloudera Cluster using the following features: Ubuntu Server 14.04 LTS Four m3.xLarge 4vCPU, 15 GB memory,80 GB SSD Security group: default Keypair: imchadoop
  • 8. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Select a EC2 service and click on Lunch Instance
  • 9. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Select an Amazon Machine Image (AMI) and Ubuntu Server 14.04 LTS (PV)
  • 10. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Choose m3.xlarge Type virtual server
  • 11. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 12. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Add Storage: 80 GB
  • 13. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Name the instance
  • 14. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Select Create an existing security group > Default
  • 15. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Click Launch and choose imchadoop as a key pair
  • 16. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Review an instance and rename one instance as a master / click Connect for an instruction to connect to the instance
  • 17. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Connect to an instance from Mac/Linux
  • 18. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Can also view details of the instance such as Public IP and Private IP
  • 19. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Connect to an instance from Windows using Putty
  • 20. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Connect to the instance
  • 21. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Installing Cloudera Quickstart on Docker Container
  • 22. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Installation Steps ● Update OS ● Install Docker ● Pull Cloudera Quickstart ● Run Cloudera Quickstart ● Run Cloudera Manager Hive.apache.org
  • 23. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Update OS (Ubuntu) ● Command: sudo apt-get update
  • 24. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Docker Installation ● Command: sudo apt-get install docker.io
  • 25. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Pull Cloudera Quickstart ● Command: sudo docker pull cloudera/quickstart:latest
  • 26. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Show docker images ● Command: sudo docker images
  • 27. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Run Cloudera quickstart ● Command: sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i [OPTIONS] [IMAGE] /usr/bin/docker-quickstart Example: sudo docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888:8888 cloudera/quickstart /usr/bin/docker-quickstart
  • 28. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Finding the EC2 instance's DNS
  • 29. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Login to Hue http://ec2-54-173-154-79.compute-1.amazonaws.com:8888
  • 30. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 31. [email protected] Hadoop File System (HDFS) Dr.Thanachart Numnonda IMC Institute [email protected]
  • 32. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 HDFS ● Default storage for the Hadoop cluster ● Data is distributed and replicated over multiple machines ● Designed to handle very large files with straming data access patterns. ● NameNode/DataNode ● Master/slave architecture (1 master 'n' slaves) ● Designed for large files (64 MB default, but configurable) across all the nodes Hive.apache.org
  • 33. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 HDFS Architecture Source Hadoop: Shashwat Shriparv
  • 34. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Data Replication in HDFS Source Hadoop: Shashwat Shriparv
  • 35. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 How does HDFS work? Source Introduction to Apache Hadoop-Pig: PrashantKommireddi
  • 36. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 How does HDFS work? Source Introduction to Apache Hadoop-Pig: PrashantKommireddi
  • 37. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 How does HDFS work? Source Introduction to Apache Hadoop-Pig: PrashantKommireddi
  • 38. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 How does HDFS work? Source Introduction to Apache Hadoop-Pig: PrashantKommireddi
  • 39. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 How does HDFS work? Source Introduction to Apache Hadoop-Pig: PrashantKommireddi
  • 40. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Importing/Exporting Data to HDFS
  • 41. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Review file in Hadoop HDFS using File Browse
  • 42. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a new directory name as: input & output
  • 43. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 44. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Upload a local file to HDFS
  • 45. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 46. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Connect to a master node via SSH
  • 47. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 SSH Login to a master node
  • 48. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hadoop syntax for HDFS
  • 49. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Install wget ● Command: yum install wget
  • 50. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Download an example text file Make your own durectory at a master node to avoid mixing with others $mkdir guest1 $cd guest1 $wget https://s3.amazonaws.com/imcbucket/input/pg2600.txt
  • 51. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Upload Data to Hadoop $hadoop fs -ls /user/cloudera/input $hadoop fs -rm /user/cloudera/input/* $hadoop fs -put pg2600.txt /user/cloudera/input/ $hadoop fs -ls /user/cloudera/input
  • 52. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Lecture Understanding HBase
  • 53. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction An open source, non-relational, distributed database HBase is an open source, non-relational, distributed database modeled after Google's BigTable and is written in Java. It is developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of HDFS (, providing BigTable-like capabilities for Hadoop. That is, it provides a fault-tolerant way of storing large quantities of sparse data.
  • 54. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 HBase Features ● Hadoop database modelled after Google's Bigtable ● Column oriented data store, known as Hadoop Database ● Support random realtime CRUD operations (unlike HDFS) ● No SQL Database ● Opensource, written in Java ● Run on a cluster of commodity hardware Hive.apache.org
  • 55. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 When to use HBase? ● When you need high volume data to be stored ● Un-structured data ● Sparse data ● Column-oriented data ● Versioned data (same data template, captured at various time, time-elapse data) ● When you need high scalability Hive.apache.org
  • 56. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Which one to use? ● HDFS ● Only append dataset (no random write) ● Read the whole dataset (no random read) ● HBase ● Need random write and/or read ● Has thousands of operation per second on TB+ of data ● RDBMS ● Data fits on one big node ● Need full transaction support ● Need real-time query capabilities Hive.apache.org
  • 57. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 58. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 59. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 HBase Components Hive.apache.org ● Region ● Row of table are stores ● Region Server ● Hosts the tables ● Master ● Coordinating the Region Servers ● ZooKeeper ● HDFS ● API ● The Java Client API
  • 60. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 HBase Shell Commands Hive.apache.org
  • 61. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Running HBase
  • 62. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hbase shell $hbase shell hbase(main):001:0> create 'employee', 'personal data', 'professional data' hbase(main):002:0> list
  • 63. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create Data
  • 64. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running HBase Browser
  • 65. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Viewing Employee Table
  • 66. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a table in HBase
  • 67. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Insert a new row in a table
  • 68. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Add field into a new row
  • 69. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Lecture: Understanding Map Reduce Processing Client Name Node Job Tracker Data Node Task Tracker Data Node Task Tracker Data Node Task Tracker Map Reduce
  • 71. [email protected] Source: The evolution and future of Hadoop storage: Cloudera
  • 72. [email protected] Before MapReduce… ● Large scale data processing was difficult! – Managing hundreds or thousands of processors – Managing parallelization and distribution – I/O Scheduling – Status and monitoring – Fault/crash tolerance ● MapReduce provides all of these, easily! Source: http://labs.google.com/papers/mapreduce-osdi04-slides/index-auto-0002.html
  • 73. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 MapReduce Framework Source: www.bigdatauniversity.com
  • 74. [email protected] How Map and Reduce Work Together ● Map returns information ● Reduces accepts information ● Reduce applies a user defined function to reduce the amount of data
  • 75. [email protected] Map Abstraction ● Inputs a key/value pair – Key is a reference to the input value – Value is the data set on which to operate ● Evaluation – Function defined by user – Applies to every value in value input ● Might need to parse input ● Produces a new list of key/value pairs – Can be different type from input pair
  • 76. [email protected] Reduce Abstraction ● Starts with intermediate Key / Value pairs ● Ends with finalized Key / Value pairs ● Starting pairs are sorted by key ● Iterator supplies the values for a given key to the Reduce function.
  • 77. [email protected] Reduce Abstraction ● Typically a function that: – Starts with a large number of key/value pairs ● One key/value for each word in all files being greped (including multiple entries for the same word) – Ends with very few key/value pairs ● One key/value for each unique word across all the files with the number of instances summed into this entry ● Broken up so a given worker works with input of the same key.
  • 78. [email protected] Why is this approach better? ● Creates an abstraction for dealing with complex overhead – The computations are simple, the overhead is messy ● Removing the overhead makes programs much smaller and thus easier to use – Less testing is required as well. The MapReduce libraries can be assumed to work properly, so only user code needs to be tested ● Division of labor also handled by the MapReduce libraries, so programmers only need to focus on the actual computation
  • 79. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Writing you own Map Reduce Program
  • 82. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running Map Reduce Program $cd /root/guest1 $wget https://dl.dropboxusercontent.com/u/12655380/wordcount.jar $hadoop jar wordcount.jar org.myorg.WordCount /user/cloudera/input/* /user/cloudera/output/wordcount
  • 83. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Reviewing MapReduce Job in Hue
  • 84. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Reviewing MapReduce Job in Hue
  • 85. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Reviewing MapReduce Output Result
  • 86. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Reviewing MapReduce Output Result
  • 87. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Lecture Understanding Oozie
  • 88. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction Workslow scheduler for Hadoop Oozie is a workflow scheduler system to manage Apache Hadoop jobs. Oozie is integrated with the rest of the Hadoop stack supporting several types of Hadoop jobs out of the box (such as Java map-reduce, Streaming map-reduce, Pig, Hive, Sqoop and Distcp) as well as system specific jobs (such as Java programs and shell scripts).
  • 89. [email protected] What is Oozie? ● Work flow scheduler for Hadoop ● Manages Hadoop Jobs ● Integrated with many Hadoop apps i.e. Pig, Hive ● Scaleable ● Schedule jobs ● A work flow is a collection of actions. ● A work flow is – Arranged as a DAG ( direct acyclic graph ) – Graph stored as hPDL ( XML process definition )
  • 91. [email protected] Oozie Server Source: Oozie – Now and Beyond, Yahoo, 2013
  • 92. [email protected] Layer of Abstraction in Oozie Source: Oozie – Now and Beyond, Yahoo, 2013
  • 93. [email protected] Workflow Example: Data Analytics ● Logs => fact table(s) ● Database backup => Dimension tables ● Complete rollups/cubes ● Load data into a low-latency storage (e.g. Hbae, HDFS) ● Dashboard & BI tools Source: Workflow Engines for Hadoop, Joe Crobak, 2013
  • 94. [email protected] Workflow Example: Data Analytics Source: Workflow Engines for Hadoop, Joe Crobak, 2013
  • 95. [email protected] Workflow Example: Data Analytics ● What happens if there is a failure? – Rebuild the failed day – .. and any downstream datasets ● With Hadoop Workflow – Possible OK to skip a day – Workflow tends to be self-contained, so you do not need to run downstream. – Sanity check your data before pushing to production. Source: Workflow Engines for Hadoop, Joe Crobak, 2013
  • 96. [email protected] Oozie Workflow Source: Oozie – Now and Beyond, Yahoo, 2013
  • 97. [email protected] Oozie Use Cases ● Time Triggers – Execute your workflow every 15 minutes ● Time and Data Triggers – Materialize your workflow every hour, but only run them when the input data is ready (that is loaded to the grid every hour) ● Rolling Window – Access 15 minute datasets and roll them up into hourly datasets Source: Oozie – Now and Beyond, Yahoo, 2013
  • 98. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Running Map Reduce using Oozie workflow
  • 99. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Using Hue: select WorkFlows >> Editors >> Workflows
  • 100. [email protected] Create a new workflow ● Click Create button; the following screen will be displayed ● Name the workflow as WordCountWorkflow
  • 102. [email protected] Select a Java job for the workflow ● From the Oozie editor, drag Java Program and drop between start and end
  • 103. [email protected] Edit the Java Job ● Assign the following value – – Jar name: wordcount.jar (select … choose upload from local machine) – Main Class: org.myorg.WordCount – Arguments: /user/cloudera/input/* – /user/cloudera/output/wordcount
  • 104. [email protected] Submit the workflow ● Click Done, follow by Save ● Then click submit
  • 105. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction A Petabyte Scale Data Warehouse Using Hadoop Hive is developed by Facebook, designed to enable easy data summarization, ad-hoc querying and analysis of large volumes of data. It provides a simple query language called Hive QL, which is based on SQL
  • 106. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop What Hive is NOT Hive is not designed for online transaction processing and does not offer real-time queries and row level updates. It is best used for batch jobs over large sets of immutable data (like web logs, etc.).
  • 107. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Sample HiveQL The Query compiler uses the information stored in the metastore to convert SQL queries into a sequence of map/reduce jobs, e.g. the following query SELECT * FROM t where t.c = 'xyz' SELECT t1.c2 FROM t1 JOIN t2 ON (t1.c1 = t2.c1) SELECT t1.c1, count(1) from t1 group by t1.c1 Hive.apache.or g
  • 108. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Sample HiveQL The Query compiler uses the information stored in the metastore to convert SQL queries into a sequence of map/reduce jobs, e.g. the following query SELECT * FROM t where t.c = 'xyz' SELECT t1.c2 FROM t1 JOIN t2 ON (t1.c1 = t2.c1) SELECT t1.c1, count(1) from t1 group by t1.c1 Hive.apache.or g
  • 109. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop System Architecture and Components Metastore: To store the meta data. Query compiler and execution engine: To convert SQL queries to a sequence of map/reduce jobs that are then executed on Hadoop. SerDe and ObjectInspectors: Programmable interfaces and implementations of common data formats and types. A SerDe is a combination of a Serializer and a Deserializer (hence, Ser-De). The Deserializer interface takes a string or binary representation of a record, and translates it into a Java object that Hive can manipulate. The Serializer, however, will take a Java object that Hive has been working with, and turn it into something that Hive can write to HDFS or another supported system. UDF and UDAF: Programmable interfaces and implementations for user defined functions (scalar and aggregate functions). Clients: Command line client similar to Mysql command line. hive.apache.or g
  • 110. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Architecture Overview HDFS Hive CLI Querie s Browsin g Map Reduce MetaStore Thrift API SerDe Thrift Jute JSON.. Execution Hive QL Parser Planner Mgmt. WebUI HDFS DDL Hive Hive.apache.org
  • 111. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Hive Metastore Hive Metastore is a repository to keep all Hive metadata; Tables and Partitions definition. By default, Hive will store its metadata in Derby DB
  • 112. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Hive Built in Functions Return Type Function Name (Signature) Description BIGINT round(double a) returns the rounded BIGINT value of the double BIGINT floor(double a) returns the maximum BIGINT value that is equal or less than the double BIGINT ceil(double a) returns the minimum BIGINT value that is equal or greater than the double double rand(), rand(int seed) returns a random number (that changes from row to row). Specifiying the seed will make sure the generated random number sequence is deterministic. string concat(string A, string B,...) returns the string resulting from concatenating B after A. For example, concat('foo', 'bar') results in 'foobar'. This function accepts arbitrary number of arguments and return the concatenation of all of them. string substr(string A, int start) returns the substring of A starting from start position till the end of string A. For example, substr('foobar', 4) results in 'bar' string substr(string A, int start, int length) returns the substring of A starting from start position with the given length e.g. substr('foobar', 4, 2) results in 'ba' string upper(string A) returns the string resulting from converting all characters of A to upper case e.g. upper('fOoBaR') results in 'FOOBAR' string ucase(string A) Same as upper string lower(string A) returns the string resulting from converting all characters of B to lower case e.g. lower('fOoBaR') results in 'foobar' string lcase(string A) Same as lower string trim(string A) returns the string resulting from trimming spaces from both ends of A e.g. trim(' foobar ') results in 'foobar' string ltrim(string A) returns the string resulting from trimming spaces from the beginning(left hand side) of A. For example, ltrim(' foobar ') results in 'foobar ' string rtrim(string A) returns the string resulting from trimming spaces from the end(right hand side) of A. For example, rtrim(' foobar ') results in ' foobar' string regexp_replace(string A, string B, string C) returns the string resulting from replacing all substrings in B that match the Java regular expression syntax(See Java regular expressions syntax) with C. For example, regexp_replace('foobar', 'oo|ar', ) returns 'fb' string from_unixtime(int unixtime) convert the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the format of "1970-01-01 00:00:00" string to_date(string timestamp) Return the date part of a timestamp string: to_date("1970-01-01 00:00:00") = "1970-01-01" int year(string date) Return the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970 int month(string date) Return the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11 int day(string date) Return the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1 string get_json_object(string json_string, string path) Extract json object from a json string based on json path specified, and return json string of the extracted json object. It will return null if the input json string is invalid hive.apache.org
  • 113. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Hive Aggregate Functions Return Type Aggregation Function Name (Signature) Description BIGINT count(*), count(expr), count(DISTINCT expr[, expr_.]) count(*) - Returns the total number of retrieved rows, including rows containing NULL values; count(expr) - Returns the number of rows for which the supplied expression is non- NULL; count(DISTINCT expr[, expr]) - Returns the number of rows for which the supplied expression(s) are unique and non-NULL. DOUBLE sum(col), sum(DISTINCT col) returns the sum of the elements in the group or the sum of the distinct values of the column in the group DOUBLE avg(col), avg(DISTINCT col) returns the average of the elements in the group or the average of the distinct values of the column in the group DOUBLE min(col) returns the minimum value of the column in the group DOUBLE max(col) returns the maximum value of the column in the group hive.apache.org
  • 114. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Running Hive Hive Shell Interactive hive Script hive -f myscript Inline hive -e 'SELECT * FROM mytable' Hive.apache.or g
  • 115. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Hive Commands ortonworks.com
  • 116. : Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Hive Tables ● Managed- CREATE TABLE ● LOAD- File moved into Hive's data warehouse directory ● DROP- Both data and metadata are deleted. ● External- CREATE EXTERNAL TABLE ● LOAD- No file moved ● DROP- Only metadata deleted ● Use when sharing data between Hive and Hadoop applications or you want to use multiple schema on the same data
  • 117. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Hive External Table Dropping External Table using Hive:- Hive will delete metadata from metastore Hive will NOT delete the HDFS file You need to manually delete the HDFS file
  • 118. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Java JDBC for Hive import java.sql.SQLException; import java.sql.Connection; import java.sql.ResultSet; import java.sql.Statement; import java.sql.DriverManager;   public class HiveJdbcClient {   private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";     public static void main(String[] args) throws SQLException {     try {       Class.forName(driverName);     } catch (ClassNotFoundException e) {       // TODO Auto-generated catch block       e.printStackTrace();       System.exit(1);     }     Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");     Statement stmt = con.createStatement();     String tableName = "testHiveDriverTable";     stmt.executeQuery("drop table " + tableName);     ResultSet res = stmt.executeQuery("create table " + tableName + " (key int, value string)");     // show tables     String sql = "show tables '" + tableName + "'";     System.out.println("Running: " + sql);     res = stmt.executeQuery(sql);     if (res.next()) {       System.out.println(res.getString(1));     }     // describe table     sql = "describe " + tableName;     System.out.println("Running: " + sql);     res = stmt.executeQuery(sql);     while (res.next()) {       System.out.println(res.getString(1) + "t" + res.getString(2));     }  
  • 119. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop Java JDBC for Hive import java.sql.SQLException; import java.sql.Connection; import java.sql.ResultSet; import java.sql.Statement; import java.sql.DriverManager;   public class HiveJdbcClient {   private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";     public static void main(String[] args) throws SQLException {     try {       Class.forName(driverName);     } catch (ClassNotFoundException e) {       // TODO Auto-generated catch block       e.printStackTrace();       System.exit(1);     }     Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");     Statement stmt = con.createStatement();     String tableName = "testHiveDriverTable";     stmt.executeQuery("drop table " + tableName);     ResultSet res = stmt.executeQuery("create table " + tableName + " (key int, value string)");     // show tables     String sql = "show tables '" + tableName + "'";     System.out.println("Running: " + sql);     res = stmt.executeQuery(sql);     if (res.next()) {       System.out.println(res.getString(1));     }     // describe table     sql = "describe " + tableName;     System.out.println("Running: " + sql);     res = stmt.executeQuery(sql);     while (res.next()) {       System.out.println(res.getString(1) + "t" + res.getString(2));     }  
  • 120. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop HiveQL and MySQL Comparison ortonworks.com
  • 121. Danairat T., [email protected]: Thanachart N., [email protected] April 2015Big Data Hadoop Workshop HiveQL and MySQL Query Comparison ortonworks.com
  • 122. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Loading Data using Hive
  • 123. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 hive> quit; Quit from Hive Start Hive
  • 124. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 See also: https://cwiki.apache.org/Hive/languagemanual-ddl.html Create Hive Table
  • 125. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Reviewing Hive Table in HDFS
  • 126. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Alter and Drop Hive Table Hive > alter table test_tbl add columns (remarks STRING); hive > describe test_tbl; OK id int country string remarks string Time taken: 0.077 seconds hive > drop table test_tbl; OK Time taken: 0.9 seconds See also: https://cwiki.apache.org/Hive/adminmanual-metastoreadmin.html
  • 127. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Preparing Large Dataset http://grouplens.org/datasets/movielens/
  • 128. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 MovieLen Dataset 1)Type command > wget http://files.grouplens.org/datasets/movielens/ml-100k.zip 2)Type command > yum install unzip 3)Type command > unzip ml-100k.zip 4)Type command > more ml-100k/u.user
  • 129. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Moving dataset to HDFS 1)Type command > cd ml-100k 2)Type command > hadoop fs -mkdir /user/cloudera/movielens 3)Type command > hadoop fs -put u.user /user/cloudera/movielens 4)Type command > hadoop fs -ls /user/cloudera/movielens
  • 130. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 CREATE & SELECT Table
  • 131. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Bay Area Bike Share (BABS) http://www.bayareabikeshare.com/open-data
  • 132. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Preparing a bike data $wget https://s3.amazonaws.com/babs-open-data/ babs_open_data_year_1.zip $unzip babs_open_data_year_1.zip $cd 201402_babs_open_data/ $hadoop fs -put 201402_trip_data.csv /user/cloudera $ hadoop fs -ls /user/cloudera
  • 133. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Importing CSV Data with the Metastore App The BABS data set contains 4 CSVs that contain data for stations, trips, rebalancing (availability), and weather. We will import trips dataset using Metastore Tables
  • 134. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Select: Create a new table from a file
  • 135. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Name a table and select a file
  • 136. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Choose Delimiter
  • 137. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Define Column Types
  • 138. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create Table : Done
  • 139. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 140. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Starting Hive Editor
  • 141. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Find the top 10 most popular start stations based on the trip data SELECT startterminal, startstation, COUNT(1) AS count FROM trip GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10
  • 142. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 143. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction A high-level platform for creating MapReduce programs Using Hadoop Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turns enables them to handle very large data sets.
  • 144. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Pig Components ● Two Compnents ● Language (Pig Latin) ● Compiler ● Two Execution Environments ● Local pig -x local ● Distributed pig -x mapreduce Hive.apache.org
  • 145. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running Pig ● Script pig myscript ● Command line (Grunt) pig ● Embedded Writing a java program Hive.apache.org
  • 146. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Pig Latin Hive.apache.org
  • 147. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Pig Execution Stages Hive.apache.orgSource Introduction to Apache Hadoop-Pig: PrashantKommireddi
  • 148. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Why Pig? ● Makes writing Hadoop jobs easier ● 5% of the code, 5% of the time ● You don't need to be a programmer to write Pig scripts ● Provide major functionality required for DatawareHouse and Analytics ● Load, Filter, Join, Group By, Order, Transform ● User can write custom UDFs (User Defined Function) Hive.apache.orgSource Introduction to Apache Hadoop-Pig: PrashantKommireddi
  • 149. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Pig v.s. Hive Hive.apache.org
  • 150. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Running a Pig script
  • 151. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Starting Pig Command Line $ pig -x mapreduce 2013-08-01 10:29:00,027 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1 (r1459641) compiled Mar 22 2013, 02:13:53 2013-08-01 10:29:00,027 [main] INFO org.apache.pig.Main - Logging error messages to: /home/hdadmin/pig_1375327740024.log 2013-08-01 10:29:00,066 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/hdadmin/.pigbootup not found 2013-08-01 10:29:00,212 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:/// grunt>
  • 152. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Writing a Pig Script for wordcount A = load '/user/cloudera/input/*'; B = foreach A generate flatten(TOKENIZE((chararray)$0)) as word; C = group B by word; D = foreach C generate COUNT(B), group; store D into '/user/cloudera/output/wordcountPig';
  • 153. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 154. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 156. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction open source massively parallel processing (MPP) SQL query engine Cloudera Impala is a query engine that runs on Apache Hadoop. Impala brings scalable parallel database technology to Hadoop, enabling users to issue low-latency SQL queries to data stored in HDFS and Apache HBase without requiring data movement or transformation.
• 157. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 What is Impala? General-purpose SQL engine Real-time queries in Apache Hadoop Open source under the Apache License Runs directly within Hadoop High performance – C++ instead of Java – Runtime code generation – Roughly 4-100x faster than Hive
• 158. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Impala Overview Impala daemons run on HDFS nodes Statestore (for cluster metadata) vs. Metastore (for database metadata) Queries run on “relevant” nodes Supports common HDFS file formats Submit queries via Hue/Beeswax Not fault tolerant
  • 159. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Impala Architecture
  • 160. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Start Impala Query Editor
• 161. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Update the list of tables/metadata by executing the command: invalidate metadata
  • 162. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Restart Impala Query Editor and refresh the table list
  • 163. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Find the top 10 most popular start stations based on the trip data: Using Impala SELECT startterminal, startstation, COUNT(1) AS count FROM trip GROUP BY startterminal, startstation ORDER BY count DESC LIMIT 10
  • 164. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 165. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Find the total number of trips and average duration (in minutes) of those trips, grouped by hour SELECT hour, COUNT(1) AS trips, ROUND(AVG(duration) / 60) AS avg_duration FROM ( SELECT CAST(SPLIT(SPLIT(t.startdate, ' ')[1], ':')[0] AS INT) AS hour, t.duration AS duration FROM `bikeshare`.`trips` t WHERE t.startterminal = 70 AND t.duration IS NOT NULL ) r GROUP BY hour ORDER BY hour ASC;
  • 167. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction Sqoop (“SQL-to-Hadoop”) is a straightforward command-line tool with the following capabilities: ● Imports individual tables or entire databases to files in HDFS ● Generates Java classes to allow you to interact with your imported data ● Provides the ability to import from SQL databases straight into your Hive data warehouse See also: http://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html
  • 168. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Architecture Overview Hive.apache.org
  • 169. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Sqoop Benefit Leverages RDBMS metadata to get the column data types It is simple to script and uses SQL It can be used to handle change data capture by importing daily transactional data to Hadoop It uses MapReduce for export and import that enables parallel and efficient data movement
  • 170. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Sqoop Mode Sqoop import: Data moves from RDBMS to Hadoop Sqoop export: Data moves from Hadoop to RDBMS
  • 171. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Use Case #1: ETL for Data Warehouse Source: Mastering Apache Sqoop, David Yahalom, 2016
  • 172. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Use Case #2: ELT
  • 173. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Use Case #3: Data Analysis Source: Mastering Apache Sqoop, David Yahalom, 2016
  • 174. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Use Case #4: Data Archival Source: Mastering Apache Sqoop, David Yahalom, 2016
  • 175. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Use Case #5: Data Consolidation Source: Mastering Apache Sqoop, David Yahalom, 2016
  • 176. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Use Case #6: Move reports to Hadoop Source: Mastering Apache Sqoop, David Yahalom, 2016
  • 177. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Import Commands
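The import command itself is shown as a screenshot; as a reference, a minimal sketch of its general shape (placeholder host, database, table and directory names; adjust to your environment) is:
$ sqoop import \
  --connect jdbc:mysql://<db-host>/<database> \
  --username <user> --password <password> \
  --table <table> \
  --target-dir /user/cloudera/<output-dir> \
  -m 1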
  • 178. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Architecture of the import process
  • 179. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Incremental import
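As a hedged sketch of what an incremental import looks like (the check column and last value below are illustrative), Sqoop's --incremental, --check-column and --last-value options import only rows added since the previous run:
$ sqoop import \
  --connect jdbc:mysql://<db-host>/imc_db \
  --username root --password imcinstitute \
  --table country_tbl \
  --target-dir /user/cloudera/testtable \
  --incremental append --check-column id --last-value 66 \
  -m 1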
  • 180. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Export Commands
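For the reverse direction, a minimal export sketch (assuming the HDFS directory holds delimited files whose columns match the target table):
$ sqoop export \
  --connect jdbc:mysql://<db-host>/imc_db \
  --username root --password imcinstitute \
  --table country_tbl \
  --export-dir /user/cloudera/testtable \
  -m 1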
  • 181. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Loading Data from RDBMS to Hadoop
• 182. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running MySQL Docker ● Command: sudo docker pull mysql ● Command: sudo docker run --name imcMysql -e MYSQL_ROOT_PASSWORD=imcinstitute -p 3306:3306 -d mysql ● Command: sudo docker exec -it imcMysql bash root@f1922a70e09c:/# mysql -uroot -p"imcinstitute"
  • 183. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Prepare a test database table mysql> CREATE DATABASE imc_db; mysql> USE imc_db; mysql> CREATE TABLE country_tbl(id INT NOT NULL, country VARCHAR(50), PRIMARY KEY (id)); mysql> INSERT INTO country_tbl VALUES(1, 'USA'); mysql> INSERT INTO country_tbl VALUES(2, 'CANADA'); mysql> INSERT INTO country_tbl VALUES(3, 'Mexico'); mysql> INSERT INTO country_tbl VALUES(4, 'Brazil'); mysql> INSERT INTO country_tbl VALUES(61, 'Japan'); mysql> INSERT INTO country_tbl VALUES(65, 'Singapore'); mysql> INSERT INTO country_tbl VALUES(66, 'Thailand');
• 184. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 View data in the table mysql> SELECT * FROM country_tbl; mysql> exit; Then detach from the container by pressing Ctrl-P followed by Ctrl-Q
  • 186. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Command: sudo docker run --hostname=quickstart.cloudera --privileged=true --link imcMysql:mysqldb -t -i -p 8888:8888 cloudera/quickstart /usr/bin/docker-quickstart If both of these Dockers are up and running, you can find out the internal IP address of each of them by running this command. This gets the IP for imcMysql. ● Command: sudo docker inspect imcMysql | grep IPAddress Restart the Cloudera docker with linking to the MySQL Docker
• 187. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Check a MySQL driver for Sqoop $ cd /var/lib/sqoop $ ls Note: If you do not see the driver file, you need to install one by using the following command $ wget https://s3.amazonaws.com/imcbucket/apps/mysql-connector-java-5.1.23-bin.jar
  • 188. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Importing data from MySQL to HDFS $sqoop import --connect jdbc:mysql://172.17.0.7/imc_db --username root --password imcinstitute --table country_tbl --target-dir /user/cloudera/testtable -m 1
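To verify the import, list and read the files Sqoop wrote to the target directory (the exact part-file names may differ):
$ hdfs dfs -ls /user/cloudera/testtable
$ hdfs dfs -cat /user/cloudera/testtable/part-m-*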
  • 189. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Importing data from MySQL to Hive Table $sqoop import --connect jdbc:mysql://172.17.0.7/imc_db --username root --password imcinstitute --table country_tbl --hive-import --hive-table country -m 1
  • 190. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Reviewing data from Hive Table [root@quickstart /]# hive hive> show tables; hive> select * from country;
• 191. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running from Hue: Beeswax
• 192. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Importing data from MySQL to HBase $sqoop import --connect jdbc:mysql://172.17.0.7/imc_db --username root --password imcinstitute --table country_tbl --hbase-table country --column-family hbase_country_cf --hbase-row-key id --hbase-create-table -m 1 Start HBase: $hbase shell hbase(main):001:0> list
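Inside the HBase shell you can also inspect the imported rows, for example (the table name and row key come from the Sqoop command and test data above):
hbase(main):002:0> scan 'country'
hbase(main):003:0> get 'country', '66'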
• 193. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Viewing HBase data
• 194. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Viewing data from the HBase browser
  • 196. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction Apache Flume is: ● A distributed data transport and aggregation system for event- or log-structured data ● Principally designed for continuous data ingestion into Hadoop… But more flexible than that
  • 197. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 What is Flume? ● Apache Flume is a continuous data ingestion system that is... ● open-source, ● reliable, ● scalable, ● manageable, ● Customizable, ● and designed for Big Data ecosystem
  • 198. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Architecture Overview
  • 199. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Flume Agent Source: Using Flume, Hari Shreedharan, 2014 ● A source writes events to one or more channels. ● A channel is the holding area as events are passed from a source to a sink. ● A sink receives events from one channel only. ● An agent can have many channels.
• 200. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Sources ● Different Source types: ● Require at least one channel to function ● Specialized sources for integrating with well-known systems. ● Example: Spooling Files, Syslog, Netcat, JMS ● Auto-Generating Sources: Exec, SEQ ● IPC sources for Agent-to-Agent communication: Avro, Thrift
• 201. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Channel ● Different Channels offer different levels of persistence: ● Memory Channel ● File Channel: events persist on disk, so when the agent comes back up the data can still be accessed ● Channels are fully transactional ● Provide weak ordering guarantees ● Can work with any number of Sources and Sinks
• 202. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Sink ● Different types of Sinks: ● Terminal sinks that deposit events to their final destination. For example: HDFS, HBase, Morphline-Solr, Elastic Search ● Sinks support serialization to user’s preferred formats. ● HDFS sink supports time-based and arbitrary bucketing of data while writing to HDFS. ● IPC sink for Agent-to-Agent communication: Avro, Thrift ● Require exactly one channel to function
  • 203. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Flume Process Source: Using Flume, Hari Shreedharan, 2014
  • 204. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Flume Process Source: Using Flume, Hari Shreedharan, 2014
  • 205. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Flow Source: Using Flume, Hari Shreedharan, 2014
  • 206. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Flume terminology ● A source writes events to one or more channels. ● A channel is the holding area as events are passed from a source to a sink. ● A sink receives events from one channel only. ● An agent can have many channels. Odiago
  • 207. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Flume Agent Configuration : Example Odiago
  • 208. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Flume Agent Configuration : Example Odiago
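The configuration examples above are screenshots; purely as an illustration (hypothetical agent, source, channel and sink names), a minimal agent definition in the standard Flume properties format looks like this:
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1
# netcat source listening on a local port
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = localhost
agent1.sources.src1.port = 44444
agent1.sources.src1.channels = ch1
# in-memory channel between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000
# logger sink that writes events to the agent's log
agent1.sinks.sink1.type = logger
agent1.sinks.sink1.channel = ch1
Such a file could then be started with: flume-ng agent --conf conf --conf-file example.conf --name agent1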
  • 209. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Stream Processing Architecture Odiago
  • 210. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-On: Loading Twitter Data to Hadoop HDFS
  • 211. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Exercise Overview Hive.apache.org
• 212. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Installing Pre-built version of flume $ wget http://files.cloudera.com/samples/flume-sources-1.0-SNAPSHOT.jar $ sudo cp flume-sources-1.0-SNAPSHOT.jar /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/flume-ng/lib/ $ sudo cp /etc/flume-ng/conf/flume-env.sh.template /etc/flume-ng/conf/flume-env.sh
  • 213. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a new Twitter App Login to your Twitter @ twitter.com
  • 214. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a new Twitter App (cont.) Create a new Twitter App @ apps.twitter.com
  • 215. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a new Twitter App (cont.) Enter all the details in the application:
  • 216. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a new Twitter App (cont.) Your application will be created:
  • 217. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a new Twitter App (cont.) Click on Keys and Access Tokens:
  • 218. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Create a new Twitter App (cont.) Your Access token got created:
• 219. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Add classpath in Cloudera Manager "Services" -> "flume1" -> "Configuration" -> "Advanced" -> "Java Configuration Options for Flume Agent", add: --classpath /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/flume-ng/lib/flume-sources-1.0-SNAPSHOT.jar
  • 220. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Change the Flume Agent Name
  • 221. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Configuring the Flume Agent
• 222. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Agent Configuration TwitterAgent.sources = Twitter TwitterAgent.channels = MemChannel TwitterAgent.sinks = HDFS TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource TwitterAgent.sources.Twitter.channels = MemChannel TwitterAgent.sources.Twitter.consumerKey = MjpswndxVj27ylnpOoSBrnfLX TwitterAgent.sources.Twitter.consumerSecret = QYmuBO1smD5Yc3zE0ZF9ByCgeEQxnxUmhRVCisAvPFudYVjC4a TwitterAgent.sources.Twitter.accessToken = 921172807-EfMXJj6as2dFECDH1vDe5goyTHcxPrF1RIJozqgx TwitterAgent.sources.Twitter.accessTokenSecret = HbpZEVip3D5j80GP21a37HxA4y10dH9BHcgEFXUNcA9xy
  • 223. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Agent Configuration TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientiest, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing TwitterAgent.sinks.HDFS.channel = MemChannel TwitterAgent.sinks.HDFS.type = hdfs TwitterAgent.sinks.HDFS.hdfs.path = hdfs://xx.xx.xx.xx:8020/user/flume/tweets/ TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000 TwitterAgent.sinks.HDFS.hdfs.rollSize = 0 TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000 TwitterAgent.channels.MemChannel.type = memory TwitterAgent.channels.MemChannel.capacity = 10000 TwitterAgent.channels.MemChannel.transactionCapacity = 100
  • 224. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Restart Flume
  • 225. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 View an agent log file
  • 226. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 View an agent log file
  • 227. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 View a result using Hue
  • 228. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Stop the agent
• 229. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Analyse data using Hive Get a SerDe jar file for parsing JSON: $ wget http://files.cloudera.com/samples/hive-serdes-1.0-SNAPSHOT.jar $ mv hive-serdes-1.0-SNAPSHOT.jar /usr/local/apache-hive-1.1.0-bin/lib/ Register the jar file: $ hive hive> ADD JAR /usr/local/apache-hive-1.1.0-bin/lib/hive-serdes-1.0-SNAPSHOT.jar;
• 230. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Analyse data using Hive (cont.) Run the Hive command that creates an external table over the tweets directory using the JSON SerDe; a reduced sketch is shown below, and the full command is in this tutorial: http://www.thecloudavenue.com/2013/03/analyse-tweets-using-flume-hadoop-and.html
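A reduced sketch of that table definition, covering only the fields queried on the next slide (the full definition in the linked tutorial includes many more tweet attributes; the SerDe class name assumes the hive-serdes jar added above):
hive> CREATE EXTERNAL TABLE tweets (
  id BIGINT,
  created_at STRING,
  text STRING,
  user STRUCT<screen_name:STRING, name:STRING, followers_count:INT>)
  ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
  LOCATION '/user/flume/tweets';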
• 231. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Analyse data using Hive (cont.) hive> select user.screen_name, user.followers_count c from tweets order by c desc; Finding the user with the most followers
  • 233. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Introduction Open-source message broker project An open-source message broker project developed by the Apache Software Foundation written in Scala. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. It is, in its essence, a "massively scalable pub/sub message queue architected as a distributed transaction log", making it highly valuable for enterprise infrastructures.
• 234. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 What is Kafka? An Apache project initially developed at LinkedIn Distributed publish-subscribe messaging system Designed for processing real-time activity stream data, e.g. logs, metrics collections Written in Scala Does not follow the JMS standard, nor does it use JMS APIs
  • 235. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Kafka: Features Persistent messaging High-throughput Supports both queue and topic semantics Uses Zookeeper for forming a cluster of nodes (producer/consumer/broker) and many more…
• 236. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Why Kafka? ● Built with speed and scalability in mind ● Enabled near real-time access to any data source ● Empowered Hadoop jobs ● Allowed us to build real-time analytics ● Vastly improved our site monitoring and alerting capability ● Enabled us to visualize and track our call graphs
  • 237. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Messaging System Concept: Queue Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
  • 238. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Messaging System Concept: Topic Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
• 239. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Terminology Kafka maintains feeds of messages in categories called topics. Processes that publish messages to a Kafka topic are called producers. Processes that subscribe to topics and process the feed of published messages are called consumers. Kafka is run as a cluster comprising one or more servers, each of which is called a broker.
  • 240. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Kafka Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
  • 241. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Topics Topic: feed name to which messages are published Source: Apache Kafka with Spark Streaming - Real Time Analytics Redefined
  • 242. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Topics Source: Apache Kafka with Spark Streaming - Real Time Analytics Redefined
• 243. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Topics A topic consists of partitions. Partition: an ordered, immutable sequence of messages that is continually appended to Source: Apache Kafka with Spark Streaming - Real Time Analytics Redefined
  • 244. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Kafka Architecture Source: Real time Analytics with Apache Kafka and Spark, Rahul Jain
  • 245. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Hands-on SparkStreaming with Kafka
• 246. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Install & Start Kafka Server # wget http://www-us.apache.org/dist/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz # tar xzf kafka_2.10-0.9.0.1.tgz # cd kafka_2.10-0.9.0.1 # bin/kafka-server-start.sh config/server.properties &
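Depending on the broker's auto-create setting, you may need to create the topics explicitly before producing to them; a minimal sketch using the bundled script (this assumes ZooKeeper is already running on localhost:2181, as it is when the QuickStart services are up):
# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# bin/kafka-topics.sh --list --zookeeper localhost:2181
The same command with --topic spark-topic prepares the topic used in the Spark Streaming exercise below.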
  • 247. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running Kafka Producer # bin/kafka-console-producer.sh --topic test --broker-list localhost:9092 type some random messages followed by Ctrl-D to finish
  • 248. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running Kafka Consumer # bin/kafka-console-consumer.sh --topic test --zookeeper localhost:2181 --from-beginning
  • 249. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Start Spark-shell with extra memory
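The command itself appears as a screenshot; a plausible equivalent (the memory value is an assumption) is:
$ spark-shell --driver-memory 2g
If KafkaUtils is not already on the shell's classpath in your installation, you may also need to pass the spark-streaming-kafka assembly jar via --jars; the jar location is installation-specific.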
• 250. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Spark Streaming with Kafka scala> :paste import org.apache.spark.SparkConf import org.apache.spark.streaming.{Seconds, StreamingContext} import org.apache.spark.storage.StorageLevel import StorageLevel._ import org.apache.spark._ import org.apache.spark.streaming._ import org.apache.spark.streaming.StreamingContext._ import org.apache.spark.streaming.kafka.KafkaUtils val ssc = new StreamingContext(sc, Seconds(2)) val kafkaStream = KafkaUtils.createStream(ssc, "localhost:2181", "spark-streaming-consumer-group", Map("spark-topic" -> 5)) kafkaStream.print() ssc.start
• 251. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Running Kafka Producer on another terminal # docker ps # docker exec -i -t c77e4dc1ed9b /bin/bash [root@quickstart ~]# cd /root/kafka_2.10-0.9.0.1 [root@quickstart kafka_2.10-0.9.0.1]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic spark-topic
  • 252. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Test & View the result Result from another terminal
  • 253. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2
  • 254. Thanachart Numnonda, [email protected] July 2016Hadoop Workshop using Cloudera on Amazon EC2 Thank you www.imcinstitute.com www.facebook.com/imcinstitute