Hands-on: Exercise
Machine Learning using
Apache Spark MLlib
July 2016
Dr. Thanachart Numnonda
IMC Institute
thanachart@imcinstitute.com
What is MLlib?
Source: MapR Academy
What is MLlib?
MLlib is a Spark subproject providing machine learning primitives:
– initial contribution from AMPLab, UC Berkeley
– shipped with Spark since version 0.8
– 33 contributors
MLlib Algorithms
Classification: logistic regression, linear support vector machine (SVM), naive Bayes
Regression: generalized linear regression (GLM)
Collaborative filtering: alternating least squares (ALS)
Clustering: k-means
Decomposition: singular value decomposition (SVD), principal component analysis (PCA)
What is in MLlib?
Source: MLlib: Spark's Machine Learning Library, A. Talwalkar
MLlib: Benefits
Part of Spark
Scalable
Support: Python, Scala, Java
Broad coverage of applications & algorithms
Rapid developments in speed & robustness
Machine Learning
Machine learning is a scientific discipline that
explores the construction and study of algorithms
that can learn from data.
[Wikipedia]
Vectors
A point is just a set of numbers. This set of numbers, or coordinates, defines the point's position in space.
Points and vectors are the same thing.
The dimensions of a vector are called features.
Hyperspace is a space with more than three dimensions.
Example: A person has the following dimensions:
– Weight
– Height
– Age
Thus, the point (160,69,24) would be interpreted as 160 lb weight, 69 inches height, and 24 years of age.
Source: Spark Cookbook
Vectors in MLlib
Spark has local vectors and matrices and also distributed matrices.
– A distributed matrix is backed by one or more RDDs.
– A local vector has numeric indices and double values, and is stored on a single machine.
Two types of local vectors in MLlib:
– A dense vector is backed by an array of its values.
– A sparse vector is backed by two parallel arrays, one for indices and another for values.
Example:
– Dense vector: [160.0,69.0,24.0]
– Sparse vector: (3,[0,1,2],[160.0,69.0,24.0])
Source: Spark Cookbook
Vectors in MLlib (cont.)
Library:
– import org.apache.spark.mllib.linalg.{Vectors,Vector}
Signature of Vectors.dense:
– def dense(values: Array[Double]): Vector
Signature of Vectors.sparse:
– def sparse(size: Int, indices: Array[Int], values: Array[Double]): Vector
Example
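(The original slide shows this example only as an image; below is a minimal reconstruction in spark-shell, reusing the person vector from the previous slides.)
scala> import org.apache.spark.mllib.linalg.{Vectors,Vector}
scala> val dvPerson = Vectors.dense(160.0,69.0,24.0)
scala> val svPerson = Vectors.sparse(3,Array(0,1,2),Array(160.0,69.0,24.0))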
Labeled point
A labeled point is a local vector (sparse or dense) which has an associated label.
Labeled data is used in supervised learning to help train algorithms.
The label is stored as a double value in LabeledPoint.
Source: Spark Cookbook
Example
scala> import org.apache.spark.mllib.linalg.{Vectors,Vector}
scala> import org.apache.spark.mllib.regression.LabeledPoint
scala> val willBuySUV = LabeledPoint(1.0,Vectors.dense(300.0,80,40))
scala> val willNotBuySUV = LabeledPoint(0.0,Vectors.dense(150.0,60,25))
scala> val willBuySUV = LabeledPoint(1.0,Vectors.sparse(3,Array(0,1,2),Array(300.0,80,40)))
scala> val willNotBuySUV = LabeledPoint(0.0,Vectors.sparse(3,Array(0,1,2),Array(150.0,60,25)))
Example (cont.)
# vi person_libsvm.txt
scala> import org.apache.spark.mllib.util.MLUtils
scala> import org.apache.spark.rdd.RDD
scala> val persons = MLUtils.loadLibSVMFile(sc,"hdfs:///user/cloudera/person_libsvm.txt")
scala> persons.first()
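For reference, each line of a LIBSVM file is a label followed by index:value pairs (indices are 1-based). The contents of person_libsvm.txt are not shown in the deck, but a file holding the person vectors above might look like this (hypothetical data):
0 1:150 2:60 3:25
1 1:300 2:80 3:40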
Matrices in MLlib
Spark has local matrices and also distributed matrices.
– A distributed matrix is backed by one or more RDDs.
– A local matrix is stored on a single machine.
There are three types of distributed matrices in MLlib:
– RowMatrix: This has each row as a feature vector.
– IndexedRowMatrix: This also has row indices.
– CoordinateMatrix: This is simply a matrix of MatrixEntry. A MatrixEntry represents an entry in the matrix, given by its row and column index.
Source: Spark Cookbook
Example
scala> import org.apache.spark.mllib.linalg.{Vectors,Matrix,Matrices}
scala> val people = Matrices.dense(3,2,Array(150d,60d,25d,300d,80d,40d))
scala> val personRDD = sc.parallelize(List(Vectors.dense(150,60,25),Vectors.dense(300,80,40)))
scala> import org.apache.spark.mllib.linalg.distributed.{IndexedRow,IndexedRowMatrix,RowMatrix,CoordinateMatrix,MatrixEntry}
scala> val personMat = new RowMatrix(personRDD)
Example
scala> print(personMat.numRows)
scala> val personRDD = sc.parallelize(List(IndexedRow(0L,Vectors.dense(150,60,25)),IndexedRow(1L,Vectors.dense(300,80,40))))
scala> val pirmat = new IndexedRowMatrix(personRDD)
scala> val personMat = pirmat.toRowMatrix
scala> val meRDD = sc.parallelize(List(MatrixEntry(0,0,150),MatrixEntry(1,0,60),MatrixEntry(2,0,25),MatrixEntry(0,1,300),MatrixEntry(1,1,80),MatrixEntry(2,1,40)))
scala> val pcmat = new CoordinateMatrix(meRDD)
Statistics functions
Central tendency of data: mean, mode, median
Spread of data: variance, standard deviation
Boundary conditions: min, max
Example
scala> import org.apache.spark.mllib.linalg.{Vectors,Vector}
scala> import org.apache.spark.mllib.stat.Statistics
scala> val personRDD = sc.parallelize(List(Vectors.dense(150,60,25),Vectors.dense(300,80,40)))
scala> val summary = Statistics.colStats(personRDD)
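colStats returns a MultivariateStatisticalSummary, from which the individual statistics can be read directly:
scala> summary.mean // mean value for each column
scala> summary.variance // column-wise variance
scala> summary.min
scala> summary.max
scala> summary.count // number of rows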
Hands-on
Movie Recommendation
Recommendation
Source: MLlib: Spark's Machine Learning Library, A. Talwalkar
Recommendation: Collaborative Filtering
Source: MLlib: Spark's Machine Learning Library, A. Talwalkar
Recommendation
Source: MLlib: Spark's Machine Learning Library, A. Talwalkar
Recommendation: ALS
Source: MLlib: Spark's Machine Learning Library, A. Talwalkar
Alternating least squares (ALS)
Source: MLlib: Scalable Machine Learning on Spark, X. Meng
MLlib: ALS Algorithm
numBlocks is the number of blocks used to parallelize computation (set to -1 to autoconfigure)
rank is the number of latent factors in the model
iterations is the number of iterations to run
lambda specifies the regularization parameter in ALS
implicitPrefs specifies whether to use the explicit feedback ALS variant or one adapted for implicit feedback data
alpha is a parameter applicable to the implicit feedback variant of ALS that governs the baseline confidence in preference observations
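These parameters map onto MLlib's two training entry points. A minimal sketch, assuming an RDD[Rating] named ratings such as the one built in the hands-on below (the parameter values are arbitrary illustrations):
scala> import org.apache.spark.mllib.recommendation.ALS
scala> // explicit feedback: ALS.train(ratings, rank, iterations, lambda)
scala> val explicitModel = ALS.train(ratings, 50, 10, 0.01)
scala> // implicit feedback: ALS.trainImplicit(ratings, rank, iterations, lambda, alpha)
scala> val implicitModel = ALS.trainImplicit(ratings, 50, 10, 0.01, 1.0)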
MovieLens Dataset
1) Type command > wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
2) Type command > yum install unzip
3) Type command > unzip ml-100k.zip
4) Type command > more ml-100k/u.user
Moving dataset to HDFS
1) Type command > cd ml-100k
2) Type command > hadoop fs -mkdir /user/cloudera/movielens
3) Type command > hadoop fs -put u.user /user/cloudera/movielens
4) Type command > hadoop fs -put u.data /user/cloudera/movielens
5) Type command > hadoop fs -put u.genre /user/cloudera/movielens
6) Type command > hadoop fs -put u.item /user/cloudera/movielens
7) Type command > hadoop fs -ls /user/cloudera/movielens
Start Spark-shell with extra memory
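The command itself appears only as a screenshot in the original deck; a typical invocation (the memory size is an assumption, adjust it to your machine) is:
$ spark-shell --driver-memory 4g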
Extracting features from the MovieLens dataset
scala> val rawData = sc.textFile("hdfs:///user/cloudera/movielens/u.data")
scala> rawData.first()
scala> val rawRatings = rawData.map(_.split("\t").take(3))
scala> rawRatings.first()
scala> import org.apache.spark.mllib.recommendation.Rating
scala> val ratings = rawRatings.map { case Array(user, movie, rating) => Rating(user.toInt, movie.toInt, rating.toDouble) }
scala> ratings.first()
Training the recommendation model
scala> import org.apache.spark.mllib.recommendation.ALS
scala> val model = ALS.train(ratings, 50, 10, 0.01)
Note: We'll use a rank of 50, 10 iterations, and a lambda parameter of 0.01.
Inspecting the recommendations
scala> val movies = sc.textFile("hdfs:///user/cloudera/movielens/u.item")
scala> val titles = movies.map(line => line.split("\\|").take(2)).map(array => (array(0).toInt,array(1))).collectAsMap()
Inspecting the recommendations (cont.)
scala> val moviesForUser = ratings.keyBy(_.user).lookup(789)
scala> moviesForUser.sortBy(-_.rating).take(10).map(rating => (titles(rating.product), rating.rating)).foreach(println)
Top 10 Recommendations for user 789
scala> val topKRecs = model.recommendProducts(789,10)
scala> topKRecs.map(rating => (titles(rating.product), rating.rating)).foreach(println)
Evaluating Performance: Mean Squared Error
scala> val actualRating = moviesForUser.take(1)(0)
scala> val predictedRating = model.predict(789, actualRating.product)
scala> val squaredError = math.pow(predictedRating - actualRating.rating, 2.0)
Overall Mean Squared Error
scala> val usersProducts = ratings.map{ case Rating(user, product, rating) => (user, product) }
scala> val predictions = model.predict(usersProducts).map{ case Rating(user, product, rating) => ((user, product), rating) }
scala> val ratingsAndPredictions = ratings.map{ case Rating(user, product, rating) => ((user, product), rating) }.join(predictions)
scala> val MSE = ratingsAndPredictions.map{ case ((user, product), (actual, predicted)) => math.pow((actual - predicted), 2) }.reduce(_ + _) / ratingsAndPredictions.count
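Taking the square root gives the root mean squared error (RMSE), which is on the same scale as the ratings themselves and is usually easier to interpret:
scala> val RMSE = math.sqrt(MSE)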
Clustering using K-Means
Clustering use cases
Market segmentation
Social network analysis: Finding a coherent group of people in the social network for ad targeting
Data center computing clusters
Real estate: Identifying neighborhoods based on similar features
Text analysis: Dividing text documents, such as novels or essays, into genres
Sample Data
Source: Mahout in Action
Distance Measures
Source: www.edureka.in/data-science
Distance Measures
Euclidean distance measure
Squared Euclidean distance measure
Manhattan distance measure
Cosine distance measure
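A minimal sketch of these four measures using Breeze (the same linear-algebra library used in the clustering hands-on below); the two vectors are illustrative:
scala> :paste
import breeze.linalg.{DenseVector, norm}
import breeze.numerics.{abs, pow}
val v1 = DenseVector(1.0, 2.0, 3.0)
val v2 = DenseVector(4.0, 5.0, 6.0)
val squaredEuclidean = pow(v1 - v2, 2).sum // sum of squared differences
val euclidean = math.sqrt(squaredEuclidean) // straight-line distance
val manhattan = abs(v1 - v2).sum // sum of absolute differences
val cosineDistance = 1.0 - (v1 dot v2) / (norm(v1) * norm(v2)) // 1 - cosine similarity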
K-Means Clustering
Source: www.edureka.in/data-science
Example of K-Means Clustering
http://stanford.edu/class/ee103/visualizations/kmeans/kmeans.html
K-Means with different distance measures
Source: Mahout in Action
Choosing number of clusters
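The chart on this slide (an elbow plot) is an image. A sketch of how the numbers behind such a plot can be produced, training K-means for a range of k and computing the within-cluster sum of squared errors with KMeansModel.computeCost (this assumes the movieVectors RDD built in the hands-on below; the range of k is an arbitrary choice):
scala> :paste
import org.apache.spark.mllib.clustering.KMeans
// cost for k = 2..10; look for the "elbow" where the curve starts to flatten
for (k <- 2 to 10) {
  val model = KMeans.train(movieVectors, k, 10)
  println(s"k=$k cost=${model.computeCost(movieVectors)}")
}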
Dimensionality reduction
Process of reducing the number of dimensions or
features.
Dimensionality reduction serves several purposes
– Data compression
– Visualization
The most popular algorithm: Principal component
analysis (PCA).
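A minimal PCA sketch on an MLlib RowMatrix, projecting the rows onto the top k principal components (the input RDD[Vector] named data and the choice k = 2 are illustrative placeholders):
scala> :paste
import org.apache.spark.mllib.linalg.distributed.RowMatrix
val mat = new RowMatrix(data) // data: RDD[Vector]
val pc = mat.computePrincipalComponents(2) // local Matrix; columns are the components
val projected = mat.multiply(pc) // rows projected into 2 dimensions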
Dimensionality reduction
Source: Spark Cookbook
Dimensionality reduction with SVD
Singular value decomposition (SVD) is based on a theorem from linear algebra: a rectangular matrix A can be broken down into the product of three matrices, A = UΣVᵀ, where U and V have orthonormal columns and Σ is a diagonal matrix of singular values.
Dimensionality reduction with SVD
The basic idea behind SVD:
– Take a high-dimensional, highly variable set of data points.
– Reduce it to a lower-dimensional space that exposes the structure of the original data more clearly and orders it from the most variation to the least.
We can then simply ignore variation below a certain threshold to massively reduce the original data, while making sure that the original relationships of interest are retained.
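In MLlib, a truncated SVD can be computed directly on a RowMatrix; a minimal sketch (the input RDD[Vector] named data and k = 2 are illustrative):
scala> :paste
import org.apache.spark.mllib.linalg.distributed.RowMatrix
val mat = new RowMatrix(data) // data: RDD[Vector]
val svd = mat.computeSVD(2, computeU = true) // keep the top 2 singular values
val U = svd.U // distributed RowMatrix
val s = svd.s // singular values, in descending order
val V = svd.V // local Matrix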
Hands-on
Clustering on MovieLens Dataset
Extracting features from the MovieLens dataset
scala> val movies = sc.textFile("hdfs:///user/cloudera/movielens/u.item")
scala> println(movies.first)
scala> val genres = sc.textFile("hdfs:///user/cloudera/movielens/u.genre")
scala> genres.take(5).foreach(println)
Extracting features from the MovieLens dataset (cont.)
scala> val genreMap = genres.filter(!_.isEmpty).map(line => line.split("\\|")).map(array => (array(1), array(0))).collectAsMap
Extracting features from the MovieLens dataset (cont.)
scala> val titlesAndGenres = movies.map(_.split("\\|")).map { array =>
  val genres = array.toSeq.slice(5, array.size)
  val genresAssigned = genres.zipWithIndex.filter { case (g, idx) =>
    g == "1"
  }.map { case (g, idx) =>
    genreMap(idx.toString)
  }
  (array(0).toInt, (array(1), genresAssigned))
}
Training the recommendation model
scala> :paste
import org.apache.spark.mllib.recommendation.ALS
import org.apache.spark.mllib.recommendation.Rating
val rawData = sc.textFile("hdfs:///user/cloudera/movielens/u.data")
val rawRatings = rawData.map(_.split("\t").take(3))
val ratings = rawRatings.map{ case Array(user, movie, rating) => Rating(user.toInt, movie.toInt, rating.toDouble) }
ratings.cache
val alsModel = ALS.train(ratings, 50, 10, 0.1)
import org.apache.spark.mllib.linalg.Vectors
val movieFactors = alsModel.productFeatures.map { case (id, factor) => (id, Vectors.dense(factor)) }
val movieVectors = movieFactors.map(_._2)
val userFactors = alsModel.userFeatures.map { case (id, factor) => (id, Vectors.dense(factor)) }
val userVectors = userFactors.map(_._2)
Normalization
scala> :paste
import org.apache.spark.mllib.linalg.distributed.RowMatrix
val movieMatrix = new RowMatrix(movieVectors)
val movieMatrixSummary = movieMatrix.computeColumnSummaryStatistics()
val userMatrix = new RowMatrix(userVectors)
val userMatrixSummary = userMatrix.computeColumnSummaryStatistics()
println("Movie factors mean: " + movieMatrixSummary.mean)
println("Movie factors variance: " + movieMatrixSummary.variance)
println("User factors mean: " + userMatrixSummary.mean)
println("User factors variance: " + userMatrixSummary.variance)
Output from Normalization
Training a clustering model
scala> import org.apache.spark.mllib.clustering.KMeans
scala> val numClusters = 5
scala> val numIterations = 10
scala> val numRuns = 3
scala> val movieClusterModel = KMeans.train(movieVectors, numClusters, numIterations, numRuns)
Making predictions using a clustering model
scala> val movie1 = movieVectors.first
scala> val movieCluster = movieClusterModel.predict(movie1)
scala> val predictions = movieClusterModel.predict(movieVectors)
Interpreting cluster predictions
scala> :paste
import breeze.linalg._
import breeze.numerics.pow
def computeDistance(v1: DenseVector[Double], v2: DenseVector[Double]) = pow(v1 - v2, 2).sum
val titlesWithFactors = titlesAndGenres.join(movieFactors)
val moviesAssigned = titlesWithFactors.map { case (id, ((title, genres), vector)) =>
  val pred = movieClusterModel.predict(vector)
  val clusterCentre = movieClusterModel.clusterCenters(pred)
  val dist = computeDistance(DenseVector(clusterCentre.toArray), DenseVector(vector.toArray))
  (id, title, genres.mkString(" "), pred, dist)
}
Interpreting cluster predictions (cont.)
val clusterAssignments = moviesAssigned.groupBy { case (id, title, genres, cluster, dist) => cluster }.collectAsMap
for ((k, v) <- clusterAssignments.toSeq.sortBy(_._1)) {
  println(s"Cluster $k:")
  val m = v.toSeq.sortBy(_._5)
  println(m.take(20).map { case (_, title, genres, _, d) => (title, genres, d) }.mkString("\n"))
  println("=====\n")
}
Real-time Machine Learning
using Streaming K-Means
Online learning with Spark Streaming
Streaming regression
– trainOn: This takes DStream[LabeledPoint] as its
argument.
– predictOn: This also takes DStream[LabeledPoint].
Streaming KMeans
– An extension of the mini-batch K-means algorithm
Streaming K-Means Program
MovieLens Training Dataset
● The rows of the training text files must be vector data in the form [x1,x2,x3,...,xn]
1) Type command > wget https://s3.amazonaws.com/imcbucket/data/movietest.data
2) Type command > more movietest.data
Install & Start Kafka Server
# wget http://www-us.apache.org/dist/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz
# tar xzf kafka_2.10-0.9.0.1.tgz
# cd kafka_2.10-0.9.0.1
# bin/kafka-server-start.sh config/server.properties &
Start Spark-shell with extra memory
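Again the command is shown only as a screenshot. Besides extra memory, this exercise needs the Kafka integration for Spark Streaming on the classpath; a plausible invocation (the assembly jar name, version, and memory size are assumptions and must match your Spark/Scala build):
$ spark-shell --driver-memory 4g --jars spark-streaming-kafka-assembly_2.10-1.6.0.jar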
Streaming K-Means
scala> :paste
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel
import StorageLevel._
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.kafka.KafkaUtils
val ssc = new StreamingContext(sc, Seconds(2))
val kafkaStream = KafkaUtils.createStream(ssc, "localhost:2181", "spark-streaming-consumer-group", Map("java-topic" -> 5))
val lines = kafkaStream.map(_._2)
val ratings = lines.map(Vectors.parse)
val numDimensions = 3
val numClusters = 5
val model = new StreamingKMeans()
  .setK(numClusters)
  .setDecayFactor(1.0)
  .setRandomCenters(numDimensions, 0.0)
model.trainOn(ratings)
model.predictOn(ratings).print()
ssc.start()
ssc.awaitTermination()
Running HelloKafkaProducer in another window
● Open a new ssh window
Java Code: Kafka Producer
import java.util.Properties;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import java.io.*;

public class HelloKafkaProducer {
    final static String TOPIC = "java-topic";

    public static void main(String[] argv){
        Properties properties = new Properties();
        properties.put("metadata.broker.list","localhost:9092");
        properties.put("serializer.class","kafka.serializer.StringEncoder");
Java Code: Kafka Producer (cont.)
        try(BufferedReader br = new BufferedReader(new FileReader(argv[0]))) {
            ProducerConfig producerConfig = new ProducerConfig(properties);
            kafka.javaapi.producer.Producer<String,String> producer =
                new kafka.javaapi.producer.Producer<String,String>(producerConfig);
            String line = br.readLine();
            while (line != null) {
                KeyedMessage<String, String> message =
                    new KeyedMessage<String, String>(TOPIC, line);
                producer.send(message);
                line = br.readLine();
            }
Java Code: Kafka Producer (cont.)
            producer.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
Compile & Run the program
// Use a vi editor to edit the source code
# vi HelloKafkaProducer.java
// Alternatively
# wget https://s3.amazonaws.com/imcbucket/apps/HelloKafkaProducer.java
// Compile the program
# export CLASSPATH=".:/root/kafka_2.10-0.9.0.1/libs/*"
# javac HelloKafkaProducer.java
// Prepare the data
# cd
# wget https://s3.amazonaws.com/imcbucket/data/movietest.data
# cd kafka_2.10-0.9.0.1
// Run the program
# java HelloKafkaProducer /root/movietest.data
Example Result
Recommended Books
Thank you
www.imcinstitute.com
www.facebook.com/imcinstitute