International Journal of Advanced Research in Engineering and Technology (IJARET), ISSN 0976 –
6480(Print), ISSN 0976 – 6499(Online) Volume 5, Issue 5, May (2014), pp. 91-101 © IAEME
EXPERIMENTAL EVALUATION OF DIFFERENT CLASSIFICATION
TECHNIQUES FOR WEB PAGE CLASSIFICATION
Ms. Rutu Joshi1
, Priyank Thakkar2
Department of Computer Science and Engineering,
Institute of Technology, Nirma University, Ahmedabad
ABSTRACT
Classification of web pages is essential for improving the quality of web search, focused
crawling, and the development of web directories such as Yahoo and ODP. This paper compares various
classification techniques for the task of web page classification. The techniques compared include
k-Nearest Neighbours (KNN), Naive Bayes (NB), Support Vector Machine (SVM), Classification and
Regression Trees (CART), Random Forest (RF) and Particle Swarm Optimization (PSO). The impact of
using different representations of web pages is also studied. The representations used comprise
Boolean, bag-of-words and Term Frequency-Inverse Document Frequency (TFIDF). Experiments are
performed on the WebKB and R8 data sets, with accuracy and F-measure as the evaluation measures.
The impact of feature selection on the accuracy of the classifiers is also demonstrated.
Keywords: Classification and Regression Trees (CART), K-Nearest Neighbours (KNN), Naive
Bayes (NB), Particle Swarm Optimization (PSO), Random Forest, Support Vector Machine (SVM),
Web Page Classification.
1. INTRODUCTION
The internet consists of millions of web pages for virtually every search term, and these
pages provide highly useful information. Search engines help users retrieve web pages related to a
keyword, but sifting through those innumerable pages is tedious. Web pages are also dynamic and
volatile in nature, and there is no unique format for them: some are unstructured (plain text),
some are semi-structured (HTML pages) and some are structured (databases). This heterogeneity of
formats on the web presents additional challenges for classification. Hence it is important to find
a technique which accurately classifies web pages and provides only the most
relevant web pages. Classification is a supervised data mining technique that assigns pre-defined
classes to data instances: a classifier is learnt from a training data set, and the trained
classifier then assigns class labels to the test data set. In web page classification, web pages
are assigned to pre-defined classes mainly according to their content [1].
The rest of the paper is organized as follows. Section 2 focuses on related work. Section 3
discusses different classifiers for web page classification. The implementation methodology is
described in Section 4, and Section 5 concludes the paper.
2. RELATED WORK
Classification of web pages using various techniques has been studied extensively. Four
classification techniques, namely decision trees, k-nearest neighbour, SVM and naive Bayes, were
discussed in [4]. The paper focussed on obtaining accurate system results: when the decision tree
gave an accurate result, the Bayesian network did not, and vice versa, owing to their different
operational profiles. Since many methods of web page classification have been proposed, no clear
conclusion about the best method was obtained. In [5], the best result was obtained with an SVM
employing a linear kernel function (followed by the k-nearest neighbours method) and a term
frequency (TF) document model, using feature selection by mutual information score. Special
attention was paid to the treatment of short documents, which occur frequently on the web.
In [6], authors concentrated on the effects of using context features (text, title and anchor
words) in web page classification using SVM classifiers. Experiments showed that the SVM technique
gave very good results on the WebKB data set even when using the text components only. Also, the
accuracy of classification improved significantly when context features consisting of title
components and anchor words were used. But elimination of anchor words could not render
consistently good classification results for all the classes of data set. The performance of SVM was
compared using four different kernel functions in [7]. Experimental results showed that out of these
four kernel functions, Analysis of Variance (ANOVA) kernel function yielded the best result.
Thereafter, Latent Semantic Analysis SVM (LSA-SVM), Weighted Vote SVM (WVSVM) and Back
Propagation Neural Network (BPN) were also compared. From the experimental results it was
concluded that WVSVM could classify accurately even with a small data set: even if the smaller
category had less training data, WVSVM could still classify those web pages with acceptable
accuracy. In [9], LS-SVM yielded better accuracy even with a small data set, together with faster
speed and a reduced runtime of the algorithm.
PSO produced more accurate classification models than associative classifiers in [10]. PSO
was used for classifying multidimensional real data set in [11], where the parameters were tuned in
such a way that it gave the best result. PSO, KNN, Naive Bayes and Decision Tree classification
techniques were applied on the Reuter-21578 and TREC-AP data sets and the results were compared in
[12]. The experimental results indicated that PSO yielded much better performance than other
conventional algorithms.
Three different fitness functions were used in [13] on different data sets. PSO was compared
with nine other classification techniques, like Multi-Layer Perceptron Artificial Neural Network
(MLP), Bayes Network, Naive Bayes Tree etc. Here, PSO ranked fourth, quite close to its
predecessors. Also, PSO seemed effective for two-class problems, but contrasting results were
obtained for more than two classes; hence, no clear conclusion was inferred.
3. CLASSIFICATION METHODS
The following classifiers are used in this paper for the task of web page classification.
3.1 KNN Classifier
K-Nearest Neighbour is a lazy learning method [2]: the training data set is not used to
train a classifier in advance. For a test instance t, the KNN method compares t with the training
data set to find the k most similar training instances, and then returns the class represented by
the majority of those k instances. Normally, k = 1 is not opted for classification, due to noise
and other anomalies in the data set. Hence, k = 3 is chosen for KNN classification in this study.
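As an illustration, the neighbour search and majority vote described above can be sketched in a few lines of Python. This is a minimal sketch rather than the implementation used in the study; cosine similarity over term-weight vectors is assumed as the similarity measure, and the function names are hypothetical.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two equal-length term-weight vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(train, test_vec, k=3):
    # train: list of (vector, label) pairs. Rank training instances by
    # similarity to the test vector and return the majority label
    # among the k most similar ones.
    ranked = sorted(train, key=lambda vl: cosine(vl[0], test_vec), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

With k = 3, a tie among the neighbours is broken by whichever class appears first, which is one reason an odd k is preferred for two-class problems.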
3.2 Naive Bayes Classifier
Naive Bayes classifier is based on Bayes' theorem. Here, classification is considered as
estimating the posterior probabilities of class C for test instance X:

P(C|X) = P(X|C) P(C) / P(X)

P(X|C) = P(x1, x2, ..., xn | C) = product over i = 1 to n of P(xi | C)

where P(C|X) is the posterior probability of the class given the attributes, P(X|C) is the
probability of the predictors given the class, P(C) is the prior probability of the class and P(X)
is the prior probability of the predictors. The posterior probability is calculated for each class,
and the class with the highest probability is assigned to the test instance.
Based on how the web pages are represented, an appropriate distribution is fitted to the
data. A Gaussian distribution is fitted when web pages are represented by TFIDF scores of the
terms; for the Boolean representation a multivariate Bernoulli distribution is fitted, and for the
bag-of-words representation a multinomial distribution.
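The posterior computation above can be sketched as a minimal multinomial Naive Bayes for the bag-of-words representation. Log-probabilities with add-one (Laplace) smoothing are used for numerical stability; the paper does not specify its smoothing, so that choice and all names here are assumptions.

```python
import math
from collections import Counter, defaultdict

def train_multinomial_nb(docs):
    # docs: list of (term_count_vector, label) pairs. Returns, per class,
    # the log prior P(C) and log term likelihoods P(xi|C) with
    # add-one (Laplace) smoothing.
    n_terms = len(docs[0][0])
    counts = defaultdict(lambda: [0] * n_terms)   # per-class term counts
    totals, class_n = Counter(), Counter()
    for vec, label in docs:
        class_n[label] += 1
        for i, c in enumerate(vec):
            counts[label][i] += c
            totals[label] += c
    model = {}
    for label in class_n:
        prior = math.log(class_n[label] / len(docs))
        like = [math.log((counts[label][i] + 1) / (totals[label] + n_terms))
                for i in range(n_terms)]
        model[label] = (prior, like)
    return model

def nb_predict(model, vec):
    # Assign the class with the highest posterior log-probability
    # log P(C) + sum_i xi * log P(term_i | C).
    def score(label):
        prior, like = model[label]
        return prior + sum(c * lp for c, lp in zip(vec, like))
    return max(model, key=score)
```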
3.3 Support Vector Machine (SVM)
SVM is one of the most popular classification methods. SVM is a supervised learning
technique and can be used for both classification and regression. In general, linear SVMs are used
for binary classification; for more than two classes, several binary SVMs are combined. To build a
classifier, SVM finds a maximum-margin hyperplane f(x) = w . x + b. An input vector xi is then
assigned to the positive class if f(xi) >= 0, and to the negative class otherwise. In essence, SVM
finds a hyperplane w . x + b = 0 that separates positive and negative training examples; this
hyperplane is called the decision boundary or decision surface [2]. The objective is to maximize
the hyperplane's margin between the positive and negative data points.
If the data set is noisy, a linear SVM may not be able to find a solution; in this case,
soft-margin SVMs are used. Also, if the data set cannot be separated linearly, kernel functions are
used. A kernel function transforms the original space into a higher-dimensional space so that a
linear decision boundary can be formed in the transformed space to accurately separate positive and
negative examples. The transformed space is called the feature space. Kernel functions include
polynomial functions, linear kernels, etc.
There are various methods to find the separating hyperplane. The "Least Squares (LS)"
method finds the solution by solving a set of linear equations. The "Sequential Minimal
Optimization (SMO)" method breaks the problem into two-variable sub-problems that can be solved
analytically, eliminating the need for a numerical optimization algorithm.
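As a rough illustration of the soft-margin objective, the sketch below trains a linear SVM by stochastic sub-gradient descent on the hinge loss (a Pegasos-style update, not the LS or SMO solvers mentioned above). Labels are +1/-1, the bias term is omitted for brevity (so the data are assumed separable through the origin), and all names are hypothetical.

```python
import random

def train_linear_svm(data, lam=0.01, epochs=100, seed=0):
    # Stochastic sub-gradient descent on the soft-margin hinge-loss
    # objective: lam/2 * ||w||^2 + mean(max(0, 1 - y * (w . x))).
    # data: list of (feature_vector, label) with labels +1 / -1.
    rng = random.Random(seed)
    w, t = [0.0] * len(data[0][0]), 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):   # shuffled pass over the data
            t += 1
            eta = 1.0 / (lam * t)                  # decaying step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1.0 - eta * lam) * wi for wi in w]
            if margin < 1:                         # hinge loss is active
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def svm_classify(w, x):
    # Decision rule: positive class iff f(x) = w . x >= 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```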
3.4 Classification and Regression Tree (CART)
CART was first developed by Breiman et al. [8]. It is a non-parametric decision tree
learning technique that produces either classification or regression trees, depending on whether
the dependent variable is categorical or numeric, respectively. In CART, leaves represent class
labels, while branches represent the conditions that lead to those leaves. The decision tree
consists of combinations of feature tests that determine a class for a test instance. CART uses
historical data to construct decision trees which thereafter classify new data. In order to use
CART for categorizing instances, the number of classes must be known a priori. To perform
classification/regression for a test instance, the decisions in the tree are followed from the root
node to a leaf node, and the leaf node predicts the result for the test instance. Classification
trees give nominal responses such as 'true' or 'false'; regression trees give numeric responses.
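The impurity-driven split selection at the heart of CART can be illustrated with a small Gini-based search for the best single (feature, threshold) split. This is a hypothetical sketch of one node's decision, not the full recursive tree-growing procedure.

```python
def gini(labels):
    # Gini impurity of a list of class labels: 1 - sum of squared
    # class proportions (0 for a pure node).
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(data):
    # data: list of (feature_vector, label). Exhaustively search for the
    # (feature, threshold) split with the lowest weighted Gini impurity.
    best = None
    for f in range(len(data[0][0])):
        for thr in sorted({x[f] for x, _ in data}):
            left = [y for x, y in data if x[f] <= thr]
            right = [y for x, y in data if x[f] > thr]
            if not left or not right:
                continue                       # degenerate split, skip
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(data)
            if best is None or score < best[0]:
                best = (score, f, thr)
    return best  # (weighted impurity, feature index, threshold)
```

Growing a full CART tree would apply this search recursively to the left and right partitions until the leaves are pure or a stopping rule fires.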
3.5 Random Forest
Random Forest consists of an ensemble of decision trees and may be used for either
classification or regression. Each tree is trained on a different bootstrap sample of the training
data set (containing roughly two-thirds of the distinct observations). To predict class labels for
the test data, Random Forest aggregates the predictions of the individual trees. For estimating the
prediction error, predictions are computed for each tree on its out-of-bag observations (those
observations that were not used to train that tree); these predictions are then aggregated over the
entire ensemble for each observation and compared with the true value of that observation. Here, an
ensemble of 50 trees is used for classifying web pages with Random Forest.
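The bootstrap sampling and ensemble voting described above can be sketched as follows. These are hypothetical helper functions: a full forest would train one tree (e.g., via the CART procedure) per bootstrap sample and combine the trees' predictions with the majority vote.

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # Draw len(data) observations with replacement. On average about
    # two-thirds of the distinct observations land in the sample; the
    # remainder form the "out-of-bag" (OOB) set for error estimation.
    n = len(data)
    idx = [rng.randrange(n) for _ in range(n)]
    in_bag = set(idx)
    oob = [i for i in range(n) if i not in in_bag]
    return [data[i] for i in idx], oob

def majority_vote(predictions):
    # Ensemble prediction for one instance: the class predicted by
    # the largest number of trees.
    return Counter(predictions).most_common(1)[0][0]
```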
3.6 Particle Swarm Optimization (PSO)
The PSO is a population-based stochastic optimization method first proposed by Kennedy
and Eberhart [3] in 1995. It is simple as well as efficient in global search. In PSO, each particle
represents a possible solution. PSO finds optimal solution using this swarm of particles. PSO
Algorithm is of two types: global best (gbest) PSO and local best (lbest) PSO. In gbest PSO, the
neighbourhood of the particle is the entire swarm while in lbest PSO, a particle may have social or
geographical neighbourhood. The PSO algorithm starts with initializing the position and velocity of
each particle. The function that is to be optimized for the PSO algorithm is called the fitness
function. For each iteration, the velocity of the particles is updated by considering the previous
velocity along with the personal best and global best position.
Vij(t+1) = Vij(t) + C1 * R1 * (Pib(t) - Xij(t)) + C2 * R2 * (Pigb(t) - Xij(t))

where Vij(t) is the velocity at iteration t, C1 and C2 are acceleration constants, R1 and R2 are
random values in the range [0, 1], Pib(t) is the personal best position of the particle at
iteration t, Xij(t) is the position of the particle at iteration t and Pigb(t) is the global best
position. The personal best position is obtained by comparing the fitness of all the previous
positions of the particle and selecting the position with the best fitness value; the global best
position is the personal best position with the best fitness value over the whole swarm. The
position of the particle is then updated using the new velocity and the old position:

Xij(t+1) = Xij(t) + Vij(t+1)
These iterations are repeated until the algorithm satisfies the stopping criteria, which may
be a fixed number of iterations or the point at which the motion of the particles ceases. The
algorithm returns the position of the particle having the best fitness value.
Selection of appropriate parameters is essential for the algorithm to render the best
results. For distinguishing web pages, a separating hyperplane is used; so, in this study, fifty
particles, namely the hyperplanes obtained from SVMs, are initially used for PSO. The initial
velocity of all particles is zero, the values of C1 and C2 are 2 and 0.8 respectively, and the
algorithm is iterated 10 times.
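A minimal gbest PSO loop for the update equations above might look as follows. The values C1 = 2 and C2 = 0.8 follow this study's settings; the inertia weight w is a common stabilising addition that the update equation above does not include, and the sphere function in the test is purely illustrative. All names are hypothetical.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, c1=2.0, c2=0.8, w=0.7, seed=0):
    # Minimal gbest PSO minimising `fitness`. Velocity and position follow
    # v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v.
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]      # zero initial velocity
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = fitness(X[i])
            if f < pbest_f[i]:                         # update personal best
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:                        # update global best
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f
```

In the study's setting, each particle would encode an SVM hyperplane and the fitness function would score its classification quality instead of the toy objective used here.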
Fig 4.1: PSO Algorithm
4. IMPLEMENTATION METHODOLOGY
4.1 Performance Parameters
F-measure is used to measure the performance of the algorithms. It combines precision and
recall as their harmonic mean, and is also known as the F1 measure since precision and recall are
evenly weighted:

F-measure = (2 * Precision * Recall) / (Precision + Recall)
Here, Precision (also called positive predictive value) is the ratio of the number of true
positive elements to the total number of elements predicted as positive (regardless of whether they
actually are positive):

Precision = True Positive / (True Positive + False Positive)
Here, Recall (also known as sensitivity) is the ratio of the number of true positive
elements to the total number of elements that are actually positive:

Recall = True Positive / (True Positive + False Negative)
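The three measures can be computed directly from a list of predictions. The sketch below assumes a single designated positive class (for multi-class evaluation the scores would be computed per class and averaged); the function name is hypothetical.

```python
def precision_recall_f1(y_true, y_pred, positive):
    # Count true positives, false positives and false negatives for
    # the designated positive class, then apply the three formulas.
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```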
4.2 Data sets
Two data sets are used for the experiments.
1) WebKB data set
It consists of 4 classes: Project (Training 335, Testing 166 web pages), Course (Training 620,
Testing 306 web pages), Faculty (Training 745, Testing 371 web pages) and Student (Training 1085,
Testing 540 web pages). The web pages in the data set are represented by means of 7771 terms
appearing in these web pages.
2) R8 data set of Reuters-21578
It consists of 8 classes: acq (Training 1596, Testing 696 web pages), crude (Training 253, Testing
121 web pages), earn (Training 2840, Testing 1083 web pages), grain (Training 41, Testing 10 web
pages), interest (Training 190, Testing 81 web pages), money-fx (Training 206, Testing 87 web
pages), ship (Training 108, Testing 36 web pages) and trade (Training 251, Testing 75 web pages).
The web pages in this data set are represented by means of 17386 terms appearing in the web pages
of this data set.
4.3 Pre-Processing of Web Pages
Pre-processing of web pages is necessary to improve the subsequent classification process.
First of all, all the terms are converted to lower case. Each word in the document is extracted and
the stop words are removed from the data set. The Boolean, TFIDF and bag-of-words representations
are then obtained. The Boolean representation consists of zeroes and ones: zero indicates the
absence of a word in the web page, while one indicates its presence. In the bag-of-words
representation, the number of times a specific word appears in the web page is used as the value of
the feature corresponding to that word. In the TFIDF representation, the feature values are the
TFIDF scores of the words.
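The three representations can be derived from tokenised pages as sketched below. This is a hypothetical helper: the raw-frequency TF and log(N/df) IDF used here are one common TFIDF variant, and the paper does not specify its exact weighting.

```python
import math

def vectorize(docs, scheme="tfidf"):
    # docs: list of token lists (already lower-cased, stop words removed).
    # Returns (vocabulary, vectors) under the Boolean, bag-of-words
    # ("bow") or TFIDF scheme.
    vocab = sorted({t for d in docs for t in d})
    n = len(docs)
    df = {t: sum(1 for d in docs if t in d) for t in vocab}  # document frequency
    vecs = []
    for d in docs:
        row = []
        for t in vocab:
            tf = d.count(t)                   # raw term frequency
            if scheme == "boolean":
                row.append(1 if tf else 0)    # presence / absence
            elif scheme == "bow":
                row.append(tf)                # occurrence count
            else:                             # tfidf: tf * log(N / df)
                row.append(tf * math.log(n / df[t]) if tf else 0.0)
        vecs.append(row)
    return vocab, vecs
```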
4.4 Feature Selection
Feature selection focuses on removing redundant or irrelevant attributes. Redundant features
are those which provide no more information than the currently selected features, and irrelevant
features provide no useful information in any context. Feature selection reduces the set of terms to be
used in classification, thus improving both efficiency and accuracy. In this paper, feature selection is
done using Information Gain [14]. Information Gain helps us determine which attributes in a given
training set are most useful for discriminating between classes; it tells us how important a given
attribute of the feature vector is.

InfoGain(Class, Attribute) = H(Class) - H(Class | Attribute)

where H stands for entropy. Entropy measures the level of impurity in a group:

Entropy = - sum over i of pi * log2(pi)
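The entropy and information-gain formulas above can be computed directly. The sketch below scores a single attribute against the class labels (hypothetical names); ranking attributes by this score and keeping the top ones is the feature-selection step.

```python
import math
from collections import Counter

def entropy(labels):
    # H = -sum(p_i * log2(p_i)) over the class proportions p_i.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(pairs):
    # pairs: list of (attribute_value, class_label).
    # InfoGain = H(Class) - H(Class | Attribute), where the conditional
    # entropy weights each attribute value's group by its proportion.
    labels = [c for _, c in pairs]
    n = len(pairs)
    by_val = {}
    for v, c in pairs:
        by_val.setdefault(v, []).append(c)
    cond = sum(len(g) / n * entropy(g) for g in by_val.values())
    return entropy(labels) - cond
```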
4.5 Results and Discussions
All the classification methods discussed in Section 3 are applied to the three different
representations of the web page collections of both data sets. The number of features is varied to
see the impact on the performance of the classifiers.
Figure 4.1: Impact of feature selection on F-measure (WebKB data set, Boolean representation)
Figure 4.2: Impact of feature selection on F-measure (WebKB data set, TFIDF representation)
Figures 4.1 to 4.3 show the impact of feature selection on F-measure for the Boolean, TFIDF
and bag-of-words representations respectively for the WebKB data set. Similar results for the R8
data set are depicted in Figures 4.4 to 4.6. It can be seen that a classifier learnt using an
appropriate number of
features improves the performance. It is also evident that LS-SVM is most sensitive to the number of
features.
Figure 4.3: Impact of feature selection on F-measure (WebKB data set, bag-of-words representation)
Figure 4.4: Impact of feature selection on F-measure (R8 data set, Boolean representation)
Figures 4.7 to 4.9 depict the best performance of each of the classification techniques for
both data sets. The number of features at which each technique performed best, along with the best
performance, is also shown in these figures.
Figure 4.5: Impact of feature selection on F-measure (R8 data set, TFIDF representation)
Figure 4.6: Impact of feature selection on F-measure (R8 data set, bag-of-words representation)
It can be seen that Random Forest achieves the best results on both data sets. However, the
best results are obtained with the bag-of-words representation for the R8 data set, while for the
WebKB data set the best results are achieved with the TFIDF representation.
5. CONCLUSIONS
This paper addresses the task of classifying web pages using various classification
techniques. The performance of KNN, NB, SVM, CART, RF and PSO is compared for different
representations of web pages. Among all the methods, Random Forest (RF) gives the best overall
result. The results also demonstrate that the performance of a classifier is affected by the
representation used.
Figure 4.7: Best F-measure (Boolean representation)
Figure 4.8: Best F-measure (TFIDF representation)
Figure 4.9: Best F-measure (Bag-of-words representation)
It can be seen from the results that different classification techniques perform best for
different representations of the web pages. This implies that there is no single representation
which works best for all the classification techniques; one should select the representation based
on the technique to be used. The impact of feature selection is also studied in this paper, and the
results show that selecting the right number of features definitely improves the performance of the
classifier.
REFERENCES
[1] Thair Nu Phyu, “Survey of Classification Techniques in Data Mining”, Proceedings of the
International MultiConference of Engineers and Computer Scientists, vol. I, 2009.
[2] Bing Liu, Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data (Data-Centric
Systems and Applications), Springer-Verlag New York, Inc., Secaucus, NJ, 2006.
[3] R.C.Eberhart, J.Kennedy, “A new optimizer using particle swarm theory”, Proceedings of the 6th
Symposium MicroMachine and Human Science, IEEE Press, Los Alamitos, CA, October 1995,
pp. 39-43.
[4] M. A. Nayak, “A comparative study of web page classification techniques," GIT Journal of
Engineering and Technology, vol. 6, 2013.
[5] J. Materna, “Automatic web page classification," 2008.
[6] W.-K. N. Aixin Sun, Ee-Peng Lim, “Web classification using support vector machine."
Proceedings of the fourth international workshop on Web information and data management -
WIDM '02, 2002.
[7] R.-C. Chen and C.-H. Hsieh, “Web page classification based on a support vector machine using a
weighted vote schema," Expert Systems with Applications, vol. 31, 2006
[8] Breiman, L. and Friedman, J. H. and Olshen, R. A. and Stone, "Classification and Regression
Trees", 1984.
[9] L.-b. X. Yong Zhang, Bin Fan, “Web page classification based-on a least square support vector
machine with latent semantic analysis," Fifth International Conference on Fuzzy Systems and
Knowledge Discovery, 2008.
[10] D. Radha Damodaram, “Phishing website detection and optimization using particle swarm
optimization technique," 2011.
[11] A. K. J. Sarita Mahapatra and B. Naik, “Performance evaluation of pso based classifier for
classification of multidimensional data with variation of pso parameters in knowledge discovery
database," vol. 34, 2011.
[12] D. Z. Ziqiang Wang, Qingzhou Zhang, “A pso-based web document classification algorithm,"
Eighth ACIS International Conference on Software Engineering, Artificial Intelligence,
Networking, and Parallel/Distributed Computing (SNPD2007), 2007.
[13] E. T. De Falco, A. Della Cioppa, “Facing classification problems with particles warm
optimization," Applied Soft Computing, vol. 7, 2007.
[14] Jiawei Han, Micheline Kamber and Jian Pei, “Data mining: concepts and techniques”, Morgan
Kaufmann, 2006.
[15] R. Manickam, D. Boominath and V. Bhuvaneswari, “An Analysis of Data Mining: Past, Present
and Future”, International Journal of Computer Engineering & Technology (IJCET), Volume 3,
Issue 1, 2012, pp. 1 - 9, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[16] Sandip S. Patil and Asha P. Chaudhari, “Classification of Emotions from Text using SVM Based
Opinion Mining”, International Journal of Computer Engineering & Technology (IJCET),
Volume 3, Issue 1, 2012, pp. 330 - 338, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[17] Prof. Sindhu P Menon and Dr. Nagaratna P Hegde, “Research on Classification Algorithms and
its Impact on Web Mining”, International Journal of Computer Engineering & Technology
(IJCET), Volume 4, Issue 4, 2013, pp. 495 - 504, ISSN Print: 0976 – 6367, ISSN Online:
0976 – 6375.
[18] Priyank Thakkar, Samir Kariya and K Kotecha, “Web Page Clustering using Cemetery
Organization Behavior of Ants”, International Journal of Advanced Research in Engineering
& Technology (IJARET), Volume 5, Issue 1, 2014, pp. 7 - 17, ISSN Print: 0976-6480,
ISSN Online: 0976-6499.
[19] Alamelu Mangai J, Santhosh Kumar V and Sugumaran V, “Recent Research in Web Page
Classification – A Review”, International Journal of Computer Engineering & Technology
(IJCET), Volume 1, Issue 1, 2010, pp. 112 - 122, ISSN Print: 0976 – 6367, ISSN Online:
0976 – 6375.
PDF
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
IAEME_Publication_Call_for_Paper_September_2022.pdf
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
GANDHI ON NON-VIOLENT POLICE
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT

Recently uploaded (20)

PDF
Encapsulation theory and applications.pdf
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
NewMind AI Weekly Chronicles - August'25-Week II
PDF
Advanced methodologies resolving dimensionality complications for autism neur...
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PDF
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
PDF
August Patch Tuesday
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PPTX
Programs and apps: productivity, graphics, security and other tools
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PPTX
TLE Review Electricity (Electricity).pptx
PDF
A comparative analysis of optical character recognition models for extracting...
PDF
Getting Started with Data Integration: FME Form 101
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PDF
Univ-Connecticut-ChatGPT-Presentaion.pdf
PPTX
Spectroscopy.pptx food analysis technology
PPTX
OMC Textile Division Presentation 2021.pptx
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
Encapsulation theory and applications.pdf
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
NewMind AI Weekly Chronicles - August'25-Week II
Advanced methodologies resolving dimensionality complications for autism neur...
Digital-Transformation-Roadmap-for-Companies.pptx
Reach Out and Touch Someone: Haptics and Empathic Computing
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
August Patch Tuesday
Building Integrated photovoltaic BIPV_UPV.pdf
Programs and apps: productivity, graphics, security and other tools
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
TLE Review Electricity (Electricity).pptx
A comparative analysis of optical character recognition models for extracting...
Getting Started with Data Integration: FME Form 101
Encapsulation_ Review paper, used for researhc scholars
Diabetes mellitus diagnosis method based random forest with bat algorithm
Univ-Connecticut-ChatGPT-Presentaion.pdf
Spectroscopy.pptx food analysis technology
OMC Textile Division Presentation 2021.pptx
Per capita expenditure prediction using model stacking based on satellite ima...

20120140505011

This heterogeneous format of the web presents additional challenges for classification. Hence it is important to find a technique that accurately classifies web pages and returns only the most relevant ones. Classification is a data mining technique which predicts pre-defined classes for data sets. It is a supervised learning technique: the classifier is learnt from the training data set, and the trained classifier then assigns class labels to the testing data set. In web page classification, web pages are assigned to pre-defined classes mainly according to their content [1].

The rest of the paper is organized as follows. Section 2 focuses on related work. Section 3 discusses different classifiers for web page classification. Implementation methodology is described in Section 4. The paper ends with conclusions in Section 5.

2. RELATED WORK

Classification of web pages using various techniques has been studied extensively. Four classification techniques, namely decision trees, k-nearest neighbour, SVM and naive Bayes, were discussed in [4]. The paper focussed on obtaining accurate system results. When the decision tree gave an accurate result, the Bayesian network did not, and vice versa, owing to their different operational profiles. Since many methods of web page classification have been proposed, no clear conclusion about the best method was reached. In [5], the best result was obtained with an SVM employing a linear kernel function (followed by the method of k-nearest neighbours) and a term frequency (TF) document model, using feature selection by mutual information score. Special attention was paid to the treatment of short documents, which occur frequently on the web. In [6], the authors concentrated on the effects of using context features (text, title and anchor words) in web page classification with SVM classifiers. Experiments showed that the SVM technique gave very good results on the WebKB data set even when using the text components only.
Also, the accuracy of classification improved significantly when context features consisting of title components and anchor words were used, but eliminating anchor words did not yield consistently good classification results for all classes of the data set. The performance of SVM with four different kernel functions was compared in [7]. Experimental results showed that, of these four kernel functions, the Analysis of Variance (ANOVA) kernel yielded the best result. Thereafter, Latent Semantic Analysis SVM (LSA-SVM), Weighted Vote SVM (WVSVM) and Back Propagation Neural Network (BPN) were also compared. From the experimental results it was concluded that WVSVM can classify accurately even with a small data set: even when the smaller category had less training data, WVSVM could still classify those web pages with acceptable accuracy. In [9], even with a small data set, LS-SVM yielded better accuracy with faster speed and reduced runtime. PSO produced more accurate classification models than associative classifiers in [10]. PSO was used for classifying a multidimensional real data set in [11], where the parameters were tuned to give the best result. PSO, KNN, Naive Bayes and decision tree classification techniques were applied to the Reuters-21578 and TREC-AP data sets and the results were compared in [12]; the experimental results indicated that PSO yielded much better performance than the other, conventional algorithms. Three different fitness functions were used in [13] on different data sets, and PSO was compared with nine other classification techniques, such as Multi-Layer Perceptron Artificial Neural Network (MLP), Bayes Network and Naive Bayes Tree. Here, PSO came fourth, quite close to its predecessors. PSO also seemed effective for two-class problems, but contrasting results were obtained for more than two classes, so no clear conclusion could be inferred.
3. CLASSIFICATION METHODS

The following classifiers are used in this paper for the task of web page classification.

3.1 KNN Classifier

K-Nearest Neighbour is a lazy learning method [2]: the training data set is not used to build an explicit model. For a test instance t, the KNN method compares t with the training data set to find the k most similar training instances, and then returns the class that occurs most frequently among these k instances. Normally k = 1 is not chosen for classification because of noise and other anomalies in the data set. Hence, k = 3 is chosen for KNN classification in this study.

3.2 Naive Bayes Classifier

The Naive Bayes classifier is based on Bayes' theorem. Here, classification is treated as estimating the posterior probability of class C for a test instance X:

P(C|X) = P(X|C) P(C) / P(X)

P(X|C) = P(x1, x2, ..., xn | C) = ∏ P(xi | C), i = 1, ..., n

where P(C|X) is the posterior probability of the class given the attributes, P(X|C) is the probability of the predictor given the class, P(C) is the prior probability of the class and P(X) is the prior probability of the predictor. A probability is calculated for each class, and the class with the highest probability is assigned to the test instance. Based on how the web pages are represented, an appropriate distribution is fitted to the data: a Gaussian distribution when web pages are represented by TFIDF scores of the terms, a multivariate Bernoulli distribution for the Boolean representation and a multinomial distribution for the bag-of-words representation.

3.3 Support Vector Machine (SVM)

SVM is one of the most popular classification methods. It uses supervised learning and can be used for both classification and regression.
In general, linear SVMs are used for binary classification; for more than two classes, several binary SVMs are combined. To build a classifier, SVM finds a maximum-margin hyperplane f(x) = w · x + b. An input vector xi is then assigned to the positive class if f(xi) >= 0, and to the negative class otherwise. In essence, SVM finds a hyperplane w · x + b = 0 that separates positive and negative training examples; this hyperplane is called the decision boundary or decision surface [2]. The objective is to maximize the hyperplane's margin between the positive and negative data points. If the data set is noisy, a linear SVM may not be able to find a solution; in this case, soft-margin SVMs are used. Also, if the data set cannot be separated linearly, kernel functions are used. A kernel function transforms the original space into a higher-dimensional space so that a linear decision boundary can be formed in the transformed space to accurately separate positive and negative examples; the transformed space is called the feature space. Kernel functions include polynomial functions, linear kernels, etc. There are various methods for finding the separating hyperplane. The "Least Square (LS)" method finds the solution by solving a set of linear equations. The "Sequential Minimal Optimization (SMO)" method breaks the problem into two-dimensional sub-problems that can be solved analytically, eliminating the need for a numerical optimization algorithm.
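The linear decision rule above can be sketched in a few lines. This is an illustrative example only: the weight vector and bias are hypothetical values standing in for what training (e.g. by the LS or SMO methods) would produce, not the paper's implementation.

```python
# Sketch of the linear SVM decision rule f(x) = w . x + b described above.
# The hyperplane (w, b) below is hypothetical, standing in for a trained model.

def svm_predict(w, b, x):
    """Positive class if f(x) = w . x + b >= 0, negative class otherwise."""
    f = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if f >= 0 else -1

w, b = [0.5, -1.0], 0.25              # hypothetical trained hyperplane
print(svm_predict(w, b, [2.0, 0.5]))  # f = 0.75, so prints 1
print(svm_predict(w, b, [0.0, 2.0]))  # f = -1.75, so prints -1
```

The same rule applies unchanged after a kernel transformation; only the space in which w and x live differs.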
3.4 Classification and Regression Tree (CART)

CART was first developed by Breiman et al. [8]. It is a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively. In CART, leaves represent class labels, while branches represent the conditions that lead to those leaves. The decision tree consists of combinations of features that help determine a class for a test instance. CART uses historical data to construct decision trees which then classify new data. In order to use CART for categorizing instances, the number of classes must be known a priori. To perform classification or regression for a test instance, the decisions in the tree are followed from the root node to a leaf node; the leaf node predicts the result for the test instance. Classification trees give nominal responses such as 'true' or 'false'; regression trees give numeric responses.

3.5 Random Forest

A Random Forest is an ensemble of decision trees that may be used for either classification or regression. To train each tree, a different subset of the training data set (roughly two-thirds) is selected. To predict class labels for the testing data set, Random Forest aggregates the predictions of the individual trees. For estimating the prediction error, predictions are computed for each tree on its out-of-bag observations (those observations that were not used to train that tree); these predictions are then averaged over the entire ensemble for each observation and compared with the true value of that observation. In this study, an ensemble of 50 trees is used for classifying web pages with random forest.
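The aggregation step of a random forest can be sketched as a majority vote over the trees' individual predictions. The three "trees" below are trivial hypothetical stand-ins for illustration; a real forest would, as described above, grow each of its 50 trees on a different subset of the training data.

```python
# Sketch of random-forest aggregation: majority vote over the trees' votes.
# The three "trees" are hypothetical stump-like functions, not trained models.
from collections import Counter

def forest_predict(trees, x):
    """Return the class predicted by the most trees for instance x."""
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

trees = [
    lambda x: "course" if x[0] > 0.5 else "student",  # thresholds feature 0
    lambda x: "course" if x[1] > 0.2 else "student",  # thresholds feature 1
    lambda x: "student",                              # constant tree
]
print(forest_predict(trees, [0.9, 0.6]))  # two of three trees vote "course"
```

For regression trees the vote would be replaced by an average of the numeric predictions.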
3.6 Particle Swarm Optimization (PSO)

PSO is a population-based stochastic optimization method first proposed by Kennedy and Eberhart [3] in 1995. It is simple as well as efficient in global search. In PSO, each particle represents a possible solution, and the swarm of particles is used to find an optimal solution. The PSO algorithm comes in two variants: global best (gbest) PSO and local best (lbest) PSO. In gbest PSO, the neighbourhood of a particle is the entire swarm, while in lbest PSO a particle may have a social or geographical neighbourhood. The PSO algorithm starts by initializing the position and velocity of each particle. The function to be optimized is called the fitness function. In each iteration, the velocity of each particle is updated using its previous velocity together with its personal best and the global best position:

Vij(t+1) = Vij(t) + C1 * R1 * (Pib(t) - Xij(t)) + C2 * R2 * (Pigb(t) - Xij(t))

where Vij(t) is the velocity at iteration t, C1 and C2 are acceleration constants, R1 and R2 are random values in the range [0, 1], Pib(t) is the personal best position of the particle at iteration t, Xij(t) is the position of the particle at iteration t and Pigb(t) is the global best position. The personal best position is found by comparing the fitness of all previous positions of the particle and selecting the position with the best fitness value; the global best position is the personal best position, over all particles, with the best fitness value. The position of the particle is then updated using the new velocity and the old position:

Xij(t+1) = Xij(t) + Vij(t+1)

These iterations are repeated until the algorithm satisfies its stopping criteria, which may be a fixed number of iterations or the point at which the motion of the particles ceases. The algorithm returns the position of the particle with the best fitness value.
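A single velocity/position update can be sketched directly from the equations above. The values here are hypothetical: C1 = 2 and C2 = 0.8 match the parameters used in this study, while R1 and R2 would normally be drawn uniformly from [0, 1] but are fixed at 0.5 so the arithmetic is reproducible.

```python
# Sketch of one PSO update step following the velocity and position
# equations above. All positions and bests are hypothetical values.

def pso_step(x, v, p_best, g_best, c1=2.0, c2=0.8, r1=0.5, r2=0.5):
    """Return (new position, new velocity) for one particle."""
    v_new = [vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, p_best, g_best)]
    x_new = [xi + vni for xi, vni in zip(x, v_new)]  # position uses the new velocity
    return x_new, v_new

x, v = [1.0, 1.0], [0.0, 0.0]
x_new, v_new = pso_step(x, v, p_best=[2.0, 0.0], g_best=[3.0, 3.0])
print(v_new)  # approximately [1.8, -0.2]
print(x_new)  # approximately [2.8, 0.8]
```

In the full algorithm this step runs for every particle in every iteration, with p_best and g_best refreshed from the fitness function after each move.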
Selection of appropriate parameters is essential for the algorithm to give the best results. Hyperplanes are used to distinguish web pages, so in this study PSO starts with fifty particles, which are the hyperplanes obtained from SVMs. The initial velocity of all particles is zero, the values of C1 and C2 are 2 and 0.8 respectively, and the algorithm is run for 10 iterations.

Fig 4.1: PSO Algorithm

4. IMPLEMENTATION METHODOLOGY

4.1 Performance Parameters

F-measure is used to measure the performance of the algorithms. F-measure combines precision and recall: it is their harmonic mean. It is also known as the F1 measure, since precision and recall are evenly weighted.

F-measure = (2 * Precision * Recall) / (Precision + Recall)

Here, Precision (also called positive predictive value) is the ratio of the number of true positive elements to the total number of elements predicted as positive (regardless of whether they actually are positive).

Precision = True Positive / (True Positive + False Positive)
Recall (also known as sensitivity) is the ratio of the number of true positive elements to the total number of elements that are actually positive.

Recall = True Positive / (True Positive + False Negative)

4.2 Data Sets

Two data sets are used for the experiments.

1) WebKB data set: it consists of 4 classes: project (335 training, 166 testing web pages), course (620 training, 306 testing web pages), faculty (745 training, 371 testing web pages) and student (1085 training, 540 testing web pages). The web pages in this data set are represented by means of the 7771 terms appearing in them.

2) R8 data set of Reuters-21578: it consists of 8 classes: acq (1596 training, 696 testing web pages), crude (253 training, 121 testing web pages), earn (2840 training, 1083 testing web pages), grain (41 training, 10 testing web pages), interest (190 training, 81 testing web pages), money-fx (206 training, 87 testing web pages), ship (108 training, 36 testing web pages) and trade (251 training, 75 testing web pages). The web pages in this data set are represented by means of the 17386 terms appearing in them.

4.3 Pre-Processing of Web Pages

Pre-processing of web pages is necessary to improve the subsequent classification process. First, all terms are converted to lower case. Each word in the document is extracted and the stop words are removed. The Boolean, TFIDF and bag-of-words representations are then obtained. The Boolean representation consists of zeroes and ones, with zero indicating the absence and one indicating the presence of the word in the web page. In the bag-of-words representation, the number of times the specific word appears in the web page is used as the value of the feature corresponding to that word.
In the TFIDF representation, feature values are the TFIDF scores of the words.

4.4 Feature Selection

Feature selection focuses on removing redundant or irrelevant attributes. Redundant features are those which provide no more information than the currently selected features; irrelevant features provide no useful information in any context. Feature selection reduces the set of terms to be used in classification, thus improving both efficiency and accuracy. In this paper, feature selection is done using Information Gain [14]. Information Gain helps determine which attributes in a given training set are most useful for discriminating between the classes; it tells us how important a given feature is.

InfoGain(Class, Attribute) = H(Class) - H(Class | Attribute)

where H stands for entropy. Entropy measures the level of impurity in a group:

Entropy = Σi (-pi * log2 pi)
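The information-gain computation above can be sketched for a single Boolean term feature. The tiny labelled "collection" below is hypothetical, chosen so the term perfectly separates the two classes.

```python
# Sketch of InfoGain(Class, Attribute) = H(Class) - H(Class | Attribute)
# for one Boolean term feature. The labels and feature values are hypothetical.
from collections import Counter
from math import log2

def entropy(labels):
    """H = sum over classes of -p * log2(p)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(labels, feature):
    """feature[i] is 1 if the term occurs in document i, else 0."""
    n = len(labels)
    conditional = sum(
        (len(subset) / n) * entropy(subset)
        for value in set(feature)
        for subset in [[l for l, f in zip(labels, feature) if f == value]]
    )
    return entropy(labels) - conditional

labels  = ["course", "course", "student", "student"]
feature = [1, 1, 0, 0]  # the term appears only in "course" pages
print(info_gain(labels, feature))  # 1.0: the term removes all class impurity
```

Ranking every term by this score and keeping the top-scoring ones is the feature-selection step whose effect is studied in Section 4.5.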
4.5 Results and Discussion

All the classification methods discussed in Section 3 are applied to the three different representations of the web page collections of both data sets. The number of features is varied to observe the impact on the performance of the classifiers.

Figure 4.1: Impact of feature selection on F-measure (WebKB data set, Boolean representation)

Figure 4.2: Impact of feature selection on F-measure (WebKB data set, TFIDF representation)

Figures 4.1 to 4.3 show the impact of feature selection on F-measure for the Boolean, TFIDF and bag-of-words representations respectively for the WebKB data set. Similar results for the R8 data set are depicted in Figures 4.4 to 4.6. It can be seen that a classifier learnt using an appropriate number of
features improves the performance. It is also evident that LS-SVM is the most sensitive to the number of features.

Figure 4.3: Impact of feature selection on F-measure (WebKB data set, bag-of-words representation)

Figure 4.4: Impact of feature selection on F-measure (R8 data set, Boolean representation)

Figures 4.7 to 4.9 depict the best performance of each of the classification techniques for both data sets. The number of features at which each technique performed best, along with the best performance itself, is also shown in these figures.
Figure 4.5: Impact of feature selection on F-measure (R8 data set, TFIDF representation)

Figure 4.6: Impact of feature selection on F-measure (R8 data set, bag-of-words representation)

It can be seen that Random Forest achieves the best results for both data sets. However, the best results are obtained with the bag-of-words representation for the R8 data set, while for the WebKB data set the best results are achieved with the TFIDF representation.

5. CONCLUSIONS

This paper addresses the task of classifying web pages using various classification techniques. The performance of KNN, NB, SVM, CART, RF and PSO is compared for the different possible representations of web pages. Among all the methods, Random Forest (RF) gives the best overall result. The results also demonstrate that the performance of a classifier is affected by the representation used.
Figure 4.7: Best F-measure (Boolean representation)

Figure 4.8: Best F-measure (TFIDF representation)

Figure 4.9: Best F-measure (bag-of-words representation)

It can be seen from the results that different classification techniques perform best for different representations of the web pages. This implies that there is no single representation which works best for all the classification techniques; one should select the representation based on the technique to be used. The impact of feature selection is also studied in the paper, and the results show that selecting the right number of features definitely improves the performance of the classifier.
REFERENCES

[1] Thair Nu Phyu, "Survey of Classification Techniques in Data Mining", Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. I, 2009.
[2] Bing Liu, Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data (Data-Centric Systems and Applications), Springer-Verlag New York, Inc., Secaucus, NJ, 2006.
[3] R. C. Eberhart and J. Kennedy, "A New Optimizer Using Particle Swarm Theory", Proceedings of the 6th Symposium on Micro Machine and Human Science, IEEE Press, Los Alamitos, CA, October 1995, pp. 39-43.
[4] M. A. Nayak, "A Comparative Study of Web Page Classification Techniques", GIT Journal of Engineering and Technology, vol. 6, 2013.
[5] J. Materna, "Automatic Web Page Classification", 2008.
[6] Aixin Sun, Ee-Peng Lim and W.-K. Ng, "Web Classification Using Support Vector Machine", Proceedings of the Fourth International Workshop on Web Information and Data Management (WIDM '02), 2002.
[7] R.-C. Chen and C.-H. Hsieh, "Web Page Classification Based on a Support Vector Machine Using a Weighted Vote Schema", Expert Systems with Applications, vol. 31, 2006.
[8] L. Breiman, J. H. Friedman, R. A. Olshen and C. J. Stone, Classification and Regression Trees, 1984.
[9] Yong Zhang, Bin Fan and L.-b. Xu, "Web Page Classification Based on a Least Square Support Vector Machine with Latent Semantic Analysis", Fifth International Conference on Fuzzy Systems and Knowledge Discovery, 2008.
[10] D. Radha Damodaram, "Phishing Website Detection and Optimization Using Particle Swarm Optimization Technique", 2011.
[11] Sarita Mahapatra, A. K. Jagadev and B. Naik, "Performance Evaluation of PSO Based Classifier for Classification of Multidimensional Data with Variation of PSO Parameters in Knowledge Discovery Database", vol. 34, 2011.
[12] Ziqiang Wang, Qingzhou Zhang and D. Zhang, "A PSO-Based Web Document Classification Algorithm", Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007), 2007.
[13] I. De Falco, A. Della Cioppa and E. Tarantino, "Facing Classification Problems with Particle Swarm Optimization", Applied Soft Computing, vol. 7, 2007.
[14] Jiawei Han, Micheline Kamber and Jian Pei, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2006.
[15] R. Manickam, D. Boominath and V. Bhuvaneswari, "An Analysis of Data Mining: Past, Present and Future", International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2012, pp. 1-9.
[16] Sandip S. Patil and Asha P. Chaudhari, "Classification of Emotions from Text using SVM Based Opinion Mining", International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2012, pp. 330-338.
[17] Sindhu P Menon and Nagaratna P Hegde, "Research on Classification Algorithms and its Impact on Web Mining", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 4, 2013, pp. 495-504.
[18] Priyank Thakkar, Samir Kariya and K Kotecha, "Web Page Clustering using Cemetery Organization Behavior of Ants", International Journal of Advanced Research in Engineering & Technology (IJARET), Volume 5, Issue 1, 2014, pp. 7-17.
[19] Alamelu Mangai J, Santhosh Kumar V and Sugumaran V, "Recent Research in Web Page Classification - A Review", International Journal of Computer Engineering & Technology (IJCET), Volume 1, Issue 1, 2010, pp. 112-122.