International Journal of Data Mining & Knowledge Management Process (IJDKP) Vol.3, No.4, July 2013
DOI: 10.5121/ijdkp.2013.3404
SOURCE CODE RETRIEVAL USING
SEQUENCE BASED SIMILARITY
Yoshihisa Udagawa
Faculty of Engineering, Tokyo Polytechnic University, Atsugi City, Kanagawa, Japan
udagawa@cs.t-kougei.ac.jp
ABSTRACT
Duplicate code adversely affects the quality of software systems and hence should be detected. We discuss
an approach that improves source code retrieval using structural information of source code. A lexical
parser is developed to extract control statements and method identifiers from Java programs. We propose a
similarity measure that is defined by the ratio of the number of sequential fully matching statements to the
number of sequential partially matching statements. The defined similarity measure is an extension of the
set-based Sorensen-Dice similarity index. This research primarily contributes to the development of a
similarity retrieval algorithm that derives meaningful search conditions from a given sequence, and then
performs retrieval using all derived conditions. Experiments show that our retrieval model achieves
an improvement of up to 90.9% over other retrieval models in terms of the number of retrieved methods.
KEYWORDS
Java source code, Control statement, Method identifier, Similarity measure, Derived sequence retrieval
model
1. INTRODUCTION
Several studies have shown that approximately 5%–20% of a program can consist of duplicate code
[2, 13]. Such duplications often result from copy-paste operations, which are simple
and can significantly reduce programming time and effort when the same functionality is
required.
In many cases, duplicate code has an adverse effect on the quality of software systems,
particularly the maintainability and comprehensibility of source code. For example, duplicate
code increases the probability of update anomalies. If a bug is found in a code fragment, all the
similar code fragments should be investigated to fix the bug in question [11, 15]. This coding
practice also produces code that is difficult to maintain and understand, primarily because it is
difficult for maintenance engineers to determine which fragment is the original one and whether
the copied fragment is intentional. Tool support that efficiently and effectively retrieves similar
code is required to support software engineers' activities.
Different approaches for identifying similar code fragments have been proposed in code clone
detection. Based on the level of analysis applied to the source code, clone detection techniques
can be roughly classified into four main groups, i.e., text-based, token-based, structure-based, and
metrics-based.
(1) Text-based approaches
In this approach, the target source program is considered as a sequence of strings. Baker [2]
described an approach that identifies all pairs of matching “parameterized” code fragments.
Johnson [7] proposed an approach to extract repetitions of text and a matching mechanism using
fingerprints on a substring of the source code. Although these methods achieve high performance,
they are sensitive to lexical aspects, such as the presence or absence of new lines and the ordering
of matching lines.
(2) Token-based approaches
In the token-based detection approach, the entire source system is transformed into a sequence of
tokens, which is then analyzed to identify duplicate subsequences. A sub-string matching
algorithm is generally used to find common subsequences. CCFinder [22] adopts the token-based
technique to efficiently detect “copy and paste” code clones. In CCFinder, the similarity metric
between two sets of source code files is defined based on the concept of “correspondence.”
CP-Miner [11] uses a frequent subsequence mining technique to identify a similar sequence of
tokenized statements. Token-based approaches are typically more robust against code changes
compared to text-based approaches.
(3) Structure-based approaches
In this approach, a program is parsed into an abstract syntax tree (AST) or program dependency
graph (PDG). Because ASTs and PDGs contain structural information about the source code,
sophisticated methods can be applied to ASTs and PDGs for the clone detection. CloneDR [3] is
one of the pioneering AST-based clone techniques. Wahler et al. [21] applied frequent itemset
data mining techniques to ASTs represented in XML to detect clones with minor changes.
DECKARD [6] also employs a tree-based approach in which certain characteristic vectors are
computed to approximate the structural information within ASTs in Euclidean space.
Typically, a PDG is defined to contain the control flow and data flow information of a program.
An isomorphic subgraph matching algorithm is applied to identify similar subgraphs. Komondoor
et al. [8] have also proposed a tool for C programs that finds clones. They use PDGs and a
program slicing technique to find clones. Krinke [10] uses an iterative approach (k-length patch
matching) to determine maximal similar subgraphs. Structure-based approaches are generally
robust to code changes, such as reordered, inserted, and deleted code. However, they are not
scalable to large programs.
(4) Metrics-based approaches
Metrics-based approaches calculate metrics from code fragments and compare these metric
vectors rather than directly comparing source code. Kontogiannis et al. [9] developed an abstract
pattern matching tool to measure similarity between two programs using Markov models. Some
common metrics in this approach include a set of software metrics called “fingerprinting” [7], a
set of method-level metrics including McCabe’s cyclomatic complexity [14], and a characteristic
vector to approximate the structural information in ASTs [6].
Our approach is classified as a structure-based comparison. It features a sequence of statements as
a retrieval condition. We have developed a lexical parser to extract source code structure,
including control statements and method identifiers. The extracted structural information is input
to a vector space model [1,12,17], an extended Sorensen-Dice model [4,16,19], and the proposed
source code retrieval model, named the “derived sequence retrieval model” (DSRM). The DSRM
takes a sequence of statements as a retrieval condition and derives meaningful search conditions
from the given sequence. Because a program is composed of a sequence of statements, our
retrieval model improves the performance of source code retrieval.
The remainder of this paper is organized as follows. In Section 2, we present an outline of the
process and the target source code of our research. In Section 3, we define source code similarity
metrics. Retrieval results are discussed in Section 4. In Section 5, we analyze performance in
elapsed time, and Section 6 presents conclusions and suggestions for future work.
2. RESEARCH PROCESS
2.1. Outline
Figure 1 shows an outline of our research process. Generally, similarity retrieval of source code is
performed for a specific purpose. From this perspective, the original source code may include
some uninteresting fragments. We have developed a lexical parser and applied it to a set of
original Java source codes to extract interesting code, which includes class method signatures,
control statements, and method calls. Our parser traces a variable type declaration and class
instantiation to generate an identifier-type list. This list is then used to translate a variable
identifier to its data type. A method call preceded by a variable identifier is converted into a
method call preceded by the data type of that identifier.
Code matching is performed using three retrieval models. The first model is the proposed
DSRM, which takes a sequence of statements as a retrieval condition. The second model is based
on the collection of statements, and is referred to as the derived collection retrieval model
(DCRM). The DCRM is an extension of the Sorensen-Dice index [4,16,19]. The final
retrieval model is the vector space model (VSM) [1,12,17], which was developed to retrieve
natural language documents. Source code can be perceived as a highly structured document;
therefore, comparing the DSRM, DCRM, and VSM provides a baseline evaluation of how the
structure of a document affects retrieval results.
Figure 1. Outline of our research process
2.2. Extracting Source Code Segments
At the beginning of our approach, a set of Java source codes is partitioned into methods. Then,
the statements used for code matching are extracted from each method. The extracted fragments
comprise class method signatures, control statements, and method calls.
(1) Class method signatures
Each method in Java is declared in a class [5]. Our parser extracts class method signatures in the
following syntax.
<class identifier>:<method signature>
An anonymous class, which is a local class without a class declaration, is often used when a local
class is used only once. An anonymous class is defined and instantiated in a single expression
using the new operator to make code concise. Our parser extracts a method declared in an
anonymous class in the following syntax.
<class identifier>:<anonymous class identifier>:<method signature>
Arrays and generics are widely used in Java to facilitate the manipulation of data collections. Our
parser also extracts arrays and generic data types according to Java syntax. For example, Object[],
String[][], List<String>, and List<Integer> are extracted and treated as different data types.
(2) Control statements
Our parser also extracts control statements at various levels of nesting. A block is delimited
by the "{" and "}" symbols; hence, the number of open "{" symbols indicates the nesting
level. The following Java control statements [5] are extracted by our parser.
- If-then (with or without else or else if statements)
- Switch
- While
- Do
- For and enhanced for
- Break
- Continue
- Return
- Throw
- Synchronized
- Try (with or without a catch or finally clause)
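The brace-counting idea behind the extraction can be outlined as follows. This is an illustrative sketch, not the parser developed for this study (which is not published); the toy lexer below ignores strings, comments, and anonymous classes, and records each control keyword or closing brace together with its nesting depth:

```python
import re

# Control keywords from the list above; "if" covers else/else-if branches
# and "try" covers catch/finally clauses, as in the paper's list.
CONTROL = {"if", "switch", "while", "do", "for", "break",
           "continue", "return", "throw", "synchronized", "try"}

def extract_controls(src: str):
    """Return (statement, nesting-depth) pairs; the depth is the number
    of currently open '{' symbols."""
    depth = 0
    out = []
    for tok in re.findall(r"[A-Za-z_]\w*|[{}]", src):
        if tok == "{":
            depth += 1
        elif tok == "}":
            depth -= 1
            out.append(("}", depth))
        elif tok in CONTROL:
            out.append((tok, depth))
    return out

print(extract_controls("void m(int x) { if (x > 0) { run(); } return; }"))
# [('if', 1), ('}', 1), ('return', 1), ('}', 0)]
```

A real parser must additionally skip string literals and comments so that braces inside them do not distort the depth count.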
(3) Method calls
Under the assumption that its method calls characterize a program, our parser extracts the
identifiers of methods called in a Java program. Generally, an instance method call is preceded
by a variable whose type refers to the class to which the method belongs. Our parser traces the
type declaration of a variable and translates the variable identifier to its data type or class identifier, i.e.,
<variable>.<method identifier>
is translated into
<data type>.<method identifier>
or
<class identifier>.<method identifier>.
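This translation step can be sketched as follows. The sketch assumes the identifier-type list has already been built from declarations (here it is simply passed in as a dictionary; the names are illustrative, not from the paper's implementation):

```python
import re

def qualify_calls(stmt: str, id_types: dict) -> str:
    """Rewrite <variable>.<method>( into <data type>.<method>( using an
    identifier-type list built while parsing declarations. Identifiers
    with no known type are left unchanged."""
    def repl(m):
        var, meth = m.group(1), m.group(2)
        return f"{id_types.get(var, var)}.{meth}("
    return re.sub(r"\b(\w+)\.(\w+)\s*\(", repl, stmt)

# e.g. the declaration "Form form = new Form();" yields {"form": "Form"}
print(qualify_calls("form.addParameter(key, value);", {"form": "Form"}))
# Form.addParameter(key, value);
```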
2.3. Extracting Statements of Struts 2
We selected Struts 2.3.1.1 Core as our target because Struts 2 [20] is a popular Java framework
for web applications. We estimated the volume of source code using file metrics. Typical file
metrics are as follows:
Java Files ---- 368
Classes ---- 414
Methods ---- 2,667
Lines of Code ---- 21,543
Comment Lines ---- 17,954
Total Lines ---- 46,100
Struts 2.3.1.1 Core consists of 46,100 lines of source code, including blank and comment lines,
and is classified as mid-scale software in the industry. It comprises 368 Java files, which
differs from the number of declared classes (414) because
some Java files include definitions of inner classes and anonymous classes. Figure 2 shows an
example of the extracted structure of the evaluateClientSideJsEnablement() method in the
Form.java file of the org.apache.struts2.components package. The three numbers preceded by the
# symbol are the number of comment, blank, and code lines, respectively. The extracted
structures include the depth of nesting of the control statements; thus, they supply sufficient
information for retrieving methods using a source code substructure.
Figure 2. Example of extracted structure
3. SIMILARITY METRICS
3.1. Vector Space Model for Documents
The VSM is widely used for retrieving and ranking documents written in natural languages.
Documents and queries are represented as vectors, where each dimension corresponds to a term
occurring in the documents and queries. The documents are ranked against a query by computing
the similarity, defined as the cosine of the angle between the two vectors.
Given a set of documents D, a document dj in D is represented as a vector of term weights:

dj = ( w1,j , w2,j , ... , wN,j )

where N is the total number of terms, and wi,j is the weight of the i-th term in document dj.
There are many variations of the term weighting scheme. Salton et al. [17] proposed the
well-known "term frequency-inverse document frequency" (tf-idf) weighting scheme. According to
this weighting scheme, the weight wi,j of the i-th term in document dj is computed as the
product of the term frequency tfi,j and the inverse document frequency idfi:

wi,j = tfi,j · idfi

The term frequency tfi,j is defined as the number of occurrences of the term i in the document dj.
The inverse document frequency is a measure of the general importance of the term i and is
defined as follows:

idfi = log2( M / dfi )

where M denotes the total number of documents in the collection, and dfi is the number of
documents that contain the term i. A high weight wi,j results from a high term frequency in the
given document and a low document frequency dfi of the term in the whole collection of
documents. Hence, the weights tend to filter out common terms.
A user query can be similarly converted into a vector q:

q = ( w1,q , w2,q , ... , wN,q )

The similarity between document dj and query q can be computed as the cosine of the angle
between the two vectors dj and q in the N-dimensional term space:

Simcos( dj, q ) = Σi=1..N wi,j · wi,q / ( √( Σi=1..N wi,j² ) · √( Σi=1..N wi,q² ) )
This similarity is often referred to as the cosine similarity.
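The tf-idf weighting and cosine similarity above can be sketched in a few lines. This is an illustrative Python sketch that treats each extracted statement as a term (it is not the implementation used in the experiments, which was written in Visual Basic for Excel):

```python
import math

def tfidf_matrix(docs):
    """docs: list of token lists. Returns (vocab, weight vectors) with
    w_{i,j} = tf_{i,j} * log2(M / df_i), following the scheme above."""
    vocab = sorted({t for d in docs for t in d})
    M = len(docs)
    df = {t: sum(1 for d in docs if t in d) for t in vocab}
    return vocab, [[d.count(t) * math.log2(M / df[t]) for t in vocab]
                   for d in docs]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["if", "addParameter", "}"], ["for", "return", "}"]]
vocab, vecs = tfidf_matrix(docs)
# "}" occurs in every document, so its idf (and hence its weight) is 0,
# and the two documents share no other term:
print(cosine(vecs[0], vecs[1]))  # 0.0
```

This also illustrates the filtering effect mentioned above: a term such as "}" that appears in every method contributes nothing to the similarity.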
3.2. Extending Sorensen-Dice Index
Over the last decade, many techniques that detect software cloning and refactoring opportunities
have been proposed. Similarity coefficients play an important role in the literature. However,
most similarity definitions are validated by empirical studies. The choice of measure depends on
the characteristics of the domain to which they are applied. Among many different similarity
indexes, the similarity defined in CloneDR is worth noting. Baxter et al. [3] define the similarity
between two trees T1 and T2 as follows:
Similarity(T1, T2) = 2H / (2H + L + R)
where H is the number of shared nodes in trees T1 and T2, L is the number of unique nodes in
T1, and R is the number of unique nodes in T2. Within the context of a tree structure, this
definition can be seen as an extension of the Sorensen-Dice index.
The Sorensen-Dice index is originally defined on two sets and is formulated as follows:

SimSorensen-Dice( X1, X2 ) = 2|X1∩X2| / ( 2|X1∩X2| + |X1∩¬X2| + |¬X1∩X2| )

Here, |X1∩X2| indicates the number of elements in the intersection of sets X1 and X2, and
|X1∩¬X2| indicates the number of elements of X1 that are not in X2.
Another well-known index is the Jaccard index of binary features, which is defined by the
following formula:

SimJaccard( X1, X2 ) = |X1∩X2| / ( |X1∩X2| + |X1∩¬X2| + |¬X1∩X2| )
In software, the Sorensen-Dice index and the Jaccard index are known experimentally to produce
better results than other indexes, such as a simple matching index, which counts the number of
features absent in both sets [16,19]. The absence of a feature in two entities does not indicate
similarity in software source code. For example, if two classes do not include the same method, it
does not mean that the two classes are similar. The Jaccard and Sorensen-Dice indexes perform
identically except for the value of the similarity because assigning more weight to the features
present in both entities does not have a significant impact on the results. Our study takes the
Sorensen-Dice index as a basis for defining the similarity measure between source codes. The
extension of the Sorensen-Dice index on N sets is straightforward.
SimSorensen-Dice( X1, X2, ..., Xn ) =
n|X1∩X2∩...∩Xn| / Σr=0..n−1 ΣY∈SetComb(X1∩X2∩...∩Xn, r) (n−r)|Y|

The function SetComb(X1∩X2∩...∩Xn, r) generates the intersections of the sets {X1, X2, ..., Xn}
in which r of the sets are replaced by their negations. Taking the summation from r = 0 to n−1
over SetComb(X1∩X2∩...∩Xn, r) enumerates every combination of the n sets except the one in
which all sets are negated. (n−r) indicates the number of sets without the negation symbol, and
|X1∩X2∩...∩Xn| indicates the number of elements common to X1, X2, ..., Xn.
For example, in case n = 3, the numerator of the extended Sorensen-Dice index on sets X1, X2,
and X3 equals 3|X1∩X2∩X3|, and the denominator equals 3|X1∩X2∩X3| + 2| X1∩X2∩¬X3 | + 2|
X1∩¬X2∩X3 | + 2| ¬X1∩X2∩X3 | + | X1∩¬X2∩¬X3 | + | ¬X1∩X2∩¬X3 | + | ¬X1∩¬X2∩X3 |.
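The extended index can be made concrete with a short sketch (illustrative Python, not part of the paper's implementation). A negated set simply removes its elements from the intersection of the non-negated ones, and for n = 2 the definition reduces to the classic index 2|X1∩X2| / (|X1| + |X2|):

```python
from itertools import combinations

def ext_sorensen_dice(sets):
    """Extended Sorensen-Dice index on n sets, as defined above.
    SetComb is realized by negating r of the n sets."""
    n = len(sets)
    numer = n * len(set.intersection(*sets))
    denom = 0
    for r in range(n):                        # r = 0 .. n-1
        for neg in combinations(range(n), r):
            inter = set.intersection(*(sets[i] for i in range(n)
                                       if i not in neg))
            for i in neg:                     # negated sets remove elements
                inter -= sets[i]
            denom += (n - r) * len(inter)     # weight = non-negated sets
    return numer / denom if denom else 0.0

X1, X2 = {"a", "b", "c"}, {"b", "c", "d"}
print(ext_sorensen_dice([X1, X2]))            # 4/6 = 0.666...
```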
3.3. Similarity Metric for Source Codes
In the vector space retrieval model, a document is represented as a vector of terms that comprise
the document. The similarity of a document and a query is calculated as the cosine of the angle
between a document vector and a query vector. This means that the order in which the terms
appear in a document is lost in the vector space model. On the other hand, a computer program is
a sequence of instructions written to perform a specified task [18]. The source code is essentially
a sequence of characters forming a more complex text structure, such as statements, blocks,
classes, and methods. This means that it is preferable or even crucial to consider the order of
terms for a similarity index. In our study, the similarity measure is tailored to measure the
similarity of sequentially structured text.
We first define the notion of a sequence. Let S1 and S2 be statements extracted by the structure
extraction tool. [S1→S2] denotes a sequence of S1 followed by S2. In general, for a positive
integer n, let Si (i ranges between 1 and n) be a statement. [S1→S2 →...→Sn] denotes a sequence
of n statements.
The similarity of the DSRM can be considered the same as the extended Sorensen-Dice index
except for symbols, i.e., using → symbol in place of ∩ symbol. The DSRM’s similarity between
two sequences is defined as follows:
SimDSRM( [S1→S2→...→Sm], [T1→T2→...→Tn] ) =
n·|| [S1→S2→...→Sm], [T1→T2→...→Tn] || / Σr=0..n−1 ΣT′∈SqcComb([T1→T2→...→Tn], r) (n−r)·|| [S1→S2→...→Sm], T′ ||

Here, without loss of generality, we can assume that m ≥ n; in case m < n, we swap the two
sequences. The numerator term || [S1→S2→...→Sm], [T1→T2→...→Tn] || indicates the number of
occurrences of the sequence [T1→T2→...→Tn] in [S1→S2→...→Sm], i.e., the number of positions j
(0 ≤ j ≤ m−n) where Sj+1=T1, Sj+2=T2, ..., Sj+n=Tn. The denominator iterates the sequence
match, counting matched sequences from r = 0 to r = n−1. Note that the first sequence
[S1→S2→...→Sm] is renewed whenever a sequence match succeeds, i.e., the matched statements
are replaced with the not-applicable symbol "n/a." SqcComb( [T1→T2→...→Tn], r ) generates the
set of sequence combinations obtained by replacing r (0 ≤ r < n) statements with their negations.
For example, for m = 4 and n = 2, SimDSRM( [A1→A1→A2→A2], [A1→A2] ) equals 0.5. The
numerator is 2 because the sequence [A1→A2] occurs once in the first sequence:
2*|| [A1→A1→A2→A2], [A1→A2] || = 2*1 = 2. The denominator is computed as follows. First, for
r = 0, SqcComb([A1→A2], 0) generates [A1→A2], and 2*|| [A1→A1→A2→A2], [A1→A2] || is
estimated as 2 because [A1→A2] is a subsequence of [A1→A1→A2→A2]. The first sequence is then
renewed to [A1→n/a→n/a→A2].
Next, for r = 1, SqcComb([A1→A2], 1) generates [A1→¬A2] and [¬A1→A2].
|| [A1→n/a→n/a→A2], [A1→¬A2] || is estimated as 1 because A1 matches in
[A1→n/a→n/a→A2]; the first sequence is renewed to [n/a→n/a→n/a→A2]. Finally,
|| [n/a→n/a→n/a→A2], [¬A1→A2] || is estimated as 1 and the first sequence is renewed to
[n/a→n/a→n/a→n/a]. The denominator is therefore 2 + 1 + 1 = 4, and thus
SimDSRM( [A1→A1→A2→A2], [A1→A2] ) = 2/4 = 0.5.
Figure 3. Algorithm to compute similarity for a sequence [S1→S2→...→Sn]
A simplified version of the algorithm that computes the DSRM's similarity is shown in Figure 3.
It takes a set of method structures and a sequence of statements (the retrieval condition) as
arguments, and returns an array of similarity values for the set of method structures.
It is assumed that the getMethodStructure(j) function returns a structure of the j-th method
extracted by the structure extraction tool. The function abstracts the implementation of the
internal structure of the method. This is represented as a sequence of statements.
The Count function takes three arguments, i.e., a method_structure MS, a sequence of statements
TN, and an integer R. Note that an element of the method_structure is compatible with a sequence
of statements.
The SqcComb( TN, R ) function generates combinations of statement sequences that replace the
R statements with the negation of the statements in the sequence TN. Then, matching between the
method_structure MS and the combinations of statement sequences is processed. The Count
function returns the number of positive statements that match the combinations of statement
sequences.
The SimDSRM function calculates the similarity according to the DSRM’s defined similarity.
Note that the similarity is 1.0 when a method includes the sequence [S1→S2→...→Sn] and does
not include any of the derived sequences from [S1→S2→...→Sn].
4. CODE RETRIEVAL EXPERIMENTS
4.1. Approach
Cosine similarity is extensively used in research on retrieving documents written in natural
languages and recovering links between software artifacts [1,12]. Set-based indexes, such as the
Jaccard index and the Sorensen-Dice index, are used in a variety of research, including software
clustering [16] and generating refactoring guidelines [19]. Here, we present experimental results
obtained using cosine similarity, the Sorensen-Dice index, and the DSRM’s similarity.
4.2. Vector Space Model Results
It is natural to assign structural metrics to the elements of a document vector. For example, the
evaluateClientSideJsEnablement() method shown in Figure 2 is represented by the vector (4, 1,
2, 1, 1, 1, 1, 1, 1), where we assume that the first element of the vector corresponds to
if-statements, the second corresponds to for-statements, the third corresponds to the addParameter
method identifier, the fourth corresponds to the configuration.getRuntimeConfiguration
method identifier, and so on. Thus, the extracted fragments of Struts 2.3.1.1 Core are vectorized
to produce a 1,420 × 2,667 matrix.
In Struts 2 Core, the addParameter method is often called after an if-statement because the
addParameter method adds a key and a value given by the arguments to the parameter list
maintained by the Struts 2 process after checking the existence of a key. Thus, the same number
of if-statements and addParameter method identifiers are a reasonable retrieval condition in the
vector space retrieval model.
Table 1 shows the top 27 methods retrieved by a query vector that consists of one if-statement,
one addParameter method identifier, and one curly brace “}.” The third column of Table 1 shows
the similarity values calculated by the cosine similarity. It can be seen that 2,667 methods were
retrieved because all methods include at least one curly brace “}.” There were only 38 methods
whose similarity values were greater than 0.3. The result looks fairly good at a glance; however,
it includes some controversial methods, given that our intent is to retrieve methods in which an
addParameter method is called immediately after an if-statement. Figure 4 shows
ActionError::void evaluateExtraParams(), which has the same structure as ActionMessage::void
evaluateExtraParams() except for the string arguments "actionErrors" and "actionMessages." The
cosine similarity of ActionError::void evaluateExtraParams() is 0.846, and the extended
Sorensen-Dice index is 0.750 because the method includes two if-statements and two
addParameter method calls. However, the method does not include the sequence of an if-statement
followed by an addParameter call. Thus, the DSRM's similarity is estimated to be 0.
Let a "boundary method" be a retrieved method whose DSRM’s similarity is greater than 0 and
whose cosine similarity is minimum. The evaluateClientSideJsEnablement(), which is shown at
No.19 in Table 1, is the boundary method, with a minimum cosine similarity of 0.472. Table 1
consists of the set of retrieved methods whose cosine similarities are greater than or equal to the
cosine similarity of the boundary method (0.472). The methods whose "No" column is shaded in
Table 1 are those whose DSRM similarity equals 0; these are the controversial candidates.
Details are discussed in the following sections.
Table 1. Top 27 retrieved methods
4.3. Extended Sorensen-Dice Index Results
The extended Sorensen-Dice index defined in Section 3.2 is greater than 0 when all three
elements are included in a method structure. In the vector space model, the similarity is greater
than 0 when at least one element of the three elements is included in a method structure. In other
words, the extended Sorensen-Dice index requires the AND condition on the retrieval elements,
while the vector space model requires the OR condition. Thus, the results of the extended
Sorensen-Dice index are a subset of the results of the vector space model.
For example, the extended Sorensen-Dice index evaluated 0 for the FieldError::void
setFieldName() method (No. 25 in Table 1) and the Text::void addParameter() method (No. 26 in
Table 1), while the similarities obtained for these methods by the vector space model are 0.928.
Both methods contain "addParameter" and "}"; however, these methods contain no if-statements.
Because "addParameter" is a rare term, the term weight for "addParameter" is so high that the
similarity value works out to 0.928.
4.4. Derived Sequence Model Results
The DSRM’s similarity is greater than 0 when the sequence [ if{→addParameter→} ] is included
in an extracted method structure. This means that the DSRM imposes a more severe retrieval
condition than the extended Sorensen-Dice model. In other words, the results of the DSRM are a
subset of the results of the extended Sorensen-Dice model. The source code of ActionError::void
evaluateExtraParams() (No. 22 in Table 1) is shown in Figure 4. The similarity of the method is
estimated to be 0 by the derived sequence retrieval model because the method does not include the
sequence [ if{→addParameter→} ]. Its similarity is 0.75 in the extended Sorensen-Dice model
because the method includes two if-statements and two addParameter method calls.
A program is essentially represented by a sequence of statements. Because the DSRM computes
the similarity based on a sequence of statements, it achieves higher performance than the other
models.
Figure 4. Example method that does not include the sequence of an if-statement followed by an
addParameter call
4.5. Summary of Experiments
Table 2 shows a summary of 27 retrieval experiments using the three models. Column three of
Table 2 presents the number of methods retrieved by the DSRM with similarity values greater
than 0. Column four presents the number of methods retrieved by the extended Sorensen-Dice
model with similarity values greater than 0, and column five shows the number of methods
retrieved by the vector space model with tf-idf weighting. The results of the experiment shown in
Table 1 correspond to No. 14 in Table 2.
The degree of improvement of the DSRM over the extended Sorensen-Dice index is calculated by
the following formula:

Degree of improvement (%) = ( NESD − NDSRM ) / NESD × 100

where NESD and NDSRM denote the numbers of methods retrieved by the extended Sorensen-Dice
model and the DSRM, respectively. The degree of improvement of the DSRM over the vector space
model with tf-idf weighting is calculated by a similar formula.
The degree of improvement ranges from 0% to 90.1% over the extended Sorensen-Dice model,
and ranges from 22.2% to 90.9% over the vector space model with tf-idf weighting. As
previously mentioned, when the similarity is greater than 0, the results of the DSRM are a subset
of the results of the extended Sorensen-Dice index, and the results of the extended Sorensen-Dice
index are a subset of the results of the vector space model. Note that this set inclusion relationship
is not always true when the top N methods are selected. For example, for No. 23 and No. 27 in
Table 2, the degree of improvement over the extended Sorensen-Dice model is 80.0%, and that
over the vector space model is 60.9%. In these cases, the similarity of the vector space model
with tf-idf weighting is 0.413, which is well above 0.
Table 2. Summary of 27 retrieval experiments
Figure 5 shows a graph of the degree of improvement sorted by degree of improvement over the
extended Sorensen-Dice model. The horizontal axis shows the sample number given in the first
column of Table 2, and the vertical axis shows the degree of improvement as a percentage. The
DSRM outperformed the extended Sorensen-Dice model for all retrieval samples except No. 7, 8,
9, and 10. The extended Sorensen-Dice model was more successful than the vector space model
with tf-idf weighting except for samples No. 23 and No. 27.
Figure 5. Degree of DSRM's improvement
5. ELAPSED TIME COMPARISONS
Table 3 summarizes the elapsed time in milliseconds of the three retrieval models for 27 sample
retrievals. We measured the elapsed time using the following experimental environment:
CPU: Intel Core i3 540 3.07 GHz
Main memory: 4.00 GB
OS: Windows 7 64 Bit
The three retrieval models were implemented using Visual Basic for Excel 2010. The unique
1,420 statement fragments, including control statements and method calls, were extracted from
the Struts 2.3.1.1 Core source code. Thus, a 1,420 × 2,667 matrix was stored in an Excel sheet for
the retrieval experiments by the vector space model. All 2,667 methods were transformed into
2,667 sequences of extracted statements. They were also stored in an Excel sheet for the extended
Sorensen-Dice model and the DSRM experiments. Through the experiments, all the data
concerning retrieval were accessed from the Excel sheet cells. Thus, it is fair to say that the three
retrieval model experiments were performed under equal conditions.
Table 3. Elapsed time of the three retrieval models (ms)
Figure 6 shows a graph of the elapsed times presented in Table 3. The horizontal axis shows the
sample number given in column one of Table 2, and the vertical axis shows the elapsed time in
milliseconds. All 27 samples were processed in near-constant time in the vector space model
because a given query is evaluated on the 1,420 × 2,667 matrix.
On the other hand, the extended Sorensen-Dice model and the DSRM require an elapsed time
approximately proportional to the number of derived sequences related to a given retrieval
condition. Both retrieval models generate two derived sequences for samples No. 1 to No. 13; as
a result, three retrievals were executed, with an average execution time of 0.171 milliseconds.
For samples No. 14 to No. 23, both retrieval models executed seven retrievals, with an average
execution time of 0.187 milliseconds. For samples No. 24 to No. 27, both retrieval models
executed 15 retrievals, with an average execution time of 0.193 milliseconds. The elapsed time
per derived sequence thus increases by approximately 3%–8%, due to the overhead involved in
the retrieval process.
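The retrieval counts above (3, 7, and 15 for the three sample groups) equal 2^n − 1 for query sequences of two, three, and four statements, which is consistent with deriving every non-empty order-preserving subsequence of the query. The following sketch is written under that assumption; the function name and representation are illustrative, not taken from the paper's implementation.

```python
from itertools import combinations

def derived_sequences(query):
    """Every non-empty order-preserving subsequence of the query,
    longest (most specific) first; yields 2**len(query) - 1 items."""
    n = len(query)
    for k in range(n, 0, -1):
        for idx in combinations(range(n), k):
            yield tuple(query[i] for i in idx)

q = ["if", "for", "List.add()"]
seqs = list(derived_sequences(q))
# len(seqs) == 7: the query itself plus six derived conditions
```

Since each derived sequence triggers one retrieval pass, the total elapsed time grows with the number of derived conditions, matching the trend in Table 3.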
The results in Table 3 indicate that the DSRM is approximately 10 times faster than the vector
space model.
Figure 6. Elapsed time comparison
6. CONCLUSIONS
We presented a source code retrieval model that takes a sequence of statements as a retrieval
condition. We conducted three types of experiments using the vector space model, the extended
Sorensen-Dice model, and the derived sequence retrieval model (DSRM).
The key contribution of our approach is the definition of the DSRM's similarity measure as an
extension of the Sorensen-Dice index and the evaluation of the DSRM's similarity measure on the
Struts 2 Core source code, which is a moderate-sized Java program. The experimental results
demonstrate that the DSRM's similarity measure shows higher selectivity than the other models,
which is a natural consequence because a program is essentially a sequence of statements.
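For reference, the set-based Sorensen-Dice index that both extended models build on is 2|A ∩ B| / (|A| + |B|). The sketch below implements it alongside an order-sensitive variant based on the longest common subsequence; the LCS variant is only an illustrative stand-in for sequence awareness, not the paper's exact measure, which counts sequential fully and partially matching statements.

```python
def sorensen_dice(a, b):
    """Set-based Sorensen-Dice index: 2|A ∩ B| / (|A| + |B|)."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 1.0
    return 2 * len(sa & sb) / (len(sa) + len(sb))

def lcs_len(a, b):
    """Length of the longest common (order-preserving) subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def sequence_dice(a, b):
    """Order-sensitive Dice variant: 2 * LCS(a, b) / (|a| + |b|)."""
    if not a and not b:
        return 1.0
    return 2 * lcs_len(a, b) / (len(a) + len(b))

# The set view ignores statement order; the sequence view does not:
# sorensen_dice(["if", "for"], ["for", "if"]) == 1.0
# sequence_dice(["if", "for"], ["for", "if"]) == 0.5
```

The gap between the two scores on reordered input illustrates why a sequence-based measure has higher selectivity on source code than a purely set-based one.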
The results are promising enough to warrant further research. In future work, we intend to
improve our algorithms by incorporating additional structural information, such as class
inheritance and method overloading.
We also plan to develop a better user interface, which would allow us to conduct further user
studies and to more easily and precisely assess the retrieved code. In addition, we plan to conduct
experiments using various types of open source programs available on the Internet.
ACKNOWLEDGMENTS
We would like to thank Nobuhiro Kataoka, Tamotsu Noji, and Hisayuki Masui for their
suggestions on engineering tasks to improve software quality.
REFERENCES
[1] Antoniol, G., Penta, M.D., and Merlo, E. (2004) An automatic approach to identify class evolution
discontinuities. In Proceedings of the 7th International Workshop on Principles of Software
Evolution, pp31-40.
[2] Baker, B.S. (1996) Parameterized Pattern Matching: Algorithms and Applications, Journal of
computer and system sciences, 52, 1, pp28-42.
[3] Baxter, I.D., Yahin, A., Moura, L., Sant'Anna, M., and Bier, L. (1998) Clone detection using abstract
syntax trees. In Proceedings of the 14th International Conference on Software Maintenance, pp368-377.
[4] Choi, S.S., Cha, S.H., and Tappert, C.C. (2010) A Survey of Binary Similarity and Distance Measures,
Journal of Systemics, Cybernetics and Informatics, ISSN 1690-4532, Vol.8, 1, pp43-48.
[5] Gosling, J., Joy, B., Steele, G., and Bracha, G. (2005) The Java Language Specification, Third
Edition, ADDISON-WESLEY.
[6] Jiang, L., Misherghi, G., Su, Z., and Glondu, S. (2007) DECKARD: Scalable and Accurate Tree-
based Detection of Code Clones. In Proceedings of the 29th International Conference on Software
Engineering, pp96-105.
[7] Johnson, J.H. (1993) Identifying Redundancy in Source Code Using Fingerprints. In Proceedings of
the 1993 Conference of the Centre for Advanced Studies Conference, pp171-183.
[8] Komondoor, R., and Horwitz, S. (2001) Using Slicing to Identify Duplication in Source Code. In
Proceedings of the 8th International Symposium on Static Analysis, LNCS Vol.2126, pp40-56.
[9] Kontogiannis, K., Demori, R., Merlo, E., Galler, M., and Bernstein, M. (1996) Pattern matching for
clone and concept detection. Journal of Automated Software Engineering 3, pp77-108.
[10] Krinke, J. (2001) Identifying Similar Code with Program Dependence Graphs. In Proceedings of the
8th Working Conference on Reverse Engineering, pp301-309.
[11] Li, Z., Lu, S., Myagmar, S., and Zhou, Y. (2006) CP-Miner: Finding Copy-Paste and Related Bugs in
Large-Scale Software Code. In IEEE Transactions on Software Engineering, Vol.32, 3, pp176-192.
[12] Marcus, A., and Maletic, J.I. (2003) Recovering documentation-to-source-code traceability links
using latent semantic indexing. In Proceedings of the 25th International Conference on Software
Engineering, pp125-135.
[13] Mayrand, J., Leblanc, C., Merlo, E. (1996) Experiment on the Automatic Detection of Function
Clones in a Software System Using Metrics. In Proceedings of the 12th International Conference on
Software Maintenance, pp244-253.
[14] McCabe, T.J. (1976) A complexity measure, IEEE Transactions on software engineering, 2, 4, pp308-
320.
[15] Roy, C.K., Cordya, J.R., and Koschkeb, R. (2009) Comparison and Evaluation of Code Clone
Detection Techniques and Tools: A Qualitative Approach, Science of Computer Programming.
Volume 74, Issue 7, 1, pp470-495.
[16] Saeed, M., Maqbool, O., Babri, H.A., Hassan, S.Z., and Sarwar, S.M. (2003) Software Clustering
Techniques and the Use of Combined Algorithm, IEEE Seventh European Conference on Software
Maintenance and Reengineering, pp301-306.
[17] Salton, G., and Buckley, C. (1988) Term-weighting approaches in automatic text retrieval.
Information Processing and Management, 24, 5, pp513-523.
[18] Stair, R. M., and Reynolds, G.W. (2003) Principles of Information Systems. Sixth Edition. Thomson
Learning, Inc.
[19] Terra, R., Valente, M.T., Czarnecki, K., and Bigonha, R. (2012) Recommending refactorings to
reverse software architecture erosion. In Proceedings of the 16th European Conference on Software
Maintenance and Reengineering (CSMR), Early Research Achievements Track, pp335-340.
[20] The Apache Software Foundation. (2013) About Apache Struts 2. http://struts.apache.org/release/
2.3.x/.
[21] Wahler, V., Seipel, D., Gudenberg, J.W., and Fischer, G. (2004) Clone detection in source code by
frequent itemset techniques. In Proceedings of the 4th IEEE International Workshop Source Code
Analysis and Manipulation, pp128-135.
[22] Yamamoto, T., Matsushita, M., Kamiya, T., and Inoue, K. (2005) Measuring similarity of large software
systems based on source code correspondence. In Proceedings of the 6th International Conference on
Product Focused Software Process Improvement, pp530-544.
If a bug is found in a code fragment, all the similar code fragments should be investigated to fix the bug in question [11, 15]. This coding practice also produces code that is difficult to maintain and understand, primarily because it is difficult for maintenance engineers to determine which fragment is the original one and whether the copied fragment is intentional. Tool support that efficiently and effectively retrieves similar code is required to support software engineers' activities. Different approaches for identifying similar code fragments have been proposed in code clone detection. Based on the level of analysis applied to the source code, clone detection techniques
  • 2. International Journal of Data Mining & Knowledge Management Process (IJDKP) Vol.3, No.4, July 2013 58 can be roughly classified into four main groups, i.e., text-based, token-based, structure-based, and metrics-based. (1) Text-based approaches In this approach, the target source program is considered as a sequence of strings. Baker [2] described an approach that identifies all pairs of matching “parameterized” code fragments. Johnson [7] proposed an approach to extract repetitions of text and a matching mechanism using fingerprints on a substring of the source code. Although these methods achieve high performance, they are sensitive to lexical aspects, such as the presence or absence of new lines and the ordering of matching lines. (2) Token-based approaches In the token-based detection approach, the entire source system is transformed into a sequence of tokens, which is then analyzed to identify duplicate subsequences. A sub-string matching algorithm is generally used to find common subsequences. CCFinder [22] adopts the token-based technique to efficiently detect “copy and paste” code clones. In CCFinder, the similarity metric between two sets of source code files is defined based on the concept of “correspondence.” CP- Miner [11] uses a frequent subsequence mining technique to identify a similar sequence of tokenized statements. Token-based approaches are typically more robust against code changes compared to text-based approaches. (3) Structure-based approaches In this approach, a program is parsed into an abstract syntax tree (AST) or program dependency graph (PDG). Because ASTs and PDGs contain structural information about the source code, sophisticated methods can be applied to ASTs and PDGs for the clone detection. CloneDR [3] is one of the pioneering AST-based clone techniques. Wahler et al. [21] applied frequent itemset data mining techniques to ASTs represented in XML to detect clones with minor changes. 
DECKARD [6] also employs a tree-based approach in which certain characteristic vectors are computed to approximate the structural information within ASTs in Euclidean space. Typically, a PDG is defined to contain the control flow and data flow information of a program. An isomorphic subgraph matching algorithm is applied to identify similar subgraphs. Komondoor et al. [8] have also proposed a tool for C programs that finds clones. They use PDGs and a program slicing technique to find clones. Krinke [10] uses an iterative approach (k-length patch matching) to determine maximal similar subgraphs. Structure-based approaches are generally robust to code changes, such as reordered, inserted, and deleted code. However, they are not scalable to large programs. (4) Metrics-based approaches Metrics-based approaches calculate metrics from code fragments and compare these metric vectors rather than directly comparing source code. Kontogiannis et al. [9] developed an abstract pattern matching tool to measure similarity between two programs using Markov models. Some common metrics in this approach include a set of software metrics called “fingerprinting” [7], a
  • 3. International Journal of Data Mining & Knowledge Management Process (IJDKP) Vol.3, No.4, July 2013 59 set of method-level metrics including McCabe’s cyclomatic complexity [14], and a characteristic vector to approximate the structural information in ASTs [6]. Our approach is classified as a structure-based comparison. It features a sequence of statements as a retrieval condition. We have developed a lexical parser to extract source code structure, including control statements and method identifiers. The extracted structural information is input to a vector space model [1,12,17], an extended Sorensen-Dice model [4,16,19], and the proposed source code retrieval model, named the “derived sequence retrieval model” (DSRM). The DSRM takes a sequence of statements as a retrieval condition and derives meaningful search conditions from the given sequence. Because a program is composed of a sequence of statements, our retrieval model improves the performance of source code retrieval. The remainder of this paper is organized as follows. In Section 2, we present an outline of the process and the target source code of our research. In Section 3, we define source code similarity metrics. Retrieval results are discussed in Section 4. In Section 5, we analyze performance in elapsed time, and Section 6 presents conclusions and suggestions for future work. 2. RESEARCH PROCESS 2.1. Outline Figure 1 shows an outline of our research process. Generally, similarity retrieval of source code is performed for a specific purpose. From this perspective, the original source code may include some uninteresting fragments. We have developed a lexical parser and applied it to a set of original Java source codes to extract interesting code, which includes class method signatures, control statements, and method calls. Our parser traces a variable type declaration and class instantiation to generate an identifier-type list. 
This list is then used to translate a variable identifier to its data type: a method call preceded by an identifier is converted into a method call preceded by the data type of the identifier.

Code matching is performed using three retrieval models. The first is the proposed DSRM, which takes a sequence of statements as a retrieval condition. The second is based on collections of statements and is referred to as the derived collection retrieval model (DCRM); the DCRM is an extension of the Sorensen-Dice index [4,16,19]. The third is the vector space model (VSM) [1,12,17], which was developed for retrieving natural language documents. Source code can be viewed as a highly structured document; therefore, comparing the DSRM, DCRM, and VSM provides a baseline evaluation of how the structure of a document affects retrieval results.
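The identifier-type translation described above can be sketched as follows. This is a toy, regex-based sketch under simplifying assumptions about declaration forms (one statement per line, no comments), not the paper's actual parser:

```python
import re

# Hypothetical sketch: build an identifier-type list from Java variable
# declarations, then rewrite "<variable>.<method>" calls as
# "<data type>.<method>", as described in Section 2.1.
DECL = re.compile(r'\b([A-Z]\w*(?:<[\w, ]+>)?(?:\[\])*)\s+(\w+)\s*[=;]')
CALL = re.compile(r'\b(\w+)\.(\w+)\s*\(')

def translate_calls(java_source):
    # identifier-type list: variable identifier -> declared data type
    id_types = {var: typ for typ, var in DECL.findall(java_source)}
    calls = []
    for var, method in CALL.findall(java_source):
        # fall back to the raw identifier when no declaration was seen
        calls.append(f"{id_types.get(var, var)}.{method}")
    return calls

src = 'List<String> names = new ArrayList<String>(); names.add("x");'
print(translate_calls(src))  # → ['List<String>.add']
```

A production parser would of course need full lexical analysis (comments, string literals, scoping); the regexes here only illustrate the translation step.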
Figure 1. Outline of our research process

2.2. Extracting Source Code Segments

First, a set of Java source files is partitioned into methods. Then, the statements used for code matching are extracted from each method. The extracted fragments comprise class method signatures, control statements, and method calls.

(1) Class method signatures

Each method in Java is declared in a class [5]. Our parser extracts class method signatures in the following syntax:

<class identifier>:<method signature>

An anonymous class, i.e., a local class without a class declaration, is often used when a local class is needed only once. An anonymous class is defined and instantiated in a single expression using the new operator, which keeps the code concise. Our parser extracts a method declared in an anonymous class in the following syntax:

<class identifier>:<anonymous class identifier>:<method signature>

Arrays and generics are widely used in Java to facilitate the manipulation of data collections. Our parser also extracts array and generic data types according to Java syntax. For example, Object[], String[][], List<String>, and List<Integer> are extracted and treated as distinct data types.
(2) Control statements

Our parser also extracts control statements at all levels of nesting. A block is delimited by the "{" and "}" symbols; hence, the number of enclosing "{" symbols indicates the nesting level. The following Java control statements [5] are extracted by our parser:

- If-then (with or without else or else-if clauses)
- Switch
- While
- Do
- For and enhanced for
- Break
- Continue
- Return
- Throw
- Synchronized
- Try (with or without a catch or finally clause)

(3) Method calls

On the assumption that its method calls characterize a program, our parser extracts the identifiers of the methods called in a Java program. Generally, an instance method call is preceded by a variable whose type refers to the class to which the method belongs. Our parser traces the type declaration of a variable and translates the variable identifier to its data type or class identifier, i.e., <variable>.<method identifier> is translated into <data type>.<method identifier> or <class identifier>.<method identifier>.

2.3. Extracting Statements of Struts 2

We selected Struts 2.3.1.1 Core as our target because Struts 2 [20] is a popular Java framework for web applications. We estimated the volume of the source code using file metrics:

Java Files ---- 368
Classes ---- 414
Methods ---- 2,667
Lines of Code ---- 21,543
Comment Lines ---- 17,954
Total Lines ---- 46,100

Struts 2.3.1.1 Core consists of 46,100 lines of source code, including blank and comment lines, and is classified as mid-scale software in the industry. It comprises 368 Java files, which differs from the number of declared classes (414) because
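A rough sketch of this kind of extraction (a simplified tokenizer under strong assumptions — it ignores comments, string literals, and declarations — not the paper's parser):

```python
import re

# Hypothetical sketch: extract control-statement keywords together with
# their brace-nesting depth, as the paper's parser does for blocks.
KEYWORDS = {"if", "else", "switch", "while", "do", "for", "break",
            "continue", "return", "throw", "synchronized", "try",
            "catch", "finally"}
TOKEN = re.compile(r'[{}]|\b[a-z]+\b')

def extract_structure(java_source):
    depth, out = 0, []
    for tok in TOKEN.findall(java_source):
        if tok == '{':
            depth += 1          # entering a block: one more nesting level
        elif tok == '}':
            depth -= 1          # leaving a block
        elif tok in KEYWORDS:
            out.append((tok, depth))
    return out

src = "void m() { if (x) { for (;;) { break; } } return; }"
print(extract_structure(src))
# → [('if', 1), ('for', 2), ('break', 3), ('return', 1)]
```

The pairs mirror the nesting information that Figure 2 shows for a real Struts 2 method.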
some Java files include definitions of inner classes and anonymous classes.

Figure 2 shows an example of the extracted structure of the evaluateClientSideJsEnablement() method in the Form.java file of the org.apache.struts2.components package. The three numbers preceded by the # symbol are the numbers of comment, blank, and code lines, respectively. The extracted structures include the nesting depth of the control statements; thus, they supply sufficient information for retrieving methods by source code substructure.

Figure 2. Example of extracted structure

3. SIMILARITY METRICS

3.1. Vector Space Model for Documents

The VSM is widely used for retrieving and ranking documents written in natural languages. Documents and queries are represented as vectors, in which each dimension corresponds to a term occurring in the documents and queries. The documents are ranked against a query by computing the similarity, namely the cosine of the angle between the two vectors. Given a set of documents D, a document dj in D is represented as a vector of term weights:

dj = (w1,j, w2,j, ..., wN,j)

where N is the total number of terms in document dj, and wi,j is the weight of the i-th term. There are many variations of the term weighting scheme. Salton et al. [17] proposed the well-known "term frequency-inverse document frequency" (tf-idf) weighting scheme. According to
this weighting scheme, the weight of the i-th element of document dj, i.e., wi,j, is computed as the product of the term frequency tfi,j and the inverse document frequency idfi:

wi,j = tfi,j · idfi

The term frequency tfi,j is defined as the number of occurrences of term i in document dj. The inverse document frequency is a measure of the general importance of term i and is defined as follows:

idfi = log2( M / dfi )

where M denotes the total number of documents in the collection. A high weight wi,j results from a high term frequency in the given document and a low document frequency dfi of the term in the whole collection; hence, the weights tend to filter out common terms. A user query can similarly be converted into a vector q:

q = (w1,q, w2,q, ..., wN,q)

The similarity between document dj and query q can then be computed as the cosine of the angle between the two vectors dj and q in the N-dimensional term space:

Simcos(dj, q) = Σi=1..N (wi,j · wi,q) / ( √(Σi=1..N wi,j²) · √(Σi=1..N wi,q²) )

This similarity is often referred to as the cosine similarity.

3.2. Extending the Sorensen-Dice Index

Over the last decade, many techniques for detecting software clones and refactoring opportunities have been proposed, and similarity coefficients play an important role in this literature. However, most similarity definitions are validated only by empirical studies, and the choice of measure depends on the characteristics of the domain to which it is applied. Among the many similarity indexes, the similarity defined in CloneDR is worth noting. Baxter et al. [3] define the similarity between two trees T1 and T2 as follows:

Similarity(T1, T2) = 2H / (2H + L + R)

where H is the number of shared nodes in trees T1 and T2, L is the number of nodes unique to T1, and R is the number of nodes unique to T2.
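The tf-idf weighting and cosine similarity of Section 3.1 can be sketched compactly. The statement fragments below are toy data, not the Struts 2 extraction:

```python
import math
from collections import Counter

# Sketch of Section 3.1: tf-idf term weights over a small collection of
# extracted statement fragments, then cosine similarity between vectors.
def tf_idf_vectors(docs):
    m = len(docs)
    terms = sorted({t for d in docs for t in d})
    df = {t: sum(1 for d in docs if t in d) for t in terms}   # document frequency
    tfs = [Counter(d) for d in docs]                          # term frequencies
    return [[tf[t] * math.log2(m / df[t]) for t in terms] for tf in tfs]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["if{", "addParameter", "}"],
        ["if{", "for{", "}", "}"],
        ["return"]]
vecs = tf_idf_vectors(docs)
print(cosine(vecs[0], vecs[1]))   # strictly between 0 and 1 here
```

A document is maximally similar to itself (cosine 1.0), and documents sharing only common, low-idf terms score low, which is exactly the filtering effect described above.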
Within the context of a tree structure, this definition can be seen as an extension of the Sorensen-Dice index, which is originally defined on two sets as follows:

SimSorensen-Dice(X1, X2) = 2|X1∩X2| / (2|X1∩X2| + |X1∩¬X2| + |¬X1∩X2|)
Here, |X1∩X2| indicates the number of elements in the intersection of sets X1 and X2. Another well-known index is the Jaccard index of binary features, which is defined by the following formula:

SimJaccard(X1, X2) = |X1∩X2| / |X1∪X2|

In software, the Sorensen-Dice index and the Jaccard index are known experimentally to produce better results than other indexes, such as the simple matching index, which also counts the number of features absent from both sets [16,19]. The absence of a feature in two entities does not indicate similarity in software source code; for example, if two classes do not include the same method, it does not mean that the two classes are similar. The Jaccard and Sorensen-Dice indexes perform identically except for the similarity values themselves, because assigning more weight to the features present in both entities does not significantly affect the results.

Our study takes the Sorensen-Dice index as the basis for defining a similarity measure between source codes. The extension of the Sorensen-Dice index to N sets is straightforward:

SimSorensen-Dice(X1, X2, ..., Xn) = n|X1∩X2∩...∩Xn| / Σr=0..n−1 (n−r)|SetComb(X1∩X2∩...∩Xn, r)|

The function SetComb(X1∩X2∩...∩Xn, r) generates the intersections of sets {X1, X2, ..., Xn} in which r of the sets are replaced by their negations. The summation from r = 0 to n−1 over SetComb(X1∩X2∩...∩Xn, r) generates the power set of {X1, X2, ..., Xn}, excluding the empty set, and (n−r) is the number of sets without the negation symbol. |X1∩X2∩...∩Xn| indicates the number of tuples <x1, x2, ..., xn> where x1∈X1, x2∈X2, ..., xn∈Xn. For example, for n = 3, the numerator of the extended Sorensen-Dice index on sets X1, X2, and X3 equals 3|X1∩X2∩X3|, and the denominator equals 3|X1∩X2∩X3| + 2|X1∩X2∩¬X3| + 2|X1∩¬X2∩X3| + 2|¬X1∩X2∩X3| + |X1∩¬X2∩¬X3| + |¬X1∩X2∩¬X3| + |¬X1∩¬X2∩X3|.

3.3.
Similarity Metric for Source Codes

In the vector space retrieval model, a document is represented as a vector of the terms that comprise it, and the similarity of a document and a query is calculated as the cosine of the angle between their vectors. This means that the order in which terms appear in a document is lost. On the other hand, a computer program is a sequence of instructions written to perform a specified task [18]. Source code is essentially a sequence of characters forming higher-level text structures, such as statements, blocks, classes, and methods. It is therefore preferable, even crucial, for a similarity index to consider the order of terms.

In our study, the similarity measure is tailored to sequentially structured text. We first define the notion of a sequence. Let S1 and S2 be statements extracted by the structure extraction tool; [S1→S2] denotes the sequence of S1 followed by S2. In general, for a positive integer n, let Si (1 ≤ i ≤ n) be a statement; [S1→S2→...→Sn] denotes a sequence of n statements.
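Before turning to sequences, the extended Sorensen-Dice index of Section 3.2 can be illustrated concretely. The sketch below uses toy sets; the explicit universe argument is an assumption needed to realize the negation ¬X:

```python
from itertools import combinations

# Sketch of the extended Sorensen-Dice index on n sets, using the
# SetComb expansion described in Section 3.2 (toy data).
def extended_dice(sets, universe):
    n = len(sets)
    numerator = n * len(set.intersection(*sets))
    denominator = 0
    for r in range(n):                       # r sets carry the negation symbol
        for negated in combinations(range(n), r):
            parts = [universe - s if i in negated else s
                     for i, s in enumerate(sets)]
            denominator += (n - r) * len(set.intersection(*parts))
    return numerator / denominator if denominator else 0.0

X1, X2 = {"a", "b", "c"}, {"b", "c", "d"}
# For n = 2 this reduces to the classic index 2|X1∩X2| / (|X1| + |X2|):
# 2*2 / (2*2 + 1 + 1) = 4/6
print(extended_dice([X1, X2], X1 | X2))  # → 0.666...
```

Note that the denominator expansion for n = 2 (weights 2, 1, 1) matches the worked n = 3 expansion given in the text.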
The similarity of the DSRM can be considered the same as the extended Sorensen-Dice index with the → symbol in place of the ∩ symbol. The DSRM's similarity between two sequences is defined as follows:

SimDSRM( [S1→S2→...→Sm], [T1→T2→...→Tn] ) = n|[S1→S2→...→Sm], [T1→T2→...→Tn]| / Σr=0..n−1 (n−r)||[S1→S2→...→Sm], SqcComb([T1→T2→...→Tn], r)||

Here, without loss of generality, we assume that m ≥ n; if m < n, we exchange the two sequences. The numerator term |[S1→S2→...→Sm], [T1→T2→...→Tn]| indicates the number of positions j (0 ≤ j ≤ m−n) for which Sj+1=T1, Sj+2=T2, ..., Sj+n=Tn. The denominator iterates the sequence match, counting matched statements from r = 0 to r = n−1. Note that the first sequence [S1→S2→...→Sm] is renewed whenever a sequence match succeeds, i.e., the matched statements are replaced by the not-applicable symbol "n/a." SqcComb([T1→T2→...→Tn], r) generates the set of sequence combinations obtained by replacing r (0 ≤ r < n) of the statements with their negations.

For example, for m = 4 and n = 2, SimDSRM( [A1→A1→A2→A2], [A1→A2] ) equals 0.5. The numerator is 2 because the sequence [A1→A2] occurs once in the first sequence: 2·|[A1→A1→A2→A2], [A1→A2]| = 2·1 = 2. The denominator is computed as follows. First, for r = 0, SqcComb([A1→A2], 0) generates [A1→A2]. Then, 2·||[A1→A1→A2→A2], [A1→A2]|| is estimated as 2 because [A1→A2] is a subsequence of [A1→A1→A2→A2], and the first sequence is renewed to [A1→n/a→n/a→A2]. Next, for r = 1, SqcComb([A1→A2], 1) generates [A1→¬A2] and [¬A1→A2]. ||[A1→n/a→n/a→A2], [A1→¬A2]|| is estimated as 1 because A1 occurs in [A1→n/a→n/a→A2], and the first sequence is renewed to [n/a→n/a→n/a→A2].
Finally, ||[n/a→n/a→n/a→A2], [¬A1→A2]|| is estimated as 1, and the first sequence is renewed to [n/a→n/a→n/a→n/a]. The denominator of SimDSRM([A1→A1→A2→A2], [A1→A2]) is therefore 2 + 1 + 1 = 4. Thus, SimDSRM([A1→A1→A2→A2], [A1→A2]) equals 2/4 = 0.5.
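The worked example above can be reproduced with a short sketch. This is not the paper's Visual Basic implementation; the negation-matching and "renewal" semantics follow the description in the text, and the order in which negated combinations are tried is a simplifying assumption:

```python
from itertools import combinations

NA = None  # the "n/a" marker for consumed statements

def sqc_comb(t, r):
    """Variants of query sequence t with r statements negated."""
    for idx in combinations(range(len(t)), r):
        yield [(stmt, i in idx) for i, stmt in enumerate(t)]

def consume_matches(work, pattern):
    """Repeatedly match pattern against work, replacing each matched
    positive statement with NA (renewal); return positives consumed."""
    n, consumed, j = len(pattern), 0, 0
    while j <= len(work) - n:
        ok = all((work[j+i] != stmt) if neg else (work[j+i] == stmt)
                 for i, (stmt, neg) in enumerate(pattern))
        if ok:
            for i, (stmt, neg) in enumerate(pattern):
                if not neg:
                    work[j+i] = NA
                    consumed += 1
            j = 0  # rescan the renewed sequence
        else:
            j += 1
    return consumed

def sim_dsrm(s, t):
    n = len(t)
    # numerator: n times the occurrences of t in the original sequence s
    occurrences = sum(1 for j in range(len(s) - n + 1) if s[j:j+n] == t)
    numerator = n * occurrences
    work, denominator = list(s), 0
    for r in range(n):                    # r = 0 .. n-1
        for pattern in sqc_comb(t, r):
            denominator += consume_matches(work, pattern)
    return numerator / denominator if denominator else 0.0

print(sim_dsrm(["A1", "A1", "A2", "A2"], ["A1", "A2"]))  # → 0.5
```

Each match of a combination with r negations consumes exactly (n−r) positive statements, so summing consumed positives reproduces the weighted denominator; a method containing the full sequence and none of the derived ones yields 1.0, as stated for Figure 3.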
Figure 3. Algorithm to compute similarity for a sequence [S1→S2→...→Sn]

A simplified version of the algorithm that computes the DSRM's similarity is shown in Figure 3. It takes a set of method structures and a sequence serving as the retrieval condition as arguments, and returns an array of similarity values for the set of method structures. The getMethodStructure(j) function is assumed to return the structure of the j-th method extracted by the structure extraction tool; the function abstracts the implementation of the internal structure of the method, which is represented as a sequence of statements. The Count function takes three arguments: a method_structure MS, a sequence of statements TN, and an integer R. Note that an element of the method_structure is compatible with a sequence of statements. The SqcComb(TN, R) function generates the combinations of statement sequences obtained by negating R statements in the sequence TN. Matching between the method_structure MS and these combinations is then performed. The Count
function returns the number of positive statements that match the combinations of statement sequences. The SimDSRM function calculates the similarity according to the DSRM's definition. Note that the similarity is 1.0 when a method includes the sequence [S1→S2→...→Sn] and does not include any of the sequences derived from [S1→S2→...→Sn].

4. CODE RETRIEVAL EXPERIMENTS

4.1. Approach

Cosine similarity is extensively used in research on retrieving documents written in natural languages and on recovering links between software artifacts [1,12]. Set-based indexes, such as the Jaccard index and the Sorensen-Dice index, are used in a variety of research, including software clustering [16] and the generation of refactoring guidelines [19]. Here, we present experimental results obtained using cosine similarity, the extended Sorensen-Dice index, and the DSRM's similarity.

4.2. Vector Space Model Results

It is natural to assign structural metrics to the elements of a document vector. For example, the evaluateClientSideJsEnablement() method shown in Figure 2 is represented by the vector (4, 1, 2, 1, 1, 1, 1, 1, 1), where the first element corresponds to if-statements, the second to for-statements, the third to the addParameter method identifier, the fourth to the configuration.getRuntimeConfiguration method identifier, and so on. Thus, the extracted fragments of Struts 2.3.1.1 Core are vectorized to produce a 1,420 × 2,667 matrix.

In Struts 2 Core, the addParameter method is often called immediately after an if-statement, because addParameter adds the key and value given by its arguments to the parameter list maintained by the Struts 2 process after checking for the existence of the key.
Thus, equal numbers of if-statements and addParameter method identifiers form a reasonable retrieval condition in the vector space retrieval model. Table 1 shows the top 27 methods retrieved by a query vector consisting of one if-statement, one addParameter method identifier, and one curly brace "}". The third column of Table 1 shows the similarity values calculated by cosine similarity. In fact, all 2,667 methods were retrieved because every method includes at least one curly brace "}"; only 38 methods had similarity values greater than 0.3. The result looks fairly good at a glance; however, it includes some controversial methods, given that we are retrieving an addParameter method identifier called just after an if-statement.

Figure 4 shows ActionError::void evaluateExtraParams(), which has the same structure as ActionMessage::void evaluateExtraParams() except for the string arguments "actionErrors" and "actionMessages". The cosine similarity of ActionError::void evaluateExtraParams() is 0.846, and the extended Sorensen-Dice index is 0.750, because the method includes two if-statements and two addParameter calls. However, the method does not include any sequence of an if-statement followed by an addParameter call; thus, the DSRM's similarity is estimated to be 0.

Let a "boundary method" be a retrieved method whose DSRM similarity is greater than 0 and whose cosine similarity is minimal. The evaluateClientSideJsEnablement() method, shown at
No. 19 in Table 1, is the boundary method, with the minimum cosine similarity of 0.472. Table 1 consists of the retrieved methods whose cosine similarities are greater than or equal to the cosine similarity of the boundary method (0.472). The methods whose "No" column is shaded in Table 1 are those whose DSRM similarity equals 0; these methods are controversial candidates. Details are discussed in the following sections.

Table 1. Top 27 retrieved methods

4.3. Extended Sorensen-Dice Index Results

The extended Sorensen-Dice index defined in Section 3.2 is greater than 0 only when all three query elements are included in a method structure. In the vector space model, the similarity is greater than 0 when at least one of the three elements is included. In other words, the extended Sorensen-Dice index imposes an AND condition on the retrieval elements, whereas the vector space model imposes an OR condition. Thus, the results of the extended Sorensen-Dice index are a subset of the results of the vector space model. For example, the extended Sorensen-Dice index evaluates to 0 for the FieldError::void setFieldName() method (No. 25 in Table 1) and the Text::void addParameter() method (No. 26 in Table 1), whereas the similarities obtained for these methods by the vector space model are 0.928. Both methods contain "addParameter" and "}"; however, neither contains an if-statement.
Because "addParameter" is a rare term, its term weight is so high that the similarity value works out to 0.928.

4.4. Derived Sequence Model Results

The DSRM's similarity is greater than 0 only when the sequence [if{→addParameter→}] is included in an extracted method structure. This means that the DSRM imposes a more severe retrieval condition than the extended Sorensen-Dice model; in other words, the results of the DSRM are a subset of the results of the extended Sorensen-Dice model. The source code of ActionError::void evaluateExtraParams() (No. 22 in Table 1) is shown in Figure 4. The similarity of this method is estimated to be 0 by the derived sequence model because the method does not include the sequence [if{→addParameter→}]; its similarity is 0.75 in the extended Sorensen-Dice model because the method includes two if-statements and two addParameter method calls. A program is essentially a sequence of statements; because the DSRM computes similarity based on a sequence of statements, it achieves higher performance than the other models.

Figure 4. Example method that does not include a sequence of an if-statement followed by addParameter

4.5. Summary of Experiments

Table 2 summarizes the 27 retrieval experiments using the three models. Column three of Table 2 presents the number of methods retrieved by the DSRM with similarity values greater than 0. Column four presents the number of methods retrieved by the extended Sorensen-Dice model with similarity values greater than 0, and column five shows the number of methods retrieved by the vector space model with tf-idf weighting. The experiment shown in Table 1 corresponds to No. 14 in Table 2. The degree of improvement of the DSRM over the extended Sorensen-Dice index is calculated by the following formula:

Improvement (%) = (NDice − NDSRM) / NDice × 100

where NDice and NDSRM denote the numbers of methods retrieved by the extended Sorensen-Dice model and the DSRM, respectively.
The degree of improvement of the DSRM relative to the vector space model with tf-idf weighting is calculated by the analogous formula. The degree of improvement ranges from 0% to 90.1% over the extended Sorensen-Dice model and from 22.2% to 90.9% over the vector space model with tf-idf weighting.

As previously mentioned, when the similarity is greater than 0, the results of the DSRM are a subset of the results of the extended Sorensen-Dice index, and the results of the extended Sorensen-Dice index are a subset of the results of the vector space model. Note that this set inclusion relationship does not always hold when the top N methods are selected. For example, for No. 23 and No. 27 in Table 2, the degree of improvement over the extended Sorensen-Dice model is 80.0%, while that over the vector space model is 60.9%. In these cases, the similarity of the vector space model with tf-idf weighting is 0.413, which is well above 0.

Table 2. Summary of 27 retrieval experiments
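Under the reading that improvement is measured on the number of retrieved methods (an assumption; fewer retrieved methods means higher selectivity), the degree of improvement can be computed as:

```python
# Hedged sketch of the improvement formula: the relative reduction in
# the number of retrieved methods, expressed as a percentage.
def improvement(n_baseline, n_dsrm):
    return (n_baseline - n_dsrm) / n_baseline * 100.0

# e.g. a baseline retrieving 11 methods where the DSRM retrieves 1
print(round(improvement(11, 1), 1))  # → 90.9
```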
Figure 5 plots the degree of improvement, sorted by the degree of improvement over the extended Sorensen-Dice model. The horizontal axis shows the sample number given in the first column of Table 2, and the vertical axis shows the degree of improvement as a percentage. The DSRM outperformed the extended Sorensen-Dice model for all retrieval samples except No. 7, 8, 9, and 10. The extended Sorensen-Dice model outperformed the vector space model with tf-idf weighting except for samples No. 23 and No. 27.

Figure 5. Degree of DSRM's improvement

5. ELAPSED TIME COMPARISONS

Table 3 summarizes the elapsed time, in milliseconds, of the three retrieval models for the 27 sample retrievals. We measured the elapsed time in the following experimental environment:

CPU: Intel Core i3 540, 3.07 GHz
Main memory: 4.00 GB
OS: Windows 7, 64 bit

The three retrieval models were implemented in Visual Basic for Excel 2010. The 1,420 unique statement fragments, including control statements and method calls, were extracted from the Struts 2.3.1.1 Core source code; thus, a 1,420 × 2,667 matrix was stored in an Excel sheet for the vector space model experiments. All 2,667 methods were transformed into 2,667 sequences of extracted statements, which were likewise stored in an Excel sheet for the extended Sorensen-Dice model and DSRM experiments. Throughout the experiments, all retrieval data were accessed from Excel sheet cells; thus, it is fair to say that the three retrieval models were evaluated under equal conditions.
Table 3. Elapsed time of the three retrieval models (ms)

Figure 6 plots the elapsed times presented in Table 3. The horizontal axis shows the sample number given in column one of Table 2, and the vertical axis shows the elapsed time in milliseconds. All 27 samples were processed in near-constant time by the vector space model because a given query is evaluated against the 1,420 × 2,667 matrix. In contrast, the extended Sorensen-Dice model and the DSRM require elapsed time approximately proportional to the number of sequences derived from a given retrieval condition. Both models generate two derived sequences for samples No. 1 to No. 13, so three retrievals were executed, with an average execution time of 0.171 milliseconds per retrieval. For samples No. 14 to No. 23, both models executed seven retrievals, with an average execution time of 0.187 milliseconds. For samples No. 24 to No. 27, both models executed 15 retrievals, with an average execution time of 0.193 milliseconds. The elapsed time required per derived sequence increases by approximately 3%-8% due to the overhead of the retrieval process. The results in Table 3 indicate that the DSRM is approximately 10 times faster than the vector space model.
Figure 6. Elapsed time comparison

6. CONCLUSIONS

We presented a source code retrieval model that takes a sequence of statements as a retrieval condition, and we conducted three types of experiments using the vector space model, the extended Sorensen-Dice model, and the derived sequence retrieval model (DSRM). The key contributions of our approach are the definition of the DSRM's similarity measure as an extension of the Sorensen-Dice index and its evaluation on the Struts 2 Core source code, a moderately sized Java program. The experimental results demonstrate that the DSRM's similarity measure has higher selectivity than the other models, a natural consequence of the fact that a program is essentially a sequence of statements. The results are promising enough to warrant further research.

In the future, we intend to improve our algorithms by incorporating additional information, such as class inheritance and method overloading. We also plan to develop a better user interface, which would allow us to conduct further user studies and to assess the retrieved code more easily and precisely. In addition, we plan to conduct experiments on various types of open source programs available on the Internet.

ACKNOWLEDGMENTS

We would like to thank Nobuhiro Kataoka, Tamotsu Noji, and Hisayuki Masui for their suggestions on engineering tasks to improve software quality.

REFERENCES

[1] Antoniol, G., Penta, M.D., and Merlo, E. (2004) An automatic approach to identify class evolution discontinuities. In Proceedings of the 7th International Workshop on Principles of Software Evolution, pp31-40.
[2] Baker, B.S. (1996) Parameterized pattern matching: algorithms and applications. Journal of Computer and System Sciences, 52, 1, pp28-42.
[3] Baxter, I.D., Yahin, A., Moura, L., Sant'Anna, M., and Bier, L. (1998) Clone detection using abstract syntax trees. In Proceedings of the 14th International Conference on Software Maintenance, pp368-377.
[4] Choi, S.S., Cha, S.H., and Tappert, C.C. (2010) A survey of binary similarity and distance measures. Journal of Systemics, Cybernetics and Informatics, Vol.8, 1, pp43-48.
[5] Gosling, J., Joy, B., Steele, G., and Bracha, G. (2005) The Java Language Specification, Third Edition, Addison-Wesley.
[6] Jiang, L., Misherghi, G., Su, Z., and Glondu, S. (2007) DECKARD: scalable and accurate tree-based detection of code clones. In Proceedings of the 29th International Conference on Software Engineering, pp96-105.
[7] Johnson, J.H. (1993) Identifying redundancy in source code using fingerprints. In Proceedings of the 1993 Conference of the Centre for Advanced Studies, pp171-183.
[8] Komondoor, R., and Horwitz, S. (2001) Using slicing to identify duplication in source code. In Proceedings of the 8th International Symposium on Static Analysis, LNCS Vol.2126, pp40-56.
[9] Kontogiannis, K., Demori, R., Merlo, E., Galler, M., and Bernstein, M. (1996) Pattern matching for clone and concept detection. Journal of Automated Software Engineering, 3, pp77-108.
[10] Krinke, J. (2001) Identifying similar code with program dependence graphs. In Proceedings of the 8th Working Conference on Reverse Engineering, pp301-309.
[11] Li, Z., Lu, S., Myagmar, S., and Zhou, Y. (2006) CP-Miner: finding copy-paste and related bugs in large-scale software code. IEEE Transactions on Software Engineering, Vol.32, 3, pp176-192.
[12] Marcus, A., and Maletic, J.I. (2003) Recovering documentation-to-source-code traceability links using latent semantic indexing. In Proceedings of the 25th International Conference on Software Engineering, pp125-135.
[13] Mayrand, J., Leblanc, C., and Merlo, E. (1996) Experiment on the automatic detection of function clones in a software system using metrics. In Proceedings of the 12th International Conference on Software Maintenance, pp244-253.
[14] McCabe, T.J. (1976) A complexity measure. IEEE Transactions on Software Engineering, 2, 4, pp308-320.
[15] Roy, C.K., Cordy, J.R., and Koschke, R. (2009) Comparison and evaluation of code clone detection techniques and tools: a qualitative approach. Science of Computer Programming, Vol.74, 7, pp470-495.
[16] Saeed, M., Maqbool, O., Babri, H.A., Hassan, S.Z., and Sarwar, S.M. (2003) Software clustering techniques and the use of combined algorithm. In Proceedings of the 7th European Conference on Software Maintenance and Reengineering, pp301-306.
[17] Salton, G., and Buckley, C. (1988) Term-weighting approaches in automatic text retrieval. Information Processing and Management, 24, 5, pp513-523.
[18] Stair, R.M., and Reynolds, G.W. (2003) Principles of Information Systems, Sixth Edition, Thomson Learning, Inc.
[19] Terra, R., Valente, M.T., Czarnecki, K., and Bigonha, R. (2012) Recommending refactorings to reverse software architecture erosion. In Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR), Early Research Achievements Track, pp335-340.
[20] The Apache Software Foundation. (2013) About Apache Struts 2. http://struts.apache.org/release/2.3.x/.
[21] Wahler, V., Seipel, D., Gudenberg, J.W., and Fischer, G. (2004) Clone detection in source code by frequent itemset techniques. In Proceedings of the 4th IEEE International Workshop on Source Code Analysis and Manipulation, pp128-135.
[22] Yamamoto, T., Matsushita, M., Kamiya, T., and Inoue, K. (2005) Measuring similarity of large software systems based on source code correspondence. In Proceedings of the 6th International Conference on Product Focused Software Process Improvement, pp530-544.