ISSN: 1694-2507 (Print)
ISSN: 1694-2108 (Online)
International Journal of Computer Science
and Business Informatics
(IJCSBI.ORG)
VOL 3, NO 1
JULY 2013
Table of Contents VOL 3, NO 1 JULY 2013
Comparative Analysis of Job Scheduling for Grid Environment
Neeraj Pandey, Ashish Arya and Nitin Kumar Agrawal
Hackers Portfolio and its Impact on Society
Dr. Adnan Omar and Terrance Sanchez, M.S.
Ontology Based Multi-Viewed Approach for Requirements Engineering
R. Subha and S. Palaniswami
Modified Colonial Competitive Algorithm: An Approach for Graph Coloring Problem
Hojjat Emami and Parvaneh Hasanzadeh
Security and Privacy in E-Passport Scheme using Authentication Protocols and Multiple Biometrics Technology
V. K. Narendira Kumar and B. Srinivasan
Comparative Study of WLAN, WPAN, WiMAX Technologies
Prof. Mangesh M. Ghonge and Prof. Suraj G. Gupta
A New Method for Web Development using Search Engine Optimization
Chutisant Kerdvibulvech and Kittidech Impaiboon
A New Design to Improve the Security Aspects of RSA Cryptosystem
Sushma Pradhan and Birendra Kumar Sharma
A Hybrid Model of Multimodal Approach for Multiple Biometrics Recognition
P. Prabhusundhar, V.K. Narendira Kumar and B. Srinivasan
CBR Based Performance Analysis of OLSR Routing Protocol in MANETs
Jogendra Kumar
Comparative Analysis of Job
Scheduling for Grid Environment
Neeraj Pandey
Department of Computer Science & Engineering
G. B. Pant Engineering College Ghurdauri
Uttarakhand, India.
Ashish Arya
Department of Computer Science & Engineering
G. B. Pant Engineering College Ghurdauri
Uttarakhand, India
Nitin Kumar Agrawal
Department of Computer Science & Engineering
G. B. Pant Engineering College Ghurdauri
Uttarakhand, India
ABSTRACT
Grid computing is a continuously growing technology that facilitates the execution of large-scale, resource-intensive applications on geographically distributed computing resources. For a computational grid environment, a number of scheduling policies are available to address the scheduling and load balancing problem. Scheduling techniques applied in grid systems are primarily based on the concept of queuing systems and deal with the allocation of jobs to computing nodes. A scheduler can be classified as global vs. local (what information is used to make a load balancing decision), centralized vs. decentralized (where load balancing decisions are made), and static vs. dynamic (when the distribution of load is made). The primary objectives of all load balancing algorithms are to minimize the makespan, balance the load as evenly as possible, and achieve better overall performance. In this paper, we present various load balancing strategies for job scheduling in grid computing environments and analyze the efficiency and limitations of each approach.
Keywords
Computing, Load Balancing, Scheduling, Genetic Algorithm, Fuzzy logic, Job Replication.
1. INTRODUCTION
In the last several years, grid computing has emerged as an important field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation [1]. Grids enable the sharing, selection, and aggregation of suitable computational and data resources for solving large-scale, data-intensive problems in science, engineering, and commerce [5]. A grid computing environment comprises a combination of homogeneous and heterogeneous resources, such as computing nodes and workstations,
which are virtually aggregated to serve as a unified computing resource. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous grid environment. In order to provide users with a seamless computing environment, the grid middleware system needs to solve several challenges originating from the inherent features of the grid [2].
In distributed systems, every node has a different processing speed and different system resources, so in order to enhance the utilization of each node and shorten the overall execution time, load balancing plays a critical role [15]. The performance of load balancing algorithms is strongly related to the number of computing nodes. Since each computing node has its own unique computing capability and the pattern of job arrivals at the nodes is imbalanced, parts of the grid system may become overloaded. The main objective of load balancing is to improve the performance of the grid system by distributing load among the computing nodes and minimizing the execution time of jobs. In general, load-balancing algorithms can be categorized as centralized or decentralized in terms of where the load balancing decisions are made. A centralized load balancing approach can function based on either an averages scheme or an instantaneous scheme, according to the type of information on which the load balancing decisions are made [4].
The rest of the paper is organized into six sections. Section 2 presents an overview of the system model, including the grid and mathematical models and the load balancing architecture. The load balancing approaches for grid systems are presented in Section 3. The analysis and comparison of some grid load balancing algorithms are described in Section 4. Section 5 presents some challenges and key issues related to load balancing, and finally, the conclusion is given in Section 6.
2. SYSTEM MODEL
2.1 Grid Model
The grid under study consists of a central resource management unit (RMU) to which every computing node (CN) connects; grid clients send their jobs to the RMU for further processing. The RMU is responsible for scheduling jobs among the CNs. The role of the dispatcher is job management, including maintenance of load balancing, monitoring of node status, node selection, execution, and assignment of jobs to each node. An agent provides a simple and uniform abstraction of the functions in grid management. The CNs in the grid can be either homogeneous or heterogeneous, and a queue is associated with every computing node. Arriving jobs are placed in a job queue in the RMU, from which they are assigned to CNs. The grid agent monitors the waiting jobs. Upon arrival, jobs must be assigned
either to exactly one CN for immediate processing under instantaneous scheduling, or to wait to be scheduled by the averages-based scheme [4]. In a grid system, the composition of nodes is dynamic: every node is likely to enter a busy state at any time and thus lower its performance, so all factors should be considered when selecting nodes. At the global grid level, each agent is a representative of a grid resource and acts as a service provider of high-performance computing power [14].
2.2 Mathematical Model
A computational grid system model [21], consisting of a set of CNs, is shown in Figure 1. It comprises n computing nodes (CNs) such that:

$N = \{N_1, N_2, N_3, \ldots, N_{n-1}, N_n\}, \quad |N| = n$   (1)
The nodes are connected to each other via a communication network. A node of the grid system may be either an individual machine or a cluster. The nodes may be homogeneous or heterogeneous, and each is modeled as an M/M/1 queuing system; the inter-arrival times and service times are exponentially distributed. The notations and assumptions used are as follows:
 φi : External job arrival rate at node i.
 μi : Mean service rate of node i.
 λ : Total traffic through the network.
A job arriving at node i may be either processed at node i or transferred to another node j through the communication network for computation. The performance of a load balancing policy is closely related to the number of nodes involved in the computational grid system.
Figure 1. Grid System Model
 Φ : Total external job arrival rate of the system and given as:
$\Phi = \sum_{i=1}^{n} \phi_i$   (2)
 ri : Mean service time of a job at node i (i.e. the average time to
service (process) a job at node i).
 βi : Mean job processing rate (load) at node i (i.e. the average
number of jobs processed at node i per unit interval of time). This is
the load for node i assigned by the job allocation scheme.
 t : Mean communication time for sending and/or receiving a job from one node to another node.
 ρ : Utilization of the communication network and given as:

$\rho = \lambda \, t$   (3)
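As a rough numerical illustration of equations (2) and (3) (our own sketch, not from the paper, with made-up example values), the following Python snippet computes the total arrival rate and the network utilization:

```python
# Illustrative sketch of equations (2)-(3); all values are assumed examples.

phi = [2.0, 3.5, 1.5]   # external job arrival rate at each node i (jobs/s)
t = 0.01                # mean communication time per transferred job (s)

Phi = sum(phi)          # total external arrival rate, equation (2)

lam = 1.2               # assumed total traffic routed over the network (jobs/s)
rho = lam * t           # utilization of the communication network, equation (3)

print(f"Phi = {Phi:.2f} jobs/s, rho = {rho:.4f}")
```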
3. GRID LOAD BALANCING APPROACH
In this section, four algorithms are considered for grid load balancing. A grid is a huge collection of multiple grid resources (local or global) that are distributed geographically over a wide area. Load balancing is an important system function intended to distribute the workload among the available computing nodes, either uniformly or non-uniformly, to improve the throughput and/or execution times of parallel computer programs [19]. Various load balancing algorithms have been proposed over the past several years, using approaches such as fuzzy logic, genetic algorithms, and job replication.
3.1 FCFS Approach
The FCFS algorithm [4, 14] has been shown to be efficient under some conditions. Consider a grid environment G with n CNs, as given in equation (1), where each CN Ni has its own capability to process jobs. Let m be the total number of jobs in the set J that is to be run on G:

$J = \{J_1, J_2, J_3, \ldots, J_{m-1}, J_m\}$   (4)
The arrival time of each job Jj is tj and its execution time is txj. Each job has two scheduled attributes, a start time and an end time, denoted by ts and te, respectively. Upon arrival, jobs are allocated to a certain CN Nx ∈ N by the central resource management unit using the first-come-first-served algorithm. The function of the agent is to find the earliest possible time for each job to complete, according to the sequence of job arrivals. A job may be allocated to any of the CNs, so the function of FCFS is to consider all these possibilities and identify which CN will finish the job earliest. Therefore,
$tx_j = \min(tx_j)$   (5)
The completion time of jobs is always equal to the earliest possible start
time plus the execution time, which is described as:
$tc_j = ts_j + tx_j$   (6)
The earliest possible start time for job Jj on a CN is the latest free time of the chosen CN if there are still jobs running on it; if there is no job running on the chosen CN ahead of job Jj's arrival, Jj can be executed on this CN immediately.

$ts_j = \max\{\, t_j,\ \max\{te_p\} \,\}, \quad p \in P,\ P(p) = P(j),\ p \neq j$   (7)
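The earliest-completion FCFS assignment described above can be sketched as follows (an illustrative Python snippet of our own, not the authors' implementation; the job list and node count are assumed examples):

```python
# Illustrative FCFS sketch: each job is assigned, in arrival order, to the
# CN that can finish it earliest. A CN's candidate start time is
# max(arrival time, CN's latest free time), in the spirit of equations (5)-(7).

def fcfs_schedule(jobs, n_nodes):
    """jobs: list of (arrival_time, exec_time) tuples, sorted by arrival."""
    free_at = [0.0] * n_nodes            # latest free time of each CN
    schedule = []
    for arrival, exec_time in jobs:
        starts = [max(arrival, free_at[i]) for i in range(n_nodes)]
        completions = [starts[i] + exec_time for i in range(n_nodes)]
        best = min(range(n_nodes), key=lambda i: completions[i])
        free_at[best] = completions[best]
        schedule.append((best, starts[best], completions[best]))
    return schedule

# Example: four (arrival, execution time) jobs scheduled on two CNs
print(fcfs_schedule([(0, 3), (1, 2), (1, 4), (2, 1)], n_nodes=2))
```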
3.2 Job Replication Approach
Menno Dobber et al. [11] analyze and compare the effectiveness of the dynamic load balancing and job replication approaches. The two main techniques best suited to the dynamic nature of the grid are Dynamic Load Balancing (DLB) and Job Replication (JR) [11]. Consider a grid system P with n computing nodes; a set J of jobs Ji (as given in equations 1 and 8) is to be run on P. As the name implies, a JR scheme creates several replicas of an individual job and schedules them to run on different nodes. The node that finishes the job first sends a message to the other nodes involved in the grid system to stop executing the current job and start executing the next job available in the queue.
Figure 2. Job Replication (2JR) Scheme
An m-JR scheme creates m-1 exact copies of each job, and these jobs are run on P. The same data set and parameters are provided to all copies of a job, so they perform exactly the same computation. The JR approach consists of I iterations, and one iteration takes a total of R steps. The copies of all jobs are spread out over P. As soon as a computing node finishes one of the copies of a job, it sends a message to the other computing nodes to kill
the current job's execution and start processing the next job. A 2-JR scheme is shown in Figure 2, consisting of 4 CNs and 4 jobs. First, 2 copies of each job are created; then each job together with its copy is distributed to n = 2 CNs. In Figure 2, one copy (A2) of job A1 is created and the two are distributed to CN1 and CN2. CN1 finishes job A1 first, so it sends a finalize message "fin" to CN2. Sending a message between CNs incurs some network delay, so the scheduling of the next job by a CN can take some time. The duration of a specific job is defined as:

Job-Time = min{all its job times} + possible send time   (8)
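A minimal sketch of the timing rule in equation (8), under assumed replica run times and send delay (our own illustration, not the simulator used in [11]):

```python
# Illustrative sketch of equation (8): the effective duration of a job is the
# fastest replica's run time plus the possible network delay for the "fin"
# message. Replica run times and the send delay are assumed example values.
import random

def replicated_job_time(replica_times, send_delay):
    """replica_times: run times of the m replicas of one job on their CNs."""
    return min(replica_times) + send_delay

# 2-JR example: two replicas of one job with assumed run times
replicas = [random.uniform(4.0, 8.0) for _ in range(2)]
print(round(replicated_job_time(replicas, send_delay=0.2), 2))
```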
3.3 Genetic Algorithm Based Approach
Genetic algorithms (GA) [4, 7, 8, 22] are increasingly popular for solving optimization, search, and machine learning problems. A GA is a well-known and robust search heuristic that searches for an optimal solution over the entire search space. In a grid environment, scheduling is an NP-complete problem, i.e., there is no known polynomial-time algorithm that solves it optimally. The main objectives of using a GA for load balancing are to minimize the makespan (the latest completion time among all the jobs processed by the CNs), maximize node utilization, and achieve a well-balanced load across all the CNs. Genetic algorithms are well suited to scheduling problems because, unlike heuristic methods, a GA operates on a population of solutions rather than a single solution. A combination of intelligent agents and multi-agent approaches is applied to both local grid resource scheduling and global grid load balancing. A GA generates solutions to optimization problems using operators such as selection, mutation, and crossover. Let P be a set; then the cost function CF is given as:

$CF : P \to \mathrm{IR}$   (9)
A GA starts with an initial set of random solutions called a population. Each individual in the population, called a chromosome, represents a solution to the problem. Chromosomes evolve through successive generations. During each generation, the chromosomes are evaluated using a fitness function. To create the next generation, new chromosomes, called offspring, are formed by applying either a crossover operator or a mutation operator. The new generation is then formed by selecting individuals according to their fitness values. After several generations, the best chromosome is chosen; it represents the optimal or a suboptimal solution to the problem [4]. The GA concentrates on overall performance over a list of jobs and aims at a more desirable load balance across all the nodes in the computational grid.
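A compact sketch of such a GA (a simplified illustration of our own, not the exact algorithm of [4] or [7]): a chromosome maps each job to a node, the fitness is the makespan to be minimized, and new generations are built with selection, one-point crossover, and mutation.

```python
# Illustrative GA sketch: chromosome = list mapping job index -> node index,
# fitness = makespan (latest node finish time), to be minimized.
import random

def makespan(chrom, exec_times, n_nodes):
    load = [0.0] * n_nodes
    for job, node in enumerate(chrom):
        load[node] += exec_times[job]
    return max(load)

def ga_schedule(exec_times, n_nodes, pop_size=30, generations=100):
    n_jobs = len(exec_times)
    pop = [[random.randrange(n_nodes) for _ in range(n_jobs)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: makespan(c, exec_times, n_nodes))
        survivors = pop[:pop_size // 2]                 # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_jobs)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                   # mutation
                child[random.randrange(n_jobs)] = random.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda c: makespan(c, exec_times, n_nodes))
    return best, makespan(best, exec_times, n_nodes)

# Example: six jobs with assumed execution times on three nodes
print(ga_schedule([5, 3, 8, 2, 7, 4], n_nodes=3))
```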
3.4 Fuzzy Based Approach
This section introduces the fuzzy method used in the fuzzy load balancing algorithm. Fuzzy logic [9, 10, 12, 13, 23] deals with reasoning that is approximate rather than fixed and precise. It is a superset of conventional Boolean logic that has been extended to handle the concept of partial truth values between "completely true" and "completely false". In a broader sense, fuzzy logic is associated with the concept of fuzzy sets, whose terms are given by linguistic variables. Using fuzzy logic, one can specify the degree of overload (or its absence) with linguistic labels such as lightly loaded, normal, overloaded, and heavily overloaded. Figure 3(a) shows the use of fuzzy logic to represent degrees of truth. The fuzzy expert system makes scheduling decisions based on fuzzy logic. As shown in Figure 3(b), if processor load is regarded as a linguistic variable, it may have the following terms as its values: light, moderate, and heavy. If 'light' is interpreted as a load below about one job, 'moderate' as a load between about 3 and 5 jobs, and 'heavy' as a load above about 7 jobs, these terms may be described as fuzzy sets, where p represents the grade of membership [9]. A given fuzzy rule Ri consists of two differentiated parts, namely the antecedent and the consequent, related to fuzzy concepts [12]. A rule's activation conditions are reflected in its antecedent part. A rule of this form is represented by the following expression:
$R_i: \text{if } \omega_1 \text{ is } A_{1n} \text{ and/or } \ldots\ \omega_m \text{ is } A_{mn} \text{ then } y \text{ is } B_n$   (14)
where ωm represents a system feature, y denotes the output variable, and Amn and Bn correspond to the fuzzy sets associated with feature m and the output, respectively.
Figure 3. (a) Degree of truth representation (b) A linguistic variable, processor load
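To make the linguistic terms concrete, the following sketch (our own, using the approximate breakpoints quoted above; the piecewise-linear shapes are an assumption) defines membership functions for the 'processor load' variable:

```python
# Illustrative membership functions for the linguistic variable "processor
# load": light below about 1 job, moderate around 3-5 jobs, heavy above
# about 7 jobs, with assumed linear ramps in between.

def light(load):
    if load <= 1: return 1.0
    if load >= 3: return 0.0
    return (3 - load) / 2          # ramp down between 1 and 3

def moderate(load):
    if load <= 1 or load >= 7: return 0.0
    if 3 <= load <= 5: return 1.0
    return (load - 1) / 2 if load < 3 else (7 - load) / 2

def heavy(load):
    if load <= 5: return 0.0
    if load >= 7: return 1.0
    return (load - 5) / 2          # ramp up between 5 and 7

for jobs in (0.5, 2, 4, 6, 8):
    print(jobs, round(light(jobs), 2), round(moderate(jobs), 2), round(heavy(jobs), 2))
```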
4. COMPARATIVE ANALYSIS
This section examines various characteristics of load balancing policies. Genetic algorithms are applicable to a wide variety of applications [8] and work better when a large number of jobs must be scheduled. The sliding-window technique [4, 7] is generally used to trigger the GA: the sliding window consists of a series of jobs awaiting node assignment. The main
concerns are the size of the sliding window and the point at which it is updated, which happens once the jobs in the window have been assigned to appropriate nodes.
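A minimal sketch of this sliding-window trigger (our own illustration; the window size and the stand-in for the GA are assumptions):

```python
# Illustrative sliding-window trigger: the GA is run only on the next
# `window_size` waiting jobs; once they are assigned, the window slides
# forward over the job queue.

def schedule_with_window(job_queue, window_size, run_ga):
    """run_ga: callable taking a list of jobs and returning assignments."""
    assignments = []
    for start in range(0, len(job_queue), window_size):
        window = job_queue[start:start + window_size]
        assignments.extend(run_ga(window))   # GA applied to one window
    return assignments

# Example with a trivial stand-in for the GA (round-robin over 3 nodes)
dummy_ga = lambda jobs: [(j, i % 3) for i, j in enumerate(jobs)]
print(schedule_with_window(list("ABCDEFG"), window_size=4, run_ga=dummy_ga))
```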
Table 1. Comparative analysis of load balancing policies

Genetic Algorithm
  Factors to consider: 1. Window size and its update. 2. String selection for searching nodes in the search space. 3. Fitness function for measuring the performance of a string. 4. Population size. 5. Processing overhead.
  Advantages: 1. Minimum execution time. 2. Minimized communication cost. 3. Maximum utilization of nodes. 4. Maximum throughput value. 5. Minimized makespan value.

Fuzzy Logic
  Factors to consider: 1. Inference engine. 2. Decision making. 3. Load assignment. 4. State update decision.
  Advantages: 1. Better performance and throughput. 2. Significantly better response time.

Job Replication
  Factors to consider: 1. Number of copies of each job. 2. Communication between nodes. 3. Processing overhead.
  Advantages: 1. Performs consistently better when the measured statistic is less than a threshold value.

FCFS
  Factors to consider: 1. Instantaneous decision. 2. Sequence of job arrival.
  Advantages: 1. Reduced system response time. 2. Shorter makespan.
Table 1 shows a comparative analysis of some load balancing policies. The quality of a GA is measured by the solutions it produces after several generations. The measurements of the input variables of a fuzzy controller must be properly combined with relevant fuzzy information rules to make inferences regarding the system load state. The job replication scheme maintains multiple copies of each job, so it imposes extra overhead on the computing nodes. To make an instantaneous decision, the FCFS approach is preferred. The primary focus of all the load balancing approaches is to spread the workload over the nodes in such a manner that the makespan is minimized and the load is well balanced among all the nodes in the grid system; therefore, the current workload in the system must be considered when scheduling a job to an appropriate node.
5. CHALLENGES AND KEY ISSUES
5.1 Challenges in Load Balancing
Various strategies have been developed for solving the load balancing problem, but it is not yet solved completely. Data partitioning and load balancing are important components of parallel computations. For static and dynamic load balancing, various strategies have been developed, such as recursive bisection (RB) methods, space-filling curve (SFC) partitioning, and graph partitioning [20]. A computational grid system consists of various components such as computing nodes and workstations. There is heterogeneity among the various attributes of a node, such as hardware architecture, physical memory, CPU speed, and node capacity, which affects the processing result. Dynamic behavior and node failures can decrease the performance of the grid, and the selection of resources (or nodes) for jobs can also be a factor.
5.2 Key Issues in Load Balancing
Load balancing is a crucial issue for the efficient operation of computational grid environments that share heterogeneous resources, and hence it affects quality of service and scheduling. In dynamic load balancing, load sharing and task migration are some of the most widely researched issues [7]. In a networked environment, interoperability (common protocols) is the central issue to be addressed. How to select efficient nodes is another issue for further investigation. Since a grid is a distributed system utilizing idle nodes scattered across every region, the most critical issue pertaining to distributed systems is how to integrate and apply every computer resource (node) into a distributed system, so as to achieve the goals of enhanced performance, resource sharing, extensibility, and increased availability [15]. In order to make optimal balancing and work distribution decisions in a grid environment, a load balancer needs to take some or all of the following information into consideration:
 The capacity on each node (CPU, memory, and disk space, etc.)
 Current workload of each node
 Required capacity for each task
 Network connectivity and delay
 The assignment of processes to processors
 The use of multiprogramming on individual processors
 The actual dispatching of a process.
The choice of parameters and state information should also be considered. For various software engineering related issues, some software toolkits exist that provide effective solutions. Distributing the load among the CNs in an optimal way is not an easy task, as it requires a complete analysis of the available resources and jobs. A load balancing policy can be either static
or dynamic. The major concern in a static load balancing policy is to determine the execution time of the jobs, the communication delays, and the resources used by the computing nodes. Since an accurate estimation is not possible ahead of time, the emphasis should be on estimating such quantities as close to their true values as possible.
6. CONCLUSIONS
In this paper, we discussed a few contemporary load balancing strategies based on various approaches for computational grid environments. With the rapid development of technology, grid computing has increasingly become an attractive computing platform for a variety of applications. In a computational grid environment, load balancing is the process of improving the performance of the system through re-distribution of load among the computing nodes. When newly created jobs arrive into the system arbitrarily, some nodes can become heavily loaded while others remain idle or lightly loaded; therefore, job assignment and load sharing must be done carefully. The problem of load balancing is closely related to the scheduling and allocation of jobs to computing nodes. For efficient utilization of grid resources and maximum node utilization, a suitable scheduling policy is needed. Load balancing methods vary greatly between different grid environments, depending on the needs and the availability of computing nodes to perform the tasks. Various factors affect the performance of a grid application, such as load balancing, resource sharing, and resource heterogeneity, and these must be considered when making decisions.
REFERENCES
[1] I. Foster, C. Kesselman, and S. Tuecke, The anatomy of the grid: Enabling scalable
virtual organizations, The International Journal of High Performance Computing
Applications, 15 (3) (2001) 200-222.
[2] Rajkumar Buyya, and Srikumar Venugopal, A Gentle Introduction to Grid Computing
and Technologies, Computer Society of India, CSI Communications, July (2005).
[3] C. Gary Rommel, The Probability of Load Balancing Success in a Homogeneous
Network, IEEE transactions on Software Engineering, 17(9), Sept. (1991) 922-933.
[4] Yajun Li, Yuhang Yang, Maode Ma, and Liang Zhoy, A hybrid load balancing
strategy of sequential tasks for grid computing environments, Future Generation
Computer Systems, 25 (2009) 819-828.
[5] Rajkumar Buyya and Manzur Murshed, GridSim: A Toolkit for the Modeling and
Simulation of Distributed Resource Management and Scheduling for Grid Computing,
The Journal of Concurrency and Computation: Practice and Experience (CCPE),
Volume 14, Issue 13-15, Wiley Press, Nov.-Dec., (2002).
[6] Vandy Berten, Joel Goossens, and Emmanuel Jeannot, On the Distribution of
Sequential Jobs in Random Brokering for Heterogeneous Computational Grids, IEEE
Transactions on Parallel and Distributed Systems, 17 (2) (2006) 1-12.
[7] Albert Y. Zomaya, Yee-Hwei Teh, Observations on Using Genetic Algorithms for
Dynamic Load-Balancing, IEEE Transactions on Parallel and Distributed Systems,
12(9), Sept. (2001) 899-911.
[8] Carlos Alberto Gonzalez Pico, Roger L. Wainwright, Dynamic Scheduling of
Computer Tasks Using Genetic Algorithms, Proc. of the First IEEE Conference on
Evolutionary Computation - IEEE World Congress on Computational Intelligence,
Orlando, Florida June, (1994) 829-833.
[9] Chulhye Park, Jon G. Kuhl, A Fuzzy-Based Distributed Load Balancing Algorithm for
Large Distributed Systems, IEEE (1995) 266-273.
[10]Kaveh Abani, Kiumi Akingbehh, Adnan Shaout, Fuzzy Decision Making for Load
Balancing in a Distributed System, IEEE, (1993) 500-502.
[11]Menno Dobber, Rob van der Mei, Ger Koole, Dynamic Load Balancing and Job
Replication in a Global-Scale Grid Environment: A Comparison, IEEE Transactions on Parallel and Distributed Systems, 20 (2), Feb. (2009) 207-218.
[12]Yu-Kwong Kwok, Lap-Sun Cheung, A new fuzzy-decision based load balancing
system for distributed object computing, Journal of Parallel and Distributed
Computing, 64 (2004) 238–253.
[13]Mika Rantonen, Tapio Frantti, Kauko Leiviska, Fuzzy expert system for load balancing
in symmetric multiprocessor systems, Expert Systems with Applications, 37 (2010)
8711–8720.
[14]Junwei Caoa, Daniel P. Spooner, Stephen A. Jarvis and Graham R. Nudd, Grid load
balancing using intelligent agents, Future Generation Computer Systems, 21 (2005).
[15]K.Q. Yan, S.C. Wang, C.P. Chang, J.S. Lin, A hybrid load balancing policy underlying
grid computing environment, Computer Standards & Interfaces 29 (2007) 161–173.
[16]Jun Wang, Jian-Wen Chen, Yong-Liang Wang, Di Zheng, Intelligent Load Balancing
Strategies for Complex Distributed Simulation Applications, 2009 International
Conference on Computational Intelligence and Security, 2009 (182-186).
[17]Kuo-Qin Yan, Shun-Sheng Wang, Shu-Ching Wang, Chiu-Ping Chang, Towards a
hybrid load balancing policy in grid computing system, Expert Systems with
Applications 36 (2009), 12054–12064.
[18]Brighten Godfrey, Karthik Lakshminarayanan, Sonesh Surana, Richard Karp, Ion
Stoica, Load Balancing in Dynamic Structured P2P Systems, IEEE INFOCOM (2004).
[19]Luis Miguel Campos, Isaac D. Scherson, Rate of change load balancing in distributed
and parallel systems, Parallel Computing 26 (2000), 1213-1230.
[20]Karen D. Devine, Erik G.Boman, Robert T. Heaphy, Bruce A. Hendrickson, James D.
Teresco, Jamal Faik, Joseph E. Flaherty, Luis G. Gervasio, New challenges in dynamic
load balancing, Applied Numerical Mathematics 52 (2005) 133–152.
[21]Satish Penmatsa, and Anthony T. Chronopoulos, Comparison of Price-based Static
and Dynamic Job Allocation Schemes for Grid Computing Systems, Eighth IEEE
International Symposium on Network Computing and Applications, (2009) 66-73.
[22]In Lee, Riyaz Sikora, and Michael J. Shaw, A Genetic Algorithm-Based Approach to
Flexible Flow-Line Scheduling with Variable Lot Sizes, IEEE Transactions On
Systems, Man, and Cybernetics-Part B: Cybernetics, 27(1), Feb. (1997).
[23]S.Salleh, and A.Y.Zomaya, Using Fuzzy Logic for Task Scheduling in Multiprocessor
Systems, Proc. 8th ISCA International Conference on Parallel and Distributed
Computing Systems, Orlando, Florida, (1995) 45-51.
Hackers Portfolio and its Impact on
Society
Dr. Adnan Omar & Terrance Sanchez, M.S.
6400 Press Drive
Southern University at New Orleans
ABSTRACT
Currently, a hacker is defined as a person who uses computers to explore a network to which he or she does not belong. Hackers find new ways to harass people, defraud corporations, steal information, and sometimes even destroy valuable information by infiltrating private and non-private organizations. According to recent research, bad hackers make up only a small minority of the hacker community. In today's society, we depend on more technology than ever, and that increases the likelihood of hackers having more control over cyberspace. Hackers work by collecting information on the intended target, figuring out the best plan of attack, and then exploiting vulnerabilities in the system. Programs such as Trojan horses and the Flame virus are designed and used by hackers to gain access to computer networks. This paper describes how hacker behavior is aimed at information security and what measures are being taken to combat them.
Keywords
Types of Hacker, Security, Technology, Cyberspace.
1. INTRODUCTION
Hacking is a very serious problem that can severely compromise your
computer. If your computer is connected to the Internet, you are vulnerable
to cyber-attacks from viruses and spyware. It is virtually impossible to stop
a determined and skilled hacker from breaching most home network
security measures commercially available [1]. The primary objective of
hacking is to gather information and documents that could compromise the
security of governments, corporations or other organizations and agencies.
In addition to focusing on diplomatic and governmental agencies around the
world, the hackers also attack individuals as well as groups.
The computer term "hacker" can carry either a good or a bad reputation, according to the mass media. Hackers have developed new ways to use computers
since their invention, and create programs that no one else can, to utilize
their potential. Hackers are motivated by various reasons which may range
from bold ideas, lofty goals, great expectations or simple deviation from the
norm as well as the excitement of intrusion into a complicated computer
system. In the past, hacking has been used to hassle an intended victim, steal
information, or spread viruses. Not all hacking results from premeditated malicious intent. Most hackers are interested in how computer networks function and in the barriers between them, and that knowledge is considered a
challenge to their intelligence. However, some use their knowledge to help
corporations and governments construct better security measures. Although
we have heard about mischievous hackers sabotaging computers, networks,
and spreading viruses, most of the time hackers are just driven by the
curiosity they have about how different systems and programs work.
Although malicious and intrusive methods may be representative of what
hackers do, many of the methods and tools used by them are constructive in
fixing glitches in software as well as focusing on the vulnerability of
computer technology. It is through exposure of these vulnerabilities that
new ideas and better security measures are created.
When someone hears the word "hacker" one might immediately conjure
images in the mind’s eye of a criminal; more specifically, a criminal sitting
at a computer typing away with a screen reading "Access Denied." That
image in mind, one has the mainstream image of a modern day hacker. In
today’s society, hacking is just as prevalent as it has been in years past.
Viruses are still coded every day, worms still crawl the internet, and Trojan
horses continue to allow back door access into computer systems [2].
Even within hacker society, the definitions range from socially very positive
to criminal. In [3], there are two basic principles hackers live by: first, that information sharing is a powerful good and that it is the ethical duty of hackers to share their expertise by writing free software and facilitating access to information and computing resources whenever possible; second, that system cracking for fun and exploitation is ethically acceptable as long as the cracker commits no theft, vandalism, or breach of confidentiality. This view differentiates between benign and malicious hackers based on whether damage is done, though in reality all hacking involves intrusion and a disregard for the efforts, works, and property of others.
This research reviews the literature on hackers and identifies the countries, motivations, and types of penalties most commonly involved in hacking activities. In addition, it addresses the steps that need to be put in place in order to reduce hacking, and the types of penalties that apply.
2. LITERATURE REVIEW
Electronic information is a critical part of our culture. Yet no matter where
the technology has taken us, the fact remains that what happens in
cyberspace has tangible impacts on each of our lives. Therefore, it is as
important for us to be secure in cyberspace as it is in our physical world [4].
According to a report from McAfee based on a survey conducted globally
on more than 800 IT company CEO's in 2009, data hacking and related
cybercrimes have cost multinational companies one trillion U.S. dollars [5].
The media often presents hackers as having a thrilling reputation.
Adolescents who are lacking the social skills required to be accepted by
others may fantasize about their degree of technological skills, and move
online in search of those who profess to have technological skills the student
desires. A simple search using the term "hacker" with any search engine,
results in hundreds of links to illegal serial numbers, ways to download and
pirate commercial software, etc. Showing this information off to others may
result in the students being considered a "hacker" by their less
technologically savvy friends, further reinforcing antisocial behavior. In
some cases, individuals move on to programming and destruction of other
individuals programs through the writing of computer viruses and Trojan
horses; programs which include computer instructions to execute a hacker's
attack. If individuals can successfully enter computers via a network, they
may be able to impersonate an individual with high level security clearance
access to files, modifying or deleting them or introducing computer viruses
or Trojan horses. As hackers become more sophisticated, they may begin
using sniffers to steal large amounts of confidential information, become
involved in burglary of technical manuals, larceny or espionage [6].
The British government released evidence that foreign intelligence agencies,
possibly in China, Korea and some former Soviet states, were hacking
computers in the United Kingdom. "Economic espionage" was believed to
be one reason behind the attacks. Economic espionage involves attempting
to undermine the economic activity of other countries, sometimes by
passing on stolen industry and trade secrets to friendly or state-owned
companies. Key employees, those who have access to sensitive information or government secrets, can be targeted through virus-laden e-mails, infected CD-ROMs or memory sticks, or by hacking their computers. To respond to
these threats, the European Union, G8 and many other organizations have
set up cybercrime task forces. In the United States, some local law
enforcement organizations have electronic crime units and the FBI shares
information with these units through its InfraGard program [7].
Cyber security is becoming an important issue, as emphasized in an article
by Jacob Silverman titled “Could hackers devastate the U.S. economy?”
He discloses the fact that many media organizations and government
officials rank it just as grave a threat as terrorist attacks, nuclear
proliferation and global warming. With so many commercial, government
and private systems connected to the Internet, the concern seems warranted.
To add to the concern, consider that today's hackers are more organized and
powerful than ever. Many of them work in groups; and networks of black-
market sites exist where hackers exchange stolen information and illicit
programs. Credit-card data is sold in bulk by "carders" and phishing scams
are a growing concern. Malware -- viruses, Trojan horse programs and
worms -- generates more money than the entire computer security industry,
according to some experts. He further reveals that hackers are also
distributed all over the world, many in countries like Romania that have lots
of Internet connectivity and loose enforcement of laws [8].
Security experts said that in 2008 Chinese hackers began targeting Western journalists as part of an effort to identify and intimidate their sources and contacts, and to anticipate stories that might damage the reputations of Chinese leaders. In December 2012, over the course of several investigations, evidence was found that Chinese hackers had stolen e-mails, contacts, and files from more than 30 journalists and executives at Western news organizations, and had maintained a "short list" of journalists whose accounts they repeatedly attacked. Based on a forensic analysis, it appears the
hackers broke into New York Times computers on Sept. 13, when the
reporting for the Wen articles was nearing completion. They set up at least
three back doors into users’ machines that they used as a digital base camp.
From there they snooped around New York Times‟ systems for at least two
weeks before they identified the domain controller that contains user names
and hashed, or scrambled, passwords for every Times employee [9].
In 2009, in an attack dubbed "Operation Aurora" by security firm McAfee, sophisticated hackers based in China breached the corporate networks of Google, Yahoo!, Juniper Networks, Adobe Systems, and dozens of other prominent technology companies and tried to access their source code.
China's hackers seemed narrowly focused on military technology and
telecommunications companies as early as 2000. In 2011 Wiley Rein, a prominent Washington law firm working on a trade case against China, was hacked, and the White House was targeted last year. The hackers also
breached the website of the Council on Foreign Relations and rigged it to
deliver malware to anyone who visited it. Hacking groups with ties to the
Chinese government have also aggressively targeted Western oil and gas
companies and their law firms and investment banks [10].
In 2011, U.S. computer security firm McAfee reported that hackers
operating from China stole sensitive information from Western oil
companies in the United States, Taiwan, Greece and Kazakhstan, beginning
in November 2009. Citizen Lab and the SecDev Group discovered
computers at embassies and government departments in 103 countries,
including the Dalai Lama's office and India, were compromised by an attack
originating from servers in China. They dub the network involved
"GhostNet". Google claims cyber-attacks from China have hit it and at least
20 other companies. Google shut down its China operations. A top-secret
memo by the Canadian Security Intelligence Service warns that cyber-
attacks on government, university and industry computers have been
growing "substantially." Quebec provincial police say they dismantled a
computer hacking network that targeted unprotected computers around the
world, including government computers [11].
In 2011, NASA reported it was the victim of 47 APT attacks, 13 of which
successfully compromised Agency computers. In one of the successful
attacks, intruders stole user credentials for more than 150 NASA employees
– credentials that could have been used to gain unauthorized access to
NASA systems. An ongoing investigation of another such attack at the Jet Propulsion Laboratory (JPL), involving China-based Internet protocol (IP)
addresses has confirmed that the intruders gained full access to key JPL
systems and sensitive user accounts. With full system access the intruders
could: (1) modify, copy, or delete sensitive files; (2) add, modify, or delete
user accounts for mission-critical JPL systems; (3) upload hacking tools to
steal user credentials and compromise other NASA systems; and (4) modify
system logs to conceal their actions. In other words, the attackers had full
functional control over these networks [12].
NASA is a prestigious target for hackers because of its seat atop the United
States' broader technology incubation apparatus, and because of that
position it is also a strategic target for foreign state actors and
cybercriminals looking to steal information they can profit from. And while
the agency reportedly spends about a third of its $1.5 billion IT budget on
security, things aren’t looking so secure. Securing a huge bureaucracy like
NASA is difficult, no doubt. But according to Martin’s testimony, as of
February 2012 only one percent of NASA’s portable devices and laptops
were encrypted [12].
According to Bloomberg BusinessWeek, the executive order called for the U.S. Department of Homeland Security to identify which critical infrastructure is vulnerable to a cyber-attack that would be catastrophic to the economy and public safety. According to Apple, a week after Obama issued the order, Apple employees' computers were attacked by malicious software after they visited a website aimed at iPhone developers. Shortly afterward, Microsoft announced that similar malware had infected some of its company computers. According to trade groups representing tech manufacturers and Web companies, the cables and fibers that information travels over are more critical than the devices and programs their members make. Tech companies also argue that other countries might take a cue from the U.S. and set up their own cyber security guidelines; multiple sets of regulations might mean manufacturers and Web companies would have to create different products and services for different countries, further increasing costs [13].
Security experts hired by the New York Times to detect and block the
computer attacks gathered digital evidence that Chinese hackers, using
methods that some consultants have associated with the Chinese military in
the past, breached The Times’ network. They broke into the e-mail accounts
of its Shanghai bureau chief, David Barboza, who wrote the reports on Mr.
Wen’s relatives, and Jim Yardley, The Times’ South Asia bureau chief in
India, who previously worked as bureau chief in Beijing. Security experts
found evidence that the hackers stole the corporate passwords for every
Times employee and used those to gain access to the personal computers of
53 employees, most of them outside New York Times’ newsroom [9].
For three straight years, a group of Chinese hackers waged a cyber-war
against a family-owned, eight-person software firm in California, according
to court records. It started when Solid Oak Inc. founder Brian Milburn
claims he discovered that China was stealing his company's parental
filtering software, CYBERsitter. The theft hurt their business and sales,
which was bad enough. But twelve days after he publicly accused Chinese
hackers, he says he was inundated by attempts to bring down his Santa
Barbara-based business. Hackers broke into the company's system, shut
down its email and web servers, spied on employees using their own
webcams and gained access to sensitive company files, according to court
records. Apple Inc. reported it was hacked by the same group that hit social-
networking monster Facebook in January 2013. The security breaches are
the latest in a string of high-profile attacks on companies including The
Wall Street Journal and New York Times. Cyber security firm Mandiant
also came out with a report in early 2013 that accused a secret Chinese
military unit in Shanghai of years of systematic cyber-espionage against
more than 140 U.S. companies. Adam Levin, co-founder and chairman of
Identity Theft 911, says that for most companies it's not a matter of if they
will have a breach but when Levin told FoxBusiness.com that no company
is ultimately immune to this [14].
Members of Congress have published proposals that could result in longer
prison sentences for hackers. The House Judiciary committee is looking to
expand the Computer Fraud and Abuse Act (CFAA), an anti-hacking bill
dating back to 1984. Under the new proposals, damaging a computer after
accessing it without authorization would carry a maximum 10-year prison
term, double the current punishment. "Trafficking" passwords would also
carry a 10-year penalty. Hacking and damaging a "critical infrastructure
computer" would become the most serious crime, with a maximum 30-year
sentence. That would cover any machine that plays a vital role in areas such
as power, transportation, and finance [15].
3. METHODOLOGY
Developing a psychological profile of a likely attacker is an attractive goal. Because of the variation among human motivations and the limitations in our knowledge of psychology, such a profile may prove elusive [16]. There are a number of recent and growing trends in the hacking activity landscape that were observed by the Cybercrime division over the past decade, dealing not only with state and local government but also with other national governments across the world. Recent cyber-attacks often provide no details about the attackers' identity.
3.1 Data Gathering
In this research study, data was collected from several Department of Justice (DOJ) Cybercrime press releases; the bulk of the data was extracted from releases issued by the department since 2009. The DOJ generates reports on cybercrime activity involving people across the world. In the United States, more than 35 million dollars in damage has been done to targeted companies. Table 1 lists 97 hackers by nationality, age, job status, reason for hacking, damage to the company, money to the judicial system, and punishment from the DOJ over a period of 4 years. A short excerpt of the hacker list is shown in Table 1 as an example. Tables 2 through 4 were constructed from the data collected from the aforementioned references.
Table 1. Computer Cybercrime Portfolio 2009-2013
Source: U.S. Department of Justice (2009-2012). Computer Crime and
Intellectual Property Section Press Releases [17].
Nationality | Age | Job Status | Reason for Hacking    | Damage to Business | Money to Judicial Sys. | Punishment
Sweden      | 37  | N/A        | Steal info            | N/A        | $650,000     | Pending
Malaysia    | N/A | N/A        | Bank fraud            | N/A        | N/A          | 10 Y
Romania     | N/A | N/A        | Steal info            | N/A        | N/A          | 7 Y
Russia      | 55  | N/A        | Steal info            | N/A        | $1,000,000   | 3 Y
America     | 46  | N/A        | Personal gain         | N/A        | N/A          | 18 Y
America     | 36  | N/A        | Destroy company data  | N/A        | N/A          | 10 Y
America     | 49  | N/A        | Personal gain         | $100,000   | $250,000     | 10 Y
America     | N/A | Emp        | Steal info            | $5,000     | N/A          | 40 Y
America     | 45  | N/A        | Confidential info     | N/A        | $1,000,000   | 15 Y
America     | 22  | N/A        | Confidential info     | N/A        | $350,000     | 21 Y
America     | 27  | N/A        | Personal gain         | $9,481.03  | $187,659     | 17 Y
America     | 28  | N/A        | Confidential info     | N/A        | $500,000     | 5 Y / 1 Y Pro
Russia      | 29  | N/A        | N/A                   | N/A        | $3.5 million | 20 Y
Moldova     | 29  | N/A        | N/A                   | N/A        | $3.5 million | 20 Y
America     | 25  | N/A        | Steal info            | N/A        | 171.6 mil    | 2 Y
Table 2 shows the percentage of hackers by nationality.
Table 2. Hacking by Nationality
Nationality   %      Nationality   %
Unknown      30      Malaysian     2
Russian       3      American     34
Latvian       1      Estonian     11
Romanian      9      Venezuelan    1
Sweden        1      Moldova       1
Albanian      1      Dublin        1
Hungarian     1      Blaine        1
Table 3 indicates the percentage of each type of motivation behind hacking.
Table 3. Motivation behind Hacking
Reasoning                          %
Steal Information                 34
Intentional Damage to Companies   11
Bank Fraud                        25
Need Employment                    1
Personal Gain                     11
Steal Money                        3
Commit Multiple Fraud              7
Steal Intellectual Property        5
Table 4 shows the percentage of each type of penalty applied to hackers by the judicial system.
Table 4. Type of Penalty
Category %
Pending 42
Probation 3
Supervised release 2
Cyber warfare 15
Prison 41
3.2 Ways to Minimize Potential for Hacking
In order to minimize hacking, several techniques are required. The
following procedures need to be implemented to help limit the possibility of
hacking:
 It is quite essential that organizations proactively introduce
guidelines of standard use and outline the consequences for
inappropriate actions.
 People should be informed and have adequate knowledge of hacking. They must be educated about certain commonalities and characteristics connected with this activity, as well as the significant penalties for hacking and the repercussions of online networking with other characters claiming to be skilled in raiding others.
 Outside the business perspective, people need to be educated as
to not post any sensitive information on social networks such as
Facebook, Twitter, YouTube, etc.
 Organizations can use filters to prohibit their members from accessing unauthorized software serial numbers and hacking-related materials such as newsgroups, chat rooms, and hacking organizations.
 Organizational staff should monitor activities in the working
environment and be proactive when information is obtained
about hacking activities.
 There is a need for cooperation between the private and public sectors, as well as governments, to reduce hacking activities.
 Good hackers who report security vulnerabilities to companies should be recognized.
 Lawmakers need to address the seriousness of cyber hacking by establishing special centers across the nation where skilled youth can be recruited as early as 15 years of age and provided with proper training, financial means, and support through college, for those who have the passion and commitment to continue in the field of cyber security.
 Global cooperation between nations is needed to reduce hacking.
In summary, people need to be aware of incidents regarding hacking, the mentality associated with it, the consequences of various hacking actions, and the possible consequences of interacting and forming online relationships with anonymous individuals who claim to be proficient in invading others' privacy. Many organizations have engaged in enabling employees to collaborate with technology-oriented staff who demonstrate several characteristics that can lead to hacking activities.
4. FINDINGS
Hacking has become more serious, extending beyond individuals and groups; it now takes place at the government level between nations. From the data collected, Tables 2-4 have been analyzed and are illustrated graphically in Figures 1-3.
Figure 1 illustrates the distribution of hacking by nationality. The highest percentage comes from America (35%), the second highest is of unknown nationality (31%), and the third is Estonian (12%).
Figure 1. Hacking by Nationality
Figure 2 represents the motivation behind hacking: 35% steal information, 26% commit bank fraud, and 12% intentionally damage companies.
Figure 2. Motivation behind Hacking
Figure 3 shows that 41% of the penalties are still pending, meaning that the punishments have yet to be decided by the courts, and 40% of hackers are serving a prison sentence. However, 14% are classified under cyber warfare, which means that their punishment has not been published because their crime involved the government. Monitoring the behavior and motivation of hackers can help improve awareness of the danger they pose and underscores the importance of maintaining robust security, including up-to-date cyber security and anti-virus software.
Figure 3. Types of Penalty
Figure 4 shows an example illustrating the seriousness of cyber-attacks in today's society. Chevron, the U.S.-headquartered international oil and gas company, has admitted that Stuxnet infected its IT network. Stuxnet is known for destroying centrifuges used in Iran's uranium enrichment program. It was designed by a nation state with the intention of targeting the Siemens supervisory control and data acquisition (SCADA) systems which controlled the industrial processes inside the enrichment facilities. Industrial Safety and Security Source reported that the Stuxnet virus was planted by an Iranian double agent via a memory stick. The Stuxnet malware is widely believed to have caused damage to Iran's nuclear program by breaking the motors on 1,000 centrifuges at the Natanz uranium enrichment facility. Kaspersky Lab reported that a new virus dubbed Gauss has attacked computers in the Middle East, spying on financial transactions and emails and picking up passwords to all kinds of pages. The virus resembles the Stuxnet and Flame malware which was used to target Iran. Gauss has infected hundreds of personal computers across the Middle East – most of them in Lebanon,
but also in Israel and Palestinian territories. Kaspersky Lab has classified
the virus, named after one of its major components, as “a cyber-espionage
toolkit” [18].
Figure 4. Flame Wars
Source: Cyber Security Helping secure one network at a time, 2013
Recent hacker attacks on IT systems could be catastrophic for global business and could even cost lives. According to Alicia Buller [19], the three points to note in cyber war are: companies could become collateral victims in the war between superpowers; ideas from nation-state cyber weapons could be repurposed and copied by amateurs; and cyber criminals may start using weapons gleaned from governments and nation states. Depending on the severity of the latest hacks, establishing a framework to protect the country has become a top priority of the Obama Administration as well as other governments around the world.
5. CONCLUSION
While computer hackers constitute a major security concern for individuals, businesses, and governments across the globe, hacking and hackers' underground culture remain secretive and difficult to identify for both lawmakers and those vulnerable to hacker attacks. The mystery that surrounds much of hacking prevents us from arriving at definitive solutions to the security problem it poses, but our analysis provides at least tentative insights for dealing with this problem. Hacking has become a serious problem affecting all levels of business activity, from individuals and corporations to governmental agencies. The bulk of the hacking, about 35%, is initiated by Americans. The type of penalty ranges from jail time to monetary fines. Although there are laws against hacking, the courts cannot
prosecute these crimes fast enough to deter people from committing them.
From the literature review, the maximum jail penalty is 62 years and the
maximum fine is $171.6 million.
Results show that hackers continue to engage in illegal hacking activities
despite the perception of severe judicial punishment. A closer look shows
that hackers perceive a high utility value from hacking, few informal
sanctions, and a low likelihood of punishment. These observations,
combined with their disengagement from society, partially explain
hackers' illegal behavior. Whatever their reasons, hacking is a learning experience
through which they hope to remain anonymous. Future efforts to minimize
hacking will undoubtedly include a combination of aggressive legislation,
new technological solutions, and increased public awareness and education.
Existing laws should be reviewed and amended periodically to allow for
appropriate evolution. The international online-security community
should respond with globally collaborative efforts against this threat
of hacking in order to manage this predicament.
6. REFERENCES
[1] Callwood, K. (2013). “How to Reduce Hacking” eHow.com Retrieved from:
http://www.ehow.com/how_8663856_reduce-hacking.html#ixzz2Of7JZqMC
[2] Wooten, D. (2009). “Hacking: modern day threat or hobby? (pt. 1)”. Retrieved from:
http://www.examiner.com/x-13831-Computer-Security-Examiner~y2009m6d22-
Hacking-modern-day-threat-or-hobby-pt-1#
[3] Parker, D. (1998). “Fighting Computer Crime: A New Framework for Protecting
Information” Retrieved from: http://education.illinois.edu/wp/crime/hacking.htm
[4] Shoemaker, D. & Conklin, A. (2012). “Cyber Security: The Essential Body of
Knowledge, 1st edition” Cengage Learning.
[5] Loganathan, M. & Kirubakaran, E. (2011). “A Study on Cyber Crimes and protection”
IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 5, No 1.
[6] Stone, D. (1999). “Computer Hacking”, University Laboratory High School,
Retrieved from: http://www.ed.uiuc.edu/wp/crime/hacking.htm
[7] Computer Weekly (2006). “Act on foreign spy risk, firms urged” Retrieved from:
http://www.computerweekly.com/articles/2006/12/01/220307/act-on-foreign-spy-risk
firms-urged.htm
[8] Silverman, J. (2007). "Could hackers devastate the
U.S.economy?” HowStuffWorks.com. Retrieved from:
http://computer.howstuffworks.com/die-hard-hacker.htm
[9] Perlroth, N. (2013). “Hacker in China Attacked the Times for last 4 months” The New
York Times. Retrieved from:
http://www.nytimes.com/2013/01/31/technology/chinese- hackers- infiltrate-new-york-
times-computers.html?pagewanted=all&_r=0
[10]Canter, D. (2013). “Fighting an Order to fight cybercrime” Bloomberg BusinessWeek.
March 11-17, 2013.
[11]New Orleans Business News (2011). “Hackers in China hit Western oil companies,
security firm reports” The Associated Press. Retrieved from:
http://www.nola.com/business/index.ssf/2011/02/hackers_in_china_hit_
western_o.html
[12] Dillow, C. (2012). “In the last year, hackers gained ‘Full Functional Control’ of NASA
networks, stole the control codes for the ISS” POPSCI. Retrieved from:
http://www.popsci.com/technology/article/2012-03/hackers-gained-full-functional-
control-nasa-networks-stole-control-codes-iss-last-year
[13]Engleman, E. (2013).“Hacked? Who Ya Gonna Call?” Bloomberg Business Week.
February 11- February 17, 2013.
[14]Chakraborty, B. (2013). “Small firm hit by 3-year hacking campaign puts face on
growing cyber problem” Foxnews.com Retrieved from:
http://www.foxnews.com/politics/2013/02/22/small-businesses-big-targets-for-cyber-
snoops/#ixzz2Ngj7XMli
[15] Peters, J. (2013). “America's Awful Computer-Crime Law Might Be Getting a Whole
Lot Worse” Retrieved from:
http://www.slate.com/blogs/crime/2013/03/25/computer_fraud_and_abuse_act_the_cfa
a_america_s_awful_computer_crime_law.html
[16]Stolfo, S. (2008) “Insider Attack and Cyber Security: Beyond the Hacker” Vol.39,
Springer.
[17]U.S. Department of Justice (2009-2012). Computer Crime and Intellectual Property
Section Press Releases. Retrieved from:
http://www.justice.gov/criminal/cybercrime/pr.html
[18]Hatcher, W. (2013). “Cyber Security helping secure one network at a time”
Information Systems Audit and Control Association – Greater New Orleans Chapter –
2013.
[19]Buller, A. (2013). “The Coming of Cyber War I” Retrieved from:
http://gulfbusiness.com/2013/03/the-coming-of-cyber-world-war-i/
Ontology Based Multi-Viewed
Approach for Requirements
Engineering
R. Subha
Assistant Professor
Sri Krishna College of Technology,
Kovaipudur, Coimbatore-641 042,
Tamil Nadu, India
S. Palaniswami
Principal
Government College of Engineering,
Bodinayakanur,
Tamil Nadu, India
ABSTRACT
Software requirements engineering is an important process in software development.
Considering software development as a whole, 75% of software failures are due to impaired
software requirements. The proposed technique involves a multi-viewed approach
comprising controlled natural language and ontologies that can be used for representing
the requirements. An ontology is an explicit information modelling method which can be used
to model applications and their interactions. Controlled natural languages are subsets of
natural languages, obtained by restricting the grammar and vocabulary in order to reduce or
eliminate ambiguity and complexity and to enable reliable automatic semantic analysis of a
language. The ontologies are constructed based upon the semantic similarities between the
domain and requirement models. The CNL is developed by restricting the vocabulary based
on certain rules. The similarity between the two representations is computed by a
program that extracts the objects and the relationships between them. Tool-based
verification of the similarities is also performed. This approach is applicable in software
development and in many official applications such as banking systems, currency
conversion, weather reports, transport, and sports.
Keywords
Requirements Engineering, Ontology, Controlled Natural Language
1. INTRODUCTION
Software engineering (SE) is the engineering discipline through which
software is developed. Commonly the process involves finding out what the
client wants, composing this in a list of requirements, designing an
architecture capable of supporting all of the requirements, designing,
coding, testing and integrating the separate parts, testing the whole,
deploying and maintaining the software. Programming is only a small part
of software engineering.
Requirements engineering (RE) being the first phase of software
engineering deals with the process of discovering, eliciting, documenting
and maintaining the requirements for a particular computer-based system.
Computer systems are designed, and anything that is designed has an
intended purpose. If a computer system is unsatisfactory, it is because the
system was designed without an adequate understanding of its purpose, or
the purpose is deviated from the intended one. Both problems can be
mitigated by careful analysis of purpose throughout a system’s life.
Requirements Engineering provides a framework for understanding the
purpose of a system and the contexts in which it will be used. The
requirements engineering bridges the gap between an initial vague
recognition that there is some problem to which there is a computer
technology, and the task of building a system to address the problem.
An ontology formally represents knowledge as a set of concepts within
a domain and the relationships between pairs of concepts [3]. It can be used
to model a domain and support reasoning about entities. An ontology
renders a shared vocabulary and taxonomy which models a domain with the
definition of objects/concepts, as well as their properties and relations.
Ontologies are structural frameworks for organizing information and are
used in artificial intelligence, the Semantic Web, systems
engineering, software engineering, informatics, library science, enterprise
bookmarking, and information architecture as a form of knowledge
representation about the world or some part of it [5]. The creation of domain
ontologies is also fundamental to the definition and use of an enterprise
architecture framework. An ontology is a model for describing the world that
consists of a set of types, properties, and relationship types. There is also
generally a view that the features of the model in an ontology closely
resemble the real world.
There are many problems associated with requirements engineering. They
include defining the system scope, problems of understanding, and problems
of volatility. The major problems are sharing an equal understanding among
the different communities involved in the development of a given system,
and dealing with the volatile nature of requirements. Problems of
understanding during elicitation can lead to requirements which are
ambiguous, incomplete, inconsistent, and even incorrect. If changes are not
accommodated, the original requirements set will become incomplete,
inconsistent with the new situation, and potentially unusable, because they
capture information that has since become obsolete. Requirement
traceability is also a major issue in requirements engineering.
These problems may lead to poor
requirements and the cancellation of system development, or else the
development of a system that is later judged unsatisfactory or unacceptable,
has high maintenance costs, or undergoes frequent changes. By improving
requirements elicitation, the requirements engineering process can be
improved, resulting in enhanced system requirements and potentially a
much better system. In order to overcome these issues, a model integrating
controlled natural language and ontologies is proposed, providing a
multi-viewed approach that represents requirements from the different
perspectives of all the stakeholders and the development team.
2. RELATED WORK
2.1 A scenario-driven approach to trace dependency analysis
Egyed et al. proposed an approach to trace dependency analysis. Software
development artifacts such as model descriptions, diagrammatic languages,
abstract (formal) specifications, and source code are highly interrelated
where changes in some of them affect others [2]. Trace dependencies
characterize such relationships abstractly. The research area focused on an
automated approach to generating and validating trace dependencies. The
research addressed the severe problem that the absence of trace information
or the uncertainty of its correctness limits the usefulness of software models
during software development. This is considered to be an important issue
affecting requirements engineering. This approach also proposed a method
that automates what is normally a time consuming and costly activity due to
the quadratic explosion of potential trace dependencies between
development artifacts.
2.2 Issues in requirement elicitation
Christel, M. and Kang, K., in their research paper titled “Issues in requirements
elicitation”, clearly list the issues that affect the requirements elicitation
phase of software development. According to their research, the requirements
elicitation phase suffers from issues related to the improper definition of the
scope of the system. Communication between the various people
involved in developing the system also causes major problems, since
requirements elicitation relies heavily on the collection of information
related to the system. The next major issue concerns the volatile
nature of requirements. The paper implies that if these problems are not
taken seriously, they may lead to poor requirements, and the end result
may be a failure.
2.3 Communication problems in requirements engineering
Al-Rawas et al. explain the problems of communication between the
disparate communities involved in requirements specification activities
[1]. The requirements engineering phase of software development projects is
characterized by the intensity and importance of communication activities.
During this phase, the various stakeholders must be able to communicate
their requirements to the analysts, and the analysts need to be able to
communicate the specifications they generate back to the stakeholders for
validation. The results of this study are discussed in terms of their relation to
three major communication barriers: the ineffectiveness of the current
communication channels, restrictions on expressiveness imposed by
notations, and social and organizational barriers. The results confirm that
organizational and social issues have a great influence on the effectiveness of
communication. They also show that, in general, end-users find the notations
used by software practitioners to model their requirements difficult to
understand and validate.
2.4 Revisiting ontology-based requirements engineering in the age of
the semantic web systems
Dobson, G. and Sawyer, P., in “Revisiting ontology-based requirements
engineering in the age of the semantic web”, propose the use of a
dependability ontology compliant with the IFIP Working Group 10.4
taxonomy and discuss how this, and other ontologies, must interact in the
course of dependability requirements engineering. In particular, the use of
the links between the dependability ontology, an ontology for requirements, and
domain ontologies is discussed in detail, identifying the advantages and
difficulties involved. An ontology is generally based upon some logical
formalism, and for requirements it has the benefit of explicitly modeling
domain knowledge in a machine-interpretable way, e.g. allowing
requirements to be traced and checked for consistency by an inference
engine, and software specifications to be derived. With the emergence of the
semantic web, interest in ontologies for requirements engineering is
increasing. Much research has concentrated upon re-interpreting
software engineering techniques for the semantic web. The use of ontologies
has proved to be highly beneficial for requirements engineering processes.
3. METHODOLOGY
In software development, requirement refinement is an important process. It
has a tremendous impact on all the software development phases. Although a
lot of research has addressed the various issues affecting
requirements engineering, a few issues remain unsolved. The common
issues that haunt the requirements elicitation phase include problems
of scope, volatility, communication, and traceability [4].
Defining the scope of the system plays a major role in developing any new
system, hence solving the scope issues is highly necessary. The
requirements of the customers are prone to changes throughout the
development of the system, which is generally referred to as the volatility issue.
Developing a system involves a wide class of people and hence
communication plays a major role. Any fault that arises in developing the
system as a result of mistaken communication is referred to as a communication
issue. Any issue that affects describing and following a requirement in both
forward and backward directions is referred to as a traceability issue. In order
to overcome these issues, a system consisting of a multi-viewed approach is
proposed. The system integrates two views for representing the perspectives
of different stakeholders. Ontologies and controlled natural languages are
used for this purpose. Thus, the proposed technique provides a way of
dealing with the various issues affecting requirements engineering.
In this paper, we propose a comprehensive approach for dealing with the
various issues affecting the requirements elicitation process. The requirement
document is analyzed and split into individual tokens. The tokens are used
to extract only the relevant terms involved in the document. The
tokens are used as a base for the construction of the ontology. The requirement
document is next represented in terms of a controlled natural language. The
controlled natural language representation also extracts only the key terms
involved. Both representations show the relationships among the objects
that are extracted. The similarities between the two representations are
measured to verify the level of accuracy.
Figure 1. Overall process of integrating ontology and CNL
Figure 1 represents the process involved in designing the multi-viewed
approach consisting of ontology and controlled natural language.
The ontological view of the requirements document involves representing the
requirement document in terms of concepts and the relations among them. The
process also involves tagging of the requirements prior to the construction
of the ontology.
The CNL view of requirements involves representing the document through
a restricted vocabulary. This representation enables the relationships among
the identified objects to be clearly established through the restricted vocabulary.
The mapping of the ontology representation and the CNL representation is
performed with the help of the V-Doc and GMO algorithms. The level of
similarity is noted.
3.1 Ontological view of requirement
The requirement document is represented in terms of an ontology.
An ontology formally represents knowledge as a set of concepts within
a domain and the relationships between pairs of concepts. Ontologies are
used to model a domain and support reasoning about entities. An ontology
provides a shared vocabulary and taxonomy which models a domain with the
definition of objects/concepts, as well as their properties and relations.
Ontologies are structural frameworks for organizing information and are
used in artificial intelligence, the Semantic Web, systems
engineering, software engineering, informatics, library science, enterprise
bookmarking, and information architecture as a form of knowledge
representation about the world or some part of it. The creation of domain
ontologies is also fundamental to the definition and use of an enterprise
architecture framework. An ontology is a model for describing the world that
consists of a set of types, properties, and relationship types. There is also
generally a view that the features of the model in an ontology closely
resemble the real world.
The first step in representing the requirement document as an ontology is
tagging the parts of speech in the document. The tagging is
performed with the help of a tool called a POS tagger.
The POS tagger marks up each word in a text (corpus) as
corresponding to a particular part of speech. Once the tagging is completed,
the ontology can be constructed. Formally, it can be said that an ontology is a
statement of a logical theory. Ontologies are often equated with taxonomic
hierarchies of classes without class definitions and the subsumption relation,
but ontologies need not be limited to these forms. Ontologies are also not
limited to conservative definitions, that is, definitions in the traditional logic
sense that only introduce terminology and do not add any knowledge about
the world. To specify a conceptualization, one needs to state axioms that
constrain the possible interpretations of the defined terms. The construction
of an ontology starts with identifying the concepts. A concept represents a set of
entities within a domain. Relations are constructed next, specifying the
interactions among the concepts involved.
The ontology of a program is obtained by defining a set of representational
terms. In such an ontology, definitions associate the names of entities in the
universe of discourse (e.g. classes, relations, functions, or other objects)
with human-readable text describing what the names mean, and formal
axioms that constrain the interpretation and well-formed use of these terms.
The ontologies can be designed on the basis of the domain document. This
consists of four steps (a minimal code sketch follows Figure 2):
1. Tokenization is done by the Stanford POS tagger, which reads the text
document and assigns a part of speech to each word, such as noun, verb,
adjective, etc. (Figure 2).
2. The isolation of individual tokens shows the number of nouns, verbs, and
adjectives in the domain document.
3. Based on the nouns, verbs, and adjectives, the concepts involved are
identified.
4. An ontology can be created using the Protégé tool based on the concepts
identified (Figure 3).
Figure 2. Tokenization
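The following minimal sketch illustrates steps 1–3 under stated assumptions: it uses the NLTK POS tagger rather than the Stanford tagger named above, and the sample requirement sentence is invented purely for illustration; it is not the authors' implementation.

```python
# Minimal sketch of steps 1-3: tokenize a requirement document, tag parts of
# speech, and collect nouns/verbs/adjectives as candidate ontology concepts.
# NLTK stands in for the Stanford POS tagger; the sample text is an assumption.
import nltk
from collections import Counter

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

requirement_text = "The customer deposits money into the savings account."

tokens = nltk.word_tokenize(requirement_text)   # step 1: tokenization
tagged = nltk.pos_tag(tokens)                   # assign a part of speech to each word

# step 2: count the nouns, verbs and adjectives in the document
counts = Counter(tag[:2] for _, tag in tagged if tag[:2] in ("NN", "VB", "JJ"))
print(counts)

# step 3: nouns become candidate concepts, verbs candidate relations
concepts = {word.lower() for word, tag in tagged if tag.startswith("NN")}
relations = {word.lower() for word, tag in tagged if tag.startswith("VB")}
print("candidate concepts:", concepts)
print("candidate relations:", relations)
```

Step 4, the construction of the ontology itself, would then be carried out in the Protégé tool using the identified concepts.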
Figure 3. Construction of ontology
3.2 CNL view of requirements
A sample requirement document is considered. The requirement document
is in ordinary English, which has no rules and semantics that make it
understandable by the system, so we convert it into a controlled natural
language. A CNL is a subset of a natural language, obtained by restricting
the grammar and vocabulary in order to reduce or eliminate ambiguity and
complexity and to enable reliable automatic semantic analysis of the language.
A tool called Fluent Editor is used to provide a platform for constructing the
required CNL.
We use Controlled English as the knowledge modeling language. Supported
by a suitable editor, it prohibits one from entering any sentence that is
grammatically or morphologically incorrect and actively helps in correcting
any error. Controlled English is a subset of Standard English with
restricted grammar and vocabulary, in order to reduce the ambiguity and
complexity inherent in full English. The relationships between the various
objects involved in the domain are established.
A taxonomy tree is also displayed, representing the objects hierarchically. The
taxonomy tree is constructed with four important parts: “thing”,
which shows the “is a” relationship between concepts and relations;
“nothing”, which shows concepts that cannot have any instances; “relations”,
which shows the hierarchy of information about the relations between concepts
and/or instances; and finally “attributes”, which shows the hierarchy of
attributes. Modal expressions are constructed using must, should, can, must
not, should not, and cannot. Similarly, complex expressions can also be
constructed. There are four steps:
1. The domain document is represented in terms of Controlled English,
which is a subset of Standard English with restricted grammar and
vocabulary in order to reduce the ambiguity and complexity inherent
in full English (Figure 4).
2. The CNL phrases for the corresponding OWL statements are
created using a CNL editor.
3. Modality is enabled to create modal expressions that are used to
express relationships in CNL.
4. Complex sentences as well as simple sentences can be created using
the CNL editor.
Figure 4. Construction of CNL
3.3 Mapping of ontology and CNL
The mapping of the CNL and the ontology is performed after both
representations are constructed completely. The objective of this module is
to measure the level of similarity that is established between the two forms
of representation. The objects and relationships are extracted from the
obtained CNL. The obtained objects are written to a separate file. The
objects obtained from the ontology are kept in a separate file manually. The
mapping of the two representations is performed through a program as
well as verified by a tool. The program is used to extract the objects and the
relationships that exist between them. The first part of the program deals with the
extraction of the objects and the relationships between the objects. The program
consists of predefined relationships. The extracted objects and relationships
from the OWL file are compared with the predefined relationships that were
already entered. After extraction, the mapping of the extracted data is
performed. If the objects and relationships match, a positive output is
obtained stating “ontology matching”. If the objects and relationships do not
match, an output stating “ontology not matching” is obtained. The
mapping involves the use of the V-Doc algorithm and the GMO algorithm.
V-Doc constructs virtual documents for each entity in the ontologies and
uses the vector space model to compute similarities between the virtual
documents. The GMO algorithm is a novel graph matching algorithm. The
mapping of the ontology representation and the CNL representation is
performed with the help of the V-Doc and GMO algorithms. The level of
similarity is noted. The steps are as follows (a simplified sketch follows the list):
1. The relationships and objects that exist in the ontological view of the
representation are extracted.
2. The relationships and objects that exist in the CNL view of the
representation are extracted.
3. The extracted relationships and objects are placed in individual
files.
4. A program is used to check the level of similarity between the
objects that are extracted.
5. Tool-based verification is further performed.
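The following is a simplified sketch of these steps, assuming the extracted objects and relationships are available as sets of (subject, relation, object) triples; a plain set-overlap (Jaccard) score stands in for the V-Doc/GMO matching performed by the tool, and the example triples and the 0.5 threshold are assumptions, not taken from the paper's OWL files.

```python
# Simplified stand-in for the mapping step: the objects and relationships
# extracted from the two views are compared and a similarity level is reported.
# A plain set overlap (Jaccard) replaces the V-Doc/GMO matching used by the tool.

def jaccard(a, b):
    """Overlap between two sets of extracted items."""
    return len(a & b) / len(a | b) if a | b else 0.0

# (subject, relation, object) triples extracted from each representation (assumed)
ontology_view = {("customer", "owns", "account"), ("account", "has", "balance")}
cnl_view      = {("customer", "owns", "account"), ("account", "has", "interest-rate")}

similarity = jaccard(ontology_view, cnl_view)
print("similarity level: %.2f" % similarity)
print("ontology matching" if similarity > 0.5 else "ontology not matching")
```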
Figure 5. Matching OWL files
Thus, the tool-based verification of matching is shown in Figure 5. The
ontology1 column shows the list of objects in the OWL file for the ontology. The
ontology2 column shows the list of objects in the OWL file for the CNL. The
similarity column shows the level of similarity between the two OWL files.
4. CONCLUSION
As far as software development is concerned, requirements elicitation is an
important process. The existing system used a single-view approach, thus
failing to deal with many of the issues affecting requirements engineering.
Therefore, a technique is proposed to refine the requirements by integrating
ontology and controlled natural language, thus providing a multi-viewed
approach. The ontological view is mapped to the controlled natural
language view to calculate the level of similarity. Thus, the use of a multi-
viewed approach helps in resolving most of the issues affecting the
requirements elicitation process. The proposed process provides a simple and
useful traceability scheme. Refining the requirements provides many
advantages and greatly reduces the cost of building the system. Research in
the area of RE has grown fast in the last few years. In spite of this, there
are still open issues. In our work we initially identified such issues and
investigated the main existing initiatives that are addressing them. Further,
the work can be improved by exploring the full potential of ontologies,
thus improving the quality of the knowledge base.
5. REFERENCES
[1] Al-Rawas, A., Easterbrook, S. Communication problems in requirements engineering-
a field study.(1996)
[2] Egyed, A. A scenario-driven approach to trace dependency analysis. IEEE Transactions
on Software Engineering, 29(2) (2003), 116-132.
[3] Vrandecic, D., Sure, Y. An approach to the automatic evaluation of ontologies.
Journal of Applied Ontology, 3(1-2), (2008), 41-62.
[4] Christel, M., Kang, K. Issues in requirement elicitation. IEEE trans software
engineering., 78(2) (2005), 156-198
[5] Dobson, G., Sawyer, P. Revisiting ontology-based requirements engineering in the age
of the semantic web. Dependable requirements engineering of computerised systems at
NPPs, (2006)
[6] Mellor, S. MDA distilled: principles of model driven Architecture. Addison-Wesley
Professional, (2004)
[7] http://www.protege.com.
[8] http://xobjects.seu.edu.cn/project/falcon/falcon.html
Modified Colonial Competitive
Algorithm: An Approach for
Graph Coloring Problem
Hojjat Emami
Computer Engineering Department, Islamic Azad University, Miyandoab Branch
Miyandoab, Iran
Parvaneh Hasanzadeh
Computer Engineering Department, Islamic Azad University, Miyandoab Branch
Miyandoab, Iran
ABSTRACT
This paper describes a modified version of the colonial competitive algorithm and how it is
used to solve the graph coloring problem (GCP). The colonial competitive algorithm (CCA) is a
meta-heuristic optimization and stochastic search technique inspired by the socio-
political phenomenon of imperialistic competition. CCA has a high convergence rate and, in
optimization problems, can reach the global optimum. The graph coloring problem is the process
of finding an optimal coloring for any given graph so that adjacent vertices have different
colors. The graph coloring problem is an optimization problem. The original CCA algorithm is
designed for continuous problems, whereas the graph coloring problem is a discrete problem.
So in this paper we apply some changes to the assimilation and revolution operators of the
original CCA algorithm and present a modified CCA, which is called MCCA. The
performance of the MCCA algorithm is evaluated on five well-known graph coloring
benchmarks. Simulation results demonstrate that MCCA is a feasible and capable
algorithm.
Keywords
Colonial competitive algorithm, graph coloring problem, modified colonial competitive
algorithm.
1. INTRODUCTION
Graph coloring is a special case of graph labeling. Coloring a graph involves
assigning a color to each vertex of the graph so that any two adjacent vertices
have different colors. One of the main challenges in the graph coloring
problem (GCP) is to find the least number of colors for which there is a
valid coloring of the vertices of the graph. The graph coloring problem provides
a useful test bed for representing many real-world problems including time
scheduling, frequency assignment, register allocation, and circuit board
testing [1]. The graph coloring problem is a complex and NP-hard problem
[2]; therefore many methods have been presented for solving it.
The most common and successful methods that are
introduced for solving the graph coloring problem use a conflict-minimizing
approach as their goal, i.e., given k colors, a coloring is sought which
minimizes the number of adjacent vertices bearing the same color.
Researchers have used various methods for solving the graph coloring problem,
including meta-heuristic algorithms (such as the genetic algorithm
[3], particle swarm optimization [4, 5], ant colony optimization [6, 7], and
differential evolution [8]), scatter search [9], variable space search [10],
learning automata [11], distributed algorithms [12], and some hybrid
algorithms [5, 13, 14]. In this paper, we use another meta-heuristic
algorithm, the colonial competitive algorithm [15], to solve the graph coloring
problem.
The colonial competitive algorithm (CCA) is an optimization and search
algorithm. This socio-political algorithm is inspired by imperialistic
competition among imperialists and colonies. CCA is a population-based
evolutionary algorithm and contains two main steps: the movement of the
colonies toward their relevant imperialists and the imperialistic competition.
CCA has been used in many engineering and optimization tasks. The original
CCA is inherently designed for continuous problems whereas the graph
coloring problem is a discrete problem [15]. So in this paper, a modified
discrete version of CCA is proposed to deal with the solution of the graph
coloring problem. The success of the newly proposed method is shown by
evaluating its performance on well-known graph coloring benchmarks.
This paper proceeds as follows. Section 2 describes the graph coloring
problem. The colonial competitive algorithm is explained in Section 3. Section
4 presents the proposed modified colonial competitive algorithm and how it
is used to solve the graph coloring problem. Section 5 presents our empirical
results, and Section 6 concludes the paper and gives an outlook on future
work.
2. GRAPH COLORING PROBLEM (GCP)
Graph coloring is an NP-hard problem and is still a very active research
topic. The graph coloring problem is a practical way of representing many
real-world problems such as pattern matching, scheduling, frequency
assignment, register allocation, and circuit board testing [1].
In graph theory, graph coloring involves an assignment of labels to elements
of a graph subject to certain constraints. In other words, coloring a graph is
a way of coloring the vertices of the graph so that adjacent vertices have
different colors. Generally this process is called vertex coloring. There are
other forms of graph coloring, including edge coloring and face coloring, that
can be transformed into a vertex coloring problem. A coloring that uses K
colors is called a K-coloring. The smallest number of colors required to
color a graph G is called its chromatic number. A minimum coloring of a
graph is a coloring that uses as few different colors as possible.
Formally, the graph coloring problem can be stated as follows: given an
undirected graph G with a set of vertices V and a set of edges E (G = (V, E)),
a K-coloring of G consists of assigning a color to each vertex of V such that
no two adjacent vertices share the same color. One of the most interesting
challenges in the graph coloring problem is to find a correct coloring that
uses exactly the predefined chromatic number of the graph [1, 16]. In other
words, in the coloring process all vertices of the graph must be colored with a
minimal number of colors. Figure 1 shows the coloring process of a simple
graph. This graph has 4 vertices and 4 edges. The chromatic number of this
graph is 3.
Figure 1. A simple example of the graph coloring process:
(a) graph G before coloring, (b) graph G after coloring.
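As an illustration of the K-coloring condition just defined, the short check below verifies that no edge joins two vertices of the same color; the edge list and the 3-coloring are assumed values standing in for the 4-vertex, 4-edge example of Figure 1.

```python
# Check that a coloring is valid: no two adjacent vertices share a color.
def is_valid_coloring(edges, colors):
    return all(colors[u] != colors[v] for u, v in edges)

# Assumed edge list for a 4-vertex, 4-edge graph like the one in Figure 1
edges = [(1, 2), (2, 3), (3, 4), (1, 3)]
coloring = {1: "red", 2: "green", 3: "blue", 4: "red"}   # a 3-coloring

print(is_valid_coloring(edges, coloring))   # True
```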
3. COLONIAL COMPETITIVE ALGORITHM (CCA)
The colonial competitive algorithm (CCA) was developed by Atashpaz-Gargari
et al. and has been applied in various optimization and engineering
tasks [15]. CCA is a global search and optimization meta-heuristic that is
inspired by socio-political competition. Like other evolutionary
algorithms, CCA begins its work with an initial population. In CCA the agents
in the population are called countries. After computing the cost of these
countries using a cost function, some of the best countries are selected to
be the imperialist states and the remaining countries form the colonies of these
imperialists. All colonies of the population are divided among the
imperialists based on the imperialists' power. The power of a country is
inversely related to its cost. The imperialists together with their colonies
form some empires. After forming the initial empires, the colonies in each
empire start moving toward their relevant imperialist state. This process
is a simple simulation of the assimilation policy that was pursued by some of
the imperialist countries. This movement is shown in Figure 2. In this
movement, a colony moves toward the imperialist by a random distance x,
uniformly distributed between 0 and β×d, where d is the distance between the
imperialist and the colony and β is a control parameter. θ and x are random
numbers with uniform distributions [15].
x ~ U(0, β × d)    (1)
θ ~ U(−φ, φ)    (2)
In equations (1) and (2), x and θ are random numbers with uniform distributions,
and φ is a parameter that limits the deviation from the direct path. In this
paper, β and φ are set to 2 and π/4 (rad), respectively.
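A small sketch of the assimilation move defined by equations (1) and (2) follows, with β = 2 and φ = π/4 as above; representing countries as two-dimensional points is an assumption made purely for illustration.

```python
# Sketch of the continuous-CCA assimilation move of equations (1) and (2):
# a colony moves a random distance x ~ U(0, beta*d) toward its imperialist,
# deviated by a random angle theta ~ U(-phi, phi).  The 2-D positions are
# assumed purely for illustration.
import math
import random

beta, phi = 2.0, math.pi / 4              # values used in this paper

def assimilate(colony, imperialist):
    dx, dy = imperialist[0] - colony[0], imperialist[1] - colony[1]
    d = math.hypot(dx, dy)                # distance between colony and imperialist
    x = random.uniform(0, beta * d)       # equation (1)
    theta = random.uniform(-phi, phi)     # equation (2)
    direction = math.atan2(dy, dx) + theta
    return (colony[0] + x * math.cos(direction),
            colony[1] + x * math.sin(direction))

print(assimilate(colony=(0.0, 0.0), imperialist=(3.0, 4.0)))
```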
All empires try to take possession of the colonies of other empires and
control them. The imperialistic competition gradually brings about an
increase in the power of powerful empires and a decrease in the power of
weaker empires. In the original CCA, the imperialistic competition is
modeled just by picking one of the weakest colonies of the weakest empire
and then making a competition among all other empires in order to possess
this colony.
Figure 2. Motion of a colony toward its relevant imperialist [15]
In CCA the solution space is modeled as a search space. Each position in the
search space is a potential solution of the problem. Moving colonies toward
their relevant imperialists and the imperialistic competition are the two
important operators of CCA; they cause the countries in the solution
space to converge to a state that is the global optimum and satisfies the
problem constraints. CCA has a high convergence rate and can often reach
the global optimum. Figure 3 shows the pseudo code of the original CCA.
4. GRAPH COLORING USING MODIFIED COLONIAL
COMPETITIVE ALGORITHM
The original CCA is designed to solve continuous problems, whereas the graph
coloring problem is a discrete problem. Hence a discrete version of CCA is
needed for solving the graph coloring problem. This section describes how the
modified colonial competitive algorithm (MCCA) is used to solve the graph
coloring problem. The goal of using MCCA to solve the graph coloring
problem is to find reliable and optimal colorings for graph coloring problem
instances.
1. Initialize the empires.
2. Move the colonies toward their relevant imperialist (Assimilation).
3. Randomly change the position of some colonies (Revolution).
4. If there is a colony in an empire which has lower cost than the imperialist,
exchange the positions of that colony and the imperialist.
5. Unite the similar empires.
6. Compute the total cost of all empires.
7. Pick the weakest colony (colonies) from the weakest empires and give it
(them) to one of the empires (Imperialistic Competition).
8. Eliminate the powerless empires.
9. If termination conditions satisfied, stop, if not go to 2.
Figure 3. Pseudo code of the CCA [15]
At the beginning, a population of N_pop countries is generated. If the graph
coloring problem instance has n vertices then each country in the initial
population will consist of a randomly permuted list of the integers {1, 2, …,
n}. After forming the initial population, the countries have to be assessed
according to the cost function explained later. Some of the best countries
(the countries with the lowest cost values) are selected to be imperialist states
and the remaining countries form the colonies of these imperialists. Within
the main loop of the MCCA, the imperialist in each empire tries to attract its
colonies toward itself. Then imperialistic competition begins among the empires.
During this competition process the weaker imperialists collapse
whereas powerful imperialists increase their power. The MCCA executes
for a fixed number of iterations, where an iteration is defined as a cycle of the
assimilation, revolution, uniting, and competition stages. The following
subsections describe the attributes of the proposed MCCA method.
4.1 Forming Initial Population
The first step in the MCCA, as in any other population-based algorithm, is to
create an initial population. This population consists of countries. Each
country consists of a permuted list of integers and indicates a potential
solution to the problem. Each integer in the country denotes a color
used for coloring the graph coloring problem instance. In fact, if a graph
coloring problem instance has n vertices then a country will be a vector of
integers of size 1×n.
Figure 4.a illustrates the process of creating the initial population using a
sample graph with 4 vertices and 4 edges. The chromatic number of this
graph is 3. Figure 4.b indicates a population of size 4 created for
solving the graph indicated in Figure 4.a. In the graph coloring problem, the
input is an undirected graph and the output is an optimal coloring (i.e. a
list of colors assigned to the vertices of the graph).
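A minimal sketch of forming an initial population follows, assuming each country is simply a vector of n colors (one per vertex) with repetitions allowed, as in the example population of Figure 4.b; the population size and random choices are illustrative only.

```python
# Minimal sketch of creating an initial population: each country is a vector of
# length n (one color per vertex).  Following the example population of
# Figure 4.b, colors may repeat; the population size here is an assumption.
import random

def create_initial_population(n_vertices, pop_size):
    return [[random.randint(1, n_vertices) for _ in range(n_vertices)]
            for _ in range(pop_size)]

population = create_initial_population(n_vertices=4, pop_size=4)
for i, country in enumerate(population, start=1):
    print("country%d:" % i, country)
```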
Figure 4. The graph coloring process (creating the initial population): (a) a graph with 4 vertices and 4 edges; (b) an initial population of size 4.
4.2 Cost Function
A cost function is used to evaluate the cost of the countries in the
population. From an optimization viewpoint, a country with the lowest cost value
is preferable. Here the cost of a country is equal to the total number of color
conflicts in it. In other words, if two vertices of the graph are adjacent and
have the same color in the coloring process, then there is a conflict. A simple
cost function can be stated as follows:
Cost(Country) = Σ_{i=1}^{N_V} Σ_{j=1}^{N_V} C_{i,j},
where C_{i,j} = 1 if vertices i and j are adjacent and Color(n_i) = Color(n_j),
and C_{i,j} = 0 otherwise.    (3)
Here N_V is the total number of vertices, C_{i,j} is a counter variable that
keeps the number of conflicts, and Color(n_i) indicates the color of vertex i.
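A direct implementation of the cost function of equation (3) might look as follows, assuming the graph is supplied as an adjacency matrix and a country as a list of vertex colors; each conflicting edge is counted once, and the example adjacency matrix and coloring are assumptions made for illustration.

```python
# Cost of a country (equation (3)): number of edges whose endpoints share a color.
def cost(country, adjacency):
    """country[i] is the color of vertex i; adjacency[i][j] is 1 if i and j are adjacent."""
    n = len(country)
    conflicts = 0
    for i in range(n):
        for j in range(i + 1, n):          # count each edge once
            if adjacency[i][j] and country[i] == country[j]:
                conflicts += 1
    return conflicts

# An assumed 4-vertex example graph and coloring
adjacency = [[0, 1, 1, 0],
             [1, 0, 1, 0],
             [1, 1, 0, 1],
             [0, 0, 1, 0]]
print(cost([3, 2, 1, 2], adjacency))   # 0 conflicts -> a valid coloring
```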
4.3 Modified Assimilation Operator
In the original CCA the assimilation operator was designed for continuous
problems, whereas in the graph coloring problem we deal with a discrete
problem, so a modified discrete version of this operator is needed. The goal
of the assimilation policy is to make the colony countries more similar to their
relevant imperialists. In this paper this is modeled by conveying some
randomly chosen portions of the imperialist country to its colonies. Figure 5 shows the
modified assimilation operator using the simple example graph shown in
Figure 4.a.
imperialist: 2 1 3 2    colony: 2 3 1 1    (a. before applying the assimilation operator)
imperialist: 2 1 3 2    colony: 2 3 1 2    (b. after applying the assimilation operator)
Figure 5. Modification of assimilation operator
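A sketch of the modified assimilation operator described above follows: randomly chosen positions of the imperialist's color vector are copied into the colony. The fraction of positions copied is an assumption; the colony and imperialist vectors follow the example of Figure 5.

```python
# Modified assimilation (discrete): copy randomly chosen portions of the
# imperialist's color vector into the colony, making the colony more similar
# to its imperialist.  The fraction of positions copied is an assumption.
import random

def assimilate(colony, imperialist, fraction=0.25):
    colony = list(colony)
    n_copy = max(1, int(fraction * len(colony)))
    for pos in random.sample(range(len(colony)), n_copy):
        colony[pos] = imperialist[pos]
    return colony

print(assimilate(colony=[2, 3, 1, 1], imperialist=[2, 1, 3, 2]))
```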
4.4 Modified Revolution operator
The revolution operator increases the exploration power of the CCA
algorithm and helps it to escape from local optima. In the original CCA,
revolution causes a country to suddenly change its position. In this paper,
instead of the original revolution operator, we use an operator similar to mutation
in the genetic algorithm [17]. For each country, some elements (called
victim elements) of that country are selected and their values are replaced
with other randomly generated integers. In each empire, the victim elements are
randomly selected from the N_cols × N_v total elements in that
empire, where N_cols indicates the total number of colonies in the empire and N_v
denotes the total number of vertices of the graph coloring problem instance. In
our implementation we selected the revolution rate to be 30% (r = 30%)
within each empire. In this case the number of revolutions in an empire is
given by:
number of revolutions = r × (N_cols − 1) × N_v    (4)
Figure 6 demonstrates the revolution process on a sample country.
Figure 6. View of modified revolution operator on a sample country
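A sketch of the modified revolution operator follows, assuming an empire is represented as a list of colony color vectors; the number of victim elements follows equation (4) with r = 30%, and the example colonies and number of colors are assumptions.

```python
# Modified revolution: randomly chosen "victim" elements of the colonies in an
# empire are replaced with new random colors (similar to mutation in a GA).
# Representing an empire as a list of colony color vectors is an assumption.
import random

def revolve(colonies, n_colors, rate=0.30):
    n_cols, n_v = len(colonies), len(colonies[0])
    n_revolutions = int(rate * (n_cols - 1) * n_v)      # equation (4)
    for _ in range(n_revolutions):
        c = random.randrange(n_cols)                     # pick a colony
        v = random.randrange(n_v)                        # pick a victim element
        colonies[c][v] = random.randint(1, n_colors)     # assign a new random color
    return colonies

print(revolve([[3, 1, 1, 3], [2, 2, 1, 1]], n_colors=3))
```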
4.5 Other Operators
In the proposed method, imperialistic competition, the uniting of similar empires,
and the elimination of powerless empires are implemented in a similar way to the
original continuous CCA algorithm. As in the original
CCA, imperialistic competition is modeled just by picking one of the weakest
colonies from the weakest empire and then making a competition between the
other empires to possess and control this colony. Also, in the uniting process,
if the distance between any two empires is smaller than a predefined
threshold then these empires combine to create a new united
empire. In the elimination of powerless empires, if an empire loses
all of its colonies, it collapses and its colonies are divided among the
other empires. This operator is implemented in a similar way to the mechanism
proposed in the original CCA.
The main steps of the MCCA method are summarized in the pseudo code
shown in Figure 7.
Input: an acyclic and undirected graph
Output: a valid and optimal coloring for the input graph
Step 1: Set the initial parameters, including:
 - algorithm parameters such as the maximum iteration count Max_Itr, the population size N_pop, and the other MCCA parameters;
 - problem parameters such as the number of graph vertices and edges.
Step 2: Graph = CreateAdjacencyMatrix(input graph);
Step 3: ColoredGraph = MCCA(Graph); // solve the GCP using the MCCA method
 Step 3.1: Pop = Initialize a population of size N_pop.
 Step 3.2: InitialCost = CostFunction(Pop);
 Step 3.3: Empires = CreateInitialEmpires(Pop);
 Step 3.4: Set the iteration counter itrCount = 0.
 Step 3.5: Empires = AssimilateColonies(Empires);
 Step 3.6: Empires = RevolveColonies(Empires);
 Step 3.7: Empires = UniteSimilarEmpires(Empires);
 Step 3.8: Empires = ImperialisticCompetition(Empires);
 Step 3.9: itrCount = itrCount + 1. If itrCount < Max_Itr, go to Step 3.5.
Step 4: return the ColoredGraph;
Figure 7. The process of applying MCCA to the graph coloring problem
5. RESULTS
Five datasets are used to assess the efficiency of the proposed MCCA
method. These datasets are Myciel3.col, Myciel4.col, Myciel5.col,
queen5_5.col, and queen7_7.col [18]. They cover instances of low, medium,
and large dimensions. Myciel3.col has 11 vertices and 20 edges. Myciel4.col
has 23 vertices and 71 edges. Myciel5.col has 47 vertices and 236 edges. The
chromatic numbers of Myciel3.col, Myciel4.col, and Myciel5.col are 4, 5, and 6,
respectively. Queen5_5.col has 25 vertices and 160 edges. Queen7_7.col has 49
vertices and 476 edges. The chromatic numbers of queen5_5.col and
queen7_7.col are 5 and 7, respectively.
The characteristics of these datasets are summarized in Table 1, and Table 2
indicates the general parameter settings for MCCA that were used in our
implementation.
The proposed algorithm is implemented using MATLAB software on a
computer with a 3.00 GHz CPU and 512 MB of RAM. The efficiency of the
MCCA algorithm is evaluated on the graph coloring benchmarks. The
performance of the MCCA is measured by the following criterion, the
average success rate over 10 replication runs of the algorithm simulation:
performance = (SR / TR) × 100%    (5)
where SR is the number of successful runs and TR is the total number of
simulation runs. The higher the number of correct and successful runs, the
higher the performance of the algorithm. Table 3 gives the results over 10
runs obtained with the performance measure in equation (5). From our
simulations it appears that the proposed algorithm can find valid and optimal
colorings for the graph coloring problem instances. It is also clear that proper
tuning of the algorithm parameters is very important for the algorithm to be
successful. MCCA has a low runtime and can be used for large datasets.
Table 1. Characteristics of the datasets considered
Graph          Number of Vertices   Number of Edges   Chromatic Number
Myciel3.col    11                   20                4
Myciel4.col    23                   71                5
Myciel5.col    47                   236               6
queen5_5.col   25                   160               5
queen7_7.col   49                   476               7
Table 2. The MCCA method parameter setup
Parameter                        Value
Population size                  300
Number of initial imperialists   10% of population size
Number of colonies               (population size) − (number of initial imperialists)
Iteration count                  100
Revolution rate                  0.30
Uniting threshold                0.02
Assimilation coefficient         2
Assimilation angle coefficient   0.45
Damp ratio                       0.90
Table 3. Results of the MCCA algorithm on five data sets. The quality of the solutions is
evaluated using the performance metric; the table shows the mean performance over 10
independent runs.
Graph          Number of Vertices   Number of Edges   MCCA Performance (%)
Myciel3.col    11                   20                100
Myciel4.col    23                   71                100
Myciel5.col    47                   236               97
queen5_5.col   25                   160               94
queen7_7.col   49                   952               83.5
6. CONCLUSIONS
This paper has presented a modified colonial competitive algorithm (MCCA)
for finding effective graph coloring schemes. The success of the proposed
method is demonstrated on five well-known graph coloring benchmarks.
The proposed method has a low runtime and can find optimal and valid
solutions for the graph coloring problem. The method is appropriate for
both low- and high-dimensional graph coloring problem instances. Future
work will focus on further improving the results for the graph coloring
problem by combining the proposed MCCA algorithm with other existing
heuristic and mathematical methods.
REFERENCES
[1] Jensen T.R., and Toft B., "Graph Coloring Problems", Wiley interscience Series in
Discrete Mathematics and Optimization, 1995.
[2] Arathi R., Markov, I. L., and Sakallah, K. A., "Breaking Instance-Independent
Symmetries in Exact Graph Coloring", Journal of Artificial Intelligence Research 26,
2006, pp. 289-322.
[3] Fleurent C., and Ferland, J.A., “Genetic and hybrid algorithms for graph coloring,”
Annals of Operations Research,vol. 63, 1996, pp. 437–461.
[4] Anh T.H., Giang T.T.T., and Vinh T.L., “A novel particle swarm optimization – Based
algorithm for the graph coloring problem”, Proceedings of International Conference on
Information Engineering and Computer Science, ICIECS 2009.
[5] Qin, J., Yin Y., and Ban, X., “Hybrid Discrete Particle Swarm Algorithm for Graph
Coloring Problem”, Journal of Computers, VOL. 6, NO. 6, June 2011, pp. 1175-1182.
[6] SangHyuck, A., SeungGwan L., and TaeChoong Ch., “Modified ant colony system for coloring
graphs”, Proceedings of the Joint Conference of the Fourth International Conference on
Information, Communications and Signal Processing, and Fourth Pacific Rim Conference on
Multimedia, 2003, pp. 1849 – 1853.
[7] Dowsland, K. A. and Thompson, J. M., “An improved ant colony optimization
heuristic for graph coloring”, Discrete Applied Mathematics, Vol. 156, Issue 3, 2008,
pp. 313-324.
[8] Fister, I., and Brest, J., “Using differential evolution for the graph coloring”, IEEE
Symposium on Differential Evolution (SDE), 2011, pp. 1-7.
[9] Hamiez, J-P., and HAO, J.K, “Scatter Search For graph coloring”, Lecture Notes in
Computer Science 2310: 168-179, Springer, 2002.
[10]Hertz, A., Plumettaz, M., Zufferey, N., “Variable space search for graph coloring”,
Discrete Applied Mathematics Vol. 156, 2008, pp. 2551–2560.
[11]Torkestani, J.A., and Meybodi, M.R., “Graph Coloring Problem Based on Learning
Automata”, International Conference on Information Management and Engineering,
ICIME '09, 2009, pp. 718-722.
[12]Choudhary, S., and Purohit, G.N., “Distributed algorithm for optimized vertex coloring”,
International Conference on Methods and Models in Computer Science (ICM2CS), 2010, pp.
65–69.
[13]Galinier. P., “Hybrid Evolutionary Algorithms for graph coloring”. J.Combin. Optim. Vol. 3,
No.4, 1999, pp. 379-397.
[14]Yang, Z., Liu, H., Xiao, X., and Wu, W., “Improved hybrid genetic algorithm and its application
in auto-coloring problem”, International Conference on Computer Design and Applications
(ICCDA), 2010, pp. 461-464.
[15]Atashpaz-Gargari, E., Hashemzadeh, F., Rajabioun, R., Lucas, C., “Colonial
competitive algorithm: A novel approach for PID controller design in MIMO
distillation column process”, International Journal of Intelligent Computing and
Cybernetics, Vol. 1, issue: 3, 2008, pp. 337 – 355.
[16]Kubale M., "Introduction to Computational Complexity and Algorithmic Graph
Coloring", Gdanskie Towarzystwo Naukowe, 1998.
[17]Melanie, M., “An Introduction to genetic Algorithms”, Massachusetts: MIT Press,
1999.
[18][Online:] http://mat.gsia.cmu.edu/COLOR/instances [Accessed on 10 March 2013].
Security and Privacy in E-Passport
Scheme using Authentication Protocols
and Multiple Biometrics Technology
V.K. NARENDIRA KUMAR
Assistant Professor, Department of Information Technology,
Gobi Arts & Science College (Autonomous),
Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India.
B. SRINIVASAN
Associate Professor, PG & Research Department of Computer Science, Gobi Arts &
Science College (Autonomous),
Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India.
ABSTRACT
Electronic passports (e-Passports) have seen wide and fast deployment all around the
world since the International Civil Aviation Organization (ICAO) adopted
standards whereby passports can store biometric identifiers. The purpose of biometric
passports is to prevent the illegal entry of travelers into a specific country and to limit the use
of counterfeit documents through more accurate identification of an individual. The e-passport,
as it is sometimes called, represents a bold initiative in the deployment of two new
technologies: cryptographic security and multiple biometrics (face, fingerprints, palm prints
and iris). A passport contains important personal information about the holder such as photo,
name, date and place of birth, nationality, date of issue, date of expiry, issuing authority and so on.
The goal of the adoption of the electronic passport is not only to expedite processing at
border crossings, but also to increase security. Important in their own right, e-passports are
also the harbinger of a wave of next-generation e-passports: several national governments
plan to deploy e-passports integrating cryptographic algorithms and multiple biometrics. This
paper considers only those passport scenarios whose protocols are based on public-key
cryptography, certificates, and a public key infrastructure, without addressing the protocols
themselves in detail, but this is not a strong constraint. Furthermore, we assume that the
potential passport applicant uses an ordinary PC with Windows or Linux software and an
arbitrary connection to the Internet. Technological security issues are to be found in several
dimensions, but this paper focuses on hardware, software, and infrastructure as some of the
most critical issues.
Keywords
Biometrics, e-Passport, Internet, Face, Iris, Palm Print and Fingerprint.
1. INTRODUCTION
Electronic passports have been successfully deployed in many countries
around the world. Besides classical “paper” properties, these travel
documents are equipped with an electronic chip employing a wireless
communication interface, the so-called RFID chip (Radio Frequency
Identification). In addition to the electronic copy of the data printed in the
passport (name of the holder, birth date, photo, etc.), the chip may contain
e.g. biometric measures of the holder and may employ sophisticated
cryptographic techniques providing enhanced security compared to the
classical passports. For instance, it should be much harder to copy an
electronic passport compared the classical one.
The e-Passports create opportunities for States to enhance global civil
aviation safety while at the same time improving the efficiency of aviation
operations. The e-Passport can contribute to this because verification of the
public key infrastructure certificates associated with e-Passports can provide
border control authorities with an assurance that documents are genuine and
unaltered, which in turn allows the biometric information contained in e-
Passports to be relied on to automate aspects of the border clearance
process.
Because the RFID chip has no conductive power contacts to supply it with
energy, other means from the world of physics have to be borrowed. Both the
power supply and the communication channel employ the near magnetic field
around the reader. For instance, when the chip needs to send information to
the reader, it alters this surrounding field which is detected by the reader. Of
course, if this modification is not properly filtered, unwanted information
about the behavior of the chip may propagate in the surrounding
electromagnetic field, as well. This phenomenon is what cryptologists call a
side channel.
Electronic passports include a contactless chip which stores personal data of
the passport holder, information about the passport and the issuing
institution. In its simplest form an electronic passport contains just a
collection of read-only files; more advanced variants can include
sophisticated cryptographic mechanisms protecting the security of the
document and/or the privacy of the passport holder. The goal is to provide
foolproof passport identification using a combination of biometrics and
cryptographic security.
2. LITERATURE SURVEY
Juels et al (2005) discussed security and privacy issues that apply to e-
passports. They expressed concerns that the contactless chip embedded in
an e-passport allows the e-passport contents to be read without direct
contact with an inspection system (IS) and, more importantly, with the e-passport booklet
closed. They argued that data stored in the chip could be covertly collected
by means of “skimming” or “eavesdropping”. Because of low entropy,
secret keys stored would be vulnerable to brute force attacks as
demonstrated by Laurie (2007). Koch and Karger (2005) suggested that an
e-passport may be susceptible to “splicing attack”, “fake finger attack” and
other related attacks that can be carried out when an e-passport bearer
presents the e-passport to hotel clerks. There has been considerable press
coverage (Johnson, 2006; Knight, 2006; Reid, 2006) on security weaknesses
in e-passports. These reports indicated that it might be possible to “clone”
an e-passport.
2.1. Biometrics in passports
E-passports complying with the ICAO specifications now
provide for the optional inclusion of an encoded biometric to confirm the
holder's identity, or other data to verify the document's authenticity. This
makes possible an unprecedented level of document security, offering
border control authorities a high level of confidence in the validity of travel
documents. A biometric in a machine readable passport may only contain
information about the passport holder, and no other additional person.
Therefore, this section only covers the vulnerabilities of facial images,
fingerprints, palm print and iris images.
2.2. Face Recognition
The face is the most common biometric characteristic used by
humans to make a personal recognition, hence the idea of using this biometric
in technology. It is a non-intrusive method and is suitable for covert
recognition applications. The applications of facial recognition range from
static ("mug shots") to dynamic, uncontrolled face identification in a
cluttered background (subway, airport). Face verification involves
extracting a feature set from a two-dimensional image of the user's face and
matching it with the template stored in a database. The most popular
approaches to face recognition are based on either: 1) the location and shape
of facial attributes such as eyes, eyebrows, nose, lips and chin, and their
spatial relationships, or 2) the overall (global) analysis of the face image
that represents a face as a weighted combination of a number of canonical
faces. It is questionable if a face itself is a sufficient basis for recognizing a
person from a large number of identities with an extremely high level of
confidence. A facial recognition system should be able to automatically detect
a face in an image, extract its features and then recognize it from a general
viewpoint (i.e., from any pose), which is a rather difficult task. Another
problem is the fact that the face is a changeable social organ displaying a
variety of expressions [4].
2.3. Fingerprint Recognition
A fingerprint is a pattern of ridges and furrows located on the tip of each
finger. Fingerprints have been used for personal identification for many centuries,
and the matching accuracy is very high. Historically, patterns were extracted by
creating an inked impression of the fingertip on paper. Today, compact
sensors provide digital images of these patterns. Fingerprint recognition for
identification acquires the initial image through live scan of the finger by
direct contact with a reader device that can also check for validating
attributes such as temperature and pulse. In real-time verification systems,
images acquired by sensors are used by the feature extraction module to
compute the feature values. The feature values typically correspond to the
position and orientation of certain critical points known as minutiae points.
The matching process involves comparing the two-dimensional minutiae
patterns extracted from the user's print with those in the template. One
problem with the current fingerprint recognition systems is that they require
a large amount of computational resources [2].
2.4. Palm print Recognition
The palm print recognition module is designed to carry out person
identification for an unknown person. The palm print image is the only input
to the recognition process, and the person's identification details are the
expected output. The input image features are compared with the database
image features, and the relevancy is estimated with reference to a threshold
value. The most relevant image is selected for the person's identification; if
no comparison result matches the input image, the result is declared
"unknown person". The recognition module is divided into four sub-modules:
palm print selection, result details, ordinal list and ordinal measurement. The
palm print image selection sub-module is used to select the palm print input
image via a file open dialog. The result details sub-module produces the list
of relevant palm prints with their similarity ratios. The ordinal list shows the
ordinal-feature-based comparisons, and the ordinal measurement sub-module
shows the ordinal values for each region.
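As a rough sketch of the identification loop just described (the feature vectors, the cosine-similarity relevancy measure and the threshold value are illustrative assumptions, not details given in the paper), the comparison against the enrolled database could look like this in Python:

import numpy as np

def identify_person(input_features, database, threshold=0.9):
    """Return the enrolled identity whose palm print features are most similar
    to the input, or "unknown person" if no similarity reaches the threshold."""
    best_id, best_score = None, -1.0
    for person_id, template in database.items():
        # Cosine similarity used here as a stand-in relevancy measure.
        score = float(np.dot(input_features, template) /
                      (np.linalg.norm(input_features) * np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else "unknown person"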
2.5. Iris Recognition
Iris recognition technology is based on the distinctly colored ring
surrounding the pupil of the eye. Made from elastic connective tissue, the
iris is a very rich source of biometric data, having approximately 266
distinctive characteristics. These include the trabecular meshwork, a tissue
that gives the appearance of dividing the iris radially, with striations, rings,
furrows, a corona, and freckles. Iris recognition technology uses about 173
of these distinctive characteristics. Iris recognition can be used in both
verification and identification systems. Iris recognition systems use a small,
high-quality camera to capture a black and white, high-resolution image of
the iris. The systems then define the boundaries of the iris, establish a
coordinate system over the iris, and define the zones for analysis within the
coordinate system [12].
2.6. Design of Biometric System
Five objectives should be considered when designing a biometric system:
cost, user acceptance and environmental constraints, accuracy, computation
speed and security. These objectives are inter-related. Reducing accuracy can
increase speed; typical examples are
hierarchical approaches. Reducing user acceptance can improve accuracy.
For instance, users are required to provide more samples for training the
system. Increasing cost can enhance security. More sensors can be
embedded to collect different signals for aliveness detection. In some
applications, some environmental constraints such as memory usage, power
consumption, size of templates, and size of devices have to be factored into
a design. A biometric system installed in a PDA (Personal Digital Assistant)
requires low power and memory usage, but these requirements are not
essential for access control. A practical biometric system should balance all
these aspects [7].
3. E-PASSPORT PKI VALIDATION
E-Passport validation achieved via the exchange of Public Key
Infrastructure (PKI) certificates is essential for the interoperability benefits
of e-Passports to be realized. PKI validation does not require or involve any
exchange of the personal data of passport holders, and the validation
transactions help combat identity fraud. The business case for validating e-
Passports is compelling. Border control authorities can confirm that:
• The document held by the traveler was issued by a bona fide authority.
• The biographical and biometric information endorsed in the document at
issuance has not subsequently been altered.
• Provided active authentication and/or chip authentication is supported
by the e-Passport, the electronic information in the document is not a
copy (i.e. a clone).
• If the document has been reported lost or has been cancelled, the
validation check can help confirm whether the document remains in the
hands of the person to whom it was issued.
As a result passport issuing authorities can better engage border control
authorities in participating countries in identifying and removing from
circulation bogus documents. E-Passport validation is therefore an essential
element to capitalize on the investment made by States in developing e-
Passports to contribute to improved border security and safer air travel
globally. Because the benefits of e-Passport validation are collective,
cumulative and universal, the broadest possible implementation of e-
Passport validation is desirable.
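To make one building block of such certificate-based validation concrete, the following minimal Python sketch (using the third-party cryptography library) checks that one certificate was signed by another, for example a document signer certificate against the issuing country's CA certificate. The variable names are hypothetical, it assumes RSA/PKCS#1 v1.5 signatures, and a real validation also checks validity periods, revocation and the full trust chain:

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def issued_by(child_pem: bytes, issuer_pem: bytes) -> bool:
    # Load the two certificates (e.g. a document signer certificate and the
    # country signing CA certificate it claims to be issued by).
    child = x509.load_pem_x509_certificate(child_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)
    try:
        # Verify the child's signature with the issuer's public key.
        issuer.public_key().verify(
            child.signature,
            child.tbs_certificate_bytes,
            padding.PKCS1v15(),              # assumption: RSA/PKCS#1 v1.5 signatures
            child.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False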
4. IMPLEMENTATION OF E-PASSPORT SYSTEM
In order to implement this Internet passport authentication system using
multiple biometric identification efficiently, an ASP.NET application is used.
ASP.NET speeds up the development of the system because it provides
facilities for designing forms and adding libraries easily.
Biometric authentication is the process of determining the authenticity of a
user based on the user's credentials. Whenever a user logs on to an
application, the user is first authenticated and then authorized. The
application's web.config file contains all of the configuration settings for an
ASP.NET application. It is the job of the authentication provider to verify
the credentials of the user and decide whether a particular request should be
considered authenticated or not. A biometric authentication provider is
used to prove the identity of the users in a system. ASP.NET provides three
ways to authenticate a user:
Forms Authentication: This authentication mode is based on cookies,
where the user name and the password are stored either in a text file or in a
database. After a user is authenticated, the user's credentials are stored in a
cookie for use in that session. When a user who has not logged in requests a
protected page, he or she is redirected to the login page of the application.
Forms authentication supports both session and persistent cookies.
Windows Authentication: This is the default authentication mode in
ASP.NET. Using this mode, a user is authenticated based on his/her
Windows account. Windows Authentication can be used only in an intranet
environment where the administrator has full control over the users in the
network.
Passport Authentication: Passport authentication is a centralized
authentication service that uses Microsoft's Passport Service to authenticate
the users of an application. It allows the users to create a single sign-in
name and password to access any site that has implemented the Passport
single sign-in (SSI) service.
Authorization is the process of determining the accessibility of a resource
for a previously authenticated user. Note that authorization can only work
with authenticated users, ensuring that no unauthenticated user can
access the application. The default authentication mode is anonymous
authentication.
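For illustration only, the following minimal web.config sketch shows one common way to express the forms-based mode and a deny-anonymous rule of the kind described above; the login page name and timeout value are placeholders, not settings taken from the paper's system:

<configuration>
  <system.web>
    <!-- Redirect unauthenticated users to a login page; keep the ticket for 30 minutes. -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" timeout="30" />
    </authentication>
    <!-- Deny anonymous (unauthenticated) users access to the application. -->
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>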
4.1. Passport Authentication in Win HTTP
Microsoft Windows HTTP Services (Win HTTP) fully supports the client-side
use of the Passport authentication protocol. This section provides an overview of
the transactions involved in Passport authentication and how to handle
them. Win HTTP provides platform support for e-Passport by implementing
the client-side protocol for Passport authentication. It frees applications
from the details of interacting with the Passport infrastructure and the
Stored User Names, Passwords and biometric identification.
4.2. Passport Single Sign-In
Passport allows users to create a single sign-in name, password and
biometric identification to access any passport site that has implemented the
Passport single sign-in (SSI) service. By implementing the Passport SSI, a
site does not have to implement its own user-authentication mechanism. Users
authenticate with the SSI, which passes their identities to the passport site
securely. Although Passport authenticates users, it does not grant or deny
access to individual sites; that is, Passport performs only authentication,
not authorization. Passport simply tells a participating site who the user is.
Each site must implement its own access-control mechanisms based on the
user's Passport User ID (PUID). Figure 1 gives an overview of how Passport
authentication works.
Figure 1. Overview of Passport Authentication.
P1 Initial Page request,
P2 Redirect for authentication,
P3 Authentication request sign-in page,
P4 Sign-in page,
P5 User credentials,
P6 Update website cookies and redirect,
P7 Encrypted authentication query string,
P8 Site cookies and requested web page
First, the user requests a page from the web server. Since the user is not
authenticated, the passport web server redirects the request for authentication
with a Sign-In logo. When the user presses the Sign-In button, the request goes to the
Passport server for the Sign-In page. Once the Sign-In page reaches the browser,
the user enters his or her authentication details such as Passport ID, password and
biometric identification. When the user credentials are submitted, they
are validated on the Passport server. Cookies are then created on the server and the
response is sent to the browser with an encrypted query string, so that both the
cookies and the query string carry details about the authentication. Once the user is
authenticated, he or she is taken to the page that was originally requested.
1. Web user authenticates with enterprise security system (authentication
can be through Web server)
2. Enterprise security system provides an authentication reference to Web
user
3. Web user requests a dynamic resource from Web server, providing
authentication reference
4. Web server requests application function from application on behalf of
Web user, providing Web user’s authentication reference
5. Application requests authentication document from enterprise security
system, corresponding to Web user’s authentication reference
6. Enterprise security system provides authentication document, including
authorization attributes for the Web user, and authentication event
description
7. Application performs application function for Web server
8. Web server generates dynamic resource for Web user
Figure 2. Passport Application Chain
4.3. Initial Request
When a client requests a resource on a server that requires Passport
authentication, the server checks the request for the presence of tickets. If a
valid ticket is sent with the request, the server responds with the requested
resource. If the ticket does not exist on the client, the server responds with a
302 status code. The response includes the challenge header, "WWW-
Authenticate: Passport". Clients that are not using Passport can follow the
redirection to the Passport login server. More advanced clients typically
contact the Passport nexus to determine the location of the Passport login
server. The following figure 3 shows the initial request to a Passport
affiliate.
Central to the Passport network is the Passport Nexus, which facilitates
synchronization of Passport participant sites to assure that each site has the
latest details on network configuration and other issues. Each Passport
component (Passport Manager, Login servers, Update servers, and so on)
periodically communicates with the Nexus to retrieve the information it
needs to locate, and properly communicate with, the other components in
the Passport network. This information is retrieved as an XML document
called a Component Configuration Document, or CCD.
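As a rough illustration of the challenge described in this section, the short Python sketch below (the URL is hypothetical) issues a request without following redirects and checks for the 302 response carrying the "WWW-Authenticate: Passport" header:

import requests

# Hypothetical resource on a server that participates in Passport authentication.
url = "https://affiliate.example/protected/resource"

# Disable automatic redirects so the Passport challenge is visible to the caller.
response = requests.get(url, allow_redirects=False)

challenge = response.headers.get("WWW-Authenticate", "")
if response.status_code == 302 and "Passport" in challenge:
    # A Passport-aware client would now contact the nexus and login server for tickets.
    print("Passport authentication required; redirected to:", response.headers.get("Location"))
else:
    print("No Passport challenge; status code:", response.status_code)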
Figure 3. The initial request to a Passport affiliate.
Figure 4. A client ticket request to a Passport login server.
4.4. Passport Login Server
Figure 4 shows a client ticket request to a Passport login server. A
Passport login server handles all requests for tickets for any resource in a
Passport domain authority. Before a request can be authenticated using
Passport, the client application must contact the login server to obtain the
appropriate tickets. When a client requests tickets from a Passport login
server, the login server typically responds with a 401 status code to indicate
that user credentials must be provided. When these credentials are provided,
the login server responds with the tickets required to access the specified
resource on the server that contains the originally requested resource. The
login server can also redirect the client to another server that can provide the
requested resource.
4.5. Authenticated Request
When the client has the tickets that correspond to a given server, those
tickets are included with all requests to that server. If the tickets have not
been modified since they were retrieved from the Passport login server, and
the tickets are valid for the resource server, the resource server sends a
response that includes both the requested resource and cookies that indicate
that the user is authenticated for future requests.
Figure 5. An authenticated request to the Passport login server.
The additional cookies in the response are intended to speed the
authentication process. Additional requests in the same session for resources
on servers in the same Passport Domain Authority all include these
additional cookies. Credentials do not need to be sent to the login server
again until the cookies expire.
4.6. Passport in Win HTTP
Win HTTP handles many of the transaction details internally for Passport
authentication. During the initial request, the server responds with a 302
status code when authentication is necessary. The 302 status code actually
indicates a redirection and is part of the Passport protocol for backwards
compatibility. Win HTTP hides the 302 status code and contacts the
Passport nexus, and then the login server. The Win HTTP application is
notified of the 401 status code sent by the login server to request user
credentials. To the application, however, it appears as if the 401 status
originates from the server from which the resource was requested. In this
way, the Win HTTP application is unaware of interactions with other
servers, and it can handle Passport authentication with the same code that
handles other authentication schemes.
Typically, a Win HTTP application responds to a 401 status code by
supplying authentication credentials. When credentials are supplied with
WinHttpSetCredentials or SetCredentials for Passport authentication, the
credentials are actually sent to the login server, not to the server
indicated in the request. Once retrieved, tickets are managed internally and
are automatically sent to the applicable servers in future requests. Win HTTP
can successfully complete the Passport authentication even if an application
disables auto redirection. However, after the Passport authentication is
complete, an implicit redirect must occur from the Passport login server
URL back to the original URL. If an application has disabled automatic
redirection, Win HTTP requires that the application give Win HTTP
"permission" to redirect automatically in this special case.
5. ON-LINE SECURE E-PASSPORT PROTOCOL
To resolve the security issues identified in both the first and second
generations of e-Passports, in this section we present an on-line secure e-
Passport protocol (OSEP protocol). The proposed protocol leverages the
infrastructure available for standard non-electronic passports to provide
mutual authentication between an e-Passport and an inspection system (IS).
Currently, most security organizations are involved in passive monitoring of
border security checkpoints. When a passport bearer is validated at a border
security checkpoint, the bearer's details are collected and entered into a
database. The security organization compares this database against the
database of known offenders (for instance, terrorists and wanted criminals).
The OSEP protocol changes this to an active monitoring system. The border
security checkpoint or the Document Verifier (DV) can now crosscheck against
the database of known offenders themselves, thus simplifying the identification
of criminals.
5.1. Internet Passport Initial Setup
All entities involved in the protocol share the public quantities p, q and g, where:
• p is the modulus, a prime number of the order of 1024 bits or more.
• q is a prime number in the range of 159-160 bits.
• g is a generator of order q, i.e. g^i ≠ 1 mod p for all 0 < i < q.
• Each entity i has its own public key and private key pair (PK_i, SK_i), where
PK_i = g^(SK_i) mod p (see the sketch after this list).
• Entity i's public key PK_i is certified by its root certification authority j,
and is represented as CERT_j(PK_i, i).
• The public parameters p, q and g used by an e-Passport are also certified
by its root certification authority.
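A minimal Python sketch of this key setup is given below; the domain parameters p, q and g are assumed to be the certified values described above, and the function is an illustration rather than the paper's implementation:

import secrets

def generate_keypair(p: int, q: int, g: int):
    """Generate an entity key pair (PK_i, SK_i) with PK_i = g^SK_i mod p."""
    sk = secrets.randbelow(q - 1) + 1   # private key SK_i, uniform in [1, q-1]
    pk = pow(g, sk, p)                  # public key PK_i
    return pk, sk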
5.2. Phase One – Inspection System Authentication
Step 1 (IS): When an e-Passport is presented to an IS, the IS reads the MRZ
information on the e-Passport using an MRZ reader and issues the
command GET CHALLENGE to the e-Passport chip.
Step 2 (eP): The e-Passport chip then generates a random eP, 1 ≤ eP ≤ q-1,
and computes K_eP = g^eP mod p, playing its part in the key agreement
process to establish a session key. The e-Passport replies to the GET
CHALLENGE command by sending K_eP and its domain parameters p, q, g.
eP → IS : K_eP, p, q, g
Step 3 (IS): On receiving the response from the e-Passport, the IS generates a
random IS, 1 ≤ IS ≤ q-1, and computes its part of the session key
as K_IS = g^IS mod p. The IS digitally signs the message containing the
MRZ value of the e-Passport and K_eP:
S_IS = SIGN_SK_IS(MRZ || K_eP)
It then contacts the nearest DV of the e-Passport's issuing country
and obtains its public key. The IS encrypts and sends its signature
S_IS along with the e-Passport's MRZ information and K_eP using the
DV's public key PK_DV:
IS → DV : ENC_PK_DV(S_IS, MRZ, K_eP), CERT_CVCA(PK_IS, IS)
Step 4 (DV): The DV decrypts the message received from the IS and verifies
CERT_CVCA(PK_IS, IS) and the signature S_IS. If the verification
holds, the DV knows that the IS is genuine, and creates a digitally
signed message S_DV to prove the IS's authenticity to the e-Passport:
S_DV = SIGN_SK_DV(MRZ || K_eP || PK_IS), CERT_CVCA(PK_DV, DV)
The DV encrypts and sends the signature S_DV using the public key
PK_IS of the IS:
DV → IS : ENC_PK_IS(S_DV, [PK_eP])
The DV may choose to send the public key of the e-Passport if
required. This has an obvious advantage: because the IS now trusts the
DV to be genuine, it can obtain a copy of the e-Passport's public key
to verify during e-Passport authentication.
Step 5 (IS): After decrypting the message received, the IS computes the
session key K_ePIS = (K_eP)^IS mod p and encrypts the signature received from
the DV, the e-Passport's MRZ information and K_eP using K_ePIS. It also
digitally signs its part of the session key K_IS.
IS → eP : K_IS, SIGN_SK_IS(K_IS, p, q, g), ENC_K_ePIS(S_DV, MRZ, K_eP)
5.3. Phase Two - E-Passport Authentication
Step 1 (eP): The IS issues an INTERNAL AUTHENTICATE command to the
e-Passport. On receiving the command, the e-Passport creates a
signature S_eP = SIGN_SK_eP(MRZ || K_ePIS) and sends its
domain parameter certificate to the IS. The entire message is
encrypted using the session key K_ePIS.
eP → IS : ENC_K_ePIS(S_eP, CERT_DV(PK_eP), CERT_DV(p, q, g))
Step 2 (IS): The IS decrypts the message and verifies CERT_DV(p, q, g),
CERT_DV(PK_eP) and S_eP. If all three verifications hold, then the IS is
convinced that the e-Passport is genuine and authentic.
During the IS authentication phase, the IS sends the e-Passport's MRZ
information to the nearest DV of the e-Passport's issuing country, which could
be that country's embassy. Embassies are DVs because they are allowed to issue
e-Passports to their citizens; and, because most embassies are located within an
IS's home country, any network connection issues will be minimal. Sending
the MRZ information is also advantageous, because the embassy now has a
list of all its citizens that have passed through a visiting country's border
security checkpoint. We do not see any privacy implications because, in
most cases, countries require their citizens to register at embassies when
they are visiting a foreign country.
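To make the key-agreement arithmetic used in Phases One and Two concrete, the following Python sketch shows how both sides arrive at the same session key K_ePIS. Signatures, certificates and the DV round trip are omitted; the parameter names follow the protocol description above, and this is an illustration rather than the authors' implementation:

import secrets

def derive_session_key(p: int, q: int, g: int):
    eP = secrets.randbelow(q - 1) + 1    # e-Passport's ephemeral secret
    K_eP = pow(g, eP, p)                 # sent to the IS in Phase One, Step 2

    IS = secrets.randbelow(q - 1) + 1    # inspection system's ephemeral secret
    K_IS = pow(g, IS, p)                 # sent to the e-Passport in Phase One, Step 5

    K_ePIS_at_IS = pow(K_eP, IS, p)      # computed by the IS
    K_ePIS_at_eP = pow(K_IS, eP, p)      # computed by the e-Passport
    assert K_ePIS_at_IS == K_ePIS_at_eP  # both equal g^(eP*IS) mod p
    return K_ePIS_at_IS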
6. EXPERIMENTAL RESULTS
The key application of a biometrics solution is the identity verification
problem of physically tying an MRTD holder to the MRTD they are
carrying. There are several typical applications for biometrics during the
enrolment process of applying for a passport. The applicant's biometric
template(s) generated by the enrolment process can be searched against one
or more biometric databases (identification) to determine whether the
applicant is known to any of the corresponding systems (for example,
holding a passport under a different identity, having a criminal record, or holding a
passport from another state). When the applicant collects the passport (or
presents it at any step in the issuance process after the initial
application is made and the biometric data is captured), their biometric data
can be taken again and verified against the initially captured template.
The identities of the staff undertaking the enrolment can be verified to
confirm they have the authority to perform their assigned tasks. This may
include biometric authentication to initiate digital signing of audit logs of
various steps in the issuance process, allowing biometrics to link staff
members to the actions for which they are responsible. Each time travelers
(i.e. MRTD holders) enter or exit a State, their identities can be verified
against the images or templates created at the time their travel documents
were issued. This will ensure that the holder of a document is the legitimate
person to whom it was issued and will enhance the effectiveness of any
Advance Passenger Information (API) system. Ideally, the biometric
template or templates should be stored on the travel document along with
the image, so that travelers’ identities can be verified in locations where
access to the central database is unavailable or for jurisdictions where
permanent centralized storage of biometric data is unacceptable.
Two-way check: the traveler's currently captured biometric image data and
the biometric template from their travel document (or from a central
database) can be matched to confirm that the travel document has not been
altered.
Three-way check: the traveler's current biometric image data, the
image from their travel document, and the image stored in a central database
can be matched (by constructing biometric templates of each) to confirm
that the travel document has not been altered. This technique matches the
person with their passport and with the database recording the data that was put
in that passport at the time it was issued.
Four-way check: a fourth confirmatory check, albeit not an electronic one, is
visually matching the results of the three-way check with the digitized
photograph on the data page of the traveler's passport.
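A simple Python sketch of the three-way check described above follows; the match function and the 0-1 similarity scale are assumptions made for illustration, not part of the ICAO procedure:

def three_way_check(live, document, database, match, threshold=0.9):
    """Pairwise-match the live capture, the document image and the central
    database image; all three scores must reach the threshold."""
    scores = {
        "live vs document": match(live, document),
        "live vs database": match(live, database),
        "document vs database": match(document, database),
    }
    return all(score >= threshold for score in scores.values()), scores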
Besides the enrolment and border security applications of biometrics, as
manifested in one-to-one and one-to-many matching, States should also
have regard to, and set their own criteria for, the accuracy of the
biometric matching functions of the system. Issuing States must encode one
or more facial, fingerprint, palm print or iris biometrics on the MRTD as per
LDS standards (or on a database accessible to the Receiving State). Given
an ICAO-standardized biometric image and/or template, Receiving States
must select their own biometric verification software, and determine their
own biometric scoring thresholds for identity verification acceptance rates –
and referral of imposters.
7. CONCLUSIONS
This work represents an attempt to acknowledge and account for an
inspection system for biometric passports using face, fingerprint
and iris recognition towards improved identification. The application
of biometric recognition in passports requires high accuracy rates, secure
data storage, secure transfer of data and reliable generation of biometric
data. Because the passport data is not required to be encrypted, identity thieves and
terrorists can easily obtain the biometric information. The discrepancy in
privacy laws between different countries is a barrier to global
implementation and acceptance of biometric passports. A possible solution
to unencrypted wireless access to passport data is to store a unique
cryptographic key in printed form that is also obtained upon validation. The
key is then used to decrypt the passport data and forces thieves to physically
obtain passports to steal personal information. More research into the
technology, additional access and auditing policies, and further security
enhancements are required before biometric recognition can be considered a
viable solution to biometric security in passports, since adversaries may
exploit the passports with the lowest level of security. The inclusion of
biometric identification information in machine readable passports will
improve their robustness against identity theft if additional security
measures are implemented to compensate for the limitations of the
biometric technologies. It enables countries to digitize their security at
border control and provides faster and safer processing of e-passport
bearers. We have described the main cryptographic features and biometrics used
with e-passports and considered the surrounding procedures. E-passports may
provide valuable experience in how to build more secure biometric
identification platforms in the years to come.
REFERENCES
[1] A. K. Jain and R. Bolle, "Biometric Personal Identification in Networked Society", Norwell, MA: Kluwer, 2010.
[2] C. Hesher, A. Srivastava and G. Erlebacher, "A novel technique for face recognition using range images", in Proceedings of the Seventh International Symposium on Signal Processing and Its Applications, 2009.
[3] Home Affairs Justice, "EU standard specifications for security features and biometrics in passports and travel documents", Technical report, European Union, 2008.
[4] ICAO, "Machine readable travel documents", Technical report, ICAO, 2011.
[5] D. Klugler, "Advance security mechanisms for machine readable travel documents", Technical report, Federal Office for Information Security (BSI), Germany, 2012.
[6] ICAO, "Machine Readable Travel Documents", Part 1: Machine Readable Passports, Fifth Edition, 2007.
[7] Riscure Security Lab, "E-passport privacy attack", presented at Cards Asia Singapore, April 2012.
[8] D. Monar, A. Juels and D. Wagner, "Security and privacy issues in e-passports", Cryptology ePrint Archive, Report 2005/095, 2011.
[9] ICAO, "Biometrics Deployment of Machine Readable Travel Documents", Version 2.0, May 2010.
Comparative Study of WLAN,
WPAN, WiMAX Technologies
Prof. Mangesh M. Ghonge
Assistant Professor
Jagadambha College of Engineering & Technology
Yavatmal-445001
Prof. Suraj G. Gupta
Assistant Professor
Jawaharlal Darda Institute of Engineering & Technology
Yavatmal-445001
ABSTRACT
Today's wireless communication systems can be classified into two groups. The first group
of technologies provides low data rates with mobility, while the other provides high data rates
and bandwidth with small coverage; cellular systems and Broadband Wireless Access
technologies, respectively, are typical examples. In this paper, the WLAN, WPAN
and WiMAX technologies are introduced and a comparative study in terms of peak data
rate, bandwidth, multiple access techniques, mobility, coverage, standardization and
market penetration is presented.
Keywords
WLAN, WPAN, WiMAX.
1. INTRODUCTION
Wireless broadband technologies promise to make all kinds of information
available anywhere, anytime, at a low cost, to a large portion of the
population. From the end user perspective the new technologies provide the
necessary means to make life more convenient by creating differentiated and
personalized services. In the last decade we have primarily been used to
reaching people via voice, but there are of course other forms of
communication such as gestures, facial expressions, images and even moving
pictures. Today, more than ever, we need wireless user devices that offer
mobility and flexibility, with wide coverage, in small, light and affordable
terminals. As circuit-switched networks have evolved towards packet-switched
technology, high data rates have been achieved and this evolution has opened new
opportunities. 2.5G and 3G networks provide high mobility for packet-domain
users. On the other hand, the development of the technology has
opened a new era of WLAN, WPAN and WiMAX communication.
The emerging IP-based services provide broadband data access in
fixed, mobile and nomadic environments, supporting voice, video and data
traffic with high speed, high capacity and low cost per bit. In this paper, the
WLAN, WPAN and WiMAX technologies are introduced and a comparative
analysis is carried out.
2. LITERATURE REVIEW
WLAN technologies first became available in late 1990, when vendors
began introducing products that operated within the 900 MHz frequency
band. These solutions, which used non-standard, proprietary designs,
provided data transfer rates of approximately 1 Mbps, considerably
slower than the 10 Mbps speed provided by most wired LANs at that time.
In 1992, vendors began selling WLAN products that used the 2.4 GHz band.
Although these products provided higher data transfer rates than the 900 MHz
band products, they were expensive, still offered comparatively low data rates,
were prone to radio interference, and were often designed around proprietary
radio frequency technologies. The Institute of Electrical and Electronics
Engineers started the IEEE 802.11 project in 1990 with the objective of
developing a MAC and PHY layer specification for wireless connectivity for
fixed, portable and moving stations within an area.
3. IEEE 802.11 WLAN/WI-FI
Wireless LAN (WLAN, also known as Wi-Fi) is a set of low tier, terrestrial,
network technologies for data communication. The WLAN standard
operates on the 2.4 GHz and 5 GHz Industrial, Science and Medical (ISM)
frequency bands. It is specified by the IEEE 802.11 standard and it comes in
many different variations like IEEE 802.11a/b/g/n. The application of
WLAN has been most visible in the consumer market where most portable
computers support at least one of the variations. In the present study, we
give an overview of the different standards in Table 1, and four WLAN
standards, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and IEEE 802.11n,
are selected for comparison because they are the most popular among users.
It is noted that all 802.11 standards use the Ethernet protocol and Carrier
Sense Multiple Access with Collision Avoidance (CSMA/CA) for path
sharing [1][12][9].
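For illustration, a toy Python sketch of the binary exponential backoff used by CSMA/CA is shown below; the contention window limits and the 9 µs slot time are typical 802.11 values chosen for the example, not figures taken from this paper:

import random

def csma_ca_backoff(retry: int, cw_min: int = 15, cw_max: int = 1023,
                    slot_time_us: float = 9.0) -> float:
    """Return a random backoff delay in microseconds for the given retry count.
    The contention window doubles after each unsuccessful attempt, up to cw_max."""
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)
    return random.randint(0, cw) * slot_time_us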
Standards are a set of specifications that all manufacturers must follow in
order for their products to be compatible. This is important to ensure
interoperability between devices in the market. Standards may also define
optional features that individual manufacturers may or may not
implement in their products.
4. OVERVIEW OF THE IEEE 802.11 WLAN STANDARDS
Table 1. List of current and future IEEE 802.11 WLAN/Wi-Fi standards [2][6][7][9][10]

Sr. No. | IEEE 802.11 Standard | Year of Release | Comments
01 | IEEE 802.11a | 1999 | Speed 54 Mbit/s in the 5 GHz band
02 | IEEE 802.11b | 1999 | Enhancements to 802.11 to support 5.5 and 11 Mbit/s speeds
03 | IEEE 802.11c | 2001 | Bridge operation procedures; included in the IEEE 802.1D standard
04 | IEEE 802.11d | 2001 | International (country-to-country) roaming extensions
05 | IEEE 802.11e | 2005 | Enhancements: QoS, including packet bursting
06 | IEEE 802.11F | 2003 | Inter-Access Point Protocol; withdrawn February 2006
07 | IEEE 802.11g | 2003 | 54 Mbit/s, 2.4 GHz standard (backwards compatible with b)
08 | IEEE 802.11h | 2004 | Spectrum Managed 802.11a (5 GHz) for European compatibility
09 | IEEE 802.11i | 2004 | Enhanced security
10 | IEEE 802.11j | 2004 | Extensions for Japan
11 | IEEE 802.11k | 2008 | Radio resource measurement enhancements
12 | IEEE 802.11n | 2009 | Higher throughput improvements using Multiple Input Multiple Output (MIMO)
13 | IEEE 802.11p | 2010 | WAVE (Wireless Access for the Vehicular Environment)
15 | IEEE 802.11r | 2008 | Fast BSS transition (FT)
16 | IEEE 802.11s | July 2011 | Mesh networking, Extended Service Set (ESS)
17 | IEEE 802.11t | - | Recommended practice for evaluation of 802.11 wireless performance
18 | IEEE 802.11u | February 2011 | Improvements related to hotspots and third-party authorization of clients, e.g. cellular network offload
19 | IEEE 802.11v | February 2011 | Wireless network management
20 | IEEE 802.11w | September 2009 | Protected Management Frames
21 | IEEE 802.11x | - | Extensible authentication network for enhancement of security
22 | IEEE 802.11y | 2008 | 3650-3700 MHz operation in the U.S.
23 | IEEE 802.11z | September 2010 | Extensions to Direct Link Setup (DLS)
24 | IEEE 802.11aa | June 2012 | Robust streaming of Audio Video Transport Streams
25 | IEEE 802.11ad | December 2012 | Very High Throughput at 60 GHz
26 | IEEE 802.11ae | March 2012 | Prioritization of Management Frames
In process at the time of writing:
27 | IEEE 802.11ac | February 2014 | Very High Throughput below 6 GHz; potential improvements over 802.11n: better modulation scheme (expected ~10% throughput increase), wider channels (80 to 160 MHz), multi-user MIMO
28 | IEEE 802.11af | June 2014 | TV whitespace
29 | IEEE 802.11ah | January 2016 | Sub-1 GHz sensor networks, smart metering
30 | IEEE 802.11ai | February 2015 | Fast Initial Link Setup
31 | IEEE 802.11mc | March 2015 | Maintenance of the standard
32 | IEEE 802.11aj | October 2016 | China Millimeter Wave
33 | IEEE 802.11aq | May 2015 | Pre-association discovery
34 | IEEE 802.11ak | - | General link
4.1 IEEE 802.11a
Ratification of 802.11a took place in 1999. The 802.11a standard uses the 5
GHz spectrum and has a maximum theoretical 54 Mbps data rate. Like in
802.11g, as signal strength weakens due to increased distance, attenuation
(signal loss) through obstacles or high noise in the frequency band, the data
rate automatically adjusts to lower rates (54/48/36/24/12/9/6 Mbps) to
maintain the connection. The 5 GHz spectrum has higher attenuation (more
signal loss) than lower frequencies such as the 2.4 GHz band used in the
802.11b/g standards, so penetration through walls gives poorer performance than at 2.4 GHz.
Products with 802.11a are typically found in large corporate networks or
with wireless Internet service providers in outdoor backbone networks [9]
[12].
4.2 IEEE 802.11b
In 1995, the Federal Communications Commission had allocated several
bands of wireless spectrum for use without a license. The FCC stipulated
that the use of spread spectrum technology would be required in any
devices. In 1990, the IEEE began exploring a standard. In 1997 the original
802.11 standard was ratified; it is now obsolete. Then, in July 1999, the 802.11b
standard was ratified. The 802.11b standard provides a maximum theoretical
data rate of 11 megabits per second (Mbps) in the 2.4 GHz Industrial,
Scientific and Medical (ISM) band [9][12].
4.3 IEEE 802.11g
In 2003, the IEEE ratified the 802.11g standard with a maximum theoretical
data rate of 54 megabits per second (Mbps) in the 2.4 GHz ISM band. As
signal strength weakens due to increased distance, attenuation (signal loss)
through obstacles or high noise in the frequency band, the data rate
automatically adjusts to lower rates (54/48/36/24/12/9/6 Mbps) to maintain
the connection. When both 802.11b and 802.11g clients are connected to an
802.11g router, the 802.11g clients will have a lower data rate. Many routers
provide the option of allowing mixed 802.11b/g clients or they may be set to
either 802.11b-only or 802.11g-only clients. To put 54 Mbps in perspective,
DSL or cable modem services typically offer data rates ranging from 768
kbps (less than 1 Mbps) to 6 Mbps, so 802.11g offers an attractive data
rate for the majority of users. The 802.11g standard is backwards
compatible with the 802.11b standard. Today 802.11g is still the most
commonly deployed standard [9][12].
4.4 IEEE 802.11n
In January 2004 the IEEE 802.11 task group initiated work on 802.11n. There
were numerous draft specifications, delays and disagreements among
committee members; even in the process of standards development, politics
are involved. The proposed amendment went through many postponements
before it was finally ratified in 2009 (see Table 1).
The goal of 802.11n is to significantly increase the data throughput rate.
While there are a number of technical changes, one important change is the
addition of multiple-input multiple-output (MIMO) operation and spatial
multiplexing. Multiple antennas are used in MIMO, which requires multiple
radios and thus more electrical power. 802.11n operates in both the 2.4 GHz
(802.11b/g) and 5 GHz (802.11a) bands, which requires significant site
planning when installing 802.11n devices. The 802.11n specifications
provide both 20 MHz and 40 MHz channel options, versus the 20 MHz channels
of the 802.11a and 802.11b/g standards. By bonding two adjacent 20 MHz
channels into a single 40 MHz channel, 802.11n can roughly double the data rate.
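As a back-of-the-envelope illustration of these scaling factors, the small Python sketch below takes 72.2 Mbps (listed in Table 2) as the approximate single-stream 20 MHz peak rate; treating 40 MHz channel bonding as a simple doubling and each spatial stream as another multiple is an approximation, not the exact 802.11n MCS table:

def approx_80211n_phy_rate(base_rate_mbps: float = 72.2,
                           channel_mhz: int = 40,
                           spatial_streams: int = 2) -> float:
    """Rough peak PHY rate: base single-stream 20 MHz rate, scaled by
    channel bonding (x2 for 40 MHz) and the number of spatial streams."""
    width_factor = 2.0 if channel_mhz == 40 else 1.0
    return base_rate_mbps * width_factor * spatial_streams

# Two spatial streams on a 40 MHz channel give roughly 290 Mbps, close to the
# commonly quoted 300 Mbps figure for two-stream 802.11n.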
However, using 40 MHz channels in the 2.4 GHz band results in interference,
which inhibits data throughput, and is therefore neither recommended nor
likely; it is recommended to use 20 MHz channels in the 2.4 GHz spectrum,
as 802.11b/g does. For best results with 802.11n, the 5 GHz spectrum is the
better option. Deployment of 802.11n will take some planning effort in
frequency and channel selection; some 5 GHz channels require dynamic
frequency selection (DFS) technology in order to be utilized [12][8][9].
Here we compare the IEEE 802.11 a/b/g/n WLAN/Wi-Fi standards using some
basic characteristics: operating frequency, modulation technique, data rate
(Mbps), slot time (µs), preamble, throughput, speed, indoor range, outdoor
range, multiple access, channel bandwidth, half/full duplex, number of spatial
streams, mode of operation (ad-hoc, infrastructure, VANET), FEC rate, and
licensed/unlicensed spectrum.
Table 2. Comparison overview of the WLAN/Wi-Fi IEEE 802.11 a/b/g/n standards [1][3][4][5][9][11]

Characteristic | IEEE 802.11a | IEEE 802.11b | IEEE 802.11g | IEEE 802.11n
Operating frequency | 5 GHz UNII/ISM bands | 2.4 GHz ISM band | 2.4 GHz ISM band | 2.4 and 5 GHz
Modulation technique | BPSK, QPSK, 16-/64-QAM, OFDM | QPSK, DBPSK, DQPSK, CCK, DSSS | BPSK, QPSK, 16-/64-QAM, OFDM | 64-QAM, Alamouti, OFDM, CCK, DSSS
Data rate (Mbps) | 6, 9, 12, 18, 24, 36, 48, 54 | 1, 2, 5.5, 11 | 1, 2, 5.5, 11, 6, 9, 12, 18, 24, 36, 48, 54 | 7.2, 14.4, 21.7, 28.9, 43.3, 57.8, 65, 72.2, 15, 30, 45, 60, 90, 120, 135, 150
Slot time (µs) | 9 | 20 | 20 (9 optional) | less than 9
Preamble | OFDM | Long/short (optional) | Long/short/OFDM | HT PHY for 2.4 and 5 GHz
Throughput | 23 Mbit/s | 4.3 Mbit/s | 19 Mbit/s | 74 Mbit/s
Speed | 54 Mbit/s | 11 Mbit/s | 54 Mbit/s | 248 Mbit/s
Indoor range | 35 m | 38 m | 38 m | 70 m
Outdoor range | 120 m | 140 m | 140 m | 250 m
Multiple access | CSMA/CA | CSMA/CA | CSMA/CA | CSMA/CA
Channel bandwidth | 20 MHz | 20, 25 MHz | 20 MHz | 20 or 40 MHz
Half/full duplex | Half | Half | Half | Full duplex
Number of spatial streams | 1 | 1 | 1 | 1, 2, 3 or 4
Ad-hoc (mode of operation) | Yes | Yes | Yes | Yes
Infrastructure | Yes | Yes | Yes | Yes
VANET | Yes | Yes | Yes | Yes
FEC rate | 1/2, 2/3, 3/4 | NA | 1/2, 2/3, 3/4 | 3/4, 2/3 and 5/6
Licensed/Unlicensed | Unlicensed | Unlicensed | Unlicensed | Unlicensed
4.5 Wireless Personal Area Network (WPAN)
Wireless Personal Area Network (WPAN) technologies have fueled the
development as well as the wide proliferation of wireless personal devices
(e.g. PDAs, Bluetooth headsets and the PSP). Yet, the popularity of these
wireless devices has resulted in many forms of frequency spectrum clash
amongst the different wireless technologies. To understand the performance
of these wireless devices in different interference situations, it is
increasingly important to study the coexistence issue amongst the existing
wireless technologies. Various wireless technologies have been developed
for WPAN purposes. A WPAN could serve to interconnect all the ordinary
computing and communicating devices that many people have on their desk
or carry with them today; or it could serve a more specialized purpose such
as allowing the surgeon and other team members to communicate during an
operation. The technology for WPANs is in its infancy and is undergoing
rapid development. Proposed operating frequencies are around 2.4 GHz in
digital modes. The objective is to facilitate seamless operation among home
or business devices and systems. Wireless PAN is based on the standard
IEEE 802.15. In this paper we concentrate on the three most widely known
IEEE standards, 802.15.1, 802.15.3 and 802.15.4, and compare them on the
basis of their basic characteristics, applications, limitations and use.
4.6 IEEE 802.15.1. IEEE 802.15 is a working group of the Institute of Electrical and
Electronics Engineers (IEEE) 802 standards committee which specifies
Wireless Personal Area Network (WPAN) standards; it includes seven task
groups. IEEE 802.15.1 [16] is a WPAN standard based on the Bluetooth v1.1
specification, which is a short-range radio technology operating in the
unlicensed 2.4 GHz ISM frequency band. The original goal of Bluetooth was
to replace the numerous proprietary cables and provide a universal interface
for devices to communicate with each other. However, Bluetooth technology
soon came to be used to interconnect various Bluetooth devices into so-called
personal area networks, facilitating more creative ways of exchanging data.
The low cost and small footprint of Bluetooth chips consequently met with
high demand [9][10][11][14].
4.7 IEEE 802.15.3 [17] is designed to facilitate High-Rate Wireless
Personal Area Networks (HR-WPAN) for fixed, portable and moving
devices within a personal operating space. The main purpose of IEEE
802.15.3 is to provide low-cost, low-complexity, low-power-consumption,
high-data-rate connectivity for wireless personal devices; it is designed to
support a data rate of at least 11 Mbps within a range of at least 10 meters.
The IEEE 802.15.3 standard operates in the 2.4 GHz ISM frequency
band. Unlike IEEE 802.15.1, which employs FHSS at the PHY layer, IEEE
802.15.3 uses Direct Sequence Spread Spectrum (DSSS), and it does not
allow changes of operating channel once a connection is initiated
[9][10][11][14].
4.8 IEEE 802.15.4 [18] addresses the needs of Low-Rate Wireless Personal
Area Networks (LR-WPAN). While other WLAN (e.g. IEEE 802.11a/b/g)
and WPAN (e.g. IEEE 802.15.1 and 802.15.3) technologies focus on
providing high data throughput over wireless ad hoc networks, IEEE
802.15.4 is designed for wireless networks that are mostly static, large, and
consume little bandwidth and power. The IEEE 802.15.4 technology is
therefore anticipated to enable various applications in the fields of home
networking, automotive networks, industrial networks, interactive toys and
remote metering [9][10][11][14].
Here we compare the different WPAN standards on the basis of basic
characteristics such as topic, operational spectrum, physical layer detail,
channel access, maximum data rate, modulation technique, coverage,
approximate range, power level issues, interference, price, security, receiver
bandwidth, number of channels, applications, mode of operation (ad hoc,
infrastructure, VANET), licensed/unlicensed spectrum and QoS needs.
Table 3. Comparison of the IEEE WPAN standards [3][5][13][12][9][10][11][14]

Characteristic | IEEE 802.15.1 | IEEE 802.15.3 | IEEE 802.15.4
Topic | Bluetooth | High-rate WPAN | Low-rate WPAN
Operational spectrum | 2.4 GHz ISM band | 2.402-2.480 GHz ISM band | 2.4 GHz and 868/915 MHz
Physical layer detail | FHSS, 1600 hops per second | Uncoded QPSK, Trellis-Coded QPSK or 16/32/64-QAM | DSSS with BPSK or MSK (O-QPSK)
Channel access | Master-slave polling, Time Division Duplex (TDD) | CSMA-CA and Guaranteed Time Slots (GTS) in a superframe structure | CSMA-CA and Guaranteed Time Slots (GTS) in a superframe structure
Maximum data rate | Up to 1 Mbps (0.72) / 3 Mbps | 11-55 Mbps / 110 Mbps | 20 kbps (868 MHz), 40 kbps (915 MHz), 250 kbps (2.4 GHz)
Modulation technique | 8DPSK, DQPSK, π/4-DQPSK, GFSK, AFH | QPSK, DQPSK, 16/32/64-QAM | BPSK, O-QPSK, ASK, DSSS, PSSS
Coverage | <10 m | <10 m | <20 m
Approximate range | 100 m | 10 m | 75 m
Power level issues | 1 mA-60 mA | <80 mA | Very low current drain (20-50 µA)
Interference | Present | Present | Present
Price | Low (<$10) | Medium | Very low
Security | Less secure; uses SAFER+ encryption at the baseband layer and relies on higher-layer security | Very high level of security including privacy, encryption and digital service certificates | Security features in development
rcv bandwidth | 1 MHz | 15 MHz | 2 MHz
Number of channels | 79 | 5 | 16
Applications | WPAN | HR-WPAN | LR-WPAN
Ad hoc | Yes | Yes | Yes
Infrastructure | No | No | No
VANET | Yes | Yes | Yes
License/Unlicensed | Unlicensed | Unlicensed | Unlicensed
QoS needs | QoS suitable for voice applications | Very high QoS | Relaxed needs for data rate and QoS
4.9 Worldwide Interoperability for Microwave Access (WiMAX)
WiMAX (Worldwide Interoperability for Microwave Access) is a wireless
communications standard designed to provide 30 to 40 megabit-per-second
data rates, with the 2011 update providing up to 1 Gbit/s for fixed stations.
The name "WiMAX" was created by the WiMAX Forum, which was
formed in June 2001 to promote conformity and interoperability of the
standard. The forum describes WiMAX as "a standards-based technology
enabling the delivery of last mile wireless broadband access as an
alternative to cable and DSL". WiMAX is based on IEEE 802.16, a family of
telecommunications protocols that provide fixed and mobile Internet access.
The 2005 WiMAX revision provided bit rates up to 40 Mbit/s, with the 2011
update up to 1 Gbit/s for fixed stations. It supports frequency bands in
the range between 2 GHz and 11 GHz and specifies a metropolitan area
networking protocol that enables a wireless alternative to cable, DSL
and T1-level services for last-mile broadband access, as well as providing
backhaul for 802.11 hotspots.
WiMAX allows for infrastructure growth in underserved markets and is
today considered the most cost-effective means of delivering secure and
reliable bandwidth capable of supporting business-critical, real-time
applications for enterprises, institutions and municipalities. It has proven
itself on the global stage as a very effective last-mile solution. In the United
States, though, licensed spectrum availability and equipment limitations have
held up early WiMAX adoption. In fact, while there are currently more than 1.2
million WiMAX subscribers worldwide, only about 11,000 of those are
in the United States. Future growth in this market will be driven by
wireless ISPs such as Clearwire, which intends to cover 120 million covered
POPs in 80 markets with WiMAX by the end of 2010. Growth will also be
driven by the availability of the 3.65 GHz spectrum that the FCC opened up
this past year. In this paper, we compare the IEEE standards 802.16a,
802.16d, 802.16e and 802.16m on the basis of their basic characteristics,
applications, limitations and use [2][19].
Table 4. Different IEEE standards under the 802.16 standard [19]

Standard | Description | Status
802.16-2001 | Fixed Broadband Wireless Access (10-66 GHz) | Superseded
802.16.2-2001 | Recommended practice for coexistence | Superseded
802.16c-2002 | System profiles for 10-66 GHz | Superseded
802.16a-2003 | Physical layer and MAC definitions for 2-11 GHz | Superseded
P802.16b | License-exempt frequencies (project withdrawn) | Withdrawn
P802.16d | Maintenance and system profiles for 2-11 GHz (project merged into 802.16-2004) | Merged
802.16-2004 | Air interface for Fixed Broadband Wireless Access systems (rollup of 802.16-2001, 802.16a, 802.16c and P802.16d) | Superseded
P802.16.2a | Coexistence with 2-11 GHz and 23.5-43.5 GHz (project merged into 802.16.2-2004) | Merged
802.16.2-2004 | Recommended practice for coexistence (maintenance and rollup of 802.16.2-2001 and P802.16.2a) | Current
802.16f-2005 | Management Information Base (MIB) for 802.16-2004 | Superseded
802.16-2004/Cor1-2005 | Corrections for fixed operations (co-published with 802.16e-2005) | Superseded
802.16e-2005 | Mobile Broadband Wireless Access System | Superseded
802.16k-2007 | Bridging of 802.16 (an amendment to IEEE 802.1D) | Current
802.16g-2007 | Management plane procedures and services | Superseded
P802.16i | Mobile Management Information Base (project merged into 802.16-2009) | Merged
802.16-2009 | Air interface for Fixed and Mobile Broadband Wireless Access systems (rollup of 802.16-2004, 802.16-2004/Cor 1, 802.16e, 802.16f, 802.16g and P802.16i) | Current
802.16j-2009 | Multihop relay | Current
802.16h-2010 | Improved coexistence mechanisms for license-exempt operation | Current
802.16m-2011 | Advanced Air Interface with data rates of 100 Mbit/s mobile and 1 Gbit/s fixed; also known as Mobile WiMAX Release 2 or WirelessMAN-Advanced; aims at fulfilling the ITU-R IMT-Advanced requirements on 4G systems | Current
P802.16n | Higher Reliability Networks | In progress
P802.16p | Enhancements to Support Machine-to-Machine Applications | In progress
Table 5 compares the WiMAX standards on the basis of spectrum bandwidth,
propagation, throughput, channelization, modulation, usage/mobility, range,
mode of network (ad-hoc, infrastructure, VANET) and licensed/unlicensed operation.
Table 5. Comparison of Different WiMAX Standard 802.16a/ d/ e/ m
[2][3][5][9][10][11][20].
Feature | WiMAX 802.16a | Fixed WiMAX 802.16d | Mobile WiMAX 802.16e | Mobile WiMAX 2.0 802.16m
Spectrum Bandwidth | 10-66 GHz | 2-11 GHz | 2-6 GHz | Sub 6 GHz
Propagation | LOS | NLOS | NLOS | NLOS
Throughput | up to 134 Mbps | up to 75 Mbps | up to 15/30 Mbps | over 300 Mbps
Channelization | 28 MHz | 20 MHz | 5 MHz / 10 MHz | 100 MHz
Modulation | QPSK, 16QAM | OFDM (256 subcarriers): BPSK, QPSK, 16QAM, 64QAM | OFDMA: QPSK, 16QAM, 64QAM, 256QAM (optional) | 64QAM
Usage/Mobility | WMAN Fixed | WMAN Fixed | WMAN Portable | WMAN Portable
Range | Typical 4-6 miles | Typical 4-6 miles | Typical 1-3 miles | Typical 1-3 miles
Ad-hoc | Yes | Yes | Yes | Yes
Infrastructure | Yes | Yes | Yes | Yes
VANET | Yes | Yes | Yes | Yes
Licensed/Unlicensed | Unlicensed | Unlicensed | Licensed (2.3, 2.5, 3.5, 3.7 and 5.8 GHz) | Unlicensed
Finally, Table 6 compares the technologies WLAN/Wi-Fi, WPAN and WiMAX on the
basis of IEEE standard, operating frequency, bandwidth, data rate, multiple
access, coverage, range, mode of network and target market.
Table 6. Comparison of WLAN, WPAN, WiMAX [1][3][4][5][9]
[10][11][12][13][14][20].
Feature | WLAN/Wi-Fi | WPAN | WiMAX Fixed/Mobile
IEEE Standard | 802.11 | 802.15 | 802.16
Operating Frequency | 2.4-5 GHz | 2.4 GHz | 10-66 GHz
Bandwidth | 20 MHz | 15 MHz | 5-6 GHz
Data rate | 1-150 Mbps | 40 kbps-110 Mbps | 15, 30, 75, 134, over 300 Mbps
Multiple Access | CSMA/CA | CSMA-CA | OFDM/OFDMA
Coverage | Small | Small | Low
Range | 250 m | 10-75 m | 1-6 miles
Mode of Network | Ad-hoc, Infrastructure and VANET | Ad-hoc and VANET | Ad-hoc, Infrastructure and VANET
Target Market | Home/Enterprise | Home/Enterprise | Home/Enterprise
5. CONCLUSION
This paper has presented a description of the most prominent developing
wireless access networks. A detailed technical comparative analysis of the
WLAN, WPAN and WiMAX wireless networks, which provide an alternative
solution to the problem of information access in remote, inaccessible areas
where wired networks are not cost-effective, has been carried out. This
work shows that the goal of the WiMAX standard is not to replace Wi-Fi in
its applications, but rather to supplement it in order to form a wireless
network web.
REFERENCES
[1] Anil Kumar Singh, Bharat Mishra, “COMPARATIVE STUDY ON WIRELESS
LOCAL AREA NETWORK STANDARDS”, International Journal of Applied
Engineering and Technology ISSN: 2277-212X , 2012 Vol. 2 (3) July-September.
[2] Sourangsu Banerji, Rahul Singha Chowdhury, "Wi-Fi & WiMAX: A Comparative
Study”, Indian Journal of Engineering Vol 2,No. 5, March 2013.
[3] Jan Magne Tjensvold, “Comparison of the IEEE 802.11, 802.15.1, 802.15.4 and
802.15.6 wireless standards”, September 18, 2007
[4] Raju Sharma, Dr. Gurpal Singh, Rahul Agnihotri “Comparison of performance analysis
of 802.11a, 802.11b and 802.11g standard”, (IJCSE) International Journal on Computer
Science and Engineering Vol. 02, No. 06, 2010
[5] Amit R. Welekar, Prashant Borkar, S. S. Dorle, “Comparative Study of IEEE 802.11,
802.15,802.16, 802.20 Standards for Distributed VANET” International Journal of
Electrical and Electronics Engineering(IJEEE) Vol-1,Iss-3,2012
[6] Vijay Chandramouli, “A Detailed Study on Wireless LAN Technologies”
[7] Mr. Jha Rakesh , Mr. Wankhede Vishal A. , Prof. Dr. Upena Dalal, “A Survey of
Mobile WiMAX IEEE 802.16m Standard”,(IJCSIS) International Journal of Computer
Science and Information Security, Vol. 8, No. 1, April 2010.
[8] Hyeopgeon Lee, Aran Kim, Kyounghwa Lee, Yongtae Shin, “A Improved Channel
Access Algorithm for IEEE 802.15.4 WPAN”, International Journal of Security and Its
Applications, Vol. 6, No. 2, April 2012.
[9] Carlo de Morais Cordeiro, Dharma Prakash Agrawal, “ADHOC &SENSOR
NETWORKS Theory and Application”, World Scientific Publishing Co. Pvt. Ltd,
2010
[10] IEEE Std 802.15.4-2011, Low-Rate Wireless Personal Area Networks (LR-WPANs).
[11] IEEE 802.15 Working Group for WPAN, http://ieee802.org/15/index.html
[12] Ling-Jyh Chen, Tony Sun, Mario Gerla, “Modeling Channel Conflict Probabilities
between IEEE 802.15 based Wireless Personal Area Networks”
[13] https://en.wikipedia.org/wiki/IEEE_802.15
[14] “IEEE 802.15 wpan task group 1 (tg1),” http://www.ieee802.org/15/pub/TG1.html.
[15] “IEEE 802.15 wpan task group 3 (tg3),” http://www.ieee802.org/15/pub/TG3.html.
[16] “IEEE 802.15.4 wpan-lr task group,” http://www.ieee802.org/15/pub/TG4.html.
[17] Aktul Kavas, “Comparative Analysis of WLAN, WiMAX and UMTS Technologies”,
PIERS Proceedings, August 27-30, Prague, Czech Republic, 2007.
A New Method for Web Development
using Search Engine Optimization
Chutisant Kerdvibulvech
Department of Information & Communication Technology, Rangsit University
52/347 Muang-Ake, Paholyothin Rd, Lak-Hok, Patum Thani 12000, Thailand
Kittidech Impaiboon
Department of Information & Communication Technology, Rangsit University
52/347 Muang-Ake, Paholyothin Rd, Lak-Hok, Patum Thani 12000, Thailand
ABSTRACT
In this paper, we present a website development approach that utilizes Meta tags and
the code we used in our website to improve its SEO (Search Engine Optimization)
ranking in search engines. When people talk about search engines, many think of
Google, but Google is not the only search engine in the world; nowadays there are
many search engines, for example Bing, Ask and Yahoo. We present evidence from a
survey we gave to 20 people at Unilever Thailand, with graphs showing that our new
website is better than the old one. We then describe the web browser structure,
which is very important for web developers when they develop a website.
Keywords
Web Design; Search Engine Optimization; Code Structure.
1. INTRODUCTION
Technology in all fields has grown faster and stronger than ever before.
What machines did in the past is almost infinitesimal compared to what we can
achieve today. Yet the true fuel of such innovations lies with us, the human
need [1]: the need that has driven many scientists and researchers around the
world to further our understanding of the environment we reside in [2]. As the
chain of evolution has proven time and again, to adapt is to survive, which
brings us to one of mankind’s arguably greatest achievements, the Internet.
We are driven by the need for social interaction, self-growth and knowledge,
and through this determination we have given birth to an ever-growing living
beast [3]. The World Wide Web, the pinnacle of human communication and
interaction, has demolished physical limitations and distances [4]. And in this
web of information, brought about by sophisticated data communication techniques
and a multitude of complex networking, lies an equally sophisticated being:
the front page of the Internet, the very frontier we use to interact with it,
the websites.
2. SEARCH ENGINE OPTIMIZATION
Nowadays we have technology that helps us find the information we want; it is
called the “search engine”. Imagine if we did not have search engines: when
people wanted to find information it would be very difficult, because they would
have to remember the URL [5]. Related studies have shown how to develop web
applications that invoke web services [6][7].
Search engine optimization is the approach we use to help increase our web
ranking in the search engine; the following sections describe what we did, and
did not do, in the code of our website.
Why do we have to utilize search engine optimization, or SEO? The answer is that
SEO increases the chances that people searching for education or an international
university in Thailand will find our website. How do we do that? It is all about
the “tags” in HTML code, and those tags are called “Meta tags”.
Meta tags are source code in the head section of an HTML document; normally,
when we open a web page, part of the head tag contains Meta tags that describe
the attributes of that web page. The robot of each search engine that sees the
Meta tags will process and index our website in its system. Let us move on to
which Meta tags we used and how we used them in our website.
2.1 Meta Content Type Tag
This tag is very important for SEO because it makes web pages display
correctly, with the right character set and type of document on the website.
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
This is how we declare the Meta content type tag in our website to support
SEO and increase our web ranking.
2.2 Meta Description Tag
This tag is used to give a brief description of the website. The tag content
should be neither too short nor too long: according to SEO guidelines it should
be about 150 characters, and it is important that the description written in the
Meta Description tag be related to the page. For example, if the page is about
the university, then the description must be about the university.
<META NAME='DESCRIPTION' CONTENT='write the description of the
website page'/>
This is the standard form of the Meta Description tag; next is the Meta
Description tag that we used on our website.
<meta name="description" content="Rangsit International College offers unparalleled education. Study and learn from the best Thailand has to offer in international undergraduate and graduate courses.">
2.3 Title Element - Page Titles
This is another important tag that a lot of people do not know about. The title
tag is used to tell web crawlers what the topic of the current page is.
<title> University website</title>
This is an example of how to use the title tag; let us see how it is used on our
website.
<title>Rangsit International College - Education has never been better
</title>
We use the words "Rangsit International College" and "education" because when
people search for international schools or international universities they can
find us easily, and if they search for education they can find us as well.
2.4 The <div> Tag: Why We Don’t Use the <table> Tag
If you are a web developer, or just somebody viewing the page source of a
website in Thailand, you will see that a lot of websites in our country use the
table tag to create web page layouts. Why is using the table tag in our website
a bad thing? The answer is that when the search engine robot (spider, crawler)
reads the table tag it reads the content line by line, and the table tag is laid
out exactly like a table, as its name suggests.
When the search engine robot reads the information, it will not read it
correctly. This means it will read the content in the wrong order; the problem
is that if the robot reads the content of our website in the wrong way, the
information it indexes will be wrong, or a certain part of the content will be
out of context. Obviously this is bad for us, so we changed to using the <div>
tag when we want to create a container to hold the information in our website.
Tables also take up more space in terms of bytes, making for slower page
loading. Compared to using CSS for layouts, tables take longer to implement and
break up content, especially images. Users accessing the web with screen readers
will also find it difficult to scan the contents of a page. Tables are also bad
in the long run, as they are complicated to edit and manage [8].
<table border="1">
<tr>
<td>January</td><td>$100</td>
</tr>
</table>
This is an example of a table tag in use that makes the search engine robot
read the information on the website in the wrong order.
In our website we use the <div> tag instead; it is better than the <table> tag
mainly because when the spider reads the information it will understand and
index the information in our content clearly.
<div> Information in this box </div>
This is the general structure of the <div> tag; next is one of the <div> tags
that we use in our website.
<div id="ric_logo">
<img src="../images/Ric_logo.png" alt="Rangsit International College" />
</div>
This <div> holds the image that shows the logo of our college.
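For completeness, the following is a minimal sketch of how such a <div> can be positioned with CSS instead of a table cell; the property values are illustrative and are not taken from the actual stylesheet of our site.
#ric_logo {
float: left;   /* place the logo block at the left of the header */
width: 200px;  /* illustrative size */
margin: 10px;
}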
2.5 When you use <img> don’t forget to write information in the alt tag
Why do we need to write the alt attribute in the <img/> tag? Because the search
engine robot cannot read the image and does not know what the image is about, so
according to SEO standards we have to write information in the alt attribute to
explain what the image shows; the search engine robot will then know exactly
what the image tells us, which in turn reveals more information about our web
page, translating into relevant content. That of course depends on what we put
in the alt attribute. Proper usage of alt attributes also helps ensure that our
HTML is valid, meaning it follows the guidelines set by the W3C [9].
<img src="../images/Ric_logo.png" alt="Rangsit International College" />
This is the structure of the <img/> tag in our website; we write the alt text so
that the search engine robot understands what this image is about.
2.6 The On Page Ranking Factors
The on-page ranking factors form the first group of ranking factors: on-the-page
SEO. These factors are the ones that web developers are in total control of;
they have the complete power to create their own content, change it or delete
it. The on-page SEO categories consist of Content, HTML and Architecture.
2.6.1 Content
This is simply a no-brainer: the content is what you put out for your users to
experience, and the only reason they come to your website is because of it. This
is in fact the least technical aspect, but the most vital part; after all,
content is king. Within this category, if we follow the periodic table of SEO
factors, we have five more sub-categories.
2.6.2 Content Quality
What makes your content special? Developers and creators have to put material
out, but not just any material: they need quality. Quality distinguishes
competitors from one another, and people always want the best.
2.6.3 Content Research
Before you know your enemy, you need to know yourself. It is always important to
do your research, especially if you are going to put yourself out in the open.
Learning the most appropriate terms for your content can help by allowing people
to relate to it more easily, or even just understand it more easily.
In SEO terms, this means the keywords that will be used on the site. Keywords
are a factor that search engines use to present ads or search results to users;
basically, this is what the user will type in when searching for something. Of
course keywords alone do not directly determine search results; if it were that
easy, we would not need SEO.
2.6.4 Content Words
This follows closely from content research: after the research you need to put
it to use, using the right keywords for the right content. Keywords help
highlight your content and what you are trying to present. Using them correctly
will make users feel they have found something they needed; abusing them,
however, will just result in them clicking away.
2.6.5 Content Engagement
If you have visitors, you need to make sure they stay and have a good time.
Search engines may try to measure how engaging content is by how long users stay
on your site; if they click in and move away immediately (bounce rate), it may
be a sign the content is not what they were looking for.
2.6.6 Content Freshness
The same old material can get boring very fast; if users do not have enough to
keep them going, they probably will not be coming back for more. Google also
tries to reward content freshness through what it calls "Query Deserves
Freshness": if searches for a topic increase more than usual, Google will try to
find the most current content for that period of time. This might result in a
website getting a slight boost up the search results.
2.6.7 HTML
This section concerns a few important HTML tags used on the website. The entire
structure of the HTML does matter, but these are the tags you have to pay a
little more attention to, as they come up first when a spider wants to know more
about a website.
2.6.8 HTML Title Tag
The title says it all (no pun intended). This tag tells the spider what a page
is all about. Just as every person needs a name to distinguish themselves, or a
report needs a title to explain what it is going to be about, a Title tag should
include keywords that help the page introduce itself to the user and to the
spider as well.
2.6.9 HTML Description Tag
Although this is not something that directly influences the search engine or the
spider, it is still just as important. A description tag is displayed when the
website comes up in the search listings. A good description of the website,
addressed directly to the user, helps the user know what he or she needs to know
about a site without wasting time. This description may very well be the
deciding factor in whether a person clicks on the link or not.
2.6.10 HTML Header Tags
Much like the Title tag, headers function as headings within the page itself.
Headers help identify sections of content to the viewers. Search engines also
make use of these headers to find out more about the content. It is always a
good idea to have appropriate headings for content, if not for the search
engine, then for the users who will be making use of the content.
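As a small illustration (the sub-headings below are hypothetical, not taken from our actual pages), header tags are simply nested by importance:
<h1>Rangsit International College</h1>
<h2>Undergraduate Programs</h2>
<h3>Admission Requirements</h3>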
2.6.11 Architecture
The architecture is the overall structure of your web page, from the HTML
foundations and the way links are set up between pages, up to the very content
and the way you display it.
2.6.12 Site Crawlability
For a spider or crawler to be able to do its job effectively, it needs to be
able to maneuver through the website. A web developer must make sure that the
links all go somewhere and lead to the right places, avoiding dead links and
misplaced links. Depending on how you place your links, some pages might not be
accessible and the content on those pages might never appear in the spider’s
index. Third-party plug-ins like Flash, or even JavaScript, can cause links to
unexpectedly disappear, and using such content instead of links will reduce a
spider’s ability to crawl through the site. The use of site maps, as sketched
below, greatly helps in carefully detailing the pathways of a site.
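A minimal XML sitemap looks like the following; the URL and date are placeholders used only for illustration.
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://www.example.com/</loc>
<lastmod>2013-07-01</lastmod>
</url>
</urlset>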
2.6.13 Site Speed
Fast websites are fast for a reason: they do not want their users to wait, and
nobody likes to wait longer than they should have to. Properly optimizing the
structure and using correct content placement can help speed up website loading
times, while overusing animations or effects can greatly reduce performance. If
the home page is slow, people might not even wait for the site to load.
According to Google, a fast site may get a minor advantage over its slower peers.
Having a descriptive URL: if your website is about kittens, make sure you do not
call it monstertruckmadness.com, unless you actually have monster trucks running
over helpless little kittens. It is important for a URL to be easily read and
short, so that users can quickly get to the point; the more they have to scan
through the URL, the less likely they are to click on it, especially if there is
something shorter they can read.
2.7 The Off Page Ranking Factors
These elements of the table are things that web developers cannot completely
control. They are the external factors that surround the website, but they are
just as impactful. The sub-categories can be broken down into Links, Social,
Trust and Personal.
2.7.1 Links
Links are an extremely important factor, both internally and externally. What
goes around comes around, and on the Internet that happens through links. Links
going out and coming in help determine your standing among other websites. To
search engines, links mean a lot; they are all part of what a website is, but
there are good and bad links, and some are simply better than others.
2.7.2 Link Quality
Google and other search engines have ways of determining how good a link is, or
whether one link is better than another. While it is possible for many different
websites to link to any other website, some sites are more reliable or more
popular, and link-backs from these sites have a high quality that may reflect
the reliability and popularity of your own site.
2.7.3 Link Text
The very phrases that are used in the links pointing to or away from you also
weigh in as a factor not to overlook. Just because a website has a ton of links
does not mean it is better; relevancy is an all-important factor in any case, so
the text describing the link counts.
2.7.4 Link Numbers
Although quality is preferred over quantity, you still need the quantity. The
less you have, the less you can be evaluated for, especially when it comes to links.
2.7.5 Social
In the end it is all about the people. You may have the best links, the best
website or even the best content, the best at least for you; but the others out
there will be the judges of that. That, however, is actually a good thing: when
people come across something amazing or something they find interesting, most of
them want to share it, or just talk about it.
Social Reputation
The online social presence you maintain is how you represent yourself online;
social accounts are important in that they let creators or developers establish
themselves as identities. These social accounts let you interact and communicate
with the users. The main point, however, is to be active on social media. This
is where a lot of buzz can be generated, and it is always better to approach
people directly; with today’s social media, like Facebook and Twitter, you have
a direct line between the users and the creators.
2.7.6 Social Share
This is the attention and activity you get on social media in relation to your
website, whether through links from social media, shares or likes on Facebook,
tweets about it, diggs, or any other form of social sharing mechanism provided
by a social media website.
2.7.7 Trust
Trust simply comes from being legitimate in your own content: having the right
influence, coming from the right source, being believable and legitimate.
Although it is still unclear how search engines measure the trust level of a
website, for humans trust is very much an important factor.
2.7.8 Trust Authority
The best way to establish trust with users and search engines is by having a
good reputation. Being true to your service and customers, or simply being a
legitimate service or content provider, can help bring in good remarks from
other people. This can happen through online reviews on blogs, or simply through
more sharing from trusted social media accounts and links from popular sites;
after all, the power of authority is given by the people.
2.7.9 Trust History
The overall history of the website is almost like a track record for the
lifetime of your site. Search engines will try to see if anything out of order
has ever happened in the past. Of course, the longer you have been around, the
higher the chance that you are trustworthy, because of the longer and greater
exposure you have had with users, though this is not always the case.
2.7.10 Personal
Ever wonder why you see different search results when searching in one country
than in another? That is because the major search engines have come to include
localization factors to narrow down certain search results. This does not happen
to every single thing you search for, but when it does, it is because the
website in question is optimized in such a way.
2.7.11 Personal Country
Country is the easiest of these factors to come across: depending on where in
the world you are, your website may not be relevant to someone else on the other
side of the planet. Of course, this does not just pertain to websites; the
search terms themselves may have different meanings depending on the country.
2.7.12 Personal Locality
These are local searches, narrowed down according to the city. Much like the
country level, this affects what is relevant in a local range.
2.7.13 Personal History
This depends solely on the personal preferences of a person, like the links they
have clicked, or continually click, after searching for something. There is no
way to influence this other than trying to get the first impression right on a
person’s first visit to the web page.
Personal Social Connections: This is the impression that a person’s social
circle might have about websites. The best way to strengthen this front is by
actively participating in social networks. The more friends you have, the more
likely your content will be shared, and the higher the chance of friends of
friends finding your site.
2.8 The Violation
These are things that are frowned upon by search engines, things that web
developers should avoid if they want to stay on good terms with the major search
engines out there. These violations can generally result in a website being
removed from search engine result listings.
2.8.1 Thin Content
Content that is lacking or relatively simple: either the words do not coherently
adhere to any form of grammatical sense, such as large texts of spam created
just for the purpose of getting a chance to appear in the listings, or the
content itself is insufficient or repetitive. These factors can cause search
engines to flag websites with penalties.
2.8.2 Keyword Stuffing
Keywords are there to help, not to abuse; however, there are people who do just
that. Keywords are good indicators to search engines of what you want to be
found for, but overusing them does not help, and it is considered illegal in
terms of SEO. It is not clear at what point keyword use becomes keyword
stuffing, but using a lot more keywords than you have to might get you into
trouble.
2.8.3 Hidden Text
This is a follow-up to the previous technique. Instead of having all these
unnecessary keywords visible, they are hidden in backgrounds by taking advantage
of text colors, or simply hidden elsewhere, usually in a place unbeknownst to
the user. Search engines, however, are fully aware of this, and they do not like
it.
2.8.4 Cloaking
This is the worst sin you can commit, the ultimate act of bad SEO. Cloaking
involves letting the search engine see a different version of the site:
basically, creating a fake spam site that catches the attention of the search
engine, but once a user clicks on it, it is a different site.
2.8.5 Paid Links
This is when you pay for links: the exchange of money in return for high-volume
link-backs, or simply buying links.
2.8.6 Link Spam
Link spamming, pretty much like all spam, is the repetitive posting of your link
on every single blog, forum or comment section through the use of automated
software. This is bad SEO practice. Not only will the links feel out of place,
but if they keep popping up everywhere people will start to tire of them, and
those who actually click on them out of curiosity may never come back.
2.9 Blocking
2.9.1 Personal Blocking
Certain search engines give the user the ability to block an entire site from
the result listings, or to block web pages they do not like for whatever reason.
In most cases it is because the websites are not relevant to the users but keep
appearing when the user searches for a particular thing. It may not necessarily
be a bad website; the user just happens to find it annoying.
2.9.2 Trust Blocking
A handful of people blocking a site may not seem like much of a deal, but once a
good number of related people begin doing so, there might be trouble. This is
seen as a negative factor and might result in the site being blocked overall for
any other average user.
3. DESIGN AND LAYOUT
At the present time a lot of people may think that personal computers and
laptops are where people make use of the Internet the most. This is, in fact,
not true: mobile users are quickly exceeding desktop users, and anything the
desktop PC can do, the mobile device can do too, including surfing the web.
In our web design we have to manage the screen resolution carefully, because the
web browser places things in the order it sees them, in the form of the box
model: the contents of the web page take up space in the form of rectangles,
filling the available space from left to right and from top to bottom. Depending
on your screen size you might see more of the content or less.
Users may not care what screen resolution they have when viewing websites, but
for web developers this creates an absolute nightmare when it comes to
positioning objects on the page or placing content appropriately. If you are not
careful, the design might take unexpected turns at certain resolutions. With the
increasing range of screen sizes, more and more people are using a variety of
devices to view and interact with websites. A developer could simply decide not
to cater to a range of audiences, but from a business perspective this is highly
unacceptable.
With a multitude of screen resolutions to consider, and the fact that more and
more people are beginning to use handheld devices to view web pages on the
Internet, placing content on the web is no longer as simple as just putting it
there. There are many factors to consider, such as the target audience, the
screens they will be using, how much content there is and, finally, the most
important factor, how the web developer will place said content. Creating a
different version of the website for every device is simply not worth it; the
website itself needs to adapt to the differences.
The layout is a vital factor in creating a website, as this is the portion that
users will be interacting with. Not only is it the face of the website, it is
the entire external structure that users interact with, so it is no surprise
that a lot of web designers heavily prioritize getting their layouts right
before anything else.
There are basically three layouts that web designers ponder over when creating
their website: the Fixed Layout, the Fluid Layout and the Elastic Layout. We
provide a brief explanation of each layout below. There is no single best
layout; each form has its own use, and the web developer has to choose the
correct layout based on his or her customers. Beyond these layouts we will go
into something known as Responsive Web Design, which aims at the same goal as
these layouts but achieves it more sophisticatedly.
This is why we have to manage the screen carefully and why the screen is so
important to the website we create. For our website we used the Fluid Layout; we
show the code that we used for it, along with images describing the three
different kinds of layout.
3.1 The Fixed Layout
Each user has a different screen resolution, such as 800x600, 1024x780,
1280x800, 1280x960 or 1280x1024 pixels. Depending on what the developer sees
fit, some websites may use smaller pixel dimensions, for example a design
targeted at a screen resolution of 800x600, with the content width defined as
about 780 pixels and arranged at the center of the screen. Users with a screen
resolution of 800x600 will then see the page full screen; those using a screen
resolution of 1024x780 will see the content of the page in the center, and the
remaining area will be either a background color or a background image created
by the web developer. If we design for a resolution of 1024x780 and users have a
resolution of 800x600, the problem is that they have to scroll to see the
website, which is time-consuming and troublesome for the user. This shows how
crucial screen resolutions are to the web viewing experience. Figure 1 shows an
example of a fixed layout. We have discarded the use of the fixed layout.
Figure 1. Fixed Layout [4]
3.2 The Fluid Layout
This layout is more user-friendly, as it adjusts to the user’s screen as much as
possible. However, compared to fixed layouts it is much more difficult to make
and to control, because what the developer sees on his screen may not be the
same for the users. Much of the trouble comes when placing content such as
videos and images with fixed dimensions, as these are harder to scale along with
the screen size.
This is the layout we use for our website. In short, the Fluid Layout is a
layout structure that uses percentages instead of pixels to determine content
dimensions, unlike the Fixed Layout, which uses pixel values to control the page
regardless of the screen resolution the page is viewed in. In Figure 2 we
replace the pixel values with percentages. For example, if we set the width and
height of the page to 100%, then users with 800x600, 1024x780 or any other
screen resolution can view our website full screen; this means that with this
design our website can open at almost every screen resolution on a computer.
Figure 2. Fluid Layout [4]
In the figure, the top part of the site displays a header that is 90% wide, the
main part that shows the content takes about 50%, and the lateral side takes
20%. This is a good example for web designers and web developers to look at and
understand easily, because it shows a simple, clear web page; a minimal CSS
sketch of this arrangement is given below.
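The following is only an illustrative sketch of such a percentage-based arrangement; the selector names are hypothetical and do not come from our actual stylesheet.
#header  { width: 90%; margin: 0 auto; }  /* wide header centered on the page */
#content { width: 50%; float: left; }     /* main content column */
#sidebar { width: 20%; float: right; }    /* lateral side column */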
This layout is also used in our website because it can show our website at every
screen resolution, it does not take too much time to implement, and the result
is very good; that is why we prefer this layout. This is the way we set the
whole body of our page: we set it to 100%, and we created our website as a
single page, a pattern that a lot of websites in the world use.
#page1_content {
position: relative;
width:100%;
height:95%;
margin:0 auto;
background-color:#2e2f31;
color:#d8d8d8;
z-index:0;
}
This is an example for page 1, where we use 100% of the width and 95% of the
height; the exact values depend on each page.
3.3 The Elastic Layout
A lot of testing has to go into an elastic design before it can fit all screens.
In an elastic layout a special unit called the “em” is used as the unit of
measurement. The em unit scales according to the user’s font size, which is set
in em units; this scaling can be done via the browser.
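A small illustrative example of an elastic container follows; the selector and values are hypothetical.
body     { font-size: 100%; }              /* 1em follows the user's default font size */
#wrapper { width: 48em; margin: 0 auto; }  /* the container scales with the font size */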
3.4 The Responsive Web Design
This is undoubtedly the next big standard that all web developers are, and
should be, conforming to. It is much like the layout techniques mentioned above,
in fact the same ideas combined together, to provide the most user-friendly
website possible. The technique is made possible through the use of a CSS3
feature called the media query together with a fluid grid.
Responsive Web Design is probably the most confusing technique and the hardest
to get right, but it provides the most flexible user experience across a
multitude of screens, if not all screens.
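As a rough sketch (the breakpoint and selectors are chosen only for illustration, not taken from our site), a media query lets a fluid layout re-arrange itself on small screens:
@media screen and (max-width: 480px) {
#page1_content { width: 100%; }   /* let the main container fill a narrow screen */
#sidebar       { display: none; } /* hide a secondary column on phones */
}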
Figure 3. Example experimental results
3.5 Graph from survey of 20 people
In this part we show the graphs of the survey that we made as evidence that our
new website is better than the old website. We have about five graphs showing
which website people like and why they like the old or the new website. From the
survey we expected that our website is better than the old one, but in truth
some people think the style of the old one is better. The percentages we found
in the survey are that 85% of people like our new website and 15% like the old
website; we think that is acceptable, because we cannot make a website that 100%
of people like. It is very difficult. Table 1 and Figure 3 show the experimental
results.
Table 1. Web Development Survey from 20 persons
Which website design is better? | Responses
New website | 17
Old website | 3
This is the graph that we made from our survey, which was given to 20 employees
working at Unilever Thailand. We also found that older people tend to like the
old website better than the new one, but in this case it is not about age, it is
about opinion: those who like the new design do so because of better design and
usability. This graph is the evidence that tells us 85% of people think our new
website design is better.
The first reason is better design: 10 of the 17 who chose the new website did so
because of better design, demonstrating the fact that people often judge first
on looks. An eye-catching website has a better chance of inviting people in than
a dull website. This means that more than 50% of those who chose the new website
did so because of the design [10].
This fact is further emphasized by the observation that none of those who chose
the previous website did so because of its design. Visual appeal is an important
factor in attracting attention. Our website has gone with a modern look,
appealing to younger people; in fact more time was spent on designing the
website than on actually coding it. As we continually asked for feedback from
those who were testing the website, design has always been an important factor,
not just in web design but almost everywhere. Taking this into account, we
strived to make the design as suitable and as grand as possible. We went with a
black and grey color scheme that allowed us to easily build contrast by adding
color. As university websites serve to provide information for possible future
students, we found it important to be able to visually highlight important
content on a web page. A dark background makes any bright color instantly pop,
thereby guiding the user’s eye. However, this also raises the problem of colors
visually clashing, as there is no middle ground the colors can fall back on. We
hoped to eliminate this problem by using only a limited set of colors on any
given page.
The next graph examines the remaining portion of those who chose the new
website; each person could choose the website for more than one reason.
Of course, a website’s design is not the only thing of importance. You cannot
make fried eggs with only the egg’s shell; you need the yolk. It may be a rather
silly analogy, but what it is trying to say is very important: a website can
look as great as it wants, but without good content, or access to that content
for that matter, a user will quickly dismiss it [11]. In order to collect data
on how well we had structured the content, we raised another topic with those
who were surveyed.
Figure 4. Web Development Survey
Yet again, more than half chose the new website, but this time some actually
preferred the old website. The main reason was that the navigation on the old
website was simpler. Then again, those who chose the new website gave a
contradicting reason: they said the navigation on the new website was better.
Judging by the numbers, it seems we have to take this as factual data rather
than personal preference. We employed a one-page website design, putting as much
content as we could into a single page and dividing it into sections. Users
navigate via a fixed navigation bar with topics regarding the University;
clicking on the navigation lets the user smoothly scroll to the corresponding
location. It is simple, but it is also stylish.
The next question also concerns the user being able to make use of the content.
As with anything that has words in it, the most important thing is being able to
read it. The main things we could do to ensure that users read our content with
the most comfort were font selection, font sizing and spacing [12].
Table 2. Why they chose the website that they like
Font easier to read | Responses
New website | 5
Old website | 0
Though the numbers are significantly smaller than for the other reasons, their
importance is still proven by the fact that some chose the new website over the
old one because of readability issues. As shown in Figure 4, much like the
design, the new website provides enough contrast, which is easy on the user’s
eyes. After all, people need to be able to understand what they have found on
the website, and if pictures cannot explain things to them, then at least there
are words that they can read.
We worked with the font Bebas Neue, which is an easily distinguishable and
rather modern-looking font; this helps tie in the rest of the web design.
Typography is an important design element that many people overlook, often
without a second thought. The best fonts are the simplest ones, as users are
trying to read content, not decipher it every time they want to find out about
something.
The last remaining portion raises questions about the layout of the content:
whether the structuring of the content made sense, and whether it followed a
good pattern. As we have mentioned before, we used a fluid layout for the
website, and screen resolution is also important.
The last graph is meant to find out whether people thought our website was
accessible or not: 35% of people think that our new website is more accessible
and 10% think that the old website is better. Accessibility in this case refers
to viewing on different screens and different browsers, and to the overall
simplicity of the link structure from one web page to another.
4. STRUCTURE
The main languages of the web are HTML (HyperText Markup Language), CSS
(Cascading Style Sheets) and JavaScript. More on these languages will be
explained later, but know that when a browser works on a web page, these are the
languages it has to interpret for the user, as they are the languages used to
construct the web pages themselves. Browser interfaces as we know them are
rather similar, consisting of an address bar, where you type the URL (Uniform
Resource Locator) of a certain web page, and other basic functions, such as
being able to bookmark web pages, moving back and forth between web pages with
the back and forward buttons, and a home page button that takes you to the page
you have set as the default page. For the average user this might be all they
know and need to know about a web browser, but most web developers may need to
delve a little deeper. It is not a surprising fact, however, that most web
developers themselves do not know how a browser works, at least not in the most
minute detail.
The browser can be broken down into 7 components, each working together to
display the content, as shown in Figure 5.
Figure 5. The Browser’s Structure [5]
1) The User Interface: This is the interface that users interact with,
consisting of a graphical interface for browser functionalities such as
bookmarking, zooming on web pages, refreshing web pages and other useful
functions. This is the application layer where users interact with the browser,
and it is the visible component that all users are aware of.
2) The Browser Engine: The browser engine is responsible for acting as the
bridge between the user interface and the rendering engine. It takes actions
from the user and the results of the rendering engine to display a page in the
browser window.
3) The Rendering Engine: The vital part of the browser, the rendering engine is
responsible for displaying the page by parsing the web languages. The rendering
engine follows a flow in which it gradually renders the contents of the page.
Chrome and Safari use a rendering engine called WebKit, while Firefox uses one
called Gecko.
The HTML is parsed and the browser begins construction of what is called the DOM
tree, or Document Object Model. The DOM is the convention by which we control
and manipulate HTML, XHTML and XML documents. It is called a tree because the
code is divided into nodes and laid out in a tree structure; each HTML markup
tag has its own place in the hierarchy, and the construction of the DOM tree
lays the foundation for what each markup element means and does in relation to
the browser. Once the DOM tree has been constructed, the styling information of
the page, the CSS code, is used for the creation of another tree called the
render tree. This tree is responsible for laying down the visual elements that
are included with the HTML. The nodes are laid out in rectangles in what we call
the CSS box model, where each markup element takes up space like a box. Once the
necessary elements have been loaded, it is time to position the elements,
computing where exactly they will appear on the screen.
The last stage is to paint the tree: the rendering engine uses the UI methods of
the underlying operating system to render each individual node.
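As a small hypothetical illustration, the fragment below would be parsed into a DOM tree whose html root node has head and body children, with the body containing a p element holding a single text node:
<html>
<head><title>Example</title></head>
<body>
<p>Hello</p>
</body>
</html>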
4) Networking: HTTP requests and other networking calls are handled by this
component, which implements the data communication and networking protocols used
to send and receive data.
5) UI Backend: This component uses the underlying UI facilities of the operating
system to draw basic graphical elements such as windows, check boxes, radio
buttons and combo boxes.
6) JavaScript Interpreter: The JavaScript interpreter is used for the parsing
and execution of JavaScript code.
7) Data Storage: The database layer of the browser, in which it saves
user-specific data such as cookies and other web-related user information.
5. CONCLUSIONS
The five major browsers in use today are Chrome, Internet Explorer, Firefox,
Safari and Opera. Usage statistics show that the top three most popular browsers
are Chrome, Firefox and Internet Explorer. Since each browser renders pages in
slightly different ways, web developers have to be constantly aware of how their
websites will turn out in each individual browser, Internet Explorer being a
prime example of how what works in one browser may not work in another. Browser
differences are not the only factor that websites have had to adapt to.
REFERENCES
[1] Elisabeth Freeman and Eric Freeman, Head First HTML with CSS and XHTML (First
Edition), O’Reilly Media, Inc, United States of America, December 2005.
[2] Browser Statistics and Trends, Available:
http://www.w3schools.com/browsers/browsers_stats.asp
[3] J. Jones. (1991, May 10). Networks (2nd ed.) [Online]. Available: http://www.atm.com
[4] Kayla Knight (2009, June 2nd). Fixed vs. Fluid vs. Elastic Layout: What’s The Right
One For You? [Online]. Available:
http://coding.smashingmagazine.com/2009/06/02/fixed-vs-fluid-vs-elastic-layout-
whats-the-right-one-for-you/
[5] Tali Garsiel & Paul Irish (2011, Aug 5). How Browsers Work: Behind the scenes of
modern web browsers. Available:
www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Painting
[6] Mardiana, M., Araki, K. ,and Omori, Y., MDA and SOA approach to development of
web application interface, TENCON 2011 - 2011 IEEE Region 10 Conference, pages
226 – 231, 21-24 Nov. 2011.
[7] Rachit Mohan Garg, Yamini Sood, Balaji Kottana, Pallavi Totlani, A Framework Based
Approach for the Development of Web Based Applications, World of Computer
Science and Information Technology Journal (WCSIT), ISSN: 2221-0741, Vol. 1, No.
1, 1-4, Feb. 2011.
[8] Gavin Kistner. Why tables are bad compared to semantic HTML and CSS:
http://phrogz.net/css/WhyTablesAreBadForLayout.html 2010
[9] Patrick Sexton. Descriptive and accurate <title> elements and ALT attributes:
http://www.feedthebot.com/titleandalttags.html 2011
[10] Why graphic design is important:
http://www.slideshare.net/LocalInternetTraffic/importance-of-graphic-design-in-web-development (2008)
[11] Julie M. Rinder, Fiserv, The importance of Usability Testing in Website Design (July 2012):
https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/12257/Rinder2012.pdf?sequence=1
[12] Sandra Gabriele. The role of typography: http://www.longwoods.com/content/18465 (2006)
A New Design to Improve the
Security Aspects of RSA
Cryptosystem
Sushma Pradhan and Birendra Kumar Sharma
School of Studies in Mathematics,
Pt. Ravi Shankar Shukla University,
Raipur, Chhattisgarh, India
ABSTRACT
This paper introduces a security improvement to the RSA cryptosystem. It suggests the use
of randomized parameters in the encryption process to make the RSA resistant to many
attacks described in the literature. This improvement makes the RSA semantically secure,
i.e. an attacker cannot distinguish two encryptions from each other even if the attacker
knows (or has chosen) the corresponding plaintexts. The paper also briefly discusses some
other attacks on the RSA and the suitable choice of RSA parameters to avoid attacks;
another important issue for the RSA implementation is how to speed up the RSA encryption
and decryption process.
Keywords
RSA cryptosystem, RSA signature, RSA Problem, Public Key Cryptosystems, Private Key
Cryptography.
1. INTRODUCTION
A powerful tool for protection is the use of cryptography. Cryptography
underlies many of the security mechanisms and builds the science of data
encryption and decryption. Cryptography [1] enables us to securely store
sensitive data or transmit it across insecure networks such that it cannot be
read by anyone except the intended recipient. By using a powerful tool such as
encryption we gain privacy, authenticity, integrity and limited access to data.
In cryptography we differentiate between private key cryptographic systems (also
known as conventional cryptography systems) and public key cryptographic systems.
Private key cryptography, also known as secret-key or symmetric-key encryption,
is based on using one shared secret key for encryption and decryption. The
development of fast computers and communication technologies allowed the
definition of many modern private key cryptographic systems, e.g. the Feistel
cipher of the 1960s [2], the Data Encryption Standard (DES), Triple DES (3DES),
the Advanced Encryption Standard (AES), the International Data Encryption
Algorithm (IDEA), Blowfish, RC5, CAST, etc. The problem with private key
cryptography is key management: a system of n communicating parties would
require managing n(n-1)/2 keys, which means that to allow 1000 users to
communicate securely the system must manage 499,500 different shared secret
keys; thus it is not scalable for a large set of users.
A new concept in cryptography was introduced in 1976 by Diffie and Hellman [2],
called public-key cryptography, which is based on using two keys (a public and a
private key). The use of public key cryptography solved many weaknesses and
problems of private key cryptography, and many public key cryptographic systems
were specified (e.g. RSA [3], ElGamal [4], Diffie-Hellman key exchange [2],
elliptic curves [5], etc.). The security of such public key cryptosystems is
often based on the apparent difficulty of certain mathematical number theory
problems (also called "one-way functions"), such as the discrete logarithm
problem over finite fields, the discrete logarithm problem on elliptic curves,
the integer factorization problem, or the Diffie-Hellman problem [1].
One of the first defined and most often used public key cryptosystems is the
RSA. The RSA cryptosystem is known as the de facto standard for public-key
encryption and signatures worldwide, and it has been patented in the U.S. and
Canada. Several standards organizations have written standards that make use of
the RSA cryptosystem for encryption and digital signatures [6]. The RSA
cryptosystem was named after its inventors R. Rivest, A. Shamir and L. Adleman
and is one of the most widely used public-key cryptosystems.
Due to the wide use of the RSA cryptosystem, it is critical to ensure a high
level of security for the RSA. In this paper we introduce a new design to
improve the security of the RSA cryptosystem; this is achieved by using a
randomized parameter that makes the encrypted message more difficult for an
adversary to break, thus making the RSA more secure.
This paper is organized as follows. In the next section the basic mathematical
preliminaries of RSA are briefly described. In Section 3 we describe our new
scheme with the security improvement that can protect against the given attacks.
In Section 4 we give a comparison between the basic RSA cryptosystem and our new
scheme. Finally, we give a short conclusion in Section 5.
2. BASIC PRELIMINARIES
The security of the RSA cryptosystem is based on the intractability of the RSA
problem. This means that if the RSA problem is ever solved in general, the RSA
cryptosystem will no longer be secure.
Definition 2.1. The RSA problem (RSAP) is the following: given a positive
integer n that is a product of two distinct odd primes p and q, a positive
integer e such that gcd(e, (p − 1)(q − 1)) = 1, and an integer c, find an
integer m such that m^e ≡ c (mod n).
This means that the RSA problem is based on finding the e-th roots modulo
a composite integer n.
Definition 2.2. For n > 1, let φ(n) denote the number of integers in the
interval [1, n] which are relatively prime to n. The function φ is called the
Euler phi function (or the Euler totient function).
The RSAP has been studied for many years, and no efficient solution has been
found; it is therefore considered difficult, provided the parameters are
carefully chosen. If the factors of n are known, however, the RSA problem can
easily be solved: an adversary can compute φ(n) = (p−1)(q−1) and the private
key d, and once d is obtained the adversary can decrypt any encrypted text.
It is also widely believed that the RSA and the integer factorization problems
are computationally equivalent, although no proof of this is known.
Remark: The problem of computing the RSA decryption exponent d from
the public key (n, e) and the problem of factoring n are computationally
equivalent [6]. This implies that when generating RSA keys, it is important
that the primes p and q be of sufficient size that factoring n = p*q is
computationally infeasible.
The RSA public-key encryption and signature scheme is widely used in modern
communications technologies; it is one of the first public-key cryptosystems
that enabled secure communication over public, insecure communication channels.
The following algorithms describe RSA key generation and the RSA cryptosystem
(basic version).
Algorithm 2.1. Key generation for the RSA public-key encryption
Each user A creates an RSA public key and the corresponding private key.
User A should do the following:
1. Generate two large random (and distinct) primes p and q, each roughly
the same size.
2. Compute n = p*q and φ(n) = (p − 1)(q − 1).
3. Select a random integer e, 1 < e < φ(n), such that gcd(e, φ(n)) = 1.
4. Use the Euclidean algorithm to compute the unique integer d,
1 < d < φ(n), such that e*d ≡ 1 (mod φ(n)).
5. User A's public key is (n, e) and A's private key is d.
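As a rough illustration of Algorithm 2.1, the following Python sketch computes a toy key pair; the prime values are the small ones used in the worked example of Section 3 and are of course far too small for real security.

```python
# Toy RSA key generation following Algorithm 2.1 (illustrative only; the primes
# are far too small for real use and are fixed here for reproducibility).
import math

p, q = 2357, 2551            # step 1: two distinct primes (toy sizes)
n = p * q                    # step 2: modulus n = p*q
phi = (p - 1) * (q - 1)      # step 2: phi(n) = (p-1)(q-1)

e = 3674911                  # step 3: public exponent with gcd(e, phi(n)) = 1
assert 1 < e < phi and math.gcd(e, phi) == 1

d = pow(e, -1, phi)          # step 4: d = e^(-1) mod phi(n) (extended Euclid, Python 3.8+)
assert (e * d) % phi == 1

print("public key (n, e):", (n, e))
print("private key d:", d)   # 422191 for these toy values
```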
Definition 2.3. The integers e and d in RSA key generation are called the
encryption exponent and the decryption exponent, respectively, while n is
called the modulus.
Algorithm 2.2. The RSA public-key encryption and decryption (basic version)
User B encrypts a message m for user A, which A decrypts.
1. Encryption. User B should do the following:
1. Obtain user A's authentic public key (n, e).
2. Represent the message as an integer m in the interval [0, n − 1].
3. Compute c ≡ m^e (mod n).
4. Send the encrypted text message c to user A.
2. Decryption. To recover plaintext m from c, user A should do the
following:
1. Use the private key d to recover m ≡ c^d ≡ (m^e)^d (mod n).
The original RSA encryption and decryption do not contain any randomized
parameter, making the RSA cryptosystem deterministic; this means that an
attacker can distinguish between two encryptions. Based on this, many of the
attacks listed below can be performed on the basic version of RSA.
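As a small sketch of this determinism (using the toy key values from the example in Section 3), encrypting the same message twice with the basic scheme yields identical ciphertexts:

```python
# Basic (unpadded) RSA is deterministic: a fixed message always encrypts to the
# same ciphertext under a fixed public key. Toy key values from Section 3.
n, e = 6012707, 3674911
m = 31
c_first = pow(m, e, n)
c_second = pow(m, e, n)
print(c_first == c_second)   # True: an eavesdropper can detect repeated messages
```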
3. NEW SCHEME
The key generation remains unchanged from the original RSA (see above). The
modified encryption and decryption are described by the following algorithm.
Algorithm 3.1. The RSA public-key encryption and decryption (modified version)
User B encrypts a message m for user A, which A decrypts.
1. Encryption. User B should do the following:
1. Obtain user A's authentic public key (n, e).
2. Represent the message as an integer m in the interval [0, n − 1].
3. Select a random integer k, 1 < k < n, such that gcd(k, n) = 1.
4. Compute c1 ≡ k^e (mod n).
5. Compute c2 ≡ m^e * k (mod n).
6. Send the encrypted text message (c1, c2) to user A.
2. Decryption. To recover plaintext m from (c1, c2), user A should do the
following:
1. Use the private key d to recover k ≡ c1^d (mod n).
2. Use the Euclidean algorithm to calculate the unique integer s,
1 < s < n, such that s*k ≡ 1 (mod n).
3. Compute c2*s ≡ m^e*k*s ≡ m^e*s*k ≡ m^e (mod n).
4. Recover m by using the private key d: m ≡ (m^e)^d (mod n).
The following example illustrates the use of the modified RSA cryptosystem.
Example (RSA encryption/decryption):
Key generation: assume p = 2357, q = 2551, so n = p*q = 6012707.
1. Encryption. User B should do the following:
1. Obtain user A's authentic public key e = 3674911 (with n = 6012707).
2. Message m = 31.
3. Random k = 525.
4. Compute c1 = 525^3674911 mod 6012707 = 20639.
5. Compute c2 = 31^3674911 * 525 mod 6012707 = 2314247.
6. Send (20639, 2314247) to user A.
2. Decryption. To recover plaintext m from (c1, c2), user A should do the
following:
1. With private key d = 422191, compute k = 20639^422191 mod 6012707 = 525.
2. Extended GCD(525, 6012707) → s = 3516002.
3. Compute c2*s = 2314247 * 3516002 mod 6012707 = 2913413 (= m^e mod n).
4. Recover m: 2913413^422191 mod 6012707 = 31.
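A minimal Python sketch of the modified encryption and decryption (Algorithm 3.1) is given below for illustration only; the expected outputs in the comments are the values from the worked example above.

```python
# Minimal sketch of the modified RSA scheme (not production code).
# Key material taken from the worked example above.
import math
import secrets

n, e, d = 6012707, 3674911, 422191   # modulus, public and private exponents

def encrypt(m, k=None):
    # Pick a random k coprime to n unless one is supplied (the example uses k = 525).
    while k is None or math.gcd(k, n) != 1:
        k = secrets.randbelow(n - 2) + 2
    c1 = pow(k, e, n)                 # c1 = k^e mod n
    c2 = (pow(m, e, n) * k) % n       # c2 = m^e * k mod n
    return c1, c2

def decrypt(c1, c2):
    k = pow(c1, d, n)                 # recover the random mask k
    s = pow(k, -1, n)                 # s = k^(-1) mod n (extended Euclidean algorithm)
    m_e = (c2 * s) % n                # m^e mod n
    return pow(m_e, d, n)             # m

c1, c2 = encrypt(31, k=525)
print(c1, c2)                         # expected per the example: 20639 2314247
print(decrypt(c1, c2))                # 31
```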
RSA encryption/decryption is much slower than commonly used symmetric-key
encryption algorithms such as the well-known DES; this is why, in practice,
RSA encryption is commonly used to encrypt symmetric keys or small amounts of
data. There are many software solutions and hardware implementations for
speeding up the RSA encryption/decryption process. For more information about
speeding up RSA software implementations see [6].
Because the basic version of the RSA cryptosystem has no randomization
component, an attacker can successfully launch many kinds of attack; we now
discuss some of these attacks.
1. Known plain-text attack: A known-plaintext attack is one where the
adversary has a quantity of plaintext and the corresponding cipher-text [6].
Given such a set S = {{p1, c1}, {p2, c2}, ..., {pr, cr}} (where pi belongs to
the plaintext set P, ci belongs to the ciphertext set C, and r < φ(n) is the
order of Z*n), an adversary can determine a plaintext px if the corresponding
cx is in S. The modified version of RSA described above uses k as a
randomizing parameter; this protects the encrypted text against known-plaintext
attacks.
2. Small public/private exponent e/d attack: To reduce decryption time,
one may wish to use a small value of the private exponent d, or to reduce the
encryption time using a small public exponent e, but this can result in a
total break of the RSA cryptosystem, as Coppersmith [10] and M. Wiener [11]
showed.
3. Håstad and Coppersmith attack: If the same cleartext message is sent to
several recipients in encrypted form, and the receivers share the same
exponent e but have different p, q and n, then it is easy to decrypt the
original cleartext message via the Chinese remainder theorem [6]. Johan
Håstad [7] described this attack and Don Coppersmith [8] improved it.
4. Common Modulus Attack: If the same message m is encrypted twice using
the same modulus n, i.e. c1 ≡ m^e1 (mod n) and c2 ≡ m^e2 (mod n) with
gcd(e1, e2) = 1, then an attacker can recover the original message as
m ≡ c1^a * c2^b (mod n), where a*e1 + b*e2 = 1. Using the extended greatest
common divisor (GCD) algorithm one can determine a and b and then calculate m
without knowing the private key d. This is known in the literature as the
Common Modulus Attack and requires O((log K)^2) operations, where K is the
maximum size of a or b.
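As a small illustration of this attack, the following Python sketch (with hypothetical toy exponents and the toy modulus used earlier) recovers m from two encryptions of the same message under the same modulus without using any private key.

```python
# Common modulus attack sketch: recover m from c1 = m^e1 mod n and
# c2 = m^e2 mod n when gcd(e1, e2) = 1, without any private key.
n = 6012707                   # shared modulus (toy value)
e1, e2 = 17, 23               # coprime public exponents (hypothetical)
m = 31                        # message the attacker wants to recover

c1, c2 = pow(m, e1, n), pow(m, e2, n)

def ext_gcd(x, y):
    # Extended Euclid: returns (g, a, b) with a*x + b*y = g = gcd(x, y).
    if y == 0:
        return x, 1, 0
    g, u, v = ext_gcd(y, x % y)
    return g, v, u - (x // y) * v

g, a, b = ext_gcd(e1, e2)
assert g == 1                 # a*e1 + b*e2 = 1

# m = c1^a * c2^b mod n; a negative exponent means using the modular inverse.
def power(c, x):
    return pow(c, x, n) if x >= 0 else pow(pow(c, -1, n), -x, n)

recovered = (power(c1, a) * power(c2, b)) % n
print(recovered)              # 31
```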
5. Timing Attack: One attack on RSA implementations is the Timing
Attack; Kocher [9] demonstrated that an attacker can determine a private key
by keeping track of how long a computer takes to decrypt a message.
6. Adaptive chosen ciphertext attacks (ACC attacks): In 1998, Daniel
Bleichenbacher [12] described the first practical adaptive chosen-ciphertext
attack against RSA-encrypted messages using the PKCS#1 v1 [13] padding
scheme (a padding scheme randomizes and adds structure to an RSA-encrypted
message, making it possible to determine whether a decrypted message is
valid). Bleichenbacher was able to mount a practical attack against RSA
implementations of the Secure Socket Layer protocol (SSL) [14] and to recover
session keys; it is important to mention that this protocol is still widely
used on the Internet to secure e-mail and e-payment. As a result of this work,
cryptographers now recommend the use of provably secure padding schemes such
as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released
new versions of PKCS#1 that are not vulnerable to these attacks.
7. Attacks on the factorization problem: Some powerful attacks on the RSA
cryptosystem are attacks on the factorization problem. Factoring algorithms
come in two classes: special-purpose and general-purpose algorithms. The
efficiency of special-purpose algorithms depends on the unknown factors,
whereas the efficiency of general-purpose algorithms depends on the size of
the number to be factored. Special-purpose algorithms are best for factoring
numbers with small factors, but the numbers used for the modulus in RSA do
not have any small factors. Therefore, general-purpose factoring algorithms
are the more important ones in the context of cryptographic systems and their
security.
A major requirement to avoid factorization attacks on the RSA cryptosystem is
that p and q should be of about the same bit length and sufficiently large.
For a moderate security level, p and q should each be at least 1024 bits long,
which results in a 2048-bit modulus n. Furthermore, p and q should be random
primes and not of some special binary bit structure.
The following table summarizes the running times of some well-known integer
factoring algorithms, where p denotes the smallest prime factor of n and
e ≈ 2.718 is Euler's number.
Table 1. Factorization algorithms and their estimated running times
1. Pollard's Rho [15]: O(√p)
2. Pollard's p − 1 [16]: O(p'), where p' is the largest prime factor of p − 1
3. Williams p + 1 [17]: O(p'), where p' is the largest prime factor of p + 1
4. Elliptic Curve Method (ECM) [18]: O(exp((1 + o(1)) * sqrt(2 * ln p * ln ln p)))
5. Quadratic Sieve (QS) [19]: O(exp((1 + o(1)) * sqrt(ln N * ln ln N)))
6. Number Field Sieve (NFS) [20]: O(exp((1.92 + o(1)) * (ln N)^(1/3) * (ln ln N)^(2/3)))
In 2010, the largest number factored by a general-purpose factoring algorithm
was 768 bits long [21], using a distributed implementation. Some experts
therefore believe that 1024-bit keys may become breakable in the near future,
so it is currently recommended to use 2048-bit keys for mid-term security and
4096-bit keys for long-term security.
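As a toy illustration of one of the special-purpose methods in Table 1, the following Python sketch implements a basic Pollard's Rho loop; it easily factors the small modulus used in the example of Section 3, but is of course useless against RSA moduli of the recommended sizes.

```python
# Toy Pollard's Rho factoring sketch (Floyd cycle detection); small numbers only.
import math

def pollard_rho(n, c=1):
    f = lambda x: (x * x + c) % n
    x = y = 2
    d = 1
    while d == 1:
        x = f(x)                      # tortoise: one step
        y = f(f(y))                   # hare: two steps
        d = math.gcd(abs(x - y), n)
    return d if d != n else None      # None means this c failed; retry with another

def factor(n):
    for c in range(1, 20):            # try a few different polynomials x^2 + c
        d = pollard_rho(n, c)
        if d:
            return d

print(factor(6012707))                # a nontrivial factor, e.g. 2357 or 2551
```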
Now, the RSA security improvement described in this paper can protect us
against the following attacks:
Table 2. Attacks against which the improved RSA is immune
1. Known plain-text attack: not possible, as described above.
2. Small public exponent e: not possible, due to the use of the random integer k.
3. Håstad and Coppersmith attack: not possible, because every message has a unique ki.
4. Common Modulus Attack: not possible, because every message has a unique ki.
5. Timing Attack: using k in the encryption and decryption process makes it
difficult to distinguish between the time for k and the time for the public
exponent e or the private key d.
6. ACC attacks: one can use the randomized integer k instead of secure padding.
This makes the RSA cryptosystem more secure compared with the basic version
of the RSA cryptosystem. The modification makes the RSA cryptosystem
semantically secure, meaning that an attacker cannot distinguish two
encryptions from each other even if the attacker knows (or has chosen) the
corresponding plaintexts.
4. COMPARISON WITH THE STANDARD RSA CRYPTOSYSTEM
We can compare the new scheme to the RSA cryptosystem. For the latter, the
natural security parameter is n, the logarithm of the RSA modulus. The public
and secret keys of RSA have size O(n), and both encryption and decryption
require time O(n^3) (using ordinary multiplication algorithms). For our new
scheme, the natural security parameter is the dimension k. The keys for the
new system are relatively large: size O(k^3) for the public key and O(k^2)
for the secret key. However, the time required for encryption is only O(n)
and no multiplications are needed. Decryption requires time O(k^3),
comparable to RSA (again using ordinary multiplication algorithms).
5. CONCLUSION
In this paper, we briefly discussed improving the security of the RSA
public-key cryptosystem. The improvement uses a randomized parameter to
change every encrypted message block, such that even if the same message is
sent more than once the encrypted message blocks will look different. The
major advantage gained by the security improvement described above is making
the RSA system immune against many well-known attacks on the basic RSA
cryptosystem, thus making RSA encryption more secure. This is essential
because RSA is implemented in many security standards and protocols, and a
weak RSA may result in a whole compromised system. Although the security
improvement makes RSA more secure, it should nevertheless be noted that the
RSA modulus n should be at least 2048 bits long to ensure moderate security
and to avoid powerful attacks on the discrete logarithm and factorization
problems. This security consideration and others mentioned in the literature
should be used to define an improved version of the RSA.
REFERENCES
[1] D. Kahn, The Codebreakers: The Comprehensive History of Secret
Communication from Ancient Times to the Internet, Scribner, 1967.
[2] W. Diffie and M.Hellman, New directions in cryptography, IEEE Transactions on
Information Theory, vol. 22 (1976), 644-654.
[3] R. L. Rivest, A. Shamir, and L. Adleman, A method for obtaining digital signatures
and public key cryptosystems, Commun. of the ACM, Vol. 21 (1978), 120-126.
[4] T. ElGamal, A public-key cryptosystem and a signature scheme based on discrete
logarithms, IEEE Transactions on Information Theory, Vol. 31 (1985), 469-472.
[5] N. Koblitz, Elliptic curve cryptosystems, Mathematics of Computation, Vol.48
(1987), 203-209.
[6] A. Menezes, P. van Oorscot and S. Vanstone, Handbook of Applied Cryptography,
CRC Press, ISBN: 0-8493-8523-7, 1999
[7] J. Håstad and M. Näslund, The Security of all RSA and Discrete Log Bits, Journal of
the ACM, Vol. 51, No. 2 (2004), 187-230.
[8] Don Coppersmith, Small Solutions to Polynomial Equations, and Low Exponent
RSA Vulnerabilities, Journal of Cryptology, Vol. 10, No. 4, (Dec. 1997).
[9] P. Kocher, Timing attacks on implementations of Diffie-Hellman, RSA, DSS and
other systems. Advances in Cryptology, Vol. 1109 (1996), 104-113.
[10] Don Coppersmith, Matthew K. Franklin, Jacques Patarin, Michael K. Reiter, Low
Exponent RSA with Related Messages, EUROCRYPT (1996), 1-9.
[11] M. Wiener., Cryptanalysis of short RSA secret exponents, IEEE Transactions on
Information Theory, Vol. 36(1990), 553- 558.
[12] Daniel Bleichenbacher., Chosen ciphertext attacks against protocols based on the
RSA encryption standard PKCS #1, Advances in Cryptology CRYPTO ’98
Lecture Notes in Computer Science, Vol. 1462 (1998).
[13] PKCS#1: RSA Cryptography Standard, website:
http://www.rsa.com/rsalabs/node.asp?id=2125
[14] The Transport Layer Security (TLS) Protocol, version 1.2, website:
http://tools.ietf.org/html/rfc5246.
[15] J. M. Pollard, A Monte Carlo method for factorization, BIT Numerical
Mathematics, Vol. 15, No. 3 (1975), 331-334.
[16] J. M. Pollard, Theorems on Factorization and Primality Testing, Proceedings of
the Cambridge Philosophical Society, Vol. 76, No. 3 (1974), 521-528.
[17] H. C. Williams., A p+1 method of factoring, Mathematics of Computation, Vol.
39, No.159 (1982), 225-234.
[18] B. Dixon, A.K. Lenstra, Massively parallel elliptic curve factoring, Advances in
Cryptology-EUROCRYPT’ 92 Lecture Notes in Computer Science, Vol. 658, (1993), 183-
193.
[19] C. Pomerance, The quadratic sieve factoring algorithm, Advances in Cryptology
Lecture Notes in Computer Science, Vol.209 (1985), 169-182.
[20] J. Buchmann, J. Loho, J. Zayer, An implementation of the general number field
sieve, Advances in Cryptology-CRYPTO’ 93 Lecture Notes in Computer
Science, Vol.773 (1994), 159-165.
[21] RSA Laboratories, the RSA Factoring Challenge
http://www.rsa.com/rsalabs/node.asp-id=2092
A Hybrid Model of Multimodal
Approach for Multiple Biometrics
Recognition
P. Prabhusundhar
Assistant Professor, Department of Information Technology,
Gobi Arts & Science College (Autonomous),
Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India.
V.K. Narendira Kumar
Assistant Professor, Department of Information Technology,
Gobi Arts & Science College (Autonomous),
Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India.
B. Srinivasan
Associate Professor, PG & Research Department of Computer Science,
Gobi Arts & Science College (Autonomous),
Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India.
ABSTRACT
A single biometric identifier in making a personal identification is often not able to meet
the desired performance requirements. Biometric identification based on multiple
biometrics represents an emerging trend. Automated biometric systems for human
identification measure a “signature” of the human body, compare the resulting
characteristic to a database, and render an application dependent decision. These biometric
systems for personal authentication and identification are based upon physiological or
behavioral features which are typically distinctive, although time varying, such as Face
recognition, Iris recognition, Fingerprint verification, Palm print verification in making a
personal identification. Multi-biometric systems, which consolidate information from
multiple biometric sources, are gaining popularity because they are able to overcome
limitations such as non-universality, noisy sensor data, large intra-user variations and
susceptibility to spoof attacks that are commonly encountered in uni-biometric systems. In
this paper, we address the conceptual issues and the application strategies of multi-biometric
systems.
Keywords
Biometrics, Fingerprint, Iris, Palm print, Face recognition and Sensors.
1. INTRODUCTION
A Biometric is defined as a unique, measurable, biological characteristic or
trait for automatically recognizing or verifying the identity of a human
being. Statistically analyzing these biological characteristics has become
known as the science of biometrics. These days, biometric technologies are
typically used to analyze human characteristics for security purposes. Five
of the most common physical biometric patterns analyzed for security
purposes are the fingerprint, hand, eye, face, and voice. Biometric fusion is
the process of combining information from multiple biometric readings,
either before, during or after a decision has been made regarding
identification or authentication from a single biometric. The data from those
multiple modalities can be combined at several levels: sensor, feature, score
and decision level fusion.
Security is not enforced by focusing on a single parameter. Instead of
solving a one-dimensional problem, a secure environment requires multiple
dimensions of critical check points. Secure authentication is provided by
multiple parameters. One parameter is a security token an individual
uniquely possesses, such as a physical key or a smart card. Another
parameter is an item an individual uniquely knows, such as a PIN. An
additional parameter is an individual's unique biological characteristic, such
as DNA or an iris code [8]. Some of the challenges commonly encountered
by biometric systems are listed here:
a) Noise in sensed data: The biometric data being presented to the
system may be contaminated by noise due to imperfect acquisition
conditions or subtle variations in the biometric itself.
b) Non-universality: The biometric system may not be able to acquire
meaningful biometric data from a subset of individuals resulting in a
failure-to-enroll (FTE) error.
c) Upper bound on identification accuracy: The matching performance
of a unibiometric system cannot be indefinitely improved by tuning
the feature extraction and matching modules. There is an implicit
upper bound on the number of distinguishable patterns (i.e., the
number of distinct biometric feature sets) that can be represented
using a template.
d) Spoof attacks: Behavioral traits such as voice and signature are
vulnerable to spoof attacks by an impostor attempting to mimic the
traits corresponding to legitimately enrolled subjects.
Some of the limitations of a unibiometric system can be addressed by
designing a system that consolidates multiple sources of biometric
information. This can be accomplished by having multiple traits of an
individual or multiple feature extraction and matching algorithms operating
on the same biometric. Such systems, known as multibiometric systems, can
improve the matching accuracy of a biometric system while increasing
population coverage and deterring spoof attacks. This paper presents an
overview of multibiometric systems.
2. MULTIPLE BIOMETRICS
Multiple Biometrics refers to the use of a combination of two or more
biometric modalities in a verification / identification system. Identification
based on multiple biometrics represents an emerging trend. The most
compelling reason to combine different modalities is to improve the
recognition rate. This can be done when biometric features of different
biometrics are statistically independent. There are other reasons to combine
two or more biometrics. One is that different biometric modalities might be
more appropriate for the different applications. Another reason is simply
customer preference [5].
A variety of factors should be considered when designing a multiple
biometric system. These include the choice and number of biometric traits;
the level in the biometric system at which information provided by multiple
traits should be integrated; the methodology adopted to integrate the
information; and the cost versus matching performance trade-off [8].
Multiple Biometric systems capture two or more biometric data. Fusion
techniques are applied to combine and analyze the data in order to produce a
better recognition rate. Such technologies can not only overcome the
restriction and shortcomings from single modal systems, but also probably
produce lower error rate in recognizing persons [7].
Fully integrating biometric identification systems will be a lengthy process,
but the technology has the potential to change the way the world works: no
more passwords and smart cards, just using your body as your key. So far,
biometrics has been usefully applied to matters of lower importance, such as
time monitoring systems and industry authentication systems. As the technology
progresses, it is expected that biometrics can be effectively applied to more
important systems. There is no doubt that biometrics is the next stage of
ubiquitous security technology in our increasingly paranoid, authoritarian
society. However, there is still much to be done: customers are scared off by
high failure-to-enroll and false non-match rates as well as incompatibilities,
and system security as a whole needs more attention. Future improvements in
acquisition technology and algorithms, as well as the availability of industry
standards, will certainly assure a bright future for biometrics. Will this be
the end of traditional password or token-based systems? Certainly not:
biometrics is not a perfect solution either; it is just a good trade-off
between security and ease of use.
2.1 Face Recognition
Face recognition analyzes facial characteristics. It requires a digital
camera to capture one or more facial images of the subject for recognition.
With a facial recognition system, one can measure unique features of ears,
nose, eyes, and mouth from different individuals, and then match the
features with those stored in the template of systems to recognize subjects
under test. Popular face recognition applications include surveillance at
airports, major athletic events, and casinos. The technology involved
has become relatively mature now, but it has shortcomings, especially
when one attempts to identify individuals in different environmental
settings involving light, pose, and background variations. Also, some
user-based influences must be taken into consideration, for example,
mustache, hair, skin tone, facial expression, cosmetics, surgery and
glasses. There is also a possibility that a fraudulent user could simply
present a photo of the authorized person to obtain access permission.
Some major vendors include Viisage Technology, Inc. and AcSys
Biometrics Corporation.
2.2 Fingerprint Recognition
The patterns of fingerprints can be found on a fingertip. Whorls,
arches, loops, patterns of ridges, furrows and minutiae are the measurable
minutiae features, which can be extracted from fingerprints. The matching
process involves comparing the 2-D features with those in the template.
There are a variety of approaches of fingerprint recognition, some of
which can detect if a live finger is presented, and some cannot. A main
advantage of fingerprint recognition is that it can keep a very low
error rate. However, some people do not have fingerprints distinctive enough
for verification, and about 15% of people cannot use their fingerprints due to
wetness or dryness of the fingers. Also, an oily latent image left on the
scanner by a previous user may cause problems. Furthermore, there
are also legal issues associated with fingerprints and many people may
be unwilling to have their thumbprints documented. The most popular
applications of fingerprint recognition are network security,
physical access entry, criminal investigation, etc. So far, there are
many vendors that make fingerprint scanners; one of the leaders in this
area is Identix, Inc.
2.3 Palm Print Recognition
Palm print recognition measures and analyzes Palm print images to
determine the identity of a subject under test. Specific measurements
include location of joints, shape and size of palm. Palm print recognition
is relatively simple; therefore, such systems are inexpensive and easy to
use. Individual anomalies, such as dry skin, do not negatively affect its
accuracy. In addition, it can be integrated with other
biometric systems. Another advantage of the technology is that it can
accommodate a wide range of applications, including time and
attendance recording, where it has been proved extremely popular.
Since Palm print geometry is not very distinctive, it cannot be used to
identify a subject from a very large population. Further, Palm print
geometry information is changeable during the growth period of children. A
major vendor for this technology is Recognition Systems, Inc [6].
Figure 1. Examples of some of the biometric traits used for authenticating an
individual
2.4 Iris Recognition
Iris biometrics involves analyzing features found in the colored ring of
tissue that surrounds the pupil. Complex iris patterns can contain many
distinctive features such as ridges, crypts, rings, and freckles.
Undoubtedly, iris scanning is less intrusive than other eye-related
biometrics. A conventional camera element is employed to obtain iris
information. It requires no close contact between the user and the camera. In
addition, the irises of identical twins are not the same, even though people
can seldom tell the twins apart. Iris biometrics also works well when people
wear glasses. The most recent iris systems have become more user friendly and
cost effective. However, they require a careful balance of light, focus,
resolution and contrast in order to extract features from images.
Some popular applications for iris biometrics can be employee
verification, and immigration process at airports or seaports. A major
vendor for iris recognition technology is Iridian Technologies, Inc.
3. CHALLENGES TO MULTI-BIOMETRIC SYSTEM
Based on the applications and facts presented in the previous sections, the
following are the challenges in designing multi-modal systems.
Successful pursuit of these biometric challenges will generate significant
advances to improve safety and security in future missions.
 The sensors used for acquiring the data should show consistent
performance under a variety of operational environments. The sensors
should be fast in collecting quality images from a distance and should
have low cost with no failures to enroll.
 The information obtained from different biometric sources can be
combined at five different levels such as sensor level, feature level,
score level, rank level and decision level. Therefore selecting the best
level of fusion will have the direct impact on performance and cost
involved in developing a system.
 A number of fusion techniques are available for multi-biometric
systems, since multiple sources of information are available. Hence it is
challenging to find the optimal solution for a given application.
 In multi-biometric systems the information acquired from different
sources can be processed either in sequence or parallel. Hence it is
challenging to decide about the processing architecture to be employed
in designing the multi-biometric system.
4. IMPLEMENTATION
In general, the use of the terms multimodal or multi-biometric indicates the
presence and use of more than one biometric aspect (modality, sensor,
instance and/or algorithm) in some form of combined use for making a
specific biometric verification/identification decision. The goal of multi-
biometrics is to reduce one or more of the following:
• False accept rate (FAR)
• False reject rate (FRR)
• Failure to enroll rate (FTE)
• Susceptibility to artifacts or mimics
Multi modal biometric systems take input from single or multiple sensors
measuring two or more different modalities of biometric characteristics. For
example a system with fingerprint and face recognition would be considered
“multimodal” even if the “OR” rule was being applied, allowing users to be
verified using either of the modalities.
 Multi algorithmic biometric systems
Multi algorithmic biometric systems take a single sample from a single
sensor and process that sample with two or more different algorithms.
 Multi-instance biometric systems
Multi-instance biometric systems use one sensor or possibly more
sensors to capture samples of two or more different instances of the
same biometric characteristic. An example is capturing images from
multiple fingers.
 Multi-sensorial biometric systems
Multi-sensorial biometric systems sample the same instance of a
biometric trait with two or more distinctly different sensors. Processing
of the multiple samples can be done with one algorithm or combination
of algorithms. For example, a face recognition application could use both a
visible-light camera and an infrared camera tuned to a specific
frequency.
4.1 Fusion in Multimodal biometric systems
A mechanism that combines the classification results from each biometric
channel is called biometric fusion, and this fusion must be designed
carefully. Multimodal biometric fusion combines measurements from different
biometric traits to enhance the strengths. Fusion at matching score, rank and
decision level has been extensively studied in the literature. Various levels
of fusion are: Sensor level, feature level, matching score level and decision
level [6].
4.1.1 Sensor level Fusion
In sensor-level fusion, we combine the biometric data coming from sensors
such as a thumbprint scanner, video camera or iris scanner to form a
composite biometric trait, which is then processed.
4.1.2 Feature Level Fusion
In feature-level fusion, the signals coming from different biometric channels
are first preprocessed and feature vectors are extracted separately; a
specific fusion algorithm then combines these feature vectors into a
composite feature vector, which is used in the classification process.
4.1.3 Matching Score Level
Here, rather than combining the feature vectors, each channel is processed
separately and an individual matching score is found; then, depending on the
accuracy of each biometric channel, the matching scores are fused to obtain a
composite matching score, which is used for classification.
4.1.4 Decision level Fusion
Each modality is first pre-classified independently. The final classification
is based on the fusion of the outputs of the different modalities. Multimodal
biometric system can implement any of these fusion strategies or
combination of them to improve the performance of the system.
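As a rough illustration of score-level fusion (the strategy examined in the next section), the following Python sketch applies min-max normalization to hypothetical matcher scores from two modalities and combines them with the simple sum rule; all score values and modality names here are made up.

```python
# Illustrative score-level fusion: min-max normalization followed by the sum rule.
# The matcher scores below are hypothetical.

def min_max_normalize(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

face_scores = [0.32, 0.81, 0.45, 0.90]          # face matcher outputs
fingerprint_scores = [12.0, 55.0, 20.0, 70.0]   # fingerprint matcher, different scale

face_n = min_max_normalize(face_scores)
finger_n = min_max_normalize(fingerprint_scores)

# Simple sum rule: add the normalized scores of the two modalities per comparison.
fused = [f + g for f, g in zip(face_n, finger_n)]
print(fused)
```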
5. EXPERIMENTAL RESULTS
Performance statistics are computed from the genuine and fraud scores.
Genuine scores result from comparing elements in the target and query sets of
the same subject; fraud scores result from comparisons of different subjects.
Each fused score is used as a threshold, and the false-accept rate (FAR) and
false-reject rate (FRR) are computed by counting the fraud scores and genuine
scores, respectively, that fall on the wrong side of this threshold and
dividing by the total number of such scores used in the test. A mapping table of
the threshold values and the corresponding error rates (FAR and FRR) are
stored. The complement of the FRR (1 – FRR) is the genuine accept-rate
(GAR). The GAR and the FAR are plotted against each other to yield a
ROC curve, a common system performance measure. We choose a desired
operational point on the ROC curve and use the FAR of that point to
determine the corresponding threshold from the mapping table.
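The following Python sketch shows, with hypothetical genuine and fraud score lists, how the FAR, FRR and GAR described above might be computed for a sweep of candidate thresholds.

```python
# Sketch of computing FAR, FRR and GAR for candidate decision thresholds.
# The genuine and fraud (impostor) score lists below are hypothetical.

def error_rates(genuine, fraud, threshold):
    # Scores at or above the threshold are accepted.
    far = sum(s >= threshold for s in fraud) / len(fraud)     # impostors accepted
    frr = sum(s < threshold for s in genuine) / len(genuine)  # clients rejected
    gar = 1.0 - frr                                           # genuine accept rate
    return far, frr, gar

genuine = [1.40, 1.75, 1.20, 1.90, 1.65]
fraud = [0.40, 0.95, 0.60, 1.25, 0.30]

# Sweep candidate thresholds to build the mapping table described above.
for t in [0.8, 1.0, 1.2, 1.4]:
    print(t, error_rates(genuine, fraud, t))
```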
Figure 2. Min-Max Normalization with different fusions
For example, at a FAR of 0.1% the simple sum fusion with the min-max
normalization has a GAR of 94.9%, which is considerably better than that
of face, 75.3%, and fingerprint, 83.0%. Also, using any of the normalization
techniques in lieu of not normalizing the data proves beneficial. The
simplest normalization technique, the min-max, yields the best performance
in this example. Figure 2 illustrates the results of Min-Max normalization
for a spectrum of fusion methods. The simple sum fusion method yields the
best performance over the range of FARs. Interestingly, the Genuine-Accept
Rate for sum and product probability rules falls off dramatically at a lower
FAR. The GAR was also examined for the spectrum of normalization and fusion
techniques at FARs of 1% and 0.1%. At 1% FAR, the sum of probabilities
fusion works the best. However, these results do not hold true at a FAR of
0.1%. The simple sum rule generally performs well over the range of
normalization techniques. These results demonstrate the utility of using
multimodal biometric systems for achieving better matching performance.
They also indicate that the method chosen for fusion has a significant
impact on the resulting performance. In operational biometric systems,
application requirements drive the selection of tolerable error rates and in
both single modal and multimodal biometric systems, implementers are
forced to make a trade-off between usability and security.
Clearly the use of these fusion and normalization techniques enhances the
performance significantly over the single-modal face or fingerprint
classifiers.
6. PERFORMANCE OF MULTIMODAL BIOMETRICS
Multimodal Biometric systems are often evaluated solely on the basis of
recognition system performance. But it is important to note that other
factors are involved in the deployment of a biometric system. One factor is
the quality and ruggedness of the sensors used. Clearly the quality of the
sensors used will affect the performances of the associated recognition
algorithms. What should be evaluated is therefore the sensor/algorithm
combination, but this is difficult because often the same sensors are not used
in both the enrolment and test phases. In practice therefore the evaluation is
made on the basis of the recognition algorithm's resistance to the use of
various types of sensor (interoperability problem). Another key factor in
determining the acceptability of a biometric solution is the quality of the
associated communication interface. In addition to ease of use, acquisition
speed and processing speed are key factors, which are in many cases not
evaluated in practice.
In the case of a verification system, two error rates are evaluated which vary
in opposite directions: the false rejection rate FRR (rejection of a legitimate
user called “the client”) and the false acceptance rate FAR (acceptance of an
impostor). The decision of acceptance or rejection of a person is thus taken
by comparing the answer of the system to a threshold (called the decision
threshold). The values of FAR and FRR are thus dependent on this
threshold which can be chosen so as to reduce the global error of the
system. The decision threshold must be adjusted according to the desired
characteristics for the application considered. High security applications
require a low FAR, which has the effect of increasing the FRR, while
low-security applications are less demanding in terms of FAR. EER denotes
the Equal Error Rate, the point where FAR = FRR. This threshold must be calculated afresh for
each application, to adapt it to the specific population concerned. This is
done in general using a small database recorded for this purpose.
Different biometric application types make different trade-offs between the
false match rate and false non-match rate (FMR and FNMR). Lack of
understanding of the error rates is a primary source of confusion in
assessing system accuracy in vendor and user communities alike.
Performance capabilities have been traditionally shown in the form of ROC
(receiver- or relative-operating characteristic) plots, in which the probability
of a false-acceptance is plotted versus the probability of a false-rejection for
varying decision thresholds. Unfortunately, with ROC plots, curves
corresponding to well-performing systems tend to bunch together near the
lower left corner, impeding a clear visualization of competitive systems.
More recently, a variant of an ROC plot, the detection error tradeoff (DET)
plot has been used, which plots the same tradeoff using a normal deviate
scale.
Figure 3. Example of Verification Performance Comparison for the Same Hypothetical
Systems, A and B, for both (a) ROC and (b) DET plots
Although the complete DET curve is needed to fully describe system error
tradeoffs, it is desirable to report performance using a single number. Often
the equal-error-rate (EER), the point on the DET curve where the FA rate
and FR rate are equal, is used as this single summary number. However, the
suitability of any system or techniques for an application must be
determined by taking into account the various costs and impacts of the
errors and other factors such as implementations and lifetime support costs
and end-user acceptance issues. There is a trade-off between the correct
detect-and-identify rate and the false alarm rate: if we increase the
probability of correct detection and identification, the false alarm rate will
also increase. A watch-list Receiver Operating Characteristic curve is used to
show the relationship between the probability of correct detection and
identification and the false alarm rate. In practice, most applications that
operate in the watch list task can be grouped into five operational areas:
a) Extremely low false alarm: In this application, any alarm requires
immediate action. This could lead to public disturbance and
confusion. An alarm and subsequent action may give away the fact
that surveillance is being performed and how, and may minimize the
possibility of catching a future suspect.
b) Extremely high probability of detect and identify: In this
application, we are mostly concerned with detecting someone on the
watch list; false alarms are a secondary concern and will be dealt
with according to pre-defined procedures.
c) Low false alarm and detect/identify: In this application we are
more concerned with lower false alarms and can deal with low
detect/identify.
d) High false alarm and detect/identify: In this application we are
more concerned with higher detect/identify performance and can
deal with a high false alarm rate as well.
e) No threshold: User wants all results with confidence measures on
each for investigation case building.
7. CONCLUSION
A Multimodal Biometrics technique, which combines multiple biometrics in
making a personal identification, can be used to overcome the limitations of
individual biometrics. We developed a multimodal biometrics system which
integrates decisions made by Face recognition, Iris recognition, Fingerprint
verification and palm print verification. Multi-biometric systems alleviate a
few of the problems observed in uni-modal biometric systems. Besides
improving matching performance, they also address the problems of
non-universality and spoofing. With the widespread deployment of biometric
systems in several civilian and government applications, it is only a matter
of time before multimodal biometric systems begin to impact the way in which
identity is established in the 21st century. Multiple biometric technologies
could make a huge positive impact on society if correctly utilized to increase
the robustness of security systems across the world. This would help to cope
with the rising levels of fraud, crime and
terrorism.
REFERENCES
[1] John Daugman, “How iris recognition works” IEEE Transactions on Circuits and
Systems for Video Technology, 14(1):21–30, 2004. Page No. 103-109.
[2] Chang, “New multi-biometric approaches for improved person identification,” PhD
Dissertation, Department of Computer Science and Engineering, University of Notre
Dame, 2004. Page No. 153-159.
[3] C.Hesher, A.Srivastava, G.Erlebacher, “A novel technique for face recognition using
range images” in the Proceedings of Seventh International Symposium on Signal
Processing and Its Application, 2003. Page No. 58-69.
[4] Barral and A. Tria, “Fake fingers in fingerprint recognition: Glycerin supersedes
gelatin”, In Formal to Practical Security. Springer, 2009. Page No. 83-92.
[5] Bergman, “Multi-biometric match-on-card alliance formed” Biometric Technology
Today, vol. 13, no. 5, 2005. Page No. 1-9.
[6] F. YANG, M. Baofeng, "Two Models Multimodal Biometric Fusion Based on
Fingerprint, Palm-print and Hand-Geometry",DOI-1-4244-1120-3/07, IEEE,2007.
[7] Teddy Ko, “Multimodal Biometric Identification for Large User Population Using
Fingerprint, Face and Iris Recognition”, Proceedings of the 34th Applied Imagery and
Pattern Recognition Workshop (AIPR05) ,2005.
[8] A.K.Jain, R.Bolle, “Biometrics-personal identification in networked society” Norwell,
1999, Page No. 23-36.
[9] C. Soutar, D. Roberge, A. Stoianov, R. Gilroy and B.V.K. V. Kumar, “Biometric
Encryption, Enrollment and Verification Procedures”, Proc. SPIE 3386, 24-35, 1998.
CBR Based Performance Analysis of
OLSR Routing Protocol in MANETs
Jogendra Kumar
Department of Computer Science & Engineering
G. B. Pant engineering College Pauri Garhwal Uttarakhand, India
ABSTRACT
A mobile ad-hoc network is an autonomous system with its own rules and regulations;
MANETs configure and manage themselves. In this paper, we analysed and implemented
TC and HELLO messages using the multipoint relay (MPR) mechanism of OLSR. The routing
performance is then checked using the Qualnet 5.0.2 simulator. To simulate the performance of
the OLSR (Optimised Link State Routing) routing protocol, we took different performance
metrics like hello messages sent, hello messages received, TC messages generated, TC messages
relayed and TC messages received, on Constant Bit Rate (CBR) traffic using the random waypoint
model.
Keywords
Ad-hoc Network, MANETs, OLSR, Routing Protocol, Qualnet 5.0.2, Simulator.
1. INTRODUCTION
A MANET consists of mobile nodes, a router with multiple hosts and
wireless communication devices. The wireless communication devices are
transmitters, receivers and smart antennas. These antennas can be of any
kind, and nodes can be fixed or mobile. Nodes are free to move arbitrarily in
any direction, and links may be asymmetric: communication between two nodes
may be good in one direction but not in the reverse direction. A node can be
a mobile phone, laptop, personal digital assistant or personal computer. A
quickly deployable communication infrastructure is therefore greatly desired.
A MANET is a quick remedy for any disaster situation [1][3][7][17]. The
mobile nodes in the wireless network communicate with each other within range
because the network is self-organized, i.e. it configures itself
automatically. The mobile nodes form a network automatically, without a fixed
infrastructure or central management. The topology of the network changes
continuously as mobile nodes join and leave, because a mobile ad-hoc network
is dynamic in nature. MANET [1][2][5][7][9][11][17][18][22] stands for Mobile
Ad hoc Network; it is a non-centralized autonomous wireless system consisting
of free nodes [2].
Figure 1 shows the mobility change due to dynamic nature of the MANET
routing protocol.
Figure 1. The dynamic scenario of network topology with mobility
2. OLSR (OPTIMIZATION LINK STATE ROUTING) IN MANETs
OLSR [8][17][19][23] is a proactive routing protocol that stores and updates
its routing table information permanently. OLSR maintains its routing table
so that a route is available at all times whenever it is needed. OLSR can be
implemented in any ad-hoc network; due to this nature it is called a
proactive routing protocol. In OLSR, not all nodes in the network broadcast
route packets; only Multipoint Relay (MPR) [14][16][18] nodes broadcast
route packets.
The MPR nodes are selected among the neighbours of the source node in the
network. Each node in the network keeps a list of MPR node information and
stores that information in its routing table. The MPR selector set is
obtained from the HELLO packets exchanged between neighbouring nodes within
range. Routes are established before any source node intends to send a
message to a particular destination. Each and every node in the network keeps
a routing table and updates its information periodically. For this reason the
routing overhead of OLSR is lower than that of reactive routing protocols,
and it provides a shortest route to the destination in the network. There is
no need to build new routes on demand, since every node already maintains
them, so routing overhead does not increase much and OLSR reduces the route
discovery delay.
Figure 2 shows a broadcast packet from the centre node A to all its
neighbouring nodes. Distance is counted in hops.
Figure 2. Showing the Multipoint relays (hop count = 2)
Figure 3 shows the nodes in the network sending HELLO messages to their
neighbours. These messages are sent at a predetermined interval in OLSR to
determine the link status [1][2][16].
Figure 3. HELLO Messages in MANET Using OLSR
The HELLO messages contain the entire neighbour information stored in the
routing table. This enables a mobile node to build a table with information
about all of its multi-hop neighbours. Once symmetric links are established,
a node chooses a minimal set of MPR nodes and broadcasts topology control
(TC) [16][18][22] messages with link-status information at a predetermined
TC interval. TC messages are also used to compute the routing table
information and to update it periodically.
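As a rough illustration of MPR selection, the following Python sketch uses a simple greedy heuristic (keep choosing the one-hop neighbour that covers the most uncovered two-hop neighbours); the neighbour sets are hypothetical, and the actual selection rules in the OLSR specification are more detailed.

```python
# Greedy MPR selection sketch: pick one-hop neighbours until every two-hop
# neighbour is covered. The neighbour sets below are hypothetical.

def select_mprs(two_hop_of):
    # two_hop_of maps each one-hop neighbour to the set of two-hop
    # neighbours reachable through it.
    uncovered = set().union(*two_hop_of.values())
    mprs = set()
    while uncovered:
        # Choose the neighbour covering the most still-uncovered two-hop nodes.
        best = max(two_hop_of, key=lambda nb: len(two_hop_of[nb] & uncovered))
        mprs.add(best)
        uncovered -= two_hop_of[best]
    return mprs

neighbours = {"B": {"E", "F"}, "C": {"F", "G"}, "D": {"G"}}
print(select_mprs(neighbours))   # e.g. {'B', 'C'}
```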
3. RELATED WORKS
Vats et al. [13] analysed the performance of the OLSR routing protocol in
MANETs. The performance of OLSR across networks of different sizes showed
that it performed better in all aspects. The performance of OLSR was measured
through Hello Traffic Sent (bit/sec), Total TC Messages Sent (TTMS), Total TC
Messages Forwarded (TTMF), total hello message and TC traffic sent (bit/sec),
routing traffic received (pkt/s), routing traffic sent (pkt/s) and MPR count,
using the OPNET Modeler simulation tool.
Kaur et al. [9] compared MANET routing protocols. OLSR performs best in
terms of load and throughput, GRP performs best in terms of delay and routing
overhead, and TORA is the worst choice for any of the four performance
parameters considered. OLSR can therefore be regarded as better than GRP and
TORA in all traffic volumes, since it has the maximum throughput; the study
used the OPNET Modeler simulation tool. All of the above works analysed
routing protocol performance in terms of TC and hello messages. This paper
checks the performance of the OLSR routing protocol using CBR traffic in
Qualnet, which gives faster performance and takes minimum time for executing
the scenarios as compared to OPNET.
4. SIMULATION PARAMETERS AND PERFORMANCE METRIC
4.1 SIMULATION PARAMETERS
Table 1. Simulation Parameters
Parameter Name Parameter Values
Area 700m*700m
Simulation Time 260sec
Channel-Frequency 2.4 GHz
Path loss-Model Two Ray Model
Shadowing-Model Constant
Number of Nodes 50 nodes
Routing Protocols OLSR
PHY-Model PHY802.11b
Antenna-Model Omni directional
Mobility Model Random-Waypoint Model
Traffic Source CBR
Data Rate 2 Mbps
4.2 PERFORMANCE METRIC
Hello Messages Received: Total number of Hello Messages Received by
the node.
Hello Messages Sent: Total number of Hello Messages Sent by the node.
TC Messages Received: Total number of TC Messages Received by the
node.
TC Messages Generated: Total number of TC Messages Generated by the
node.
TC Messages Relayed: Total number of TC Messages Relayed by the node
4.3 SIMULATION TOOLS
Qualnet 5.0.2 [24] is an extended version of GloMoSim. GloMoSim is a
simulation tool for wireless networks, used to design scenarios and routing
protocols in mobile ad-hoc networks (MANETs) [1][4][5][16], whereas Qualnet
supports both wireless and wired networks. Qualnet is about ten times more
powerful than GloMoSim because it takes less time to execute scenarios,
supports more nodes at the same time and makes it easier to collect
performance results than GloMoSim, NS2, OPNET, etc.
4.4 NODES PLACEMENT AND ANIMATION VIEW OF OLSR
ROUTING PROTOCOLS
Figure 4. Showing Nodes Placement Scenarios of OLSR routing protocol for 50 nodes
In Figure 4, we describe the node placement strategy of the random waypoint
model. We have taken a 700 m * 700 m wireless network area in which all nodes
are placed randomly. The OLSR routing protocol uses CBR traffic applied from
a source node to a destination node at constant speed. In this model, all 50
nodes have constant speed. The overall execution time of this scenario is
260 sec, the data rate is 2 Mbps and the channel frequency is 2.4 GHz. We
have used the omni-directional antenna model for handling signals in both
directions.
4.5 SIMULATION VIEW OF OLSR ROUTING PROTOCOL
Figure 5. Showing the simulation view of OLSR routing protocol for 50 nodes
Figure 5 shows the animation view of the OLSR routing protocol scenario in
the Qualnet simulation tool; performance is measured using the metrics Hello
Messages Received, Hello Messages Sent, TC Messages Received, TC Messages
Generated and TC Messages Relayed.
5. SIMULATION RESULTS AND DISCUSSION OF OLSR
ROUTING PROTOCOLS
OLSR: Hello Messages Sent
Figure 6 shows the Hello Messages Sent: each node broadcasts hello messages
to all of its attached neighbours using the OLSR routing protocol. Only 15
packets were sent in this scenario, at a rate of 4 packets/s.
Figure 6. Showing the performance result of OLSR: Hello Messages sent from the Nodes
OLSR: Hello Messages Received:
Figure 7 shows the hello messages received at the nodes. In OLSR routing,
packets are sent at a constant rate but are received at different rates due
to interference. Nodes 21 and 29 received 100% of the packets, nodes 8 and 49
received a minimum of about 40%, and all other nodes received approximately
50%. Overall, not all of the packets sent in the scenario were received.
Figure 7. Showing the performance result of OLSR: Hello Messages Received at the Nodes
Combined performance result of OLSR:
Figure 8 shows the combined performance of the hello messages sent and
the hello messages received at the various nodes.
Figure 8. Showing the combined performance result of OLSR: Hello Messages Sent and Hello
Messages Received at the Nodes
OLSR: TC Messages Generated: Figure 9 shows the performance of TC messages
generated by the MPRs. An MPR holds all the information related to its list
of attached nodes, such as sender address, destination address, secret code
and MAC address, and this information is updated periodically. Figure 9 shows
that nodes 7, 41 and 49 generated no TC messages because these nodes were not
selected as MPRs. In the OLSR routing protocol, almost 95% of TC messages
were generated and less than 5% were not generated by the MPRs.
Figure 9. Showing the performance result of OLSR: TC Messages Generated at the Nodes
OLSR: TC Messages Received:
Figure 10 shows the performance of TC messages received by the MPRs. Nodes 7,
41 and 49 generated no TC messages but still received packets, because these
nodes were attached to the centre and therefore hold some information about
their neighbours. Nodes 21 to 29 received 100% of TC messages, nodes 12, 20,
27, 35 and 43 received 85%, and the other nodes received less than 40%.
Figure 10. Showing the performance result of OLSR: TC Messages Received at the Nodes
TC Messages Relayed:
Figure 11 shows the performance of TC messages relayed by the MPRs. Nodes 7,
41 and 49 relayed no TC messages. Nodes 14, 16 and 38 relayed 100% of TC
messages, nodes 6, 10, 13, 21, 22, 23, 27, 35 and 43 relayed almost 65%, and
the other nodes relayed less than 20% of TC messages.
Figure 11. Showing the performance result of OLSR: TC Messages Relayed Vs Nodes
Combined performance result of OLSR: TC Messages Generated, TC
Messages Received and TC Messages Relayed:
Figure 12 shows the combined performance of OLSR in terms of TC Messages
Generated, TC Messages Received and TC Messages Relayed at the nodes.
Figure 12. Showing the combined performance result of OLSR: TC Messages Generated, TC
Messages Received and TC Messages Relayed at the Nodes
6. CONCLUSIONS AND FUTURE WORKS
This paper discusses mobile ad-hoc networks (MANETs) and evaluates the
performance of the Optimized Link State Routing (OLSR) protocol under
constant bit rate traffic. The performance is checked using the random
waypoint model, in which nodes are placed randomly. In the OLSR routing
protocol, hello messages are created to sense the neighboring nodes and to
build the list of MPR selector nodes, while TC messages carry the routing
information used for route calculation; both are maintained periodically,
and broadcasting is minimized by the MPRs. In OLSR, about 95% of the packets
sent are received and less than 2% of packets are wasted, so the protocol is
well suited to large networks. The MPR-selected nodes give about 90% TC
Messages Generated, 85% TC Messages Received and 80% TC Messages Relayed, so
the control messaging in OLSR is effective roughly 80% of the time. Because
of its proactive routing nature, OLSR therefore gives better performance in
both large and small networks compared with other proactive routing protocols
in MANETs. As future work, other routing protocols, different node placement
strategies, energy consumption, fixed bit rate and variable bit rate will be
analysed by applying different loads and by modifying existing routing
protocols.
REFERENCES
[1] A. B. Malany, V. R. S. Dhulipala and R. M. Chandrasekaran, "Throughput and Delay
Comparison of MANET Routing Protocols", Intl. Journal Open Problems Comp.
Math., Vol. 2, No. 3, Sep. 2009.
[2] Alexander Klein, "Performance Comparison and Evaluation of AODV, OLSR, and
SBR in Mobile Ad-Hoc Networks", IEEE Personal Communications, pp. 571-575,
2008.
[3] C. Perkins, E. M. Royer, S. R. Das and M. K. Marina, "Performance Comparison of
Two On-demand Routing Protocols for Ad Hoc Networks", IEEE Personal
Communications, pp. 16-28, Feb. 2001.
[4] C. Siva Ram Murthy and B. S. Manoj, "Ad Hoc Wireless Networks: Architectures and
Protocols", ISBN 978-81-317-0688-6, 2011.
[5] Dilpreet Kaur and Naresh Kumar, "Comparative Analysis of AODV, OLSR, TORA,
DSR and DSDV Routing Protocols in Mobile Ad-Hoc Networks", I. J. Computer
Network and Information Security, No. 3, pp. 39-46, 2013.
[6] Elizabeth Royer and C. K. Toh, "A Review of Current Routing Protocols for Ad Hoc
Mobile Wireless Networks", IEEE Personal Communications, 1999.
[7] G. Karthiga, J. Benitha Christinal and Jeban Chandir Moses, "Performance Analysis of
Various Ad-Hoc Routing Protocols in Multicast Environment", IJCST, Vol. 2, Issue 1,
pp. 161-165, March 2011.
[8] Z. J. Haas and M. R. Pearlman, "The Performance of Query Control Schemes for the Zone
Routing Protocol", ACM/IEEE Transactions on Networking, Vol. 9, No. 4, pp. 427-438,
August 2001.
[9] Harmanpreet Kaur and Jaswinder Singh, "Performance Comparison of OLSR, GRP and
TORA using OPNET", International Journal of Advanced Research in Computer
Science and Software Engineering, Vol. 2, Issue 10, October 2012.
[10] Hong Jiang, "Performance Comparison of Three Routing Protocols for Ad Hoc
Networks", Communications of the ACM, Vol. 37, 1994.
[11] X. Hong, K. Xu and M. Gerla, "Scalable Routing Protocols for Mobile Ad-Hoc Networks",
IEEE Network Magazine, Vol. 16, Issue 4, pp. 11-21.
[12] Krishna Kumar Chandel, Sanjeev Sharma and Santosh Sahu, "Performance Analysis of
Routing Protocols Based on IPV4 and IPV6 for MANET", International Journal of
Computer Technology and Electronics Engineering (IJCTEE), Vol. 2, Issue 3, June
2012.
[13] Kuldeep Vats, Monika Sachdeva, Krishan Saluja and Amit Rathee, "Simulation and
Performance Analysis of OLSR Routing Protocol Using OPNET", International
Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2,
Issue 2, February 2012.
[14] M. Joa-Ng and I. T. Lu, "A Peer-to-Peer Zone-Based Two-Level Link State Routing for
Mobile Ad Hoc Networks", IEEE Journal on Selected Areas in Communications,
Special Issue on Ad-Hoc Networks, pp. 1415-1425, August 1999.
[15] M. K. J. Kumar and R. S. Rajesh, "Performance Analysis of MANET Routing Protocols in
Different Mobility Models", IJCSNS International Journal of Computer Science and
Network Security, Vol. 9, No. 2, February 2009.
[16] M. L. Sharma, Noor Fatima Rizvi, Nipun Sharma, Anu Malhan and Swati Sharma,
"Performance Evaluation of MANET Routing Protocols under CBR and FTP Traffic
Classes", Int. J. Comp. Tech. Appl., Vol. 2, No. 3, pp. 393-400.
[17] M. Sreerama Murty and M. Venkat Das, "Performance Evaluation of MANET Routing
Protocols using Reference Point Group Mobility and Random Way Point Models",
International Journal of Ad hoc, Sensor & Ubiquitous Computing (IJASUC), Vol. 2,
No. 1, pp. 33-43, March 2011.
[18] M. R. Pearlman and Z. J. Haas, "Determining the Optimal Configuration for the Zone
Routing Protocol", IEEE Journal on Selected Areas in Communications, Vol. 17, pp.
1395-1414, 1999.
[19] QualNet documentation, "QualNet 5.0 Model Library: Advanced Wireless",
http://www.scalablenetworks.com/products/Qualnet/download.php#docs.
[20] T. Clausen and P. Jacquet, "RFC 3626: Optimized Link State Routing Protocol (OLSR)",
October 2003.
[21] S. A. Ade and P. A. Tijare, "Performance Comparison of AODV, DSDV, OLSR and
DSR Routing Protocols in Mobile Ad Hoc Networks", International Journal of
Information Technology and Knowledge Management, Vol. 2, No. 2, pp. 545-548,
July-December 2010.
[22] Syed Basha Shaik and S. P. Setty, "Performance Comparison of AODV, DSR
and ANODR for Grid Placement Model", International Journal of Computer
Applications, Vol. 11, No. 12, pp. 6-9, 2010.
[23] Thakore Mitesh, "Performance Analysis of AODV and OLSR Routing Protocol with
Different Topologies", International Journal of Science and Research (IJSR), Vol. 2,
Issue 1, January 2013.
[24] The QualNet 5.0.2 simulator tools. [Online]. Available at www.scalable-networks.com.

More Related Content

PDF
A study on dynamic load balancing in grid environment
PDF
J41046368
PDF
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO...
PDF
A vm scheduling algorithm for reducing power consumption of a virtual machine...
PDF
A vm scheduling algorithm for reducing power consumption of a virtual machine...
PDF
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...
PDF
An Algorithm for Optimized Cost in a Distributed Computing System
PDF
On the-joint-optimization-of-performance-and-power-consumption-in-data-centers
A study on dynamic load balancing in grid environment
J41046368
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO...
A vm scheduling algorithm for reducing power consumption of a virtual machine...
A vm scheduling algorithm for reducing power consumption of a virtual machine...
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...
An Algorithm for Optimized Cost in a Distributed Computing System
On the-joint-optimization-of-performance-and-power-consumption-in-data-centers

What's hot (19)

PDF
International Journal of Computational Engineering Research(IJCER)
PDF
N03430990106
PDF
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING
PDF
Bounded ant colony algorithm for task Allocation on a network of homogeneous ...
PDF
A novel load balancing model for overloaded cloud
PDF
Performance comparison of row per slave and rows set
PDF
Performance comparison of row per slave and rows set per slave method in pvm ...
PDF
A weighted-sum-technique-for-the-joint-optimization-of-performance-and-power-...
PDF
M017419499
PDF
PDF
TASK-DECOMPOSITION BASED ANOMALY DETECTION OF MASSIVE AND HIGH-VOLATILITY SES...
PDF
Scheduling Divisible Jobs to Optimize the Computation and Energy Costs
PDF
Task scheduling methodologies for high speed computing systems
PDF
An enhanced adaptive scoring job scheduling algorithm with replication strate...
PDF
IRJET- Latin Square Computation of Order-3 using Open CL
PDF
STUDY THE EFFECT OF PARAMETERS TO LOAD BALANCING IN CLOUD COMPUTING
PDF
(Paper) Task scheduling algorithm for multicore processor system for minimiz...
PDF
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
PDF
A study of load distribution algorithms in distributed scheduling
International Journal of Computational Engineering Research(IJCER)
N03430990106
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING
Bounded ant colony algorithm for task Allocation on a network of homogeneous ...
A novel load balancing model for overloaded cloud
Performance comparison of row per slave and rows set
Performance comparison of row per slave and rows set per slave method in pvm ...
A weighted-sum-technique-for-the-joint-optimization-of-performance-and-power-...
M017419499
TASK-DECOMPOSITION BASED ANOMALY DETECTION OF MASSIVE AND HIGH-VOLATILITY SES...
Scheduling Divisible Jobs to Optimize the Computation and Energy Costs
Task scheduling methodologies for high speed computing systems
An enhanced adaptive scoring job scheduling algorithm with replication strate...
IRJET- Latin Square Computation of Order-3 using Open CL
STUDY THE EFFECT OF PARAMETERS TO LOAD BALANCING IN CLOUD COMPUTING
(Paper) Task scheduling algorithm for multicore processor system for minimiz...
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
A study of load distribution algorithms in distributed scheduling
Ad

Similar to Vol 3 No 1 - July 2013 (20)

PDF
A Survey on Task Scheduling and Load Balanced Algorithms in Cloud Computing
PDF
A Survey of Job Scheduling Algorithms Whit Hierarchical Structure to Load Ba...
PDF
Grid computing for load balancing strategies
PDF
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...
PDF
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...
PDF
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION...
PDF
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...
PDF
Deadline-Aware Task Scheduling Strategy for Reducing Network Contention in No...
PDF
International Journal of Computer Networks & Communications (IJCNC)
PDF
Deadline-Aware Task Scheduling Strategy for Reducing Network Contention in No...
PDF
Deadline-Aware Task Scheduling Strategy for Reducing Network Contention in No...
PDF
Ie3514301434
PDF
D04573033
PDF
Use of genetic algorithm for
PDF
RSDC (Reliable Scheduling Distributed in Cloud Computing)
PDF
A Prolific Scheme for Load Balancing Relying on Task Completion Time
PDF
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO...
PDF
Vol 16 No 2 - July-December 2016
PDF
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...
A Survey on Task Scheduling and Load Balanced Algorithms in Cloud Computing
A Survey of Job Scheduling Algorithms Whit Hierarchical Structure to Load Ba...
Grid computing for load balancing strategies
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION...
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...
Deadline-Aware Task Scheduling Strategy for Reducing Network Contention in No...
International Journal of Computer Networks & Communications (IJCNC)
Deadline-Aware Task Scheduling Strategy for Reducing Network Contention in No...
Deadline-Aware Task Scheduling Strategy for Reducing Network Contention in No...
Ie3514301434
D04573033
Use of genetic algorithm for
RSDC (Reliable Scheduling Distributed in Cloud Computing)
A Prolific Scheme for Load Balancing Relying on Task Completion Time
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO...
Vol 16 No 2 - July-December 2016
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...
Ad

More from ijcsbi (20)

PDF
Vol 17 No 2 - July-December 2017
PDF
Vol 17 No 1 - January June 2017
PDF
Vol 16 No 1 - January-June 2016
PDF
Vol 15 No 6 - November 2015
PDF
Vol 15 No 5 - September 2015
PDF
Vol 15 No 4 - July 2015
PDF
Vol 15 No 3 - May 2015
PDF
Vol 15 No 2 - March 2015
PDF
Vol 15 No 1 - January 2015
PDF
Vol 14 No 3 - November 2014
PDF
Vol 14 No 2 - September 2014
PDF
Vol 14 No 1 - July 2014
PDF
Vol 13 No 1 - May 2014
PDF
Vol 12 No 1 - April 2014
PDF
Vol 11 No 1 - March 2014
PDF
Vol 10 No 1 - February 2014
PDF
Vol 9 No 1 - January 2014
PDF
Vol 8 No 1 - December 2013
PDF
Vol 7 No 1 - November 2013
PDF
Vol 6 No 1 - October 2013
Vol 17 No 2 - July-December 2017
Vol 17 No 1 - January June 2017
Vol 16 No 1 - January-June 2016
Vol 15 No 6 - November 2015
Vol 15 No 5 - September 2015
Vol 15 No 4 - July 2015
Vol 15 No 3 - May 2015
Vol 15 No 2 - March 2015
Vol 15 No 1 - January 2015
Vol 14 No 3 - November 2014
Vol 14 No 2 - September 2014
Vol 14 No 1 - July 2014
Vol 13 No 1 - May 2014
Vol 12 No 1 - April 2014
Vol 11 No 1 - March 2014
Vol 10 No 1 - February 2014
Vol 9 No 1 - January 2014
Vol 8 No 1 - December 2013
Vol 7 No 1 - November 2013
Vol 6 No 1 - October 2013

Recently uploaded (20)

PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PPTX
Introduction to Building Materials
PDF
Trump Administration's workforce development strategy
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PDF
Indian roads congress 037 - 2012 Flexible pavement
PPTX
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
PPTX
UV-Visible spectroscopy..pptx UV-Visible Spectroscopy – Electronic Transition...
PDF
احياء السادس العلمي - الفصل الثالث (التكاثر) منهج متميزين/كلية بغداد/موهوبين
PDF
IGGE1 Understanding the Self1234567891011
PDF
Chinmaya Tiranga quiz Grand Finale.pdf
PDF
LNK 2025 (2).pdf MWEHEHEHEHEHEHEHEHEHEHE
PDF
RMMM.pdf make it easy to upload and study
PPTX
Chinmaya Tiranga Azadi Quiz (Class 7-8 )
PDF
Complications of Minimal Access Surgery at WLH
PDF
Practical Manual AGRO-233 Principles and Practices of Natural Farming
PDF
Paper A Mock Exam 9_ Attempt review.pdf.
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PPTX
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
PDF
What if we spent less time fighting change, and more time building what’s rig...
PPTX
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
Introduction to Building Materials
Trump Administration's workforce development strategy
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
Indian roads congress 037 - 2012 Flexible pavement
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
UV-Visible spectroscopy..pptx UV-Visible Spectroscopy – Electronic Transition...
احياء السادس العلمي - الفصل الثالث (التكاثر) منهج متميزين/كلية بغداد/موهوبين
IGGE1 Understanding the Self1234567891011
Chinmaya Tiranga quiz Grand Finale.pdf
LNK 2025 (2).pdf MWEHEHEHEHEHEHEHEHEHEHE
RMMM.pdf make it easy to upload and study
Chinmaya Tiranga Azadi Quiz (Class 7-8 )
Complications of Minimal Access Surgery at WLH
Practical Manual AGRO-233 Principles and Practices of Natural Farming
Paper A Mock Exam 9_ Attempt review.pdf.
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
What if we spent less time fighting change, and more time building what’s rig...
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...

Vol 3 No 1 - July 2013

  • 1. ISSN: 1694-2507 (Print) ISSN: 1694-2108 (Online) International Journal of Computer Science and Business Informatics (IJCSBI.ORG) VOL 3, NO 1 JULY 2013
  • 2. Table of Contents VOL 3, NO 1 JULY 2013 Comparative Analysis of Job Scheduling for Grid Environment ............................................................1 Neeraj Pandey, Ashish Arya and Nitin Kumar Agrawal Hackers Portfolio and its Impact on Society ........................................................................................1 Dr. Adnan Omar and Terrance Sanchez, M.S. Ontology Based Multi-Viewed Approach for Requirements Engineering..............................................1 R. Subha and S. Palaniswami Modified Colonial Competitive Algorithm: An Approach for Graph Coloring Problem ..........................1 Hojjat Emami and Parvaneh Hasanzadeh Security and Privacy in E-Passport Scheme using Authentication Protocols and Multiple Biometrics Technology........................................................................................................................................1 V. K. Narendira Kumar and B. Srinivasan Comparative Study of WLAN, WPAN, WiMAX Technologies ................................................................1 Prof. Mangesh M. Ghonge and Prof. Suraj G. Gupta A New Method for Web Development using Search Engine Optimization............................................1 Chutisant Kerdvibulvech and Kittidech Impaiboon A New Design to Improve the Security Aspects of RSA Cryptosystem ..................................................1 Sushma Pradhan and Birendra Kumar Sharma A Hybrid Model of Multimodal Approach for Multiple Biometrics Recognition ...................................1 P. Prabhusundhar, V.K. Narendira Kumar and B. Srinivasan IJCSBI.ORG
  • 3. CBR Based Performance Analysis of OLSR Routing Protocol in MANETs ...............................................1 Jogendra Kumar
  • 4. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 Comparative Analysis of Job Scheduling for Grid Environment Neeraj Pandey Department of Computer Science & Engineering G. B. Pant Engineering College Ghurdauri Uttarakhand, India. Ashish Arya Department of Computer Science & Engineering G. B. Pant Engineering College Ghurdauri Uttarakhand, India Nitin Kumar Agrawal Department of Computer Science & Engineering G. B. Pant Engineering College Ghurdauri Uttarakhand, India ABSTRACT Grid computing is a continuous growing technology that alleviates the executions of large- scale resource intensive applications on geographically distributed computing resources. For a computational grid environment, there are number of scheduling policies available to address the scheduling and load balancing problem. Scheduling techniques applied in grid systems are primarily based on the concept of queuing systems, and deals with the allocation of job to computing node. The scheduler, that schedules the incoming job can be based on global vs. local i.e. what information will be used to make a load balancing decision, centralized vs. de-centralized i.e. where load balancing decisions are made, and static vs. dynamic i.e. when the distribution of load is made. The primary objective of all load balancing algorithm is minimization of the makespan value, maximum load balanced and to gain more desirable performance. In this paper, we present the various load balancing strategies of job scheduling for grid computing environment. We also analyze the efficiency and limitations of the various approaches Keywords Computing, Load Balancing, Scheduling, Genetic Algorithm, Fuzzy logic, Job Replication. 1. INTRODUCTION In the last several years grid computing has emerged as an weighty field, distinguished from conventional distributed computing by its focus on large- scale resource sharing, innovative applications, and in some cases, high- performance orientation [1]. These enable sharing, selection and aggregation of suitable computational and data resources for solving large-scale data intensive problems in science, engineering, and commerce [5]. A grid computing environment comprises combination of various homogeneous and heterogeneous resources such as computing nodes, and workstations
  • 5. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 which are virtually aggregated to serve as a unified computing resource. Grid middleware provide users with seamless computing ability and uniform access to resources in the heterogeneous grid environment. In order to provide user with a seamless computing environment, the Grid middleware system need to solve several challenges originating from the inherent features of the grid [2]. In distributed systems, every node has different processing speed and system resources, so in order to enhance the utilization of each node and shorten the consumption of time, “Load Balancing” will play a critical role [15]. The performance of load balancing algorithms is strongly related to the number of computing node. Since each computing node has its own inimitable computing capabilities and the pattern of the job arrival to the computing node is imbalanced, thus the grid system may be overloaded. The main objective of load balancing is improved the performance of grid system through its distribution of load among the computing nodes, and minimize the execution time of job. In general, load-balancing algorithms can be categorized as centralized or decentralized in terms of where the load balancing decisions are made. A centralized load balancing approach can function either based on averages scheme or instantaneous scheme according to the type of information on which the load balancing decisions are made [4]. The rest of paper is organized into 6 sections. Section 2 presents the overview of the system model including the grid and mathematical model with load balancing architecture Section. The load balancing approaches for grid system are presented in Section 3. The analysis and comparisons of some grid load balancing algorithms are described in section 4. Section 5 presents some challenges and key issues related to load balancing And finally, the conclusion is given in section 6. 2. SYSTEM MODEL 2.1 Grid Model The grid under study consist a central resource management unit (RMU), to which every computing node (CN) connects and grid clients send their job to RMU for further processing. The RMU is responsible for scheduling jobs among CN. The role of dispatcher is the job management, including maintenance of the load balancing, monitoring node status, node selection, execution, and assignment of jobs for each node. An agent provides a simple and uniform abstraction of the functions in the grid management. The CNs in the grid can be either homogeneous or heterogeneous and a queue is associated with every computing node. The arrived jobs will be placed in a job queue in the RMU from which they are assigned to CNs. The grid agent monitors the waiting job. Upon arrival, jobs must be assigned
  • 6. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3 either to exactly one CN for processing immediately by the instantaneous scheduling or wait to be scheduled by the averages-based scheme [4]. In a Grid system, the composition of nodes is dynamic, every node is likely to enter a busy state at any time and thus lower its performance, so in selecting nodes, all factors should be considered. At the global grid level, each agent is a representative of a grid resource and acts as a service provider of high performance computing power [14]. 2.2 Mathematical Model A computational grid system model [21] consisting p set of the CNs, is shown in figure 1. It consists of Ni computing nodes (CNs) such that: i 1 2 3 n 1 n N { (N ,N ,N , ................ ,N ,N ), | N| n }   (1) The nodes are connected to each other via a communication network. The nodes of grid system possibly could be either an individual machine or a cluster. The nature of the node is combination of both homogeneous and heterogeneous, and modeled as M/M/1 queuing system. The inter arrival time and service time of the system follows exponentially distributed. The notations and assumptions used are as follows:  ∅i : External job arrival rate at node i.  μi : Mean service rate of node i.  𝜆 : Total traffic through the network. A job arriving at node i may be either processed at node i or transferred to another node j through the communication network for computation. The performance of load balancing policies are closely related to the number of node involves in a computational grid system. Figure 1. Grid System Model  Φ : Total external job arrival rate of the system and given as:
  • 7. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4 1 Φ    n i i   (2)  ri : Mean service time of a job at node i (i.e. the average time to service (process) a job at node i).  βi : Mean job processing rate (load) at node i (i.e. the average number of jobs processed at node i per unit interval of time). This is the load for node i assigned by the job allocation scheme.  t : Mean communication time for sending or/and receiving a job from one node to another node.  ρ : Utilization of the communication network and given as: *t  (3) 3. GRID LOAD BALANCING APPROACH In this section, four algorithms are considered for grid load balancing. A grid is a huge collection of multiple grid resources (local or global) that are distributed geographically in a wide area. Load balancing is an important system function destined to distribute the workload among available computing nodes to improve throughput and/or execution times of parallel computer programs either uniform or non-uniformly [19]. There are various load balancing algorithms are projected in past several years. Various approaches such as fuzzy logic, genetic algorithm, and job replication are used to implement load balancing algorithms. 3.1 FCFS Approach The FCFS algorithm [4,14] is proved to be efficient under some conditions. Consider a grid environment G with n CNs as given in equation 1, where each CNs Ni has its own capability to process the job. Let m be the total number of job J, which is considered to be run on G. 1 2 3 1 J {J , J ,J , ................., J , J }i m m  (4) The arrival time of each job Jj is tj and execution time is txj. Each job has two scheduled attributes: a start time and an end time denoted by ts, te, respectively. Upon arrival, jobs are allocated to a certain CN Nx ∈ N by the central resource management unit using first-come-first-served algorithm. The function of the agent is to find the earliest possible time for each job to complete, according to the sequence of the job arrivals. A job probably allocated to any of the CN. So, the function of FCFS to consider all these possibilities and identify which CN will finish the job earliest. Therefore,
  • 8. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 5 ( )j jtx min tx (5) The completion time of jobs is always equal to the earliest possible start time plus the execution time, which is described as: j j jtc ts tx  (6) The earliest possible start time for Job Jj on a CN is the latest free time of the chosen CN if there are still jobs running on it. if there is no job running on the chosen CN ahead of job Jj's arrival. Jj can be executed on this CN immediately.   { , , , ( ) ( ), }j j p ts max t max te p P p P j p j    (7) 3.2 Job Replication Approach Menno Dobber et al. [11] analyze and compare the effectiveness of the dynamic load balancing and job replication approach. The two main techniques that are most suitable with the dynamic nature of the grid are Dynamic Load Balancing (DLB) and job replication (JR) [11]. Consider a grid system P with n computing node; a set J of jobs Ji (as given in equation 1, and 8) is considered to be run on P. As the name implies a JR scheme creates several replica of an individual job and schedule them to run into different nodes. The node that finish the job first, send a message to other node involve in a grid system to finish the execution of current job and start the execution of next job available in the queue. Figure 2. Job Replication (2JR) Scheme A m-JR scheme creates m-1 precise copies of each job and these jobs are considered to be run on P. The same data set and the parameters are provided to the two copies of a job, to perform exactly the same computation so the calculations are completely the same. The JR approach consists of I iteration and one iteration takes total R-steps. N copies of all jobs have been spread out to P. As soon as the computing node finished one of the copies of the job, it sends a message to other computing nodes to kill
  • 9. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6 the current job execution and start the processing of the next job. A 2-JR scheme is show in fig. 2, consisting 4 CN and 4 jobs. Firstly 2 copies of each job each job is created after then each job with its copy are distributed to n =2 CN. In figure 3, one copy (A2) of job A1 is created and then distributed to CN1 and CN2. CN1 finished job A1 first, so it send a finalize message “fin” to CN2. Sending a message between CN take some network delay, therefore scheduling of next job by a CN can takes some time. Duration of a specific job is defined as: Job-Time = min {all its job time} + possible send time (8) 3.3 Genetic algorithm Based Approach Genetic algorithms (GA) [4, 7, 8, 22] are increasingly popular for solving optimization, search and machine learning problems. It is basically a well- known and robust search heuristics. It search optimal solution from entire search space In grid environment scheduling is a type of be NP complete problem, i.e. there is no known polynomial time algorithm to optimally solve this problem. The main objective to use GA for load balancing is to achieve the minimum of makespan (the latest completion time among all the jobs processed in CN), maximum node utilization and a well-balanced load across all the CNs. Genetic algorithms are well suited to solving scheduling problems, because unlike heuristic methods GA operate on a population of solutions rather than a single solution. A combination of intelligent agents and multi-agent approaches is applied to both local grid resource scheduling and global grid load balancing. GA is basically used to generate solutions related to optimization problems using various techniques such as selection, mutation, and crossover. Let P be a set, then the cost function CF, is given as: : P IRCF  (9) GA start with an initial set of random solutions called population. Each individual in a population is called a chromosome, represents a solution to the problem. The evaluation of chromosomes is done through generations. During each generation, the chromosomes are evaluated, using some fitness function. For creation of the next generation, new chromosomes, called offspring, are formed. The offspring is created by either using a crossover operator or using a mutation operator. The new generation is formed by selecting the individual using fitness values. After several generations, best chromosome is chooses, which represents the optimum or suboptimal solution to the problem [4]. The GA concentrates on an overall performance over a list of jobs and aims at a more desirable load balance across all the nodes in a computational grid.
  • 10. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 7 3.4 Fuzzy Based Approach This section introduces the fuzzy method used in the fuzzy load balancing algorithm. Fuzzy logic [9, 10, 12, 13, 23] deals with reasoning that is approximate rather than fixed and precise. It is a superset of conventional Boolean logic that has been extended to handle the concept of partial truth- values between “completely-truth”, and “completely-false”. In a more extensive sense, fuzzy logic is associated with the concept of fuzzy sets. Terms in the fuzzy set are given linguistic variables. Using fuzzy logic, one can specify the degrees of overload or otherwise with linguistic labels such as lightly loaded, normal, overloaded, heavily overloaded etc.. The fig 3(a) Show the use of fuzzy logic to represent degree of truth. The fuzzy expert system makes scheduling decisions based on a fuzzy logic. As shown in fig 3(b), if processor load is regarded as a linguistic variable, it may have the following terms as its values: light, moderate, and heavy. If „light‟ is interpreted as a load below about one job, „moderate‟ as a load between about 3 and 5 jobs and „heavy„ as a load above about 7 jobs, these terms may be described as fuzzy sets, where p represents the grade of membership [9]. A given fuzzy rule Ri consists of two differentiated parts, namely, antecedent and consequent parts, related to fuzzy concepts [12]. Rules activation conditions are reflected in the antecedent part of the rule. A rule within this is represented by the following expression: 1 1 R if ω  is A  and / or . . .ω  is A then y is Bi n m mn n  (14) Where ωm represents a system feature, y depicts the output variable and Amn and Bn correspond to the fuzzy sets associated to feature m and output, respectively. Processor Load Light Moderate Heavy Load Status Load OverloadNot Overload 1 0 1.0 1 3 5 7 Figure 3. (a) Degree of truth representation (b) A linguistic variable, processor load 4. COMPARATIVE ANALYSIS This section This section examine various characteristic of load balancing policies. Genetic algorithms are applicable to a wide variety of application [8]. It works better when we have to schedule a large number of jobs. The sliding windows technique [4,7] is generally used to trigger the GA. Sliding window consist a series of job for node assignment to schedule. The main
  • 11. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 8 focused is about the size of sliding window and when these jobs have been assigned to appropriate node, the update of window is take place. Table 1. Comparative analysis of load balancing policies Approach Factors to consideration Advantages Genetic Algorithm 1. Windows size and their update. 2. String selection for search node in search space. 3. Fitness function for measure the performance of string. 4. Population Size. 5. Processing overhead. 1. Minimum execution time. 2. Minimize Communication cost. 3. Maximum utilization of node. 4. Maximum throughput value. 5. Minimize makespan value. Fuzzy Logic 1. Interference Engine. 2. Decision Making 3. Load Assignment. 4. State Update Decision. 1. Better Performance and throughput. 2. Response time is significantly better. Job Replication 1. No. of copies of Job. 2. Communication between Nodes. 3. Processing overhead. 1. Consistently perform better when measure statistic is less than threshold value. FCFS 1. Instantaneous decision. 2. Sequence of Job arrival. 1. Reduce the system response time. 2. Shorter Makespan. Table 1 shows the comparative analysis of some load balancing policies. The measurement of GA is based on the quality of solution it produced after several generations. The measurements of input variables of a fuzzy controller must be properly combined with relevant fuzzy information rules to make inferences regarding the system Load State. The job replication scheme consist multiple copies of each job so it gives extra overhead to the computing node. To make an instantaneous decision, the FCFS approach is preferred. The primary focus of all the load balancing approaches is spread the workload to each node in such a manner that, the makespan is minimized and a well-balanced load among all the nodes in grid system therefore the current workload in the system must be considered to schedule the job to appropriate node.
  • 12. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 9 5. CHALLENGES AND KEY ISSUE 5.1 Challenges in Load Balancing There are various strategies have been developed for solving load balancing problem but it is not yet solved completely. Data partitioning and load balancing are weighty components of parallel computations. For static and dynamic load balancing various strategies have been developed, such as recursive bisection (RB) methods, space-filling curve (SFC) partitioning and graph partitioning [20]. A computation grid system consists of various components such as computing node and workstations, etc. There exist a heterogeneity between the various factors of a node such hardware architecture, physical memory, CPU speed and node capacity which affect the processing result. The dynamic behavior, and node failure can decrease the performance of grid, while the selection of resources (or node) for jobs could also be a factor. 5.2 Key Issues in Load Balancing Load balancing is very crucial issue for the efficient operation of computational grid environments for sharing of heterogeneous resources, hence affect quality of service and scheduling. In dynamic load balancing, load sharing and task migration are some of the widely researched issues [7]. In a networked environment, interoperability (common protocols) is the central issue to be addressed. How to select efficient nodes is one of the issues of further investigation. As Grid is a distributed system utilizing idle nodes scattered in every region, the most critical issue pertaining to distributed systems is how to integrate and apply every computer resource (node) into a distributed system, so as to achieve the goals of enhancing performance, resource sharing, extensibility, and increase availability [15]. In order to make optimal balancing and work distribution decisions in Grid environment, a load balancer needs to take some or all of the following information into consideration:  The capacity on each node (CPU, memory, and disk space, etc.)  Current workload of each node  Required capacity for each task  Network connectivity and delay  The assignment of processes to processors  The use of multiprogramming on individual processors  The actual dispatching of a process. The choice of parameter and state information should also be considered. For various software engineering related issues, there are some software toolkit exists to provide effective solution. The distribution of load among the CN in an optimal way is not easy task, as it requires complete analysis of available resources and job. A load balancing policy can either be static
  • 13. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 or dynamic. The major concern in static load balancing policy is to determine the execution time of the job, communication delays, and the resources used by computing node. Since an accurate estimation is not possible earlier in time, so emphasis can be done on to estimation of such quantities closer to accurate values. 6. CONCLUSIONS In this paper, we discussed few contemporary load balancing strategies based on various approaches for computational grid environment. With the rapid development of technology, grid computing have increasingly becomes an attractive computing platform for a variety of applications. In a computational grid environment, load balancing is the process of improving the performance of system through re-distribution of load among the computing nodes. When the newly created jobs arbitrary arrive into the system, the node can become heavily loaded while other become either ideal or lightly loaded, therefore the job assignment and load sharing must be done carefully. The problem of load balancing is closely related to scheduling and allocation of job to computing node. For efficient utilization of grid resource and maximum node utilization, special scheduling policy is needed. Load balancing methods will vary greatly between different grid environment depending on the needs and the availability of computing node to perform the task. There are various factors which affects the performance of grid application such as load balancing, resource sharing, and resource heterogeneity therefore it must be considered for making decision. REFERENCES [1] I. Foster, C. Kesselman, and S. Tuecke, The anatomy of the grid: Enabling scalable virtual organizations, The International Journal of High Performance Computing Applications, 15 (3) (2001) 200-222. [2] Rajkumar Buyya, and Srikumar Venugopal, A Gentle Introduction to Grid Computing and Technologies, Computer Socity of India, CSI Communication, July (2005). [3] C. Gary Rommel, The Probability of Load Balancing Success in a Homogeneous Network, IEEE transactions on Software Engineering, 17(9), Sept. (1991) 922-933. [4] Yajun Li, Yuhang Yang, Maode Ma, and Liang Zhoy, A hybrid load balancing strategy of sequential tasks for grid computing environments, Future Generation Computer Systems, 25 (2009) 819-828. [5] Rajkumar Buyya and Manzur Murshed, GridSim: A Toolkit for the Modeling and Simulation of Distributed Resource Management and Scheduling for Grid Computing, The Journal of Concurrency and Computation: Practice and Experience (CCPE), Volume 14, Issue 13-15, Wiley Press, Nov.-Dec., (2002). [6] Vandy Berten, Joel Goossens, and Emmanuel Jeannot, On the Distribution of Sequential Jobs in Random Brokering for Heterogeneous Computational Grids, IEEE Transections on Parallel and Distributed System 17 (2) (2006) 1-12.
  • 14. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 [7] Albert Y. Zomaya, Yee-Hwei Teh, Observations on Using Genetic Algorithms for Dynamic Load-Balancing, IEEE Transections on Parallel and Distributed System, 12(9), Sept. (2001) 899-911. [8] Carlos Alberto Gonzalez Pico, Roger L. Wainwright, Dynamic Scheduling of Computer Tasks Using Genetic Algorithms, Proc. of the First IEEE Conference on Evolutionary Computation - IEEE World Congress on Computational Intelligence, Orlando, Florida June, (1994) 829-833. [9] Chulhye Park, Jon G. Kuhl, A Fuzzy-Based Distributed Load Balancing Algorithm for Large Distributed Systems, IEEE (1995) 266-273. [10]Kaveh Abani, Kiumi Akingbehh, Adnan Shaout, Fuzzy Decision Making for Load Balancing in a Distributed System, IEEE, (1993) 500-502. [11]Menno Dobber, Rob van der Mei, Ger Koole, Dynamic Load Balancing and Job Replication in a Global-Scale Grid Environment: A Comparison, IEEE Transections on Parallel and Distributed System, 20( 2), Feb. (2009) 207-218. [12]Yu-Kwong Kwok, Lap-Sun Cheung, A new fuzzy-decision based load balancing system for distributed object computing, Journal of Parallel and Distributed Computing, 64 (2004) 238–253. [13]Mika Rantonen, Tapio Frantti, Kauko Leiviska, Fuzzy expert system for load balancing in symmetric multiprocessor systems, Expert Systems with Applications, 37 (2010) 8711–8720. [14]Junwei Caoa, Daniel P. Spooner, Stephen A. Jarvis and Graham R. Nudd, Grid load balancing using intelligent agents, Future Generation Computer Systems, 21 (2005). [15]K.Q. Yan, S.C. Wang, C.P. Chang, J.S. Lin, A hybrid load balancing policy underlying grid computing environment, Computer Standards & Interfaces 29 (2007) 161–173. [16]Jun Wang, Jian-Wen Chen, Yong-Liang Wang, Di Zheng, Intelligent Load Balancing Strategies for Complex Distributed Simulation Applications, 2009 International Conference on Computational Intelligence and Security, 2009 (182-186). [17]Kuo-Qin Yan, Shun-Sheng Wang, Shu-Ching Wang, Chiu-Ping Chang, Towards a hybrid load balancing policy in grid computing system, Expert Systems with Applications 36 (2009), 12054–12064. [18]Brighten Godfrey, Karthik Lakshminarayanan, Sonesh Surana, Richard Karp, Ion Stoica, Load Balancing in Dynamic Structured P2P Systems, IEEE INFOCOM (2004). [19]Luis Miguel Campos, Isaac D. Scherson, Rate of change load balancing in distributed and parallel systems, Parallel Computing 26 (2000), 1213-1230. [20]Karen D. Devine, Erik G.Boman, Robert T. Heaphy, Bruce A. Hendrickson, James D. Teresco, Jamal Faik, Joseph E. Flaherty, Luis G. Gervasio, New challenges in dynamic load balancing, Applied Numerical Mathematics 52 (2005) 133–152. [21]Satish Penmatsa, and Anthony T. Chronopoulos, Comparison of Price-based Static and Dynamic Job Allocation Schemes for Grid Computing Systems, Eighth IEEE International Symposium on Network Computing and Applications, (2009) 66-73. [22]In Lee, Riyaz Sikora, and Michael J. Shaw, A Genetic Algorithm-Based Approach to Flexible Flow-Line Scheduling with Variable Lot Sizes, IEEE Transactions On Systems, Man, a Cybernetics-Part B: Cybernetics, 27(1), Feb. (1997). [23]S.Salleh, and A.Y.Zomaya, Using Fuzzy Logic for Task Scheduling in Multiprocessor Systems, Proc. 8th ISCA International Conference on Parallel and Distributed Computing Systems, Orlando, Florida, (1995) 45-51.
  • 15. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 Hackers Portfolio and its Impact on Society Dr. Adnan Omar & Terrance Sanchez, M.S. 6400 Press Drive Southern University at New Orleans ABSTRACT Currently, a hacker is defined as a person using computers to explore a network to which he or she did not belong. Hackers find new ways to harass people, defraud corporations, steal information and maybe even destroy valuable information by infiltrating private and non- private organizations. According to recent research, bad hackers make up only a small minority of the hacker community. In today’s society, we depend on more technology than ever and that increases the likelihood of hackers having more control over cyberspace. Hackers work by collecting information on the intended target, figuring out the best plan of attack and then exploiting vulnerabilities in the system. Programs such as Trojan horses and Flame viruses are designed and used by hackers to get access to computer networks. This paper describes how hacker behavior is aimed at information security and what measures are being taken to combat them. Keywords Types of Hacker, Security, Technology, Cyberspace. 1. INTRODUCTION Hacking is a very serious problem that can severely compromise your computer. If your computer is connected to the Internet, you are vulnerable to cyber-attacks from viruses and spyware. It is virtually impossible to stop a determined and skilled hacker from breaching most home network security measures commercially available [1]. The primary objective of hacking is to gather information and documents that could compromise the security of governments, corporations or other organizations and agencies. In addition to focusing on diplomatic and governmental agencies around the world, the hackers also attack individuals as well as groups. The computer term “hacker” can refer to a good or bad reputation according to the mass media. Hackers have developed new ways to use computers since their invention, and create programs that no one else can, to utilize their potential. Hackers are motivated by various reasons which may range from bold ideas, lofty goals, great expectations or simple deviation from the norm as well as the excitement of intrusion into a complicated computer system. In the past, hacking has been used to hassle an intended victim, steal information, or spread viruses. Not all hacking results from premeditated malicious intent. Most hackers are interested in how computer networks function and barriers between them and that knowledge is considered a
  • 16. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 challenge to their intelligence. However, some use their knowledge to help corporations and governments construct better security measures. Although we have heard about mischievous hackers sabotaging computers, networks, and spreading viruses, most of the time hackers are just driven by the curiosity they have about how different systems and programs work. Although malicious and intrusive methods may be representative of what hackers do, many of the methods and tools used by them are constructive in fixing glitches in software as well as focusing on the vulnerability of computer technology. It is through exposure of these vulnerabilities that new ideas and better security measures are created. When someone hears the word "hacker" one might immediately conjure images in the mind’s eye of a criminal; more specifically, a criminal sitting at a computer typing away with a screen reading "Access Denied." That image in mind, one has the mainstream image of a modern day hacker. In today’s society, hacking is just as prevalent as it has been in years past. Viruses are still coded every day, worms still crawl the internet, and Trojan horses continue to allow back door access into computer systems [2]. Even within hacker society, the definitions range from socially very positive to criminal. In [3], there are two basic principles hackers live by: first, is that information sharing is a powerful good and that it is the ethical duty of hackers to share their expertise by writing free software and facilitating access to information and to computing resources whenever possible. Second, is that system cracking for fun and exploitation is ethically OK as long as the cracker commits no theft, vandalism or breach of confidentiality. It differentiates between benign and malicious hackers based on whether damage is performed, though in reality all hacking involves intrusion and a disregard for the efforts, works and property of others. This research has reviewed the literature on hackers and it identifies countries, reasoning, and type of penalties that are most likely to be involved in hacking activities. In addition, it will address the steps that are needed to put in place in order to reduce hacking and the type of penalties. 2. LITERATURE REVIEW Electronic information is a critical part of our culture. Yet no matter where the technology has taken us, the fact remains that what happens in cyberspace has tangible impacts on each of our lives. Therefore, it is as important for us to be secure in cyberspace as it is in our physical world [4]. According to a report from McAfee based on a survey conducted globally on more than 800 IT company CEO's in 2009, data hacking and related cybercrimes have cost multinational companies one trillion U.S. dollars [5].
  • 17. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3 The media often presents hackers as having a thrilling reputation. Adolescents who are lacking the social skills required to be accepted by others may fantasize about their degree of technological skills, and move online in search of those who profess to have technological skills the student desires. A simple search using the term "hacker" with any search engine, results in hundreds of links to illegal serial numbers, ways to download and pirate commercial software, etc. Showing this information off to others may result in the students being considered a "hacker" by their less technologically savvy friends, further reinforcing antisocial behavior. In some cases, individuals move on to programming and destruction of other individuals programs through the writing of computer viruses and Trojan horses; programs which include computer instructions to execute a hacker's attack. If individuals can successfully enter computers via a network, they may be able to impersonate an individual with high level security clearance access to files, modifying or deleting them or introducing computer viruses or Trojan horses. As hackers become more sophisticated, they may begin using sniffers to steal large amounts of confidential information, become involved in burglary of technical manuals, larceny or espionage [6]. The British government released evidence that foreign intelligence agencies, possibly in China, Korea and some former Soviet states, were hacking computers in the United Kingdom. "Economic espionage" was believed to be one reason behind the attacks. Economic espionage involves attempting to undermine the economic activity of other countries, sometimes by passing on stolen industry and trade secrets to friendly or state-owned companies. Key employees; those who have access to sensitive information or government secrets, can be targeted through virus-laden e-mails, infected CD-ROMS or memory sticks, or by hacking their computers. To respond to these threats, the European Union, G8 and many other organizations have set up cybercrime task forces. In the United States, some local law enforcement organizations have electronic crime units and the FBI shares information with these units through its InfraGard program [7]. Cyber security is becoming an important issue, as emphasized in an article by Jacob Silverman titled “Could hackers devastate the U.S. economy?” He discloses the fact that many media organizations and government officials rank it just as grave a threat as terrorist attacks, nuclear proliferation and global warming. With so many commercial, government and private systems connected to the Internet, the concern seems warranted. To add to the concern, consider that today's hackers are more organized and powerful than ever. Many of them work in groups; and networks of black- market sites exist where hackers exchange stolen information and illicit programs. Credit-card data is sold in bulk by "carders" and phishing scams are a growing concern. Malware -- viruses, Trojan horse programs and
  • 18. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4 worms -- generates more money than the entire computer security industry, according to some experts. He further reveals that hackers are also distributed all over the world, many in countries like Romania that have lots of Internet connectivity and loose enforcement of laws [8]. In 2008 Security experts said Chinese hackers began targeting Western journalists as part of an effort to identify and intimidate their sources and contacts, and to anticipate stories that might damage the reputations of Chinese leaders. In a December 2012 over the course of several investigations it found evidence that Chinese hackers had stolen e-mails, contacts and files from more than 30 journalists and executives at Western news organizations, and had maintained a “short list” of journalists whose accounts they repeatedly attack. Based on a forensic analysis, it appears the hackers broke into New York Times computers on Sept. 13, when the reporting for the Wen articles was nearing completion. They set up at least three back doors into users’ machines that they used as a digital base camp. From there they snooped around New York Times‟ systems for at least two weeks before they identified the domain controller that contains user names and hashed, or scrambled, passwords for every Times employee [9]. In 2009, dubbed, “Operation: Aurora” by security firm McAfee, sophisticated hackers based in China breached the corporate networks of Google, Yahoo! Juniper Networks, Adobe Systems, and dozens of other prominent technology companies and tried to access their source codes. China's hackers seemed narrowly focused on military technology and telecommunications companies as early as 2000. In 2011 Wiley Rein, a prominent Washington Law firm working on a trade case against China was hacked, and the White House was targeted last year. The hackers also breached the website of the Council on Foreign Relations and rigged it to deliver malware to anyone who visited it. Hacking groups with ties to the Chinese government have also aggressively targeted Western oil and gas companies and their law firms and investment banks [10]. In 2011, U.S. computer security firm McAfee reported that hackers operating from China stole sensitive information from Western oil companies in the United States, Taiwan, Greece and Kazakhstan, beginning in November 2009. Citizen Lab and the SecDev Group discovered computers at embassies and government departments in 103 countries, including the Dalai Lama's office and India, were compromised by an attack originating from servers in China. They dub the network involved "GhostNet". Google claims cyber-attacks from China have hit it and at least 20 other companies. Google shut down its China operations. A top-secret memo by the Canadian Security Intelligence Service warns that cyber- attacks on government, university and industry computers have been
  • 19. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 5 growing "substantially." Quebec provincial police say they dismantled a computer hacking network that targeted unprotected computers around the world, including government computers [11]. In 2011, NASA reported it was the victim of 47 APT attacks, 13 of which successfully compromised Agency computers. In one of the successful attacks, intruders stole user credentials for more than 150 NASA employees – credentials that could have been used to gain unauthorized access to NASA systems. An ongoing investigation of another such attack at the Jet Propulsion Laboratory (JPL) involving Chinese-based Internet protocol (IP) addresses has confirmed that the intruders gained full access to key JPL systems and sensitive user accounts. With full system access the intruders could: (1) modify, copy, or delete sensitive files; (2) add, modify, or delete user accounts for mission-critical JPL systems; (3) upload hacking tools to steal user credentials and compromise other NASA systems; and (4) modify system logs to conceal their actions. In other words, the attackers had full functional control over these networks [12]. NASA is a prestigious target for hackers because of its seat atop the United States' broader technology incubation apparatus, and because of that position it is also a strategic target for foreign state actors and cybercriminals looking to steal information they can profit from. And while the agency reportedly spends about a third of its $1.5 billion IT budget on security, things aren't looking so secure. Securing a huge bureaucracy like NASA is difficult, no doubt. But according to Martin's testimony, as of February 2012 only one percent of NASA's portable devices and laptops were encrypted [12]. According to Bloomberg BusinessWeek, the executive order called for the U.S. Department of Homeland Security to identify which critical infrastructure is vulnerable to a cyber-attack that would be catastrophic to the economy and public safety. According to Apple, a week after Obama issued the order, Apple employees' computers were attacked by malicious software after they visited a website aimed at iPhone developers. Shortly afterward, Microsoft announced that similar malware had infected some of its own computers. According to trade groups representing tech manufacturers and Web companies, the cables and fibers that information travels over are more critical than the devices and programs their members make, although tech companies argue that other countries might take a cue from the U.S. and set up their own cyber security guidelines. Multiple sets of regulations might mean manufacturers and Web companies would have to create different products and services for different countries, further increasing costs [13].
  • 20. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6 Security experts hired by the New York Times to detect and block the computer attacks gathered digital evidence that Chinese hackers, using methods that some consultants have associated with the Chinese military in the past, breached The Times’ network. They broke into the e-mail accounts of its Shanghai bureau chief, David Barboza, who wrote the reports on Mr. Wen’s relatives, and Jim Yardley, The Times’ South Asia bureau chief in India, who previously worked as bureau chief in Beijing. Security experts found evidence that the hackers stole the corporate passwords for every Times employee and used those to gain access to the personal computers of 53 employees, most of them outside New York Times’ newsroom [9]. For three straight years, a group of Chinese hackers waged a cyber-war against a family-owned, eight-person software firm in California, according to court records. It started when Solid Oak Inc. founder Brian Milburn claims he discovered that China was stealing his company's parental filtering software, CYBERsitter. The theft hurt their business and sales, which was bad enough. But twelve days after he publicly accused Chinese hackers, he says he was inundated by attempts to bring down his Santa Barbara-based business. Hackers broke into the company's system, shut down its email and web servers, spied on employees using their own webcams and gained access to sensitive company files, according to court records. Apple Inc. reported it was hacked by the same group that hit social- networking monster Facebook in January 2013. The security breaches are the latest in a string of high-profile attacks on companies including The Wall Street Journal and New York Times. Cyber security firm Mandiant also came out with a report in early 2013 that accused a secret Chinese military unit in Shanghai of years of systematic cyber-espionage against more than 140 U.S. companies. Adam Levin, co-founder and chairman of Identity Theft 911, says that for most companies it's not a matter of if they will have a breach but when Levin told FoxBusiness.com that no company is ultimately immune to this [14]. Members of Congress have published proposals that could result in longer prison sentences for hackers. The House Judiciary committee is looking to expand the Computer Fraud and Abuse Act (CFAA), an anti-hacking bill dating back to 1984. Under the new proposals, damaging a computer after accessing it without authorization would carry a maximum 10-year prison term, double the current punishment. "Trafficking" passwords would also carry a 10-year penalty. Hacking and damaging a "critical infrastructure computer" would become the most serious crime, with a maximum 30-year sentence. That would cover any machine that plays a vital role in areas such as power, transportation, and finance [15].
  • 21. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 7 3. METHODOLOGY Developing a psychological profile of a likely attacker is an attractive goal. Because of variation among human motivations, and limitations in the knowledge of psychology, such a profile may prove elusive [16]. There are a number of recent and growing trends in the hacking activity landscape that were observed by the Cybercrime division in the past decade, dealing not only with state and local government aspects but also with other national governments across the world. In many recent cyber-attacks, no details about the attackers' identity are available. 3.1 Data Gathering In this research study, data was collected from several Department of Justice (DOJ) Cybercrime press releases. The bulk of the data was extracted from the department's releases issued since 2009. The DOJ generates reports on cybercrime activity and offenders from across the world. In the United States, more than 35 million dollars in damage has been done to targeted companies. Table 1 consists of 97 hackers listed by nationality, age, job status, reason for hacking, damage to company, money to the judicial system, and punishment from the DOJ for a period of 4 years. A short list of hackers is shown in Table 1 as an example. Tables 2 through 4 were constructed from the data collected from the aforementioned references. Table 1. Computer Cybercrime Portfolio 2009-2013 Source: U.S. Department of Justice (2009-2012). Computer Crime and Intellectual Property Section Press Releases [17].
Nationality | Age | Job Status | Reason for Hacking | Damage to Business | Money to Judicial Sys. | Punishment
Sweden | 37 | N/A | Steal info | N/A | $650,000 | Pending
Malaysia | N/A | N/A | Bank fraud | N/A | N/A | 10 Y
Romania | N/A | N/A | Steal info | N/A | N/A | 7 Y
Russia | 55 | N/A | Steal info | N/A | $1,000,000 | 3 Y
America | 46 | N/A | Personal gain | N/A | N/A | 18 Y
America | 36 | N/A | Destroy company data | N/A | N/A | 10 Y
America | 49 | N/A | Personal gain | $100,000 | $250,000 | 10 Y
America | N/A | Emp | Steal info | $5,000 | N/A | 40 Y
America | 45 | N/A | Confidential info | N/A | $1,000,000 | 15 Y
America | 22 | N/A | Confidential info | N/A | $350,000 | 21 Y
America | 27 | N/A | Personal gain | $9,481.03 | $187,659 | 17 Y
America | 28 | N/A | Confidential info | N/A | $500,000 | 5 Y / 1 Y Pro
Russia | 29 | N/A | N/A | N/A | $3.5 million | N/A
Moldova | 29 | N/A | N/A | N/A | N/A | 2 Y
America | 25 | N/A | Steal info | N/A | $171.6 mil | 20 Y
  • 22. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 8 Table 2 represents the percentage of hackers by nationality. Table 2. Hacking by Nationality
Nationality | %
Unknown | 30
Malaysian | 2
Russian | 3
American | 34
Latvian | 1
Estonian | 11
Romanian | 9
Venezuelan | 1
Sweden | 1
Moldova | 1
Albanian | 1
Dublin | 1
Hungarian | 1
Blaine | 1
Table 3 indicates the percentage of each type of motivation behind hacking. Table 3. Motivation behind Hacking
Reasoning | %
Steal Information | 34
Intentional Damage to Companies | 11
Bank fraud | 25
Need employment | 1
Personal gain | 11
Steal Money | 3
Commit Multiple Fraud | 7
Steal Intellectual Property | 5
Table 4 shows the percentage of each type of penalty applied to hackers by the judicial system. Table 4. Type of Penalty
Category | %
Pending | 42
Probation | 3
Supervised release | 2
Cyber warfare | 15
Prison | 41
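The percentage breakdowns in Tables 2-4 are straightforward tallies over the 97 DOJ records. The sketch below illustrates the kind of computation involved; the record format and the three sample entries are invented for illustration and are not the actual dataset.

```python
from collections import Counter

# Hypothetical transcription of a few DOJ press-release records; the field
# names and values here are illustrative, not taken from the actual dataset.
records = [
    {"nationality": "Sweden",   "reason": "Steal info",    "penalty": "Pending"},
    {"nationality": "Malaysia", "reason": "Bank fraud",    "penalty": "Prison"},
    {"nationality": "America",  "reason": "Personal gain", "penalty": "Prison"},
]

def percentage_breakdown(rows, field):
    """Tally one field and convert the counts to whole-number percentages."""
    counts = Counter(row[field] for row in rows)
    total = sum(counts.values())
    return {value: round(100 * n / total) for value, n in counts.items()}

# Reproduces the kind of breakdowns shown in Tables 2-4.
print(percentage_breakdown(records, "nationality"))
print(percentage_breakdown(records, "reason"))
print(percentage_breakdown(records, "penalty"))
```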
  • 23. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 9 3.2 Ways to Minimize Potential for Hacking In order to minimize hacking, several techniques are required. The following procedures need to be implemented to help limit the possibility of hacking:  It is quite essential that organizations proactively introduce guidelines of standard use and outline the consequences for inappropriate actions.  People should be informed and have adequate knowledge of hacking. They must be educated of certain commonalities and characteristics connected with this activity, as well as the significant penalties for hacking and the repercussions online networking with other characters claiming to be skillful in raiding others.  Outside the business perspective, people need to be educated as to not post any sensitive information on social networks such as Facebook, Twitter, YouTube, etc.  The organizations can use filters which can prohibit its members from accessing unauthorized software serial numbers, hacking-related materials such as newsgroups, chat-rooms and hacking organizations.  Organizational staff should monitor activities in the working environment and be proactive when information is obtained about hacking activities.  There is a need for cooperation between private, public, as well as governments to reduce hacking activities.  Recognizing good hackers that report vulnerable security weaknesses to companies.  Lawmakers need to address the seriousness of cyber hacking by establishing several special centers across the nation where they can recruit the minds of our skilled youth as early as 15 years of age and provide them the proper training, financial means, and support them through college to those who have the passion and commitment to continue in the field of cyber security.  Global corporations between nations are needed to reduce hacking. In summary, people need to be aware of incidents regarding hacking, the mentality associated with it, the consequences of various hacking actions and possible consequences of interacting and forming online relationships with anonymous individuals who claim to be proficient in invading others' privacy. Many organizations have engaged in enabling employees to collaborate with technology-oriented staffs who demonstrate several physiognomies that can result in hacking activities.
  • 24. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 4. FINDINGS Hacking has become more serious, extending beyond individuals and groups; it now takes place at a government level between nations. From the data collected, Tables 2-4 have been analyzed and illustrated graphically in Figures 1-3. Figure 1 illustrates the distribution of hacking by nationality. The highest percentage comes from America (35%), the second highest is unknown (31%), and the third is Estonian (12%). Figure 1. Hacking by Nationality Figure 2 represents the motivation behind hacking; 35% steal information, 26% commit bank fraud, and 12% intentionally damage companies.
  • 25. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 Figure 2. Motivation behind Hacking Figure 3 shows that 41% of the penalties are still pending, meaning that the punishments have not yet been decided by the courts, and 40% of the hackers are serving a prison sentence. A further 14% fall under cyber warfare, which means that their punishment has not been published because their crime involved the government. Monitoring the behavior and motivation of hackers can help improve awareness of the danger they pose and underscores the importance of maintaining robust security, including up-to-date cyber security and anti-virus software. Figure 3. Types of Penalty Figure 4 shows an example illustrating the seriousness of cyber-attacks in today's society. Chevron, the U.S.-headquartered international oil and gas company, has admitted that Stuxnet infected its IT network. Stuxnet is known for destroying centrifuges used in Iran's uranium enrichment program. It was designed by a nation state with the intention of targeting the Siemens supervisory control and data acquisition (SCADA) systems which controlled the industrial processes inside the enrichment facilities. Industrial Safety and Security Source reported that the Stuxnet virus was planted by an Iranian double agent via a memory stick. The Stuxnet malware is widely believed to have damaged Iran's nuclear program by breaking the motors on 1,000 centrifuges at the Natanz uranium enrichment facility. Kaspersky Lab reported that a new virus dubbed Gauss has attacked computers in the Middle East, spying on financial transactions and e-mails and harvesting passwords for all kinds of pages. The virus resembles the Stuxnet and Flame malware that were used to target Iran. Gauss has infected hundreds of personal computers across the Middle East – most of them in Lebanon,
  • 26. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 12 but also in Israel and Palestinian territories. Kaspersky Lab has classified the virus, named after one of its major components, as “a cyber-espionage toolkit” [18]. Figure 4. Flame Wars Source: Cyber Security Helping secure one network at a time, 2013 Recent hacker IT attacks could be catastrophic for global business and even cost lives. According to Alicia Buller [19], the three points to note in cyber war are: Companies could become collateral victims in the war between superpowers. Ideas from state nation cyber weapons could be repurposed and copied by amateurs. Cyber criminals may start using weapons gleaned from governments and nation states. Depending on the severity of the latest hacks, establishing a framework to protect the country becomes top priority of the Obama Administration as well as other countries around the world. 5. CONCLUSION While computer hackers constitute a major security concern for individuals, businesses, and governments across the globe, hacking and hackers’ underground culture remains secretive and difficult to identify for both lawmakers and those vulnerable to hacker attacks. The mystery that surrounds much of hacking prevents us from arriving at definitive solutions to the security problem it poses; but our analysis provides at least tentative insights for dealing with this problem. Hacking became a serious problem affecting all levels of business activities from individuals, corporations, as well as governmental agencies. The bulk of the hacking is initiated from Americans about 35%. The type of penalty ranges from jail time to monetary fines. Although there are laws against hacking, the courts cannot
  • 27. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 13 prosecute these crimes fast enough to deter people from committing them. From the literature review, the maximum jail penalty is 62 years and the maximum fine is $171.6 million. Results show that hackers continue to engage in illegal hacking activities despite the perception of severe judicial punishment. A closer look shows that hackers perceive a high utility value from hacking, little informal sanction, and a low likelihood of punishment. These observations, combined with their disengagement from society, partially explain the hacker's illegal behavior. Whatever their reason, it is a learning experience through which they hope to gain anonymity. Future efforts to minimize hacking will undoubtedly include a combination of aggressive legislation, new technological solutions, and increased public awareness and education. Existing laws should be reviewed and amended periodically to allow for appropriate evolution. The international community for online security should respond with collaborative global efforts against this terrorist act of hacking in order to manage this predicament. 6. REFERENCES [1] Callwood, K. (2013). "How to Reduce Hacking". eHow.com. Retrieved from: http://www.ehow.com/how_8663856_reduce-hacking.html#ixzz2Of7JZqMC [2] Wooten, D. (2009). "Hacking: modern day threat or hobby? (pt. 1)". Retrieved from: http://www.examiner.com/x-13831-Computer-Security-Examiner~y2009m6d22- Hacking-modern-day-threat-or-hobby-pt-1# [3] Parker, D. (1998). "Fighting Computer Crime: A New Framework for Protecting Information". Retrieved from: http://education.illinois.edu/wp/crime/hacking.htm [4] Shoemaker, D. & Conklin, A. (2012). "Cyber Security: The Essential Body of Knowledge", 1st edition. Cengage Learning. [5] Loganathan, M. & Kirubakaran, E. (2011). "A Study on Cyber Crimes and Protection". IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 5, No 1. [6] Stone, D. (1999). "Computer Hacking". University Laboratory High School. Retrieved from: http://www.ed.uiuc.edu/wp/crime/hacking.htm [7] Computer Weekly (2006). "Act on foreign spy risk, firms urged". Retrieved from: http://www.computerweekly.com/articles/2006/12/01/220307/act-on-foreign-spy-risk firms-urged.htm [8] Silverman, J. (2007). "Could hackers devastate the U.S. economy?" HowStuffWorks.com. Retrieved from: http://computer.howstuffworks.com/die-hard-hacker.htm [9] Perlroth, N. (2013). "Hackers in China Attacked the Times for Last 4 Months". The New York Times. Retrieved from: http://www.nytimes.com/2013/01/31/technology/chinese- hackers- infiltrate-new-york- times-computers.html?pagewanted=all&_r=0 [10] Canter, D. (2013). "Fighting an Order to Fight Cybercrime". Bloomberg BusinessWeek, March 11-17, 2013. [11] New Orleans Business News (2011). "Hackers in China hit Western oil companies, security firm reports". The Associated Press. Retrieved from: http://www.nola.com/business/index.ssf/2011/02/hackers_in_china_hit_ western_o.html
  • 28. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 14 [12]Dillow, C. (2012). “In the last year, hackers gained „Full Functional Control‟ of NASA networks, stole the control codes for the is” POPSCI. Retrieved from: http://www.popsci.com/technology/article/2012-03/hackers-gained-full-functional- control-nasa-networks-stole-control-codes-iss-last-year [13]Engleman, E. (2013).“Hacked? Who Ya Gonna Call?” Bloomberg Business Week. February 11- February 17, 2013. [14]Chakraborty, B. (2013). “Small firm hit by 3-year hacking campaign puts face on growing cyber problem” Foxnews.com Retrieved from: http://www.foxnews.com/politics/2013/02/22/small-businesses-big-targets-for-cyber- snoops/#ixzz2Ngj7XMli [15]Peters, J. (2013). “America's Awful Computer-Crime Law Might Be Getting a Whole LotWorse” Retrieved from: http://www.slate.com/blogs/crime/2013/03/25/computer_fraud_and_abuse_act_the_cfa a_america_s_awful_computer_crime_law.html [16]Stolfo, S. (2008) “Insider Attack and Cyber Security: Beyond the Hacker” Vol.39, Springer. [17]U.S. Department of Justice (2009-2012). Computer Crime and Intellectual Property Section Press Releases. Retrieved from: http://www.justice.gov/criminal/cybercrime/pr.html [18]Hatcher, W. (2013). “Cyber Security helping secure one network at a time” Information Systems Audit and Control Association – Greater New Orleans Chapter – 2013. [19]Buller, A. (2013). “The Coming of Cyber War I” Retrieved from: http://gulfbusiness.com/2013/03/the-coming-of-cyber-world-war-i/
  • 29. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 Ontology Based Multi-Viewed Approach for Requirements Engineering R. Subha Assistant Professor Sri Krishna College of Technology, Kovaipudur, Coimbatore-641 042, Tamil Nadu, India S. Palaniswami Principal Government College of Engineering, Bodinayakanur, Tamil Nadu, India ABSTRACT Software requirements engineering is an important process in software development. Considering software development as a whole, 75% of software failures are due to impaired software requirements. The proposed technique involves a multi-viewed approach comprising controlled natural language and ontologies that can be used for representing the requirements. Ontology is an explicit information modelling method which can be used to model applications and their interactions. Controlled natural languages are subsets of natural languages, obtained by restricting the grammar and vocabulary in order to reduce or eliminate ambiguity and complexity, enabling reliable automatic semantic analysis of a language. The ontologies are constructed based upon the semantic similarities between the domain and requirement models. The CNL is developed by restricting the vocabulary based on certain rules. The similarity between the two representations can be determined by a program that extracts the objects and the relationships between them. Tool-based verification of the similarities is performed. This approach is applicable in software development areas and many official applications such as banking systems, currency conversion, weather reports, transport, and sports. Keywords Requirements Engineering, Ontology, Controlled Natural Language 1. INTRODUCTION Software engineering (SE) is the engineering discipline through which software is developed. Commonly the process involves finding out what the client wants, composing this in a list of requirements, designing an architecture capable of supporting all of the requirements, designing, coding, testing and integrating the separate parts, testing the whole,
  • 30. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 deploying and maintaining the software. Programming is only a small part of software engineering. Requirements engineering (RE) being the first phase of software engineering deals with the process of discovering, eliciting, documenting and maintaining the requirements for a particular computer-based system. Computer systems are designed, and anything that is designed has an intended purpose. If a computer system is unsatisfactory, it is because the system was designed without an adequate understanding of its purpose, or the purpose is deviated from the intended one. Both problems can be mitigated by careful analysis of purpose throughout a system’s life. Requirements Engineering provides a framework for understanding the purpose of a system and the contexts in which it will be used. The requirements engineering bridges the gap between an initial vague recognition that there is some problem to which there is a computer technology, and the task of building a system to address the problem. Ontology formally represents knowledge as a set of concepts within a domain, and the relationships between pairs of concepts[3]. It can be used to model a domain and support reasoning about entities. An ontology renders shared vocabulary and taxonomy which models a domain with the definition of objects/concepts, as well as their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, informatics, library, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it [5]. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework. Model for describing the world that consists of a set of types, properties, and relationship types. There is also generally a view that the feature of the model in an ontology closely resembles the real world. There are many problems associated with requirements engineering. The nature of problems includes defining the system scope, problems of understanding and problems of volatility. The major problems are sharing equal understanding among the different communities involved in the development of a given system, and problems in dealing with the volatile nature of requirements. Problems of understanding during elicitation can lead to requirements which are ambiguous, incomplete, inconsistent, and even incorrect .If changes are not accommodated, the original requirements set will become incomplete, inconsistent with the new situation, and potentially unusable because they capture information that has since become obsolete. The requirement traceability issue is also a major factor concerned with the requirement engineering. These problems may lead to poor
  • 31. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3 requirements and the cancellation of system development, or else the development of a system that is later judged unsatisfactory or unacceptable, has high maintenance costs, or undergoes frequent changes. By improving requirements elicitation, the requirements engineering process can be improved, resulting in enhanced system requirements and potentially a much better system. In order to overcome these issues, a model integrating controlled natural language and ontology providing a multi-viewed approach is proposed to represent requirements from different perspectives of all the stakeholders and the development team. 2. RELATED WORK 2.1 A scenario-driven approach to trace dependency analysis Egyed et al. proposed an approach to trace dependency analysis. Software development artifacts such as model descriptions, diagrammatic languages, abstract (formal) specifications, and source code are highly interrelated where changes in some of them affect others [2]. Trace dependencies characterize such relationships abstractly. The research area focused on an automated approach to generating and validating trace dependencies. The research addressed the severe problem that the absence of trace information or the uncertainty of its correctness limits the usefulness of software models during software development. This is considered to be an important issue affecting requirements engineering. This approach also proposed a method that automates what is normally a time consuming and costly activity due to the quadratic explosion of potential trace dependencies between development artifacts. 2.2 Issues in requirement elicitation Christel M and Kang k, in their research paper titled “Issues in requirements elicitation”, clearly listed all the issues that affect the requirement elicitation phase of software development. According to their research, the requirement elicitation phase suffers from issues dealing with the improper detailing of scope of the system. The communication between the various people involved in developing the system also is causing major problems, since requirement elicitation highly involves with collection of information related to the system. The next major issue listed deals with the volatile nature of the requirements. The paper implies that if these problems are not taken seriously, it might lead to a poor requirement and also the end result may be a failure. 2.3 Communication problems in requirements engineering Al-Rawas et al. explains about the problems of communication between disparate communities involved in the requirements specification activities [1]. The requirements engineering phase of software development projects is characterized by the intensity and importance of communication activities. During this phase, the various stakeholders must be able to communicate
  • 32. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4 their requirements to the analysts, and the analysts need to be able to communicate the specifications they generate back to the stakeholders for validation. The results of this study are discussed in terms of their relation to three major communication barriers ineffectiveness of the current communication channels, restrictions on expressiveness imposed by notations, social and organizational barriers. The results confirm that organizational and social issues have great influence on the effectiveness of communication. They also show that in general, end-users find the notations used by software practitioners to model their requirements difficult to understand and validate. 2.4 Revisiting ontology-based requirements engineering in the age of the semantic web systems Dobson G, Sawyer P , in “Revisiting ontology-based requirements engineering in the age of the semantic web”, propose usage of dependability ontology compliant with the IFIP Working Group 10.4 taxonomy and discuss how this, and other ontologies, must interact in the course of Dependability Requirements Engineering. In particular usage of the links between the dependability ontology, ontology for requirements and domain ontologies, identifying the advantages and difficulties involved are discussed in detail. Ontology is generally based upon some logical formalism, and has the benefits for requirements of explicitly modeling domain knowledge in a machine interpretable way, e.g. allowing requirements to be traced and checked for consistency by an inference engine, and software specifications to be derived. With the emergence of the semantic web, the interest in ontologies for Requirements Engineering is on the increase. Lots of research has been concentrated upon re-interpreting software engineering techniques for the semantic web. Usage of ontology is proved to be highly beneficial for requirement engineering processes. 3. METHODOLOGY In software development requirement refinement is an important process. It has a tremendous impact on all its software development phases. Though a lot of research is involved in the area of solving the various issues affecting the requirement engineering, few issues remain unsolved. The common issues that haunt the requirement elicitation phase include problems concerned with areas of scope, volatility, communication, traceability [4]. Defining the scope of the system plays the major role in developing any new system. Hence solving the scope issues is highly necessary. The requirements of the customers are prone to changes throughout the development of the system which is generally referred as the volatility issue. Developing a system involves a wide class of people and hence
  • 33. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 5 communication plays a major role. Any fault that arises in developing the system as a result of mistaken communication is referred as communication issues. Any issues that affect describing and following a requirement in both forward and backward direction are referred as traceability issues. In order to overcome these issues a system consisting of a multi-viewed approach is proposed. The system integrates two views for representing the perspective of different stakeholders. Ontologies and controlled natural languages are used for this purpose. Thus, the proposed technique provides a way for dealing with the various issues affecting requirement engineering. In this paper , we propose a comprehensive approach for dealing with the various issues affecting the requirement elicitation process. The requirement document is analyzed and split in to individual tokens. The tokens are used to extract only the required relevant terms involved in the document. The tokens are used as a base for construction of the ontology. The requirement document is next represented in terms of controlled natural language .The controlled natural language representation also extracts only the key terms involved. Both the representations show the relationship among the objects that are extracted. The similarities between the two representations are measured to verify the level of accuracy. Figure 1. Overall process of integrating ontology and CNL Figure 1 represents the process involved in designing the multi viewed approach consisting of ontology and control natural language. The ontological view of requirements document involves representing the requirement document in terms of concepts and relations among them. The process also involves tagging of the requirement in prior to the construction of the ontology.
  • 34. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6 The CNL view of requirements involves representing the document through restricted vocabulary. This representation enables to clearly establish the relationships among the identified objects through a restricted vocabulary. The mapping of the ontology representation and the CNL representation is performed with the help of v-doc and GMO algorithm. The level of similarity is noted. 3.1 Ontological view of requirement The requirement document is represented in terms of ontology. Ontology formally represents knowledge as a set of concepts within a domain, and the relationships between pairs of concepts. Ontologies are used to model a domain and support reasoning about entities. Ontology provides shared vocabulary and taxonomy which models a domain with the definition of objects/concepts, as well as their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, informatics, library, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework. Model for describing the world that consists of a set of types, properties, and relationship types. There is also generally a view that the feature of the model in ontology closely resembles the real world. The first step in representing the requirement document as ontology may require tagging the parts of speech in the document. The tagging is performed with the help of a tool called POS Tagger. The POS Tagger does the process of marking up a word in a text (corpus) as corresponding to a particular part of speech. Once the tagging is completed, the ontology can be constructed. Formally it can be said that ontology is a statement of a logical theory. Ontologies are often equated with taxonomic hierarchies of classes without class definition and the subsumption relation. Ontologies need not to be limited to these forms. Ontologies are also not limited to conservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms. The construction of ontology starts with identifying the concepts. Concept represents a set of entities within a domain. Relations are constructed next and it specifies the interaction among concepts involved. Ontology of a program is obtained by defining a set of representational terms. In such ontology, definitions associate the names of entities in the universe of discourse (e.g. classes, relations, functions, or other objects)
  • 35. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 7 with human readable text describing what the names mean, and formal axioms that constrain the interpretation and well-formed use of these terms. The ontologies can be designed on the basis of the domain document. This consists of four steps: 1. Tokenization is done by the Stanford POS tagger, which reads the text document and assigns a part of speech to each word, such as noun, verb, adjective, etc. (Figure 2). 2. The isolation of individual tokens shows the number of nouns, verbs and adjectives in the domain document. 3. Based on the nouns, verbs and adjectives, the concepts involved are identified. 4. The ontology can be created using the Protégé tool based on the concepts identified (Figure 3). Figure 2. Tokenization
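The token and concept extraction in steps 1-3 can be illustrated with a short script. This is only a sketch: it uses NLTK's off-the-shelf tokenizer and POS tagger rather than the Stanford tagger named above, and the sample requirement sentence is invented.

```python
from collections import Counter
import nltk  # assumes the 'punkt' and 'averaged_perceptron_tagger' data are installed

# Hypothetical requirement sentence; not taken from the paper's case study.
requirement = "The system shall convert an amount from one currency to another currency."

tokens = nltk.word_tokenize(requirement)
tagged = nltk.pos_tag(tokens)  # e.g. ('system', 'NN'), ('convert', 'VB'), ...

# Step 2: count nouns, verbs and adjectives among the tokens.
counts = Counter(tag[:2] for _, tag in tagged if tag[:2] in ("NN", "VB", "JJ"))
print(counts)

# Step 3: treat the nouns as candidate concepts for the ontology.
candidate_concepts = sorted({word.lower() for word, tag in tagged if tag.startswith("NN")})
print(candidate_concepts)  # e.g. ['amount', 'currency', 'system']
```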
  • 36. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 8 Figure 3. Construction of ontology 3.2 CNL view of requirements A sample requirement document is considered. The requirement document is written in ordinary English, which has no rules and semantics that make it understandable by the system, so it is converted into a controlled natural language. CNL refers to a controlled natural language: a subset of a natural language obtained by restricting the grammar and vocabulary in order to reduce or eliminate ambiguity and complexity, enabling reliable automatic semantic analysis. A tool called Fluent Editor is used to provide a platform for constructing the required CNL. We use Controlled English as the knowledge modeling language. Supported by a suitable editor, it prohibits one from entering any sentence that is grammatically or morphologically incorrect and actively helps in correcting any error. Controlled English is a subset of Standard English with restricted grammar and vocabulary, intended to reduce the ambiguity and complexity inherent in full English. The relationships between the various objects involved in the domain are established. A taxonomy tree is also displayed, representing the objects hierarchically. The taxonomy tree is constructed with four important parts: "thing", which shows the "is a" relationship between concepts and relations; "nothing", which shows concepts that cannot have any instances; "relations", which shows the hierarchy of information about the relations between concepts and/or instances; and finally "attributes", which shows the hierarchy of attributes. Modal expressions are constructed using must, should, can, must not, should not, and cannot. Similarly, complex expressions can also be constructed. There are four steps:
  • 37. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 9 1. The domain document is represented in terms of Controlled English which is a subset of Standard English with restricted grammar and vocabulary in order to reduce the ambiguity and complexity inherent in full English.(Fig 4) 2. The CNL phrases for every corresponding OWL statements are created using a CNL editor. 3. Modality is enabled to create modal expressions that are used to express relationship in CNL. 4. Complex sentences as well as simple sentences can be created using the CNL editor. Figure 4. Construction of CNL 3.3 Mapping of ontology and CNL The mapping of the CNL and ontology are performed, after both representations are constructed completely. The objective of this module is to measure the level of similarity that is established between the two forms of representation. The obtained CNL is extracted as objects and relationships. The obtained objects are written on to a separate file. The objects obtained from ontology are kept on a separate file manually. The mapping of the two representations are performed through a program as well as verified by a tool. The program is used to extract the objects and the relationship that exist between them. The first part of program deals the extraction of the objects and relationship between the objects. The program
  • 38. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 consists of predefined relationships. The extracted objects and relationships from the OWL file are compared with the predefined relationships that have already been entered. After extraction, the mapping of the extracted data is performed. If the objects and relationships match, a positive output stating "ontology matching" is obtained. If the objects and relationships do not match, an output stating "ontology not matching" is obtained. The mapping involves the use of the V-Doc algorithm and the GMO algorithm. V-Doc constructs virtual documents for each entity in the ontologies and uses the Vector Space Model to compute similarities between the virtual documents. The GMO algorithm is a novel graph matching algorithm. The mapping of the ontology representation and the CNL representation is performed with the help of the V-Doc and GMO algorithms, and the level of similarity is noted. The steps are: 1. The relationships and objects that exist in the ontological view of the representation are extracted. 2. The relationships and objects that exist in the CNL view of the representation are extracted. 3. The extracted relationships and objects are placed in individual files. 4. A program is used to check the level of similarity between the extracted objects. 5. Tool-based verification is further performed. Figure 5. Matching OWL files
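The virtual-document comparison attributed to V-Doc above can be approximated as follows. This is a minimal sketch rather than the actual V-Doc/GMO implementation: each entity is reduced to a bag-of-words "virtual document" and a Vector Space Model (cosine) similarity is computed between them; the two entity descriptions are invented examples standing in for entries extracted from the two OWL files.

```python
import math
from collections import Counter

def virtual_document(texts):
    """Build a bag-of-words 'virtual document' from an entity's names and comments."""
    words = []
    for text in texts:
        words.extend(text.lower().split())
    return Counter(words)

def cosine_similarity(doc_a, doc_b):
    """Vector Space Model similarity between two bag-of-words documents."""
    dot = sum(doc_a[w] * doc_b[w] for w in doc_a)
    norm_a = math.sqrt(sum(v * v for v in doc_a.values()))
    norm_b = math.sqrt(sum(v * v for v in doc_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented entity descriptions standing in for objects from the two OWL files.
ontology_entity = virtual_document(["Currency conversion", "converts an amount between currencies"])
cnl_entity = virtual_document(["Every conversion converts an amount from a currency to a currency"])

print(round(cosine_similarity(ontology_entity, cnl_entity), 2))
```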
  • 39. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 Thus, the tool-based verification of matching is shown in Figure 5. The ontology1 column shows the list of objects in the OWL file for the ontology. The ontology2 column shows the list of objects in the OWL file for the CNL. The similarity column shows the level of similarity between the two OWL files. 4. CONCLUSION As far as software development is concerned, requirement elicitation is an important process. Existing systems use a single-view approach and thus fail to deal with many of the issues affecting requirements engineering. Therefore a technique is proposed to refine the requirements by integrating ontology and controlled natural language, thus providing a multi-viewed approach. The ontological view is mapped with the controlled natural language view to calculate the level of similarity. Thus, usage of a multi-viewed approach helps in resolving most of the issues affecting the requirement elicitation process. The proposed process provides a simple and useful traceability scheme. Refining the requirements provides many advantages and reduces much of the cost of building the system. Research in the area of RE has grown fast in the last few years. In spite of this fact, there are still open issues. In our work we initially identified such issues and investigated the main existing initiatives that are addressing them. Further, the work can be improved by exploring the full potential of ontologies, thus improving the quality of the knowledge base. 5. REFERENCES [1] Al-Rawas, A., Easterbrook, S. Communication problems in requirements engineering: a field study. (1996) [2] Egyed, A. A scenario-driven approach to trace dependency analysis. IEEE Transactions on Software Engineering, 29(2) (2003), 116-132. [3] Vrandecic, D., Sure, Y. An approach to the automatic evaluation of ontologies. Journal of Applied Ontology, 3 (1-2), (2008), 41-62. [4] Christel, M., Kang, K. Issues in requirement elicitation. IEEE Transactions on Software Engineering, 78(2) (2005), 156-198. [5] Dobson, G., Sawyer, P. Revisiting ontology-based requirements engineering in the age of the semantic web. Dependable Requirements Engineering of Computerised Systems at NPPs, (2006) [6] Mellor, S. MDA Distilled: Principles of Model Driven Architecture. Addison-Wesley Professional, (2004) [7] http://www.protege.com. [8] http://xobjects.seu.edu.cn/project/falcon/falcon.html
  • 40. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 Modified Colonial Competitive Algorithm: An Approach for Graph Coloring Problem Hojjat Emami Computer Engineering Department, Islamic Azad University, Miyandoab Branch Miyandoab, Iran Parvaneh Hasanzadeh Computer Engineering Department, Islamic Azad University, Miyandoab Branch Miyandoab, Iran ABSTRACT This paper describes a modified version of colonial competitive algorithm and how it is used to solve graph coloring problem (GCP). Colonial competitive algorithm (CCA) is a meta-heuristic optimization and stochastic search technique that is inspired from socio- political phenomenon of imperialistic competition. CCA has high convergence rate and in optimization problems can reach to global optimum. Graph coloring problem is the process of finding an optimal coloring for any given graph so that adjacent vertices have different colors. Graph coloring problem is an optimization problem.The original CCA algorithm is designed for continuous problems whereas graph coloring problem is a discrete problem. So in this paper we applied some changes in the assimilation and revolution operators of the original CCA algorithm and presented a modified CCA which is called MCCA. The performance of the MCCA algorithm is evaluated on five well-known graph coloring benchmarks. Simulation results demonstrate that MCCA is a feasible and capable algorithm. Keywords Colonial competitive algorithm, graph coloring problem, modified colonial competitive algorithm. 1. INTRODUCTION Graph coloring is a special case of graph labeling. Coloring a graph involves assigning colors for each vertex of graph, so that any two adjacent vertices have different colors. One of the main challenges in the graph coloring problem (GCP) is to find the least number of colors for which there is a valid coloring of the vertices of the graph. Graph coloring problem provides a useful test bed for representing many real world problems including time scheduling, frequency assignment, register allocation, and circuit board testing [1]. The graph coloring problem is a complex and NP-hard problem [2]; therefore many methods have been presented for solving the graph coloring problem. The most common and successful methods that are
  • 41. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 introduced for solving graph coloring problem use a conflict minimizing approach as its goals. i.e., given k colors, a coloring is sought which minimizes the number of adjacent vertices bearing the same color. Researchers used various methods for solving graph coloring problem that some of them include meta-heuristic algorithms (such as genetic algorithm [3], particle swarm optimization [4, 5], ant colony optimization [6, 7], and differential evolution [8]), scatter search [9], variable space search [10], learning automata [11], distributed algorithms [12], and some hybrid algorithms [5, 13, 14]. In this paper, we used another meta-heuristic algorithm ⎯colonial competitive algorithm [15] ⎯ to solve graph coloring problem. Colonial competitive algorithm (CCA) is an optimization and search algorithm. This socio-politically algorithm is inspired from imperialistic competition among imperialists and colonies. CCA is a population based evolutionary algorithm and contains two main steps: the movement of the colonies toward their relevant imperialists and the imperialistic competition. CCA has been used in many engineering and optimization tasks. Original CCA is inherently designed for continuous problems whereas graph coloring problem is a discrete problem [15]. So in this paper, a modified discrete version of CCA is proposed to deal with the solution of graph coloring problem. The success of the new proposed method is shown by evaluating its performance on the well-known graph coloring benchmarks. This paper proceeds as follows. Section 2 describes the graph coloring problem. Colonial competitive algorithm is explained in Section 3. Section 4 presents the proposed modified colonial competitive algorithm and how it is used to solve graph coloring problem. Section 5 presents our empirical results and Section 6 concludes the paper and gives an outlook of the future works. 2. GRAPH COLORING PROBLEM (GCP) Graph coloring is an NP-hard problem and is still a very hot research field of topic. Graph coloring problem is a practical method of representing many real world problems such as pattern matching, scheduling, frequency assignment, register allocation, and circuit board testing [1]. In graph theory, graph coloring involves an assignment of labels to elements of a graph subject to certain constraints. In other words, coloring a graph is a way of coloring the vertices of graph so that adjacent vertices have different colors. Generally this process called vertex coloring. There are other forms of graph coloring including edge coloring or face coloring that can be transformed into a vertex coloring problem. A coloring that uses K colors is called a K-coloring. The smallest number of colors required to
  • 42. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3 color a graph G is called its chromatic number. A minimum coloring of a graph is a coloring that uses as few different colors as possible. Formally the graph coloring problem can be stated as follows: Given an undirected graph G with a set of vertices V and a set of edges E (G=(V,E)), a K-coloring of G consists of assigning a color to each vertex of V such that no two adjacent vertices share the same color. One of the most interesting challenges in the graph coloring problem is to find a correct coloring that uses exactly the predefined chromatic number of the graph [1, 16]. In other words, in the coloring process all vertices of the graph must be colored with the minimal number of colors. Figure 1 shows the coloring process of a simple graph. This graph has 4 vertices and 4 edges. The chromatic number of this graph is 3. Figure 1. A simple example of the graph coloring process. (a) Graph G before coloring, (b) Graph G after coloring. 3. COLONIAL COMPETITIVE ALGORITHM (CCA) Colonial competitive algorithm (CCA) was developed by Atashpaz-Gargari et al., and has been applied in various optimization and engineering tasks [15]. CCA is a global search and optimization meta-heuristic that is inspired by socio-political competition. Like other evolutionary algorithms, CCA begins its work with an initial population. In CCA, agents in the population are called countries. After computing the cost of these countries by using a cost function, some of the best countries are selected to be the imperialist states and the remaining countries form the colonies of the mentioned imperialists. All colonies of the population are divided among imperialists based on the imperialists' power. The power of a country is inversely related to its cost. The imperialists together with their colonies form empires. After forming the initial empires, the colonies in each empire start moving toward their relevant imperialist state. This process is a simple simulation of the assimilation policy that was pursued by some of the imperialist countries. This movement is shown in Figure 2. In this movement, a colony moves toward the imperialist by a random value which is uniformly distributed between 0 and β×d, where d is the distance between the imperialist and the colony and β is a control parameter. θ and x are random numbers with uniform distribution [15].
  • 43. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4 x ~ U(0, β × d) (1) θ ~ U(−φ, φ) (2) In equation (2), θ is a random angle drawn with uniform distribution, and φ is a parameter that controls the deviation from the direct line toward the imperialist. In this paper, β and φ are set to 2 and π/4 (rad), respectively. All empires try to take possession of the colonies of other empires and control them. The imperialistic competition gradually brings about an increase in the power of powerful empires and a decrease in the power of weaker empires. In the original CCA, the imperialistic competition is modeled just by picking one of the weakest colonies of the weakest empire and then making a competition among all other empires in order to possess this colony. Figure 2. Motion of a colony toward its relevant imperialist [15] In CCA, the solution space is modeled as a search space. Each position in the search space is a potential solution of the problem. Moving colonies toward their relevant imperialists and imperialistic competition are the two main operators of CCA, and they cause the countries in the solution space to converge to a state that is the global optimum and satisfies the problem constraints. CCA has a high convergence rate and can often reach the global optimum. Figure 3 shows the pseudo code of the original CCA. 4. GRAPH COLORING USING MODIFIED COLONIAL COMPETITIVE ALGORITHM The original CCA is designed to solve continuous problems whereas the graph coloring problem is a discrete problem. Hence a discrete version of CCA is needed for solving the graph coloring problem. This section describes how the modified colonial competitive algorithm (MCCA) is used to solve the graph coloring problem. The goal of using MCCA to solve the graph coloring problem is to find reliable and optimal colorings for graph coloring problem instances.
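A minimal sketch of the assimilation move in equations (1) and (2) for a continuous, two-dimensional search space is given below. The positions are invented, and the deviation angle θ is applied as a rotation of the direct step toward the imperialist, which is one common reading of the operator rather than necessarily the authors' exact formulation.

```python
import math
import random

BETA = 2.0          # step-length control parameter, as stated in the paper
PHI = math.pi / 4   # maximum angular deviation (rad), as stated in the paper

def assimilate(colony, imperialist):
    """Move a colony toward its imperialist following equations (1) and (2)."""
    dx = imperialist[0] - colony[0]
    dy = imperialist[1] - colony[1]
    d = math.hypot(dx, dy)
    if d == 0.0:
        return colony
    x = random.uniform(0.0, BETA * d)        # step length ~ U(0, beta*d)
    theta = random.uniform(-PHI, PHI)        # deviation angle ~ U(-phi, phi)
    direction = math.atan2(dy, dx) + theta   # deviate from the direct line by theta
    return (colony[0] + x * math.cos(direction),
            colony[1] + x * math.sin(direction))

print(assimilate((0.0, 0.0), (3.0, 4.0)))
```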
  • 44. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 5 1. Initialize the empires. 2. Move the colonies toward their relevant imperialist (Assimilation). 3. Randomly change the position of some colonies (Revolution). 4. If there is a colony in an empire which has lower cost than the imperialist, exchange the positions of that colony and the imperialist. 5. Unite the similar empires. 6. Compute the total cost of all empires. 7. Pick the weakest colony (colonies) from the weakest empires and give it (them) to one of the empires (Imperialistic Competition). 8. Eliminate the powerless empires. 9. If the termination conditions are satisfied, stop; if not, go to 2. Figure 3. Pseudo code of the CCA [15] At the beginning, a population of N_pop countries is generated. If the graph coloring problem instance has n vertices then each country in the initial population will consist of a random permutated list of the integers {1, 2, …, n}. After forming the initial population, the countries have to be assessed according to the cost function explained later. Some of the best countries (countries with the least cost value) are selected to be imperialist states and the remaining form the colonies of the mentioned imperialists. Within the main loop of the MCCA, the imperialists in each empire try to attract their colonies toward themselves. Then imperialistic competition begins among empires. During this competition process the weaker imperialists collapse whereas powerful imperialists increase their power. The MCCA executes for a fixed number of iterations, where an iteration is defined as a cycle of the assimilation, revolution, uniting and competition stages. The following subsections describe the attributes of the proposed MCCA method. 4.1 Forming Initial Population The first step in MCCA, as in any other population-based algorithm, is to create an initial population. This population consists of countries. Each country consists of a permutated list of integers and indicates a potential solution to the problem. Each integer number in the country denotes a color used for coloring the graph coloring problem instance. In fact, if a graph coloring problem instance has n vertices then a country will be a vector of integer numbers of size 1×n. Figure 4.a illustrates the process of creating the initial population using a sample graph with 4 vertices and 4 edges. The chromatic number of this graph is 3. Figure 4.b indicates a population of size 4 created for solving the graph indicated in Figure 4.a. In the graph coloring problem, the input is an undirected graph and the output is an optimal coloring (i.e. a permutated list of colors assigned to the vertices of the graph).
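The country encoding of Section 4.1 can be sketched in a few lines. The snippet below simply draws each vertex's color at random from a palette of three colors, producing 1×n color vectors of the kind shown in Figure 4.b; the population size and color count follow that example, but the sampling scheme itself is an illustrative assumption rather than the authors' exact initialization.

```python
import random

N_POP = 4        # population size, as in the Figure 4.b example
N_COLORS = 3     # number of colors made available to each country
N_VERTICES = 4   # vertices of the sample graph

def random_country(n_vertices, n_colors):
    """A country is a length-n vector of colors, one entry per vertex."""
    return [random.randint(1, n_colors) for _ in range(n_vertices)]

population = [random_country(N_VERTICES, N_COLORS) for _ in range(N_POP)]
print(population)  # e.g. [[1, 3, 2, 1], [2, 2, 1, 1], ...]
```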
Figure 4. Graph coloring process (creating the initial population): a) a sample graph with 4 vertices and 4 edges; b) an initial population of four countries, each assigning one color to every vertex.
4.2 Cost Function
A cost function is used to evaluate the cost of the countries in the population. From an optimization viewpoint, a country with the lowest cost value is preferable. Here the cost of a country is equal to the total number of color conflicts in it; in other words, if two vertices of the graph are adjacent and have received the same color, there is a conflict. A simple cost function can therefore be stated as follows:

Cost(Country) = Σ_{i=1}^{N_V} Σ_{j=1}^{N_V} C_{i,j},  where C_{i,j} = 1 if vertices n_i and n_j are adjacent and Color(n_i) = Color(n_j), and C_{i,j} = 0 otherwise  (3)

where N_V is the total number of vertices, C_{i,j} is a counter variable that records the conflicts, and Color(n_i) indicates the color of vertex i.
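The conflict-counting cost of equation (3) can be sketched in Python as follows; the edge list of the 4-vertex example graph is an assumption, since the paper gives only its size and chromatic number:

def cost(country, edges):
    # Number of adjacent vertex pairs that received the same color (eq. (3)).
    return sum(1 for i, j in edges if country[i] == country[j])

# an assumed edge list for a 4-vertex, 4-edge graph with chromatic number 3
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
print(cost([1, 3, 2, 1], edges))   # prints 0: a conflict-free coloring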
4.3 Modified Assimilation Operator
In the original CCA the assimilation operator was designed for continuous problems, whereas in graph coloring we deal with a discrete problem, so a modified discrete version of this operator is needed. The goal of the assimilation policy is to make the colony countries more similar to their relevant imperialists. In this paper this is modeled by conveying some random portions of the imperialist country to its colonies. Figure 5 illustrates the modified assimilation operator on the simple example graph of Figure 4.a:
a. before applying the assimilation operator: imperialist [2 1 3 2], colony [2 3 1 1]
b. after applying the assimilation operator: imperialist [2 1 3 2], colony [2 3 1 2]
Figure 5. Modification of the assimilation operator
4.4 Modified Revolution Operator
The revolution operator increases the exploration power of the CCA algorithm and helps it to escape from local optima. In the original CCA, revolution causes a country to suddenly change its position. In this paper, instead of that revolution operator, we use an operator similar to mutation in the genetic algorithm [17]. For each country, some elements (called victim elements) are selected and their values are replaced with other randomly generated integers. In each empire, the victim elements are randomly selected from the N_cols × N_v total elements of that empire, where N_cols is the total number of colonies in the empire and N_v is the total number of vertices of the graph coloring problem instance. In our implementation we set the revolution rate to 30% (r = 30%) within each empire, so the number of revolutions in an empire is given by:

# revolutions = r × (N_cols − 1) × N_v (4)

Figure 6 demonstrates the revolution process on a sample country (for example, [3 1 1 3] becoming [3 1 2 3]).
Figure 6. View of the modified revolution operator on a sample country
4.5 Other Operators
In the proposed method, imperialistic competition, the uniting of similar empires, and the elimination of powerless empires are implemented similarly to the original continuous CCA algorithm. As in the original CCA, imperialistic competition is modeled simply by picking one of the weakest colonies from the weakest empire and then making the other empires compete to possess and control this colony.
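The two discrete operators of Sections 4.3 and 4.4 can be sketched in Python as below. The sketch applies revolution per country rather than pooling the N_cols × N_v elements of an empire, so it is a simplification of the scheme described above:

import random

def assimilate_discrete(colony, imperialist):
    # Copy a randomly chosen portion of the imperialist into the colony (Figure 5).
    n = len(colony)
    new_colony = colony[:]
    for pos in random.sample(range(n), random.randint(1, n)):
        new_colony[pos] = imperialist[pos]
    return new_colony

def revolve(country, n_colors, rate=0.30):
    # Replace roughly `rate` of the elements (the victims) with random colors (Figure 6).
    n = len(country)
    new_country = country[:]
    for pos in random.sample(range(n), max(1, int(rate * n))):
        new_country[pos] = random.randint(1, n_colors)
    return new_country

print(assimilate_discrete([2, 3, 1, 1], [2, 1, 3, 2]))  # e.g. [2, 3, 1, 2]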
Also in the uniting process, if the distance between any two empires is smaller than a predefined threshold, these empires are combined to create a new united empire. In the elimination of powerless empires, if an empire loses all of its colonies it collapses, and its colonies are divided among the other empires; this operator is implemented similarly to the mechanism proposed in the original CCA. The main steps of the MCCA method are summarized in the pseudo code shown in Figure 7.
Input: an undirected graph
Output: a valid and optimal coloring for the input graph
Step 1: set the initial parameters, including:
 - algorithm parameters, such as the maximum iteration count Max_Itr, the population size N_pop, and the other MCCA parameters;
 - problem parameters, such as the number of graph vertices and edges.
Step 2: Graph = CreateAdjacencyMatrix(input graph);
Step 3: ColoredGraph = MCCA(Graph); // solve the GCP using the MCCA method
 Step 3.1: Pop = Initialize a population of size N_pop.
 Step 3.2: InitialCost = CostFunction(Pop);
 Step 3.3: Empires = CreateInitialEmpires(Pop);
 Step 3.4: Set the iteration count itrCount = 0.
 Step 3.5: Empires = AssimilateColonies(Empires);
 Step 3.6: Empires = RevolveColonies(Empires);
 Step 3.7: Empires = UniteSimilarEmpires(Empires);
 Step 3.8: Empires = ImperialisticCompetition(Empires);
 Step 3.9: itrCount = itrCount + 1. If itrCount < Max_Itr, go to Step 3.5.
Step 4: return the ColoredGraph;
Figure 7. The process of applying MCCA to the graph coloring problem
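To make the control flow of Figure 7 concrete, here is a heavily simplified, self-contained Python sketch. Uniting of similar empires and elimination of powerless empires are omitted, the assignment of colonies to empires is simplistic, and all parameter values are illustrative, so this mirrors only the overall loop, not the paper's full algorithm:

import random

def simplified_mcca(edges, n_vertices, pop_size=30, n_imp=3, max_itr=200, rev_rate=0.3):
    def cost(c):
        return sum(1 for i, j in edges if c[i] == c[j])

    pop = [[random.randint(1, n_vertices) for _ in range(n_vertices)]
           for _ in range(pop_size)]
    pop.sort(key=cost)
    imps, cols = pop[:n_imp], pop[n_imp:]          # best countries become imperialists
    for _ in range(max_itr):
        for k, colony in enumerate(cols):
            e = k % n_imp                          # the empire this colony belongs to
            new = colony[:]
            # assimilation: copy a random portion of the imperialist
            for pos in random.sample(range(n_vertices), random.randint(1, n_vertices)):
                new[pos] = imps[e][pos]
            # revolution: mutate some randomly chosen victim elements
            for pos in random.sample(range(n_vertices), max(1, int(rev_rate * n_vertices))):
                new[pos] = random.randint(1, n_vertices)
            cols[k] = new
            # exchange colony and imperialist if the colony became better
            if cost(new) < cost(imps[e]):
                imps[e], cols[k] = new, imps[e]
        if min(cost(i) for i in imps) == 0:        # a valid coloring was found
            break
    return min(imps + cols, key=cost)

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]           # assumed sample graph
print(simplified_mcca(edges, 4))                   # e.g. a conflict-free coloring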
5. RESULTS
Five datasets are used to assess the efficiency of the proposed MCCA method: Myciel3.col, Myciel4.col, Myciel5.col, queen5_5.col, and queen7_7.col [18]. These datasets cover instances of low, medium and large dimension. Myciel3.col has 11 vertices and 20 edges, Myciel4.col has 23 vertices and 71 edges, and Myciel5.col has 47 vertices and 236 edges; their chromatic numbers are 4, 5 and 6 respectively. Queen5_5.col has 25 vertices and 160 edges, and Queen7_7.col has 49 vertices and 476 edges; their chromatic numbers are 5 and 7 respectively. The characteristics of these datasets are summarized in Table 1, and Table 2 lists the general parameter settings of MCCA used in our implementation. The proposed algorithm is implemented in MATLAB on a computer with a 3.00 GHz CPU and 512 MB RAM. The efficiency of the MCCA algorithm is evaluated on the graph coloring benchmarks, and its performance is measured by the following criterion: the average success rate over 10 replication runs of the algorithm simulation,

performance = (SR / TR) × 100% (5)

where SR is the number of successful runs and TR is the total number of simulation runs; the higher the number of correct and successful runs, the better the performance of the algorithm. Table 3 gives the results over 10 runs obtained with the performance measure of equation (5). From our simulations it appears that the proposed algorithm can find valid and optimal colorings for the graph coloring problem instances. It is also clear that proper tuning of the algorithm parameters is very important for the algorithm to be successful. MCCA has a low runtime and can be used for large datasets.
Table 1. Characteristics of the datasets considered
Graph          Number of Vertices   Number of Edges   Chromatic Number
Myciel3.col    11                   20                4
Myciel4.col    23                   71                5
Myciel5.col    47                   236               6
queen5_5.col   25                   160               5
queen7_7.col   49                   476               7
Table 2. The MCCA method parameter setup
Parameter                        Value
Population size                  300
Number of initial imperialists   10% of population size
Number of colonies               (Population size) – (Number of initial imperialists)
Iteration count                  100
Revolution rate                  0.30
Uniting threshold                0.02
Assimilation coefficient         2
Assimilation angle coefficient   0.45
Damp ratio                       0.90
Table 3. Results of the MCCA algorithm on the five datasets. The quality of the solutions is evaluated using the performance metric; the table shows the mean performance over 10 independent runs.
Graph          Number of Vertices   Number of Edges   MCCA Performance (%)
Myciel3.col    11                   20                100
Myciel4.col    23                   71                100
Myciel5.col    47                   236               97
queen5_5.col   25                   160               94
queen7_7.col   49                   476               83.5
6. CONCLUSIONS
This paper has presented a modified colonial competitive algorithm (MCCA) for finding effective graph colorings, and the success of the proposed method has been demonstrated on five well-known graph coloring benchmarks. The proposed method has a low runtime and can find optimal and valid solutions for the graph coloring problem. The method is suitable for both low- and high-dimensional graph coloring problem instances. Future work will focus on further improving the results for the graph coloring problem by combining the proposed MCCA algorithm with other existing heuristic and mathematical methods.
  • 50. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 REFERENCES [1] Jensen T.R., and Toft B., "Graph Coloring Problems", Wiley interscience Series in Discrete Mathematics and Optimization, 1995. [2] Arathi R., Markov, I. L., and Sakallah, K. A., "Breaking Instance-Independent Symmetries in Exact Graph Coloring", Journal of Artificial Intelligence Research 26, 2006, pp. 289-322. [3] Fleurent C., and Ferland, J.A., “Genetic and hybrid algorithms for graph coloring,” Annals of Operations Research,vol. 63, 1996, pp. 437–461. [4] Anh T.H., Giang T.T.T., and Vinh T.L., “A novel particle swarm optimization – Based algorithm for the graph coloring problem”, Proceedings of International Conference on Information Engineering and Computer Science, ICIECS 2009. [5] Qin, J., Yin Y., and Ban, X., “Hybrid Discrete Particle Swarm Algorithm for Graph Coloring Problem”, Journal of Computers, VOL. 6, NO. 6, June 2011, pp. 1175-1182. [6] SangHyuck, A., SeungGwan L., and TaeChoong Ch., “Modified ant colony system for coloring graphs”, Proceedings of the Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing, and Fourth Pacific Rim Conference on Multimedia, 2003, pp. 1849 – 1853. [7] Dowsland, K. A. and Thompson, J. M., “An improved ant colony optimization heuristic for graph coloring”, Discrete Applied Mathematics, Vol. 156, Issue 3, 2008, pp. 313-324. [8] Fister, I., and Brest, J., “Using differential evolution for the graph coloring”, IEEE Symposium on Differential Evolution (SDE), 2011, pp. 1-7. [9] Hamiez, J-P., and HAO, J.K, “Scatter Search For graph coloring”, Lecture Notes in Computer Science 2310: 168-179, Springer, 2002. [10]Hertz, A., Plumettaz, M., Zufferey, N., “Variable space search for graph coloring”, Discrete Applied Mathematics Vol. 156, 2008, pp. 2551–2560. [11]Torkestani, J.A., and Meybodi, M.R., “Graph Coloring Problem Based on Learning Automata”, International Conference on Information Management and Engineering, ICIME '09, 2009, pp. 718-722. [12]Choudhary, S., and Purohit, G.N., “Distributed algorithm for optimized vertex coloring”, International Conference on Methods and Models in Computer Science (ICM2CS), 2010, pp. 65–69. [13]Galinier. P., “Hybrid Evolutionary Algorithms for graph coloring”. J.Combin. Optim. Vol. 3, No.4, 1999, pp. 379-397. [14]Yang, Z., Liu, H., Xiao, X., and Wu, W., “Improved hybrid genetic algorithm and its application in auto-coloring problem”, International Conference on Computer Design and Applications (ICCDA), 2010, pp. 461-464. [15]Atashpaz-Gargari, E., Hashemzadeh, F., Rajabioun, R., Lucas, C., “Colonial competitive algorithm: A novel approach for PID controller design in MIMO distillation column process”, International Journal of Intelligent Computing and Cybernetics, Vol. 1, issue: 3, 2008, pp. 337 – 355. [16]Kubale M., "Introduction to Computational Complexity and Algorithmic Graph Coloring", Gdanskie Towarzystwo Naukowe, 1998. [17]Melanie, M., “An Introduction to genetic Algorithms”, Massachusetts: MIT Press, 1999. [18][Online:] http://mat.gsia.cmu.edu/COLOR/instances [Accessed on 10 March 2013].
  • 51. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 Security and Privacy in E-Passport Scheme using Authentication Protocols and Multiple Biometrics Technology V.K. NARENDIRA KUMAR Assistant Professor, Department of Information Technology, Gobi Arts & Science College (Autonomous), Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India. B. SRINIVASAN Associate Professor, PG & Research Department of Computer Science, Gobi Arts & Science College (Autonomous), Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India. ABSTRACT Electronic passports (e-Passports) have known a wide and fast deployment all around the world since the International Civil Aviation Organization (ICAO) the world has adopted standards whereby passports can store biometric identifiers. The purpose of biometric passports is to prevent the illegal entry of traveler into a specific country and limit the use of counterfeit documents by more accurate identification of an individual. The e-passport, as it is sometimes called, represents a bold initiative in the deployment of two new technologies: Cryptography security and multiple biometrics (face, fingerprints, palm prints and iris). A passport contains the important personal information of holder such as photo, name, date of birth and place, nationality, date of issue, date of expiry, authority and so on. The goal of the adoption of the electronic passport is not only to expedite processing at border crossings, but also to increase security. Important in their own right, e-passports are also the harbinger of a wave of next-generation e-passport: several national governments plan to deploy e-passport integrating cryptography algorithm and multiple biometrics. The paper consider only those passport scenarios whose passport protocols base on public-key cryptography, certificates, and a public key infrastructure without addressing the protocols itself detailed, but this is no strong constraint. Furthermore assume the potential passport applier to use ordinary PCs with Windows or Linux software and an arbitrary connection to the Internet. Technological securities issues are to be found in several dimension, but below paper focus on hardware, software, and infrastructure as some of the most critical issues. Keywords Biometrics, e-Passport, Internet, Face, Iris, Palm Print and Fingerprint. 1. INTRODUCTION The electronic passports have been successfully deployed in many countries around the world. Besides classical “paper” properties, these travel documents are equipped with an electronic chip employing wireless communication interface, so-called RFID chip (Radio Frequency
  • 52. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 Identification). In addition to the electronic copy of the data printed in the passport (name of the holder, birth date, photo, etc.), the chip may contain e.g. biometric measures of the holder and may employ sophisticated cryptographic techniques providing enhanced security compared to the classical passports. For instance, it should be much harder to copy an electronic passport compared the classical one. The e-Passports create opportunities for States to enhance global civil aviation safety while at the same time improving the efficiency of aviation operations. The e-Passport can contribute to this because verification of the public key infrastructure certificates associated with e-Passports can provide border control authorities with an assurance that documents are genuine and unaltered, which in turn allows the biometric information contained in e- Passports to be relied on to automate aspects of the border clearance process. RFID chip has no conductive power contacts that would supply it with the energy, other means from the world of physics have to be borrowed. The power and the communication channels employ the near magnetic field around the reader. For instance, when the chip needs to send information to the reader, it alters this surrounding field which is detected by the reader. Of course, if this modification is not properly filtered, unwanted information about the behavior of the chip may propagate in the surrounding electromagnetic field, as well. This phenomenon is what cryptologists call a side channel. Electronic passports include contactless chip which stores personal data of the passport holder, information about the passport and the issuing institution. In its simplest form an electronic passport contains just a collection of read-only files, more advanced variants can include sophisticated cryptographic mechanisms protecting security of the document and / or privacy of the passport holder. Its goal is to provide foolproof passport identification using a combination of biometrics and cryptographic security. 2. LITERATURE SURVEY Juels et al (2005) discussed security and privacy issues that apply to e- passports. They expressed concerns that, the contact-less chip embedded in an e-passport allows the e-passport contents to be read without direct contact with an IS and, more importantly, with the e-passport booklet closed. They argued that data stored in the chip could be covertly collected by means of “skimming” or “eavesdropping”. Because of low entropy, secret keys stored would be vulnerable to brute force attacks as demonstrated by Laurie (2007). Koch and Karger (2005) suggested that an
  • 53. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3 e-passport may be susceptible to “splicing attack”, “fake finger attack” and other related attacks that can be carried out when an e-passport bearer presents the e-passport to hotel clerks. There has been considerable press coverage (Johnson, 2006; Knight, 2006; Reid, 2006) on security weaknesses in e-passports. These reports indicated that it might be possible to “clone” an e-passport. 2.1. Biometrics in passports Biometrics in e-passports complying with the ICAO specifications now provide for the optional inclusion of an encoded biometric to confirm the holder's identity, or other data to verify the document's authenticity. This makes possible an unprecedented level of document security, offering border control authorities a high level of confidence in the validity of travel documents. A biometric in a machine readable passport will only be able to contain information of the passport holder, and no other additional person. Therefore, this section only covers the vulnerabilities of facial images, fingerprints, palm print and iris images. 2.2. Face Recognition Face recognition are the most common biometric characteristic used by humans to make a personal recognition, hence the idea to use this biometric in technology. This is a no intrusive method and is suitable for covert recognition applications. The applications of facial recognition range from static ("mug shots") to dynamic, uncontrolled face identification in a cluttered background (subway, airport). Face verification involves extracting a feature set from a two-dimensional image of the user's face and matching it with the template stored in a database. The most popular approaches to face recognition are based on either: 1) the location and shape of facial attributes such as eyes, eyebrows, nose, lips and chin, and their spatial relationships, or 2) the overall (global) analysis of the face image that represents a face as a weighted combination of a number of canonical faces. It is questionable if a face itself is a sufficient basis for recognizing a person from a large number of identities with an extremely high level of confidence. Facial recognition system should be able to automatically detect a face in an image, extract its features and then recognize it from a general viewpoint (i.e., from any pose) which is a rather difficult task. Another problem is the fact that the face is a changeable social organ displaying a variety of expressions [4]. 2.3. Fingerprint Recognition A fingerprint is a pattern of ridges and furrows located on the tip of each finger. Fingerprints were used for personal identification for many centuries and the matching accuracy was very high. Patterns have been extracted by
  • 54. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4 creating an inked impression of the fingertip on paper. Today, compact sensors provide digital images of these patterns. Fingerprint recognition for identification acquires the initial image through live scan of the finger by direct contact with a reader device that can also check for validating attributes such as temperature and pulse. In real-time verification systems, images acquired by sensors are used by the feature extraction module to compute the feature values. The feature values typically correspond to the position and orientation of certain critical points known as minutiae points. The matching process involves comparing the two-dimensional minutiae patterns extracted from the user's print with those in the template. One problem with the current fingerprint recognition systems is that they require a large amount of computational resources [2]. 2.4. Palm print Recognition The palm print recognition module is designed to carry out the person identification process for the unknown person. The palm print image is the only input data for the recognition process. The person identification details are the expected output value. The input image feature is compared with the database image features. The relevancy is estimated with reference to the threshold value. The most relevant image is selected for the person’s identification. If the comparison result does not match with the input image then the recognition process is declared as unknown person. The recognition module is divided into four sub modules. They are palm print selection, result details, ordinal list and ordinal measurement. The palm print image selection sub module is designed to select the palm print input image. The file open dialog is used to select the input image file. The result details produce the list of relevant palm print with their similarity ratio details. The ordinal list shows the ordinal feature based comparisons. The ordinal measurement sub module shows the ordinal values for each region. 2.5. Iris Recognition Iris recognition technology is based on the distinctly colored ring surrounding the pupil of the eye. Made from elastic connective tissue, the iris is a very rich source of biometric data, having approximately 266 distinctive characteristics. These include the orbicular meshwork, a tissue that gives the appearance of dividing the iris radically, with striations, rings, furrows, a corona, and freckles. Iris recognition technology uses about 173 of these distinctive characteristics. Iris recognition can be used in both verification and identification systems. Iris recognition systems use a small, high-quality camera to capture a black and white, high-resolution image of the iris. The systems then define the boundaries of the iris, establish a coordinate system over the iris, and define the zones for analysis within the coordinate system [12].
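The identification flow described above (compare the input features with every enrolled template, keep the most similar one, and declare an unknown person when the similarity falls below the threshold) can be sketched as follows. The feature representation, the similarity score and the threshold are all illustrative assumptions rather than part of the system described:

import numpy as np

def identify(query, database, threshold):
    # database maps person_id -> enrolled template (a fixed-length feature vector)
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = 1.0 / (1.0 + np.linalg.norm(query - template))  # toy similarity
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score < threshold:
        return "unknown person", best_score
    return best_id, best_score

enrolled = {"alice": np.array([0.1, 0.9, 0.3]), "bob": np.array([0.7, 0.2, 0.5])}
print(identify(np.array([0.12, 0.88, 0.31]), enrolled, threshold=0.9))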
  • 55. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 5 2.6. Design of Biometric System Five objectives, cost, user acceptance and environment constraints, accuracy, computation speed and security should be considered when designing a biometric system. They are inter-related, as is shown in Figure 1.2. Reducing accuracy can increase speed. Typical examples are hierarchical approaches. Reducing user acceptance can improve accuracy. For instance, users are required to provide more samples for training the system. Increasing cost can enhance security. More sensors can be embedded to collect different signals for aliveness detection. In some applications, some environmental constraints such as memory usage, power consumption, size of templates, and size of devices have to be factored into a design. A biometric system installed in a PDA (Personal Digital Assistant) requires low power and memory usage, but these requirements are not essential for access control. A practical biometric system should balance all these aspects [7]. 3. E-PASSPORT PKI VALIDATION E-Passport validation achieved via the exchange of Public Key Infrastructure (PKI) certificates is essential for the interoperability benefits of e-Passports to be realized. PKI validation does not require or involve any exchange of the personal data of passport holders, and the validation transactions help combat identity fraud. The business case for validating e- Passports is compelling. Border control authorities can confirm that:  The document held by the traveler was issued by a bonfire authority.  The biographical and biometric information endorsed in the document at issuance has not subsequently been altered.  Provided active authentication and / or chip authentication is supported by the e-Passport, the electronic information in the document is not a copy (i.e. clone).  If the document has been reported lost or has been cancelled, the validation check can help confirm whether the document remains in the hands of the person to whom it was issued. As a result passport issuing authorities can better engage border control authorities in participating countries in identifying and removing from circulation bogus documents. E-Passport validation is therefore an essential element to capitalize on the investment made by States in developing e- Passports to contribute to improved border security and safer air travel globally. Because the benefits of e-Passport validation are collective, cumulative and universal, the broadest possible implementation of e- Passport validation is desirable.
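The "has not subsequently been altered" check above amounts to comparing the data groups read from the chip against the hashes carried in the signed security object. A minimal sketch of that comparison, assuming the signature over the security object has already been verified and using illustrative field names and contents:

import hashlib

def data_groups_unaltered(data_groups, signed_hashes):
    # data_groups: name -> raw bytes read from the chip
    # signed_hashes: name -> hex digest recorded in the (already verified) security object
    for name, content in data_groups.items():
        if hashlib.sha256(content).hexdigest() != signed_hashes.get(name):
            return False
    return True

dg = {"DG1_MRZ": b"P<UTOEXAMPLE<<HOLDER<<<<<<<<", "DG2_FACE": b"face-image-bytes"}
sod = {name: hashlib.sha256(content).hexdigest() for name, content in dg.items()}
print(data_groups_unaltered(dg, sod))   # True while nothing has been tampered with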
  • 56. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6 4. IMPLEMENTATION OF E-PASSPORT SYSTEM In order to implement this internet passport authentication system using multiple biometric identification efficiently, ASP.NET program is used. This program could speed up the development of this system because it has facilities to draw forms and to add library easily. There are three ways of doing authentication and authorization in ASP.NET: Biometric authentication is the process of determining the authenticity of a user based on the user's credentials. Whenever a user logs on to an application, the user is first authenticated and then authorized. The application's web.config file contains all of the configuration settings for an ASP.NET application. It is the job of the authentication provider to verify the credentials of the user and decide whether a particular request should be considered authenticated or not. An biometric authentication provider is used to prove the identity of the users in a system. ASP.NET provides three ways to authenticate a user: Forms Authentication: This authentication mode is based on cookies where the user name and the password are stored either in a text file or the database. After a user is authenticated, the user's credentials are stored in a cookie for use in that session. When the user has not logged in and requests for a page that is insecure, he or she is redirected to the login page of the application. Forms authentication supports both session and persistent cookies. Windows Authentication: This is the default authentication mode in ASP.NET. Using this mode, a user is authenticated based on his/her Windows account. Windows Authentication can be used only in an intranet environment where the administrator has full control over the users in the network. Passport Authentication: Passport authentication is a centralized authentication service that uses Microsoft's Passport Service to authenticate the users of an application. It allows the users to create a single sign-in name and password to access any site that has implemented the Passport single sign-in (SSI) service. Authorization is the process of determining the accessibility to a resource for a previously authenticated user. Note that authorization can only work with authenticated users, hence ensuring that no un-authenticated user can access the application. The default authentication mode is anonymous authentication.
  • 57. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 7 4.1. Passport Authentication in Win HTTP Microsoft Windows HTTP Services (Win HTTP) fully support the client side use of the Passport authentication protocol. It provides an overview of the transactions involved in Passport authentication and how to handle them. Win HTTP provides platform support for e-Passport by implementing the client-side protocol for Passport authentication. It frees applications from the details of interacting with the Passport infrastructure and the Stored User Names, Passwords and biometric identification. 4.2. Passport Single Sign-In Passport allows users to create a single sign-in name, password and biometric identification to access passport site that has implemented the Passport single sign-in (SSI) service. By implementing the Passport SSI, it won't have to implement user-authentication mechanism. Users authenticate with the SSI, which passes their identities to passport site securely. Although passport authenticates users, it doesn't grant or deny access to individual sites i.e. Passport does only authentication not authorization. Passport simply tells a participating site who the user is. Each site must implement its own access-control mechanisms based on the user's Passport User ID (PUID). The following figure 2 shows overview of Authentication works. Figure 1. Overview of Passport Authentication. P1 Initial Page request, P2 Redirect for authentication, P3 Authentication request sign-in page, P4 Sign-in page, P5 User credentials, P6 Update website cookies and redirect,
  • 58. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 8 P7 Encrypted authentication query string, P8 Site cookies and requested web page First user requests any page from his web server. Since user is not authenticated, passport web server redirects its request for authentication with Sign-In logo. When user presses Sign-In button, request will go to Passport server for Sign-In page. Once the Sign-In page comes to browser, user will enter his authentication details like Passport ID, Password and biometric identification. When user credentials are submitted, Credentials are validated in Passport server. Then Cookies are created in server and response is send to the browser with encrypted query string. Now both cookies and query string is having details about authentication. Once user is authenticating, he will be taken to page which is requested first. 1. Web user authenticates with enterprise security system (authentication can be through Web server) 2. Enterprise security system provides an authentication reference to Web user 3. Web user requests a dynamic resource from Web server, providing authentication reference 4. Web server requests application function from application on behalf of Web user, providing Web user’s authentication reference 5. Application requests authentication document from enterprise security system, corresponding to Web user’s authentication reference 6. Enterprise security system provides authentication document, including authorization attributes for the Web user, and authentication event description 7. Application performs application function for Web server 8. Web server generates dynamic resource for Web user
  • 59. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 9 Figure 2. Passport Application Chain 4.3. Initial Request When a client requests a resource on a server that requires Passport authentication, the server checks the request for the presence of tickets. If a valid ticket is sent with the request, the server responds with the requested resource. If the ticket does not exist on the client, the server responds with a 302 status code. The response includes the challenge header, "WWW- Authenticate: Passport". Clients that are not using Passport can follow the redirection to the Passport login server. More advanced clients typically contact the Passport nexus to determine the location of the Passport login server. The following figure 3 shows the initial request to a Passport affiliate. Central to the Passport network is the Passport Nexus, which facilitates synchronization of Passport participant sites to assure that each site has the latest details on network configuration and other issues. Each Passport component (Passport Manager, Login servers, Update servers, and so on) periodically communicates with the Nexus to retrieve the information it needs to locate, and properly communicate with, the other components in the Passport network. This information is retrieved as an XML document called a Component Configuration Document, or CCD.
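On the client side, the start of this exchange can be pictured with a short Python sketch: an unauthenticated request to a protected page comes back with a 302 status and the "WWW-Authenticate: Passport" challenge described above. The URL is a placeholder, and the sketch does not implement the subsequent nexus or login-server exchange:

import requests

def needs_passport_login(url):
    # Initial request: do not follow redirects, just look for the Passport challenge.
    response = requests.get(url, allow_redirects=False)
    challenge = response.headers.get("WWW-Authenticate", "")
    return response.status_code == 302 and challenge.startswith("Passport")

if needs_passport_login("https://example.org/protected/page"):   # hypothetical resource
    print("Redirect to the Passport login server to obtain tickets")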
  • 60. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 Figure 3. The initial request to a Passport affiliate. Figure 4. A client ticket request to a Passport login server. 4.4. Passport Login Server The figure 4 shows the passport login server to a Passport affiliate. A Passport login server handles all requests for tickets for any resource in a Passport domain authority. Before a request can be authenticated using Passport, the client application must contact the login server to obtain the appropriate tickets. When a client requests tickets from a Passport login server, the login server typically responds with a 401 status code to indicate that user credentials must be provided. When these credentials are provided, the login server responds with the tickets required to access the specified resource on the server that contains the originally requested resource. The login server can also redirect the client to another server that can provide the requested resource.
  • 61. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 4.5. Authenticated Request When the client has the tickets that correspond to a given server, those tickets are included with all requests to that server. If the tickets have not been modified since they were retrieved from the Passport login server, and the tickets are valid for the resource server, the resource server sends a response that includes both the requested resource and cookies that indicate that the user is authenticated for future requests. Figure 5. An authenticated request to the Passport login server. The additional cookies in the response are intended to speed the authentication process. Additional requests in the same session for resources on servers in the same Passport Domain Authority all include these additional cookies. Credentials do not need to be sent to the login server again until the cookies expire. 4.6. Passport in Win HTTP Win HTTP handles many of the transaction details internally for Passport authentication. During the initial request, the server responds with a 302 status code when authentication is necessary. The 302 status code actually indicates a redirection and is part of the Passport protocol for backwards compatibility. Win HTTP hides the 302 status code and contacts the Passport nexus, and then the login server. The Win HTTP application is notified of the 401 status code sent by the login server to request user credentials. To the application, however, it appears as if the 401 status originates from the server from which the resource was requested. In this way, the Win HTTP application is unaware of interactions with other servers, and it can handle Passport authentication with the same code that handles other authentication schemes.
Typically, a Win HTTP application responds to a 401 status code by supplying authentication credentials. When credentials are supplied with WinHttpSetCredentials or SetCredentials for Passport authentication, the credentials are actually being sent to the login server, not to the server indicated in the request. Once retrieved, tickets are managed internally and are automatically sent to applicable servers in future requests. Win HTTP can successfully complete the Passport authentication even if an application disables auto redirection. However, after the Passport authentication is complete, an implicit redirect must occur from the Passport login server URL back to the original URL. If an application has disabled automatic redirection, Win HTTP requires that the application give Win HTTP "permission" to redirect automatically in this special case.
5. ON-LINE SECURE E-PASSPORT PROTOCOL
To resolve the security issues identified in both the first- and second-generation e-Passports, in this section we present an on-line secure e-Passport protocol (OSEP protocol). The proposed protocol leverages the infrastructure available for standard non-electronic passports to provide mutual authentication between an e-Passport and an IS. Currently, most security organizations are involved in passive monitoring of border security checkpoints. When a passport bearer is validated at a border security checkpoint, the bearer's details are collected and entered into a database. The security organization compares this database against the database of known offenders (for instance, terrorists and wanted criminals). The OSEP protocol changes this to an active monitoring system: the border security checkpoint or the DV can now crosscheck against the database of known offenders themselves, thus simplifying the process of identifying criminals.
5.1. Internet Passport Initial Setup
All entities involved in the protocol share the public quantities p, q, g where:
- p is the modulus, a prime number of the order of 1024 bits or more.
- q is a prime number in the range of 159-160 bits.
- g is a generator of order q, i.e. for all i < q, g^i ≠ 1 mod p.
- Each entity has its own public/private key pair (PK_i, SK_i), where PK_i = g^(SK_i) mod p.
- Entity i's public key (PK_i) is certified by its root certification authority (j), and is represented as CERT_j(PK_i, i).
- The public parameters p, q, g used by an e-Passport are also certified by its root certification authority.
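A toy Python sketch of this setup and of the Diffie-Hellman-style key agreement used in Phase One below is given here. The parameters are deliberately tiny (a real deployment needs a prime p of 1024 bits or more and a 160-bit q, as stated above), and signatures, certificates and encryption are omitted:

import secrets

p, q, g = 23, 11, 4        # toy parameters: g generates the order-11 subgroup mod 23

def generate_keypair():
    # SK in [1, q-1], PK = g^SK mod p
    sk = secrets.randbelow(q - 1) + 1
    return pow(g, sk, p), sk

PK_IS, SK_IS = generate_keypair()   # e.g. the IS long-term pair used for signing (omitted)

# e-Passport picks eP and sends K_eP; the IS picks IS and sends K_IS
eP = secrets.randbelow(q - 1) + 1
K_eP = pow(g, eP, p)
IS = secrets.randbelow(q - 1) + 1
K_IS = pow(g, IS, p)

# both ends derive the same session key K_ePIS = g^(eP * IS) mod p
assert pow(K_IS, eP, p) == pow(K_eP, IS, p)
K_ePIS = pow(K_eP, IS, p)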
5.2. Phase One – Inspection System Authentication
Step 1 (IS) When an e-Passport is presented to an IS, the IS reads the MRZ information on the e-Passport using an MRZ reader and issues the command GET CHALLENGE to the e-Passport chip.
Step 2 (eP) The e-Passport chip then generates a random eP ∈_R [1, q−1] and computes K_eP = g^eP mod p, playing its part in the key agreement process to establish a session key. The e-Passport replies to the GET CHALLENGE command by sending K_eP and its domain parameters p, q, g.
eP → IS : K_eP, p, q, g
Step 3 (IS) On receiving the response from the e-Passport, the IS generates a random IS ∈_R [1, q−1] and computes its part of the session key as K_IS = g^IS mod p. The IS digitally signs the message containing the MRZ value of the e-Passport and K_eP:
S_IS = SIGN_SK_IS (MRZ || K_eP)
It then contacts the nearest DV of the e-Passport's issuing country and obtains its public key. The IS encrypts and sends its signature S_IS along with the e-Passport's MRZ information and K_eP using the DV's public key PK_DV.
IS → DV : ENC_PK_DV (S_IS, MRZ, K_eP), CERT_CVCA (PK_IS, IS)
Step 4 (DV) The DV decrypts the message received from the IS and verifies CERT_CVCA (PK_IS, IS) and the signature S_IS. If the verification holds, the DV knows that the IS is genuine, and creates a digitally-signed message S_DV to prove the IS's authenticity to the e-Passport.
S_DV = SIGN_SK_DV (MRZ || K_eP || PK_IS), CERT_CVCA (PK_DV, DV)
The DV encrypts and sends the signature S_DV using the public key PK_IS of the IS.
DV → IS : ENC_PK_IS (S_DV, [PK_eP])
The DV may choose to send the public key of the e-Passport if required. This has an obvious advantage: because the IS now trusts the DV to be genuine, it can obtain a copy of the e-Passport's public key to verify during e-Passport authentication.
Step 5 (IS) After decrypting the message received, the IS computes the session key K_ePIS = (K_eP)^IS mod p and encrypts the signature received from the DV, the e-Passport MRZ information and K_eP using K_ePIS. It also digitally signs its part of the session key, K_IS.
IS → eP : K_IS, SIGN_SK_IS (K_IS, p, q, g), ENC_K_ePIS (S_DV, MRZ, K_eP)
5.3 Phase Two - E-Passport Authentication
Step 1 (eP) The IS issues an INTERNAL AUTHENTICATE command to the e-Passport. On receiving the command, the e-Passport creates a signature S_eP = SIGN_SK_eP (MRZ || K_ePIS) and sends its
  • 64. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 14 domain parameter certificate to the IS. The entire message is encrypted using the session key KePIS. eP → IS : ENCKePIS (SeP , CERTDV (PKeP), CERTDV (p, q, g)) Step 2 (IS) The IS decrypts the message and verifies CERTDV (p, q, g), CERTDV (PKeP) and SeP. If all three verifications hold then the IS is convinced that the e-Passport is genuine and authentic. During the IS authentication phase, and IS sends the e-Passport’s MRZ information to the nearest e-Passport’s DV, which could be an e-Passport country’s embassy. Embassies are DV’s because they are allowed to issue e- Passports to their citizens and because most embassies are located within an IS’s home country, any network connection issues will be minimal. Sending the MRZ information is also advantageous, because the embassy now has a list of all its citizens that have passed through a visiting country’s border security checkpoint. We do not see any privacy implications, because, in most cases, countries require their citizens to register at embassies when they are visiting a foreign country. 6. EXPERIMENTAL RESULTS The key application of a biometrics solution is the identity verification problem of physically tying an MRTD holder to the MRTD they are carrying. There are several typical applications for biometrics during the enrolment process of applying for a passport: The applicant’s biometric template(s) generated by the enrolment process can be searched against one or more biometric databases (identification) to determine whether the applicant is known to any of the corresponding systems (for example, holding a passport under a different identity, criminal record, holding a passport from another state). When the applicant collects the passport (or presents them for any step in the issuance process after the initial application is made and the biometric data is captured) their biometric data can be taken again and verified against the initially captured template . The identities of the staff undertaking the enrolment can be verified to confirm they have the authority to perform their assigned tasks. This may include biometric authentication to initiate digital signature of audit logs of various steps in the issuance process, allowing biometrics to link the staff members to those actions for which they are responsible. Each time traveler (i.e. MRTD holders) enters or exit a State, their identities can be verified against the images or templates created at the time their travel documents were issued. This will ensure that the holder of a document is the legitimate person to whom it was issued and will enhance the effectiveness of any Advance Passenger Information (API) system. Ideally, the biometric
  • 65. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 15 template or templates should be stored on the travel document along with the image, so that travelers’ identities can be verified in locations where access to the central database is unavailable or for jurisdictions where permanent centralized storage of biometric data is unacceptable. Two-way check - The traveler’s current captured biometric image data, and the biometric template from their travel document (or from a central database), can be matched to confirm that the travel document has not been altered. Three-way check - The traveler’s current biometric image data, the image from their travel document, and the image stored in a central database can be matched (via constructing biometric templates of each) to confirm that the travel document has not been altered. This technique matches the person, with their passport; with the database recording the data that was put in that passport at the time it was issued. Four-way check - A fourth confirmatory check, albeit not an electronic one, is visually matching the results of the 3-way check with the digitized photograph on the Data Page of the traveler’s passport. Besides the enrolment and border security applications of biometrics as manifested in one-to-one and one-to-many matching, States should also have regard to, and set their own criteria, in regard to: Accuracy of the biometric matching functions of the system. Issuing States must encode one or more facial, fingerprint, palm print or iris biometrics on the MRTD as per LDS standards (or on a database accessible to the Receiving State). Given an ICAO-standardized biometric image and/or template, Receiving States must select their own biometric verification software, and determine their own biometric scoring thresholds for identity verification acceptance rates – and referral of imposters. 7. CONCLUSIONS The work represents an attempt to acknowledge and account for the presence on inspection system for biometric passport using face, fingerprint, and iris recognition towards their improved identification. The application of biometric recognition in passports requires high accuracy rates; secure data storage, secure transfer of data and reliable generation of biometric data. The passport data is not required to be encrypted, identity thief and terrorists can easily obtain the biometric information. The discrepancy in privacy laws between different countries is a barrier for global implementation and acceptance of biometric passports. A possible solution to un-encrypted wireless access to passport data is to store a unique cryptographic key in printed form that is also obtained upon validation. The key is then used to decrypt passport data and forces thieves to physically obtain passports to steal personal information. More research into the technology, additional access and auditing policies, and further security
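The two-way and three-way checks described above reduce to pairwise template comparisons. A small illustrative sketch follows; the similarity function and the threshold stand in for a real biometric matcher and are not part of the specification:

import numpy as np

def similarity(t1, t2):
    # toy similarity in (0, 1]; a real system would use matcher scores
    return 1.0 / (1.0 + float(np.linalg.norm(t1 - t2)))

def three_way_check(live_capture, document_template, database_template, threshold=0.9):
    # traveller vs. passport, traveller vs. issuance database, passport vs. database
    return (similarity(live_capture, document_template) >= threshold and
            similarity(live_capture, database_template) >= threshold and
            similarity(document_template, database_template) >= threshold)

live = np.array([0.11, 0.92, 0.30])
doc = np.array([0.10, 0.90, 0.30])
db = np.array([0.10, 0.90, 0.31])
print(three_way_check(live, doc, db))   # True when all three comparisons agree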
  • 66. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 16 enhancements are required before biometric recognition is considered as a viable solution to biometric security in passports. The adversaries might exploit the passports with the lowest level of security. The inclusion of biometric identification information into machine readable passports will improve their robustness against identity theft if additional security measures are implemented in order to compensate for the limitations of the biometric technologies. It enables countries to digitize their security at border control and provides faster and safer processing of an e-passport bearer. The main cryptographic features and biometrics used with e- passports and considered the surrounding procedures. E-passports may provide valuable experience in how to build more secure and biometric identification platforms in the years to come. REFERENCES [1] A. K. Jain, R. Bolle, “Biometric personal identification in networked society” 2010, Norwell, MA: Kluwer. [2] C.Hesher, A.Srivastava, G.Erlebacher, “A novel technique for face recognition using range images” in the Proceedings of Seventh International Symposium on Signal Processing and Its Application, 2009. [3] HOME AFFAIRS JUSTICE, “EU standard specifications for security features and biometrics in passports and travel documents”, Technical report, European Union, 2008. [4] ICAO, “Machine readable travel documents”, Technical report, ICAO 2011. [5] KLUGLER, D., “Advance security mechanisms for machine readable travel documents, Technical report”, Federal Office for Information Security (BSI), Germany, 2012. [6] ICAO, “Machine Readable Travel Documents”, Part 1 Machine Readable Passports. ICAO, Fifth Edition, 2007 [7] Riscure Security Lab, “E-passport privacy attack”, at the Cards Asia Singapore, April 2012. [8] D. Monar, A. Juels, and D. Wagner, “Security and privacy issues in e-passports”, Cryptology ePrint Archive, Report 2005/095, 2011. [9] ICAO, “Biometrics Deployment of Machine Readable Travel Documents”, Version 2.0, May 2010.
  • 67. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 Comparative Study of WLAN, WPAN, WiMAX Technologies Prof. Mangesh M. Ghonge Assistant Professor Jagadambha College of Engineering & Technology Yavatmal-445001 Prof. Suraj G. Gupta Assistant Professor Jawaharlal Darda Institute of Engineering & Technology Yavatmal-445001 ABSTRACT Today wireless communication systems can be classified in two groups. The first group technology provides low data rate and mobility while the other one provides high data rate and bandwidth with small coverage. Cellular systems and Broadband Wireless Access technologies can be given as proper examples respectively. In this paper, WLAN, WPAN and WiMAX technologies are introduced and comparative study in terms of peak data rate, bandwidth, multiple access techniques, mobility, coverage, standardization, and market penetration are presented. Keywords WLAN, WPAN, WiMAX. 1. INTRODUCTION Wireless broadband technologies promise to make all kinds of information available anywhere, anytime, at a low cost, to a large portion of the population. From the end user perspective the new technologies provide the necessary means to make life more convenient by creating differentiated and personalized services. In the last decade we were primarily used to accessing people via voice, but there are of course other forms of communication like gestures, facial expressions, images and even moving pictures. Today we increasingly need user devices wireless for mobility and flexibility with total coverage for small light and affordable terminals than ever. Evolving of circuit switched networks towards packet switched technology high data rates is acquired and this evolution has opened new opportunities. 2.5 and 3G networks provide high mobility for the packet domain users. On the other hand the development of the technology has opened a new era like WLAN, WPAN and WiMAX communication. Therefore the merging IP based services provide broadband data access in fixed, mobile and nomadic environments supporting voice, video and data
  • 68. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 traffic with high speed, high capacity and low cost per bit. In this paper WLAN, WPAN and WiMAX Technologies introduced and comparative analysis is done. 2. LITERATURE REVIEW WLAN technologies were first available in late 1990, when vendors initiated introducing products that operated within the 900 MHz frequency band. These solutions, which used non-standard, proprietary designs, provided data transfer rates of approximately 1Mbps. It was considerably slower than the 10 Mbps speed provided by most wired LANs at that time. In 1992, sellers began selling WLAN products that used the 2.4GHz band. Even if these products provided higher data transfer rates than 900 MHz band products they were expensive provided comparatively low data rates, were prone to radio interference and were often designed to use proprietary radio frequency technologies. The Institute of Electrical and Electronic Engineers started the IEEE 802.11 project in 1990 with the objective to develop a MAC and PHY layer specification for wireless connectivity for fixed, portable and moving stations within an area. 3. IEEE 802.11 WLAN/WI-FI Wireless LAN (WLAN, also known as Wi-Fi) is a set of low tier, terrestrial, network technologies for data communication. The WLAN standard operates on the 2.4 GHz and 5 GHz Industrial, Science and Medical (ISM) frequency bands. It is specified by the IEEE 802.11 standard and it comes in many different variations like IEEE 802.11a/b/g/n. The application of WLAN has been most visible in the consumer market where most portable computers support at least one of the variations. In the present study, we overview on different standard in table-1 and four WLAN standards were preferred for comparison that are IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and IEEE 802.11n because these standards are very much popular among the users. It is noted that all 802.11 standards used Ethernet protocol and Carrier Sense Multiple Access / Collision Avoidance (CSMA/CA) for path sharing [1][12][9]. Standards are a set of specifications that all manufacturers must follow in order for their products to be compatible. This is important to insure interoperability between devices in the market. Standards may provide some optional requirements that individual manufacturers may or may not implement in their products.
  • 69. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3 4. OVERVIEW ON IEEE802.11 WLAN STANDARD Table 1. List of Concurrent and Future IEEE Standard of WLAN/ Wi- Fi.[2][6][7][9][10] Sr. No. IEEE 802.11 Standard Year of Release Comments 01 IEEE 802.11a 1999 Speed 54 Mbits and 5 GHz band 02 IEEE 802.11b 1999 Enhancements to 802.11 to support 5.5 and 11 Mbits speed 03 IEEE 802.11c 2001 Bridge operation procedures; included in the IEEE 802.11D standard 04 IEEE 802.11d 2001 International (country-to-country) roaming extensions 05 IEEE 802.11e 2005 Enhancements: QoS, including packet bursting 06 IEEE 802.11F 2003 Inter-Access Point Protocol, Withdrawn February 2006 07 IEEE 802.11g 2003 54 Mbits, 2.4 GHz standard (backwards compatible with b) 08 IEEE 802.11h 2004 Spectrum Managed 802.11a (5 GHz) for European compatibility 9 IEEE 802.11i 2004 Enhanced security 10 IEEE 802.11j 2004 Extensions for Japan 11 IEEE 802.11k 2008 Radio resource measurement enhancements 12 IEEE 802.11n 2009 Higher throughput improvements using Multiple In Multiple Out 13 IEEE 802.11p 2010 WAVE-Wireless Access for the Vehicular Environment 15 IEEE 802.11r 2008 Fast BSS transition (FT) ( 16 IEEE 802.11s July 2011 Mesh Networking, Extended Service Set (ESS) 17 IEEE 802. 11t Define recommended practice for evolution of 802.11wireless performance. 18 IEEE 802.11u February 2011 Improvements related to Hot Spots and 3rd party authorization of clients, e.g. Cellular network offload 19 IEEE 802.11v February 2011 Wireless network management 20 IEEE 802.11w September 2009 Protected Management Frames 21 IEEE 802.11x - Extensible authentication network for enhancement of security 22 IEEE 802.11y 2008 3650–3700 MHz Operation in the U.S. 23 IEEE 802.11z September 2010 Extensions to Direct Link Setup (DLS) (September 2010) 24 IEEE 802.11aa: June 2012 Robust streaming of Audio Video Transport Streams 25 IEEE 802.11ad December 2012 Very High Throughput 60 GHz
  • 70. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4 26 IEEE 802.11ae March 2012 Prioritization of Management Frames In process 27 IEEE 802.11ac: February 2014 Very High Throughput <6 GHz, potential improvements over 802.11n: better modulation scheme (expected ~10% throughput increase), wider channels (estimate in future time 80 to 160 MHz), multi user MIMO 28 IEEE 802.11af: June 2014 TV Whitespace () 29 IEEE 802.11ah: January 2016 Sub 1 GHz sensor network, smart metering. 30 IEEE 802.11ai: February 2015 Fast Initial Link Setup 31 IEEE 802.11mc: March 2015 Maintenance of the standard 32 IEEE 802.11aj: October 2016 China Millimeter Wave : 33 IEEE 802.11aq May 2015 Pre-association Discovery 34 IEEE 802.11ak - General Link 4.1 IEEE 802.11a Ratification of 802.11a took place in 1999. The 802.11a standard uses the 5 GHz spectrum and has a maximum theoretical 54 Mbps data rate. Like in 802.11g, as signal strength weakens due to increased distance, attenuation (signal loss) through obstacles or high noise in the frequency band, the data rate automatically adjusts to lower rates (54/48/36/24/12/9/6 Mbps) to maintain the connection. The 5 GHz spectrum has higher attenuation (more signal loss) than lower frequencies, such as 2.4 GHz used in 802.11b/g standards. Penetrating walls provide poorer performance than with 2.4 GHz. Products with 802.11a are typically found in large corporate networks or with wireless Internet service providers in outdoor backbone networks [9] [12]. 4.2 IEEE 802.11b In 1995, the Federal Communications Commission had allocated several bands of wireless spectrum for use without a license. The FCC stipulated that the use of spread spectrum technology would be required in any devices. In 1990, the IEEE began exploring a standard. In 1997 the 802.11 standard was ratified and is now obsolete. Then in July 1999 the 802.11b standard was ratified. The 802.11 standard provides a maximum theoretical 11 Megabits per second (Mbps) data rate in the 2.4 GHz Industrial, Scientific and Medical (ISM) band [9][12]. 4.3 IEEE 802.11g In 2003, the IEEE ratified the 802.11g standard with a maximum theoretical data rate of 54 megabits per second (Mbps) in the 2.4 GHz ISM band. As
signal strength weakens due to increased distance, attenuation (signal loss) through obstacles or high noise in the frequency band, the data rate automatically adjusts to lower rates (54/48/36/24/12/9/6 Mbps) to maintain the connection. When both 802.11b and 802.11g clients are connected to an 802.11g router, the 802.11g clients will have a lower data rate. Many routers provide the option of allowing mixed 802.11b/g clients, or they may be set to serve either 802.11b or 802.11g clients only. To put 54 Mbps in perspective, the data rate offered by typical DSL or cable modem service ranges from 768 Kbps (less than 1 Mbps) to 6 Mbps, so 802.11g offers an attractive data rate for the majority of users. The 802.11g standard is backwards compatible with the 802.11b standard. Today 802.11g is still the most commonly deployed standard [9][12].

4.4 IEEE 802.11n
In January 2004, the IEEE 802.11 task group initiated work on the 802.11n amendment. There were numerous draft specifications, delays and disagreements among committee members (even in standards development, politics are involved) before the amendment was finally ratified in 2009 (see Table 1). The goal of 802.11n is to significantly increase the data throughput rate. While there are a number of technical changes, one important change is the addition of multiple-input multiple-output (MIMO) operation and spatial multiplexing. MIMO uses multiple antennas and multiple radios, and therefore draws more electrical power. 802.11n operates in both the 2.4 GHz (802.11b/g) and 5 GHz (802.11a) bands, which requires significant site planning when installing 802.11n devices. The 802.11n specification provides both 20 MHz and 40 MHz channel options, versus the 20 MHz channels used in the 802.11a and 802.11b/g standards. By bonding two adjacent 20 MHz channels, 802.11n can roughly double the data rate when 40 MHz channels are used. However, a 40 MHz channel in the crowded 2.4 GHz band causes interference with neighbouring networks and is not recommended, since in practice it inhibits data throughput in that band; it is recommended to use 20 MHz channels in the 2.4 GHz spectrum, as 802.11b/g does. For best results, 802.11n should be deployed in the 5 GHz spectrum. Deployment of 802.11n takes some planning effort in frequency and channel selection, and some 5 GHz channels require dynamic frequency selection (DFS) to be implemented before they can be used [12][8][9].
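The 54 Mbps peak rate quoted above for 802.11a and 802.11g can be cross-checked from the standard OFDM physical-layer parameters. The following back-of-the-envelope calculation is added here only as an illustration and is not part of the original comparison:

R = \frac{48 \text{ data subcarriers} \times 6 \text{ bits/subcarrier (64-QAM)} \times \tfrac{3}{4} \text{ coding rate}}{4\,\mu\text{s OFDM symbol}} = \frac{216 \text{ bits}}{4\,\mu\text{s}} = 54 \text{ Mbps}

The lower rates in the 54/48/36/24/18/12/9/6 Mbps list follow from the same formula with smaller constellations (BPSK, QPSK, 16-QAM) and coding rates of 1/2, 2/3 or 3/4.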
  • 72. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6 Here, we compared IEEE 802.11 a/b/g/n standard of WLAN/ Wi-Fi we use some basic characteristics like Operating frequency, Modulation technique, Data rate (Mbps), Slot time (µs), Preamble, Throughput, Speed, Indoor Range, Outdoor Range, Multiple Access, Channel Bandwidth, Half/ Full duplex, Number of spatial streams, Mode of operation Ad-hoc, Infrastructure, VANET, FEC Rate, License/Unlicensed. Table 2. Comparison overview of WLAN /Wi-Fi IEEE Standard 802.11 a/ b/ g /n [1][3][4][5][9][11] IEEE 802.11a IEEE 802.11 b IEEE 802.11g IEEE 802.11n Operating frequency 5 GHz UNII/ISM bands 2.4 GHz ISM band 2.4 GHz ISM band 2.4 - 5 GHz Modulation technique BPSK, QPSK, 16-, 64-QAM , OFDM QPSK , DBPSK, DQPSK, CCK, DSSS BPSK, QPSK, 16-, 64-QAM , OFDM 64-QAM, Alamouti, OFDM,CC K, DSSS Data rate (Mbps) 6,9,12,18,24,36,48, 54 1, 2, 5.5, 11 1, 2, 5.5, 11, 6,9,12,18,24,36,48, 54 7.2, 14.4, 21.7, 28.9, 43.3, 57.8, 65, 72.2, 15, 30, 45, 60, 90, 120, 135, 150 Slot time (µs) 9 20 20,(9 optional) Less than 9 Preamble OFDM Long / short (optional) Long/ Short/ OFDM HT PHY for 2.4 and 5 GHz Throughput 23 Mbits 4.3 Mbits 19 Mbits 74 Mbits Speed 54 Mbits 11 Mbits 54 Mbits 248 Mbits Indoor Range 35 Mtrs 38 Mtrs 38 Mtrs 70 Mrs. Outdoor Range 120 Mrs. 140 Mrs. 140 Mrs. 250 Mrs. Multiple Access CSMA/CA CSMA/C A CSMA/CA CSMA/CA Channel Bandwidth 20 MHz 20, 25 MHz 20 MHz 20 or 40 MHz Half/ Full duplex Half Half Half Full duplex Number of spatial streams 1 1 1 1,2,3or 4 Ad-hoc(mode of operation) Yes Yes Yes Yes Infrastructure Yes Yes Yes Yes
VANET Yes Yes Yes Yes FEC Rate 1/2, 2/3, 3/4 NA 1/2, 2/3, 3/4 3/4, 2/3 and 5/6 Licensed/Unlicensed Unlicensed Unlicensed Unlicensed Unlicensed

4.5 Wireless Personal Area Network (WPAN)
Wireless Personal Area Network (WPAN) technologies have fueled the development as well as the wide proliferation of wireless personal devices (e.g. PDAs, Bluetooth headsets, the PSP, etc.). Yet the popularity of these wireless devices has resulted in many forms of frequency spectrum clash amongst the different wireless technologies. To understand the performance of these wireless devices in different interference situations, it is increasingly important to study the coexistence issue amongst the existing wireless technologies. Various wireless technologies have been developed for WPAN purposes. A WPAN could serve to interconnect all the ordinary computing and communicating devices that many people have on their desk or carry with them today, or it could serve a more specialized purpose such as allowing a surgeon and other team members to communicate during an operation. The technology for WPANs is in its infancy and is undergoing rapid development. Proposed operating frequencies are around 2.4 GHz in digital modes. The objective is to facilitate seamless operation among home or business devices and systems. Wireless PAN is based on the IEEE 802.15 standard. In this paper, we concentrate on the three most prominent IEEE standards, 802.15.1, 802.15.3 and 802.15.4; we give an overview of these standards and compare them on the basis of their basic characteristics, applications, limitations and use.

4.6 IEEE 802.15.1
IEEE 802.15 is a working group of the Institute of Electrical and Electronics Engineers (IEEE) 802 standards committee which specifies Wireless Personal Area Network (WPAN) standards; it includes seven task groups. IEEE 802.15.1 [16] is a WPAN standard based on the Bluetooth v1.1 specification, which is a short-range radio technology operating in the unlicensed 2.4 GHz ISM frequency band. The original goal of Bluetooth was to replace the numerous proprietary cables and provide a universal interface for devices to communicate with each other. However, it soon became common to use Bluetooth technology to interconnect various Bluetooth devices into so-called personal area networks and to facilitate more creative ways of exchanging data. The low cost and small footprint of Bluetooth chips consequently met with high demand [9][10][11][14].
4.7 IEEE 802.15.3
IEEE 802.15.3 [17] is designed to facilitate High-Rate Wireless Personal Area Networks (HR-WPAN) for fixed, portable and moving devices within a personal operating space. The main purpose of IEEE 802.15.3 is to provide low cost, low complexity, low power consumption and high data rate connectivity for wireless personal devices. Thus, it is designed to support a data rate of at least 11 Mbps within a range of at least 10 meters. The IEEE 802.15.3 standard operates in the 2.4 GHz ISM frequency band. Unlike IEEE 802.15.1, which employs FHSS at the PHY layer, IEEE 802.15.3 uses Direct Sequence Spread Spectrum (DSSS), and it does not allow changes of operating channel once a connection is initiated [9][10][11][14].

4.8 IEEE 802.15.4
IEEE 802.15.4 [18] addresses the needs of Low-Rate Wireless Personal Area Networks (LR-WPAN). While other WLAN (e.g. IEEE 802.11a/b/g) and WPAN (e.g. IEEE 802.15.1 and 802.15.3) technologies focus on providing high data throughput over wireless ad hoc networks, IEEE 802.15.4 is designed for wireless networks that are mostly static, large in scale, and consume little bandwidth and power. Therefore, the IEEE 802.15.4 technology is anticipated to enable various applications in the fields of home networking, automotive networks, industrial networks, interactive toys and remote metering [9][10][11][14].

Here we compare the different WPAN standards on the basis of basic characteristics such as Topic, Operational Spectrum, Physical Layer Detail, Channel Access, Maximum Data Rate, Modulation Technique, Coverage, Approximate Range, Power Level Issues, Interference, Price, Security, Receiver Bandwidth, Number of Channels, Applications, Mode of operation (Ad hoc, Infrastructure, VANET), License/Unlicensed and QoS needs.

Table 3. Comparison of IEEE standards for WPAN [3][5][13][12][9][10][11][14]. IEEE Standard 802.15.1 802.15.3 802.15.4 Topic Bluetooth High rate WPAN Low rate WPAN Operational Spectrum 2.4 GHz ISM band 2.402-2.480 GHz ISM band 2.4 GHz and 868/915 MHz Physical Layer Detail FHSS, 1600 hops per second Uncoded QPSK, Trellis Coded QPSK or 16/32/64-QAM scheme DSSS with BPSK or MSK (O-QPSK) Channel Access Master-Slave Polling, Time Division Duplex (TDD) CSMA-CA and Guaranteed Time Slot (GTS) in a superframe structure CSMA-CA and Guaranteed Time Slot (GTS) in a superframe structure Maximum Data Rate Up to 1 Mbps (0.72) / 3 Mbps 11-55 Mbps / 110 Mbps 20 kbps (868 MHz), 40 kbps (915 MHz), 250 kbps (2.4 GHz)
Modulation Technique 8DPSK, DQPSK, π/4-DQPSK, GFSK, AFH QPSK, DQPSK, 16/32/64-QAM BPSK, OQPSK, ASK, DSSS, PSSS Coverage <10 m <10 m <20 m Approximate Range 100 m 10 m 75 m Power Level Issues 1 mA-60 mA <80 mA Very low current drain (20-50 µA) Interference Present Present Present Price Low (<$10) Medium Very low Security Less secure; uses SAFER+ encryption at the baseband layer and relies on higher-layer security Very high level of security including privacy, encryption and digital service certificates Security features in development Receiver Bandwidth 1 MHz 15 MHz 2 MHz Number of Channels 79 5 16 Applications WPAN HR-WPAN LR-WPAN Ad hoc Yes Yes Yes Infrastructure No No No VANET Yes Yes Yes License/Unlicensed Unlicensed Unlicensed Unlicensed QoS needs QoS suitable for voice application Very high QoS Relaxed needs for data rate and QoS

4.9 Worldwide Interoperability for Microwave Access (WiMAX)
WiMAX (Worldwide Interoperability for Microwave Access) is a wireless communications standard designed to provide 30 to 40 megabit-per-second data rates, with the 2011 update providing up to 1 Gbit/s for fixed stations. The name "WiMAX" was created by the WiMAX Forum, which was formed in June 2001 to promote conformity and interoperability of the standard. The forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL". WiMAX is based on IEEE 802.16, a family of telecommunications protocols that provide fixed and mobile Internet access; the 2005 revision provided bit rates up to 40 Mbit/s, and the 2011 update up to 1 Gbit/s for fixed stations. It supports frequency bands in the range between 2 GHz and 11 GHz and specifies a metropolitan area networking protocol that enables a wireless alternative to cable, DSL and T1-level services for last mile broadband access, as well as providing backhaul for 802.11 hotspots.
  • 76. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 WiMAX allows for infrastructure growth in underserved markets and is today considered the most cost-effective means of delivering secure and reliable bandwidth capable of supporting business critical, realtime applications to the enterprise, institutions and municipalities. It has proven itself on the global stage as a very effective last mile solution. In the United States though, licensed spectrum availability and equipment limitations have held up early WiMAX adoption. In fact, while there are currently 1.2+ million WiMAX subscribers worldwide, only about 11,000 of those are from the United States. Future growth in this market will be driven by wireless ISPs like Clear wire who intends to cover 120-million covered POPs in 80 markets with WiMAX by the end of 2010. Growth will also be driven by the availability of the 3.65-GHz spectrum that the FCC opened up this past year. In this paper, we compared some IEEE Standard 802.16a, 802.16d, 802.16e, 802.16m on the basis of basic characteristic, Application, Limitation and their used.[2][19] Table 4. Different IEEE Standard under 802.16 Standard [19]. Standard Description Status 802.16-2001 Fixed Broadband Wireless Access (10–66 GHz) Superseded 802.16.2-2001 Recommended practice for coexistence Superseded 802.16c-2002 System profiles for 10–66 GHz Superseded 802.16a-2003 Physical layer and MAC definitions for 2–11 GHz Superseded P802.16b License-exempt frequencies (Project withdrawn) Withdrawn P802.16d Maintenance and System profiles for 2–11 GHz (Project merged into 802.16-2004) Merged 802.16-2004 Air Interface for Fixed Broadband Wireless Access System (rollup of 802.16-2001, 802.16a, 802.16c and P802.16d) Superseded P802.16.2a Coexistence with 2–11 GHz and 23.5–43.5 GHz (Project merged into 802.16.2-2004) Merged 802.16.2-2004 Recommended practice for coexistence (Maintenance and rollup of 802.16.2-2001 and P802.16.2a) Current 802.16f-2005 Management Information Base (MIB) for 802.16-2004 Superseded 802.16- 2004/Cor1- 2005 Corrections for fixed operations (co-published with 802.16e- 2005) Superseded 802.16e-2005 Mobile Broadband Wireless Access System Superseded 802.16k-2007 Bridging of 802.16 (an amendment to IEEE 802.1D) Current 802.16g-2007 Management Plane Procedures and Services Superseded P802.16i Mobile Management Information Base (Project merged into 802.16-2009) Merged 802.16-2009 Air Interface for Fixed and Mobile Broadband Wireless Access System Current
  • 77. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 (rollup of 802.16-2004, 802.16-2004/Cor 1, 802.16e, 802.16f, 802.16g and P802.16i) 802.16j-2009 Multihop relay Current 802.16h-2010 Improved Coexistence Mechanisms for License-Exempt Operation Current 802.16m- 2011 Advanced Air Interface with data rates of 100 Mbit/s mobile and 1 Gbit/s fixed. Also known as Mobile WiMAX Release 2 or Wireless MAN-Advanced. Aiming at fulfilling the ITU- R IMT-Advanced requirements on 4G systems. Current P802.16n Higher Reliability Networks In Progress P802.16p Enhancements to Support Machine-to-Machine Applications In Progress Here our comparison of WiMAX Standard on basis of Spectrum Bandwidth, Propagation, Throughput, Modulation, Usage/ Mobility, Range, Mode of Network (Ad-hoc, Infrastructure, VANET), License/Unlicensed Table 5. Comparison of Different WiMAX Standard 802.16a/ d/ e/ m [2][3][5][9][10][11][20]. WiMAX 802.16 a Fixed WiMAX 802.16d Mobile WiMAX 802.16e MobileWiM AX2.0 802.16m Spectrum Bandwidth 10-66 GHz 2-11GHz 2-6GHz Sub 6 GHz Propagation LOS NLOS NLOS NLOS Throughput up to 134 Mbps up to 75 Mbps up to15 /30 Mbps Over 300Mbps Channelizati on 28 MHz 20 MHz 5 MHz/10 MHz 100 MHz Modulation QPSK, 16QAM 256 subcarriers OFDM- BPSK,QPSK,16 QAM,64QAM OFDMA,QPSK,16QA M,64QAM, 256QAM (optional) 64QAM Usage/ Mobility WMAN Fixed WMAN Fixed WMAN Portable WMAN Portable Range Typical 4- 6 miles Typical 4-6 miles Typical 1-3 miles Typical 1-3 miles Ad-hoc Yes Yes Yes Yes Infrastructure Yes Yes Yes Yes VANET Yes Yes Yes Yes Licensed/Unl icensed Unlicensed Unlicensed 2.3, 2.5, 3.5, 3.7, and 5.8 GHz- Licensed Unlicensed
  • 78. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 12 Finally in this paper, we compared the Different technology WLAN/Wi-Fi, WPAN, WiMax on the basis of IEEE Standard, Operating Frequency, Bandwidth, Data rate, Multiple Access, Coverage, Range, Mode of Network, Target Market. Table 6. Comparison of WLAN, WPAN, WiMAX [1][3][4][5][9] [10][11][12][13][14][20]. WLAN/Wi-Fi WPAN WiMax Fixed/Mobile IEEE Standard 802.11 802.15 802.16 Operating Frequency 2.4- 5 GHz 2.4GHz 10-66 GHz Bandwidth 20MHz 15 MHZ 5-6 GHz Data rate 1-150 Mbps 40 kbps- 110 Mbps 15,30,75,134,over300Mbps Multiple Access CSMA/CA CSMA-CA OFDM/OFDMA Coverage Small Small Low Range 250 m 10- 75 m 1-6 mile Mode of Network Ad-hoc, Infrastructure and VANET Ad-hoc and VANET Ad-hoc, Infrastructure and VANET Target Market Home/ Enterprise Home/ Enterprise Home/ Enterprise 5. CONCLUSION This paper has presented a description of the most prominent developing wireless access networks. Detailed technical comparative analysis between WLAN, WPAN, WiMAX wireless networks that provide an alternative solution to the problem of information access in remote inaccessible areas where wired networks are not cost effective have been looked into. This work has proved that the WiMAX standard goal is not to replace Wi-Fi in its applications, but rather to supplement it in order to form a wireless network web. REFERENCES [1] Anil Kumar Singh, Bharat Mishra, “COMPARATIVE STUDY ON WIRELESS LOCAL AREA NETWORK STANDARDS”, International Journal of Applied Engineering and Technology ISSN: 2277-212X , 2012 Vol. 2 (3) July-September. [2] Sourangsu Banerji, Rahul Singha Chowdhury, "Wi-Fi & WiMAX: A Comparative Study”, Indian Journal of Engineering Vol 2,No. 5, March 2013. [3] Jan Magne Tjensvold, “Comparison of the IEEE 802.11, 802.15.1, 802.15.4 and 802.15.6 wireless standards”, September 18, 2007
  • 79. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 13 [4] Raju Sharma, Dr. Gurpal Singh, Rahul Agnihotri “Comparison of performance analysis of 802.11a, 802.11b and 802.11g standard”, (IJCSE) International Journal on Computer Science and Engineering Vol. 02, No. 06, 2010 [5] Amit R. Welekar, Prashant Borkar, S. S. Dorle, “Comparative Study of IEEE 802.11, 802.15,802.16, 802.20 Standards for Distributed VANET” International Journal of Electrical and Electronics Engineering(IJEEE) Vol-1,Iss-3,2012 [6] Vijay Chandramouli, “A Detailed Study on Wireless LAN Technologies” [7] Mr. Jha Rakesh , Mr. Wankhede Vishal A. , Prof. Dr. Upena Dalal, “A Survey of Mobile WiMAX IEEE 802.16m Standard”,(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 1, April 2010. [8] Hyeopgeon Lee, Aran Kim, Kyounghwa Lee, Yongtae Shin ,” A Improved Channel Access Algorithm for IEEE 802.15.4 WPAN ”, International Journal of Security and Its Applications Vol. 6, No. 2, April, 2012 281 [9] Carlo de Morais Cordeiro, Dharma Prakash Agrawal, “ADHOC &SENSOR NETWORKS Theory and Application”, World Scientific Publishing Co. Pvt. Ltd, 2010 [10] IEEE Std 802.15.4-2011, Low-Rate Wireless Personal Area Networks (LR-WPANs). [11] IEEE 802.15 Working Group for WPAN, http://ieee802.org/15/index.html [12] Ling-Jyh Chen, Tony Sun, Mario Gerla, “Modeling Channel Conflict Probabilities between IEEE 802.15 based Wireless Personal Area Networks” [13] https://en.wikipedia.org/wiki/IEEE_802.15 [14] “IEEE 802.15 wpan task group 1 (tg1),” http://www.ieee802.org/15/pub/TG1.html. [15] “IEEE 802.15 wpan task group 3 (tg3),” http://www.ieee802.org/15/pub/TG3.html. [16] “IEEE 802.15.4 wpan-lr task group,” http://www.ieee802.org/15/pub/TG4.html. [17] Aktul Kavas, “Comparative Analysis of WLAN, WiMAX and UMTS Technologies”, PIERS Proceedings, August 27-30, Prague, Czech Republic, 2007.
A New Method for Web Development using Search Engine Optimization
Chutisant Kerdvibulvech
Department of Information & Communication Technology, Rangsit University
52/347 Muang-Ake, Paholyothin Rd, Lak-Hok, Patum Thani 12000, Thailand
Kittidech Impaiboon
Department of Information & Communication Technology, Rangsit University
52/347 Muang-Ake, Paholyothin Rd, Lak-Hok, Patum Thani 12000, Thailand

ABSTRACT
In this paper, we present a website development approach that utilizes Meta tags, and we show the code we used in our website to improve its SEO (Search Engine Optimization) ranking. When people talk about search engines, many may think of Google, but Google is not the only search engine in the world. Nowadays there are many search engines, for example Bing, Ask and Yahoo. In this paper we also present the evidence we gathered from 20 people at Unilever Thailand: we gave them a survey and made graphs showing that our new website is preferred over the old one. After that we describe the web browser structure, which is very important for web developers when they develop a website.

Keywords
Web Design; Search Engine Optimization; Code Structure.

1. INTRODUCTION
Technology in all fields has grown faster and stronger than it ever has in the past. What machines did in the past is almost infinitesimal compared to what we can achieve today. Yet the true fuel of such innovation lies with us, the human need [1], the need that has driven many scientists and researchers around the world to further our understanding of the environment we reside in [2]. As the chain of evolution has proven time and again, to adapt is to survive, which brings us to one of mankind's arguably greatest achievements, the Internet. We are driven by the need for social interaction, self-growth and knowledge, and through this determination we have given birth to an ever-growing living beast [3]. The World Wide Web, the pinnacle of human communication and interaction, has demolished physical limitations and distances [4]. And in this web of information, brought about by sophisticated data communication techniques and a multitude of complex networks, lies an equally sophisticated being: the front page of the Internet, the very frontier we use to interact with it, the websites.
  • 81. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 2. SEARCH ENGINE OPTIMIZATION Nowadays we have the technology that helps us find the information that we want, it is called the “search engine”. Imagine if we did not have a search engine, then when people want to find information, it’ll be very difficult because they have to remember the URL [5]. Related studies showed how to develop web application that invokes web service [6] [7]. Search engine optimization is the way that we use to aid in increasing our web ranking on the search engine, the following section will describe what we did and what we did not do on the code of our website. Why do we have to utilize search engine optimization or SEO? The answer is because SEO can help us in increasing the chances of people that searching for education or international university in Thailand to find our website. How do we do that? It’s all about the “tags” in HTML code and that tag is called the “Meta Tags”. Meta Tags are source codes in the part of the head tag, in HTML normally when we open up a webs page a part of the head tag will contain Meta Tags that shows the attribute of a that web page. In this case, the robot of each search engine that sees the Meta Tag will process and index our website in their system. Let’s move on to how we used these Meta Tags and how we used them in our website. 2.1 Meta Content Type Tag This tag is very important for SEO because we use it for webpages to display the result correctly with the correct kind of character set and type of document on the website. <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> This is how we represent Meta content type tag in our website to show the SEO and increase our web ranking. 2.2 Meta Description Tag This tag is used to explain a brief description of the website, the tag content should not be too short or too long because according to the rule of SEO, we should only have 150 characters and it is important to ensure that the description that is written on the Meta Description tag be related to the page,
for example, if the page is about the university, then the description must be about the university.

<META NAME='DESCRIPTION' CONTENT='write the description of the website page'/>

This is the standard Meta Description tag that people use; next is the Meta Description tag that we used on our website.

<meta name="description" content="Rangsit International College offers unparallel education. Study and learn from the best Thailand has to offer in international undergraduate and graduate courses.">

2.3 Title Element - Page Titles
This tag is another important tag that a lot of people do not know about. The title tag is used to tell web crawlers what the topic of the current page is.

<title> University website</title>

This is an example of how to use the title tag; let us see how it works on our website.

<title>Rangsit International College - Education has never been better </title>

We use the words "Rangsit International College" and "education" because when people search for international schools or international universities they can find us easily; also, if they search for education they can find us this way.

2.4 <div> Tag: Why we don't use the <table> Tag
In Thailand, if you are a web developer or just somebody viewing the page source of a website, you will see that a lot of websites in our country use the table tag to create web page layouts. Why is using the table tag in a website a bad thing? The answer is that when the search engine robot (spider, crawler) reads the table tag, it reads the content line by line, and the table tag is laid out exactly like a table, as described by its name.
When the search engine robot reads the information this way, it will not read it correctly. This means it reads the content in the wrong order; the problem is that if it reads the content of our website in the wrong order, the indexed information will be wrong, or a certain part of the content will be out of context. This is obviously bad for us, so we changed to using the <div> tag when we want to create a container to hold the information in our website. Tables also take up more space in terms of bytes, making for slower page loading. Compared to using CSS for layouts, tables take longer to implement and break up content, especially images. Users accessing the web with screen readers will also find it difficult to scan the contents of a page. Tables are also bad in the long run as they are complicated to edit and manage [8].

<table border="1"> <tr> <td>January</td><td>$100</td> </tr> </table>

This is an example of a table tag in use that makes the search engine robot read the information in the wrong way. In our website we use the <div> tag; it is better than the <table> tag mainly because when the spider reads the information it understands and indexes our content clearly.

<div> Information in this box </div>

This is the structure of the <div> tag that people use; next is one of the <div> tags in our website (we pick just one).

<div id="ric_logo"> <img src="../images/Ric_logo.png" alt="Rangsit International College" /> </div>

In this <div> we present the logo of our college; the <div> holds the image that displays the college logo.
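To make the contrast concrete, here is a minimal sketch of how a table-based layout can be replaced by <div> elements styled with CSS. It is not taken from the actual Rangsit site; the id names, widths and sample text are illustrative assumptions:

<!-- hypothetical table-free layout: the crawler reads the markup in its logical order, while CSS handles the visual arrangement -->
<style>
  #wrapper { width: 100%; }
  #main    { float: left; width: 70%; }  /* main content comes first in the markup */
  #sidebar { float: left; width: 30%; }  /* secondary material follows */
</style>
<div id="wrapper">
  <div id="main">Main page content, indexed first by the spider.</div>
  <div id="sidebar">Secondary links and notices.</div>
</div>

Because the markup no longer encodes the visual grid, the spider indexes the text in its natural order, which is exactly the problem with <table> layouts described above.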
2.5 When you use <img>, don't forget to write information in the alt attribute
Why do we have to write the alt attribute in the <img/> tag? That is because the search engine robot cannot read the image and does not know what the image is about, so according to SEO standards we have to write information in the alt attribute to explain what the image is about; then the search engine robot will know exactly what that image tells us, which in turn reveals more information about our web page, translating into relevant content. That of course depends on what we put in the alt attribute. Proper usage of the alt attribute also helps ensure that our HTML is valid, meaning it follows the guidelines set by the W3C [9].

<img src="../images/Ric_logo.png" alt="Rangsit International College" />

This is the structure of the <img/> tag in our website, and we write the alt text to help the search engine robot understand what this image is about.

2.6 The On Page Ranking Factors
The on-page ranking factors are the first group; this is on-the-page SEO. These are the factors that web developers are in total control of: they have the complete power to create their own content, change it or delete it. The on-page SEO categories consist of Content, HTML and Architecture.

2.6.1 Content
This is simply a no-brainer: the content is what you put out for your users to experience, and it is the only reason they come to your website. This is in fact the least technical aspect, but the most vital part. After all, content is king. Within this category, if we follow the periodic table, we have five more sub-categories.

2.6.2 Content Quality
What makes your content special? Developers and creators have to put stuff out, but not just any stuff. They need quality. Quality distinguishes competitors from one another, and people always want the best.

2.6.3 Content Research
Before you know your enemy, you need to know yourself. It is always important to do your research, especially if you are going to put yourself out in the open. Learning about the most appropriate terms for your content can help by allowing people to relate to it more easily, or even just understand it more easily.
  • 85. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6 In SEO terms, this would be the keywords that will be used on the site. Keywords are a factor in which Search Engines use to present ads or search results to users. So basically this is what the user will type in when searching for something. Of course keywords alone do not directly affect search results, if it were that easy, we wouldn’t need SEO. 2.6.4 Content Words Closely followed by Content Research, after the research you need to put it to use; using the right keywords for the right content. Keywords help highlight your content and what you are trying to present. Using them correctly will allow the user to feel like they have something they needed, abusing them however will just result in them clicking away. 2.6.5 Content Engagement If you’ve got visitors you need to make sure they stay and have a good time. Search Engines may try to measure how engaging content is by how long users stay on your site, if they click in and move away immediately (bounce rate), it might be something they might not be looking. 2.6.6 Content Freshness The same old stuff can get boring very fast, if users don’t have enough to keep them going. They probably won’t be coming back for more. Google also tries to support content freshness by what it calls “Query Deserved Freshness”. If there’s an increase in a search more than usual, Google will try to find the right content for that period of time. This might result in the website getting a slight boost up the search results. 2.6.7 HTML This section concerns a few important HTML tags used on the website. The entire structure of the HTML does matter but these ones you have to pay a little more attention to as they come up first when a spider wants to know more about a website. 2.6.8 HTML Title Tag The title says it all, (no pun intended). This tag will tell the spider what a page is all about. Like every person needs a name to distinguish themselves or like a report needs a title to explain what it’s going to be about a Title tag will include and should include keywords that will help the page introduce itself to the user and the spider as well. 2.6.9 HTML Description Tag Although this is not something that directly influences the search engine or the spider, it’s still just as important. A description tag will be displayed when the web site come up in the search listings. A good description of the website directly to the user will help in letting the user know what he needs to know about a site without wasting his or her time. This description may very well be the deciding factor for a person to click on the link or not.
2.6.10 HTML Header Tags
Much like the title tag, headers function as headings within the page itself. Headers help identify sections of content to the viewers. Search engines also make use of these headers to find out more about the content. It is always a good idea to have appropriate headings for content, if not for the search engine, then for the users who will be making use of the content (a short sketch of such a heading hierarchy is given below, after this group of sub-sections).

2.6.11 Architecture
The architecture is the overall structure of your web page, right down to the HTML foundations, the way links are set up between pages, and up to the very content and the way you display this content.

2.6.12 Site Crawlability
For a spider or crawler to be able to do its job effectively, it needs to be able to maneuver through the website. A web developer must make sure that the links all go somewhere and lead to the right places, avoiding dead links and misplaced links. Depending on how you place your links, some pages might not be accessible and the content on these pages might not even appear in the spider's index. Third-party plug-ins like Flash or even JavaScript can cause links to unexpectedly disappear, and using such content instead of links will reduce a spider's ability to crawl through the site. The use of site maps greatly helps in carefully detailing the pathways of a site.

2.6.13 Site Speed
Fast websites are fast for a reason: they do not want their users to wait. Nobody likes to wait longer than they should have to. Properly optimizing the structure and using correct content placement can help speed up website loading times, while overuse of animations or effects can greatly reduce performance. If the home page is slow, people might not even wait for your site to load. According to Google, a fast site may get a minor advantage over its slower peers.

Having a Descriptive URL: If your website is about kittens, make sure you don't call it monstertruckmadness.com, unless you actually have monster trucks running over helpless little kittens. It is important for a URL to be easy to read and short, so that users can quickly get to the point; the more they have to scan through the URL, the less likely they are to click on it, especially if there is something shorter they can read.

2.7 The Off Page Ranking Factors
These elements of the table are things that web developers cannot completely control. They are the external factors that surround the web site, but they are just as impactful. The sub-categories can be broken down into Links, Social, Trust and Personal.
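Returning to the HTML header tags of Section 2.6.10, as promised above, the following is a minimal sketch of a heading hierarchy. The page content is invented for illustration; only the use of <h1> to <h3> to mark up sections is the point:

<h1>Rangsit International College</h1>
<h2>Undergraduate Programmes</h2>
<h3>Information and Communication Technology</h3>
<p>Course description that both users and the search engine spider can associate with the headings above it.</p>

Keeping a single <h1> per page, with <h2> and <h3> nested beneath it in order, is the usual way to keep the document outline clear for users, screen readers and crawlers alike.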
  • 87. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 8 2.7.1 Links Links are an extremely important factor; both internally and externally. What goes around comes around. And on the internet, that happens through links. Links going out and coming in help determines your own standing among other websites. To search engines, links mean a lot, they are all a part of what a website is, but there are good and bad links, and some are just better than others. 2.7.2 Link Quality Google and other search engines has way of determine how good a link is, or how it is a better link. As it is possible for many different websites to link to any other website, there are, however some sites that are more reliable or popular sites. The link backs from these sites have high qualities that may reflect the reliability and popularity of your own site. 2.7.3 Link Text The very phrases that are use on the links pointing back or away from you also weigh in as a factor to not overlook. Just because a website has a ton of links, this does not mean its better. Relevancy is an all important factor in any case, so the text describing the link counts. 2.7.4 Link Numbers Although quality is preferred over quantity, you still need the quantity. Less you have, the less you can be evaluated for especially when come to links. 2.7.5 Social In the end it’s all about the people. You may have the best links, the best website or even the best content, the best, at least for you. But for the others out there they will be the own judge of that. That however is actually a good thing. When people come across something amazing or something they find interesting, most people would want to share them, or just talk about them. Social Reputation The online social presence you maintain and something you can represent yourself online, social accounts are important in way that they let creators or developers establish themselves as identities. These social accounts let’s you interact and communicate with the users. However; the main point is to be active on social media. This where a lot of buzz can be generated, and it’s always better to approach people directly and with today’s social media like Face book and Twitter, you have a direct line between the users and creators. 2.7.6 Social Share This is the attention and activity you get on social media in relation to your website either through linking from social media or shares on Face book, or
  • 88. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 9 likes, tweeting about it, digging it or any other form of social sharing mechanism provided by the social media website. 2.7.7 Trust Trust is simply that comes from being legitimate on your own content; having the right influence, coming from the right source, being believable and legitimate. Although it is still unsure how search engines measure a trust level of a website, nevertheless for humans they are very much an important factor. 2.7.8 Trust Authority The best way to establish trust with users and search engines is by having a good reputation. Being true to your service and customers, or just simply being a legitimate service or content provider can help reign in good remarks from other people. This can happen through online reviews from blogs, or just simply more sharing from trusted social media accounts and links from popular sites; After all the power of authority is given by the people. 2.7.9 Trust History The overall history of the website, it’s almost like a track record for the lifetime of your site. Search engines will try to see if anything out of order has ever happened in the past. Of course the longer you’ve stayed the higher chances of you being trustworthy because the longer and more exposure you’ve had with users, though this is not always the case. 2.7.10 Personal Ever wonder why you see different search results when searching in one country than in another country? That’s because the major search engines out there have come to include localization factors to narrow down certain search results. This doesn’t happen to every single thing you search for, but when it does. It’s because the certain website is optimized in such a way. 2.7.11 Personal Country Country is the easiest of factors to come across; depending on where you are in the country your website may not be relevant to someone else on the other side of the planet. Of course this does not just pertain to websites; the search terms themselves may have different meanings depending on the country. 2.7.12 Personal Locality These are local search, narrowing down according to the city. Much like the country level, this will affect what is relevant in a local range. 2.7.13 Personal History This solely depends on the personal preference of a person. Like the links they have clicked or continually click after searching for something. There
  • 89. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 is no way to influence this other trying to get the first impressions right, on a person’s first visit to the web page. Personal Social Connections: This is the impression of what a person’s social circle might have about web sites. The best way to strengthen this front is by actively participating in social networks. The more friends the more likely you content will be shared and higher chances of friends of friends finding your site. 2.8 The Violation These are things that are shunned upon by search engines. Things that web developers should avoid if they want to stay on good terms with the major search engines out there. These violations can generally result in a website being taken off from search engine result listings. 2.8.1 Thin Content Content that is lacking or relatively simple. Either the words do not coherently adhere to any form of grammatical sense, such as large texts of Spam created just for the purpose of getting a chance to appear in the listings or that the content itself is insufficient or repetitive. These factors can cause search engines to flag websites with penalties. 2.8.2 Keyword Stuffing Keywords are there to help, not abuse. However there are people who do just that. Keywords are good indicators to search engines of what you want to be found for but overusing them does not help. This is illegal in terms of SEO. Though it is not sure what extent of using keywords are considered keyword stuffing, but using a lot more than you have to might get you in to trouble. 2.8.3 Hidden Text This is a follow up from the previous technique. Instead of having all these unnecessary keywords become visible they are hidden in backgrounds by taking advantage of text colors or just simply hidden elsewhere; usually a place unbeknownst to the user. Search Engines however are fully aware of this, and they don’t like it. 2.8.4 Cloaking This is the worst sin you can commit; the ultimate act of bad SEO. Cloaking involves letting the search engine have a different version of the site. So basically speaking, creating a fake Spam site that catches the attention of the search engine, but once a user clicks on it. It’s a different site.
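As a concrete illustration of the hidden-text and keyword-stuffing violations described above, the fragment below shows the kind of markup search engines penalize. It is an invented example of what to avoid, not code from any real site:

<!-- keyword-stuffed text hidden from human visitors but present in the HTML read by crawlers -->
<div style="display:none">best university cheap degree best university ...</div>
<p style="color:#ffffff; background-color:#ffffff">more hidden keywords</p>

Both tricks leave the visible page unchanged while padding the source with extra keywords, which is exactly the behaviour these violation factors are meant to catch.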
  • 90. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 2.8.5 Paid Links This is when you pay for links; the exchange of money in return for high volume link backs or simply just buying links. 2.8.6 Link Spam Link spamming pretty much like all spam is repetitive posting of your link on every single blog, forum or comment section through the use of automated software. This is bad SEO practice. Not only will the links feel out of place, if they keep popping up everywhere people will start to tire of it, and those who actually click on it out of curiosity may never even come back. 2.9 Blocking 2.9.1 Personal Blocking Certain search engines gives the user to block an entire site from the result listings or block web pages they don’t find very nice for whatever reason. In more cases it’s because the websites are not relevant to the users but they keep appearing when the user search for a particular thing. This may necessarily be a bad website, but the user just happens to find it annoying. 2.9.2 Trust Blocking A handful of people blocking may not seem like a much of a deal. But once a good number of related people begin doing so, then there might be trouble. This is seen as a negative factor and might result overall blocking for any other average user. 3. DESIGN AND LAYOUT In the present time a lot of people may think and suspect that personal computers and laptops are where people make use of the internet the most. This is in fact, not true. Mobile users are quickly exceeding desktop users. Anything the desktop PC can do, the mobile device can too,including surfing the web. In our web design we have to manage the screen resolution carefully because on the web browser they place things in the order they see, and in the form of the box model, the contents on the web take up space in the form of rectangles; filling in available spaces from left to right and top to bottom. Depending on your screen size you might see more of the content or less. For the user they may not care what screen resolution they have when viewing websites, but for web developers this creates an absolute nightmare when it comes to positioning objects on the page or placing content appropriately. If you are not careful the design might take unexpected turns with certain resolutions. With the increasing ranges of screen sizes and the
  • 91. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 12 other way around, more people are using a variety of devices to view and interact with websites. A developer could simply decide not to cater to a range of audiences, but from a business perspective this is highly unacceptable. With a multitude of screen resolutions to consider and the fact that more and more people are beginning to use handheld devices to view web pages on the internet. Placing content on the web is no longer as simple as just putting it there. There are many factors to consider such as the target audience, the screens they will be using, how much content there are and finally, the most important factor, how the web developer will be placing said content. Creating a different version for each website is simply something that is not worth it. The website itself needed to adapt to the differences. The Layout is a vital factor in creating a website, as this will be the portion that users will be interacting, not only is it the face of the website, it is the entire external structure that users will be interacting with; So there is no surprise to why a lot of web designers heavily place priority in trying to get their layouts right before anything else. There are basically three layouts that web designers ponder over when creating their website; the Fixed Layout, the Fluid Layout and the Elastic Layout. I will provide a brief explanation of each layout. There is no best layout for choosing, each form will have its own use and the web developer has to choose the correct layout based on his or her customers. As we move beyond these layouts, we will go in to something known as Responsive Web Design, which is very much what layouts are trying to achieve but done more sophisticatedly. This is why we have to manage the screen carefully and why screen is so important to our website that we create. There are three kinds of layout we will explain them. For our website we used the Fluid Layout and we will show the code that we used on it and some images describing the three different kinds of layout. 3.1 The Fixed Layout Each user has a different screen resolution such as 800x600, 1024x780, 1280x800, 1280x960, 1280x1024 pixels. Depending on what the developer sees fit, some websites may have smaller pixel dimensions for example a screen resolution at 800x600 and defined by the width of the content of about 780 pixels and the arrangement is on the center of the screen. So the users that use a screen resolution of 800x600 they will see a page full screen. For those using a screen resolution of 1024x780 they will see the content of the page in the center and the remaining area will either be a background color or a background image created by the web developer. If we make the screen resolution 1024x780 and the users use a resolution of
  • 92. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 13 800x600 the problem is they have to scroll and see the website. Which is time consuming and troublesome for the user. This shows us how crucial screen resolutions are to the web viewing experience. Figure 1 shows an example of a fixed layout. We have discarded the use of the fixed layout. Figure 1. Fixed Layout [4] 3.2 The Fluid Layout This layout is more user friendly as it adjusts to the user’s screen as much as possible. However compared to fixed layouts, it s much more difficult to make and control for the matter, because what developer sees on his screen may not be the same for the users. Much of the trouble may come when placing content such as videos and images with fixed dimensions, as these may become harder to scale along with the screen sizes. This is the layout we use to create our website. In short, Fluid Layout is the layout structure that uses percentages instead of pixels to determine content dimensions, which is different from the Fixed Layout that uses pixel values to control the page no matter what screen resolution a web page is being viewed in. In Figure 2 we replace the pixel values with percentages. For example if we create the width and height of the page as 100% then if users use 800x600, 1024x780 or any screen resolution they can view our website full screen so it’s mean that if we use this design our website can open with almost every screen resolution on a computer. Figure 2. Fluid Layout [4]
From the picture above, the top part of the site displays a header which is 90% wide, the main part that shows the content is about 50%, and the lateral side takes 20%. This is a good example for web designers and web developers to look at and understand easily, because it shows a simple web page that makes the idea clear. This layout is also used in our website because it lets our website display at every screen resolution, it does not take too much time to build, and the result is very good, which is why we prefer this layout. This is the way we set the whole body of our page: we set it to 100%, and we created our website as a single page, as a lot of websites in the world do.

#page1_content { position: relative; width:100%; height:95%; margin:0 auto; background-color:#2e2f31; color:#d8d8d8; z-index:0; }

This is an example for page 1, where we use 100% of the width and 95% of the height; the exact values depend on each page.

3.3 The Elastic Layout
A lot of testing has to go into the design before it can fit all screens. In an elastic layout a special unit called the "em" is used as the unit of measurement. Dimensions set in em scale according to the user's font size, and this scaling can be done via the browser.

3.4 The Responsive Web Design
This is undoubtedly the next big standard that all web developers are, and should be, conforming to. It is much like the layout techniques mentioned above, in fact the same things, but all combined together to provide the most user-friendly website possible. This technique is made possible through the use of a CSS3 feature called the media query together with a fluid grid.
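Since the media query and fluid grid are only named here, a minimal sketch may help. The breakpoint value and element ids are illustrative assumptions rather than code from the actual site, and the percentages mirror the header/content/sidebar proportions mentioned above:

/* fluid grid: widths in percentages, as in the fluid layout */
#header { width: 90%; margin: 0 auto; }
#content { float: left; width: 50%; }
#sidebar { float: right; width: 20%; }

/* hypothetical breakpoint: below 600px the columns stack vertically */
@media screen and (max-width: 600px) {
  #content, #sidebar { float: none; width: 100%; }
}

On a desktop screen the content and sidebar sit side by side; on a narrow phone screen the same markup reflows into a single column, which is the behaviour responsive web design aims for.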
Responsive Web Design is probably the most confusing and the hardest to get right, but it provides the most flexible user experience across a multitude of screens, if not all screens.

Figure 3. Example experimental results

3.5 Graph from survey of 20 people
In this part, we show the graphs from the survey we made as evidence that our new website is better than the old website. We have about five graphs showing what people like and why they prefer the old website or the new website that we created. We expected our website to be better than the old one, but the truth is that some people think the style of the old one is better. In the survey, 85% of people liked our website and 15% liked the old website; we think this is acceptable, because it is very difficult to make a website that 100% of people like. Table I and Figure 3 show the experimental results.

Table 1. Web Development Survey from 20 persons. Which website design is better? New website: 17; Old website: 3.
This is the graph that we made from our survey. The survey was given to 20 employees working at Unilever Thailand. We also found out that older people tend to like the old website better than the new one, but in this case it is not about age, it is about opinion. Those who like the new design do so because of its better design and usability. This graph is the evidence: it tells us that 85% of people think our new website design is better. The first factor is better design: 10 out of the 17 who chose the new website did so because of its better design, demonstrating the fact that people often judge first on looks. An eye-catching website has a better chance of inviting people in than a dull website. This means more than 50% of those who chose the new website did so because of the design [10]. This fact is further emphasized by the observation that none of those who chose the previous website did so because of its design. Visual appeal is an important factor in attracting attention. Our website went with a modern look, appealing to younger people; in fact more time was spent on designing the website than actually coding it. As we continually asked for feedback from those who were testing the website, design has always been an important factor, not just in web design but almost everywhere. Taking this into account, we strived to make the design as suitable and as grand as possible. We went with a black and grey color scheme that allowed us to easily build contrast by adding color. As university websites serve to provide information for possible future students, we found it important to be able to visually highlight important content on a web page. A dark background makes any bright color instantly pop, thereby guiding the user's eye. However, this also raises the problem of colors visually clashing, as there is no middle ground the colors can fall back on. We hoped to eliminate this problem by limiting each page to a small set of colors. The next graph examines the remaining portion of those who chose the website. Each person could choose a website for more than one reason. Of course, a website's design is not the only thing of importance. You cannot make fried eggs with only the egg's shell; you need the yolk. A rather silly analogy as it may be, but what it is trying to tell us is very important. A website can look as great as it wants, but without good content, or access to that content for that matter, a user will quickly dismiss it [11]. In order to collect data on how well we had structured the content, we raised another topic for those who were surveyed.
Figure 4. Web Development Survey

Yet again, more than half chose the new website, but this time some actually preferred the old website. The main reason was that the navigation on the old website was simpler. Then again, those who chose the new website gave a contrasting reason: they said the navigation on the new website was better. Judging by the numbers, it seems we have to take this as factual data rather than personal preference. We employed a one-page website design, putting as much content as we could into a single page and dividing it into sections. Users navigate via a fixed navigation bar with topics regarding the university; clicking on the navigation allows the user to smoothly scroll to the corresponding location. Simple, but also stylish. The next question is also concerned with users being able to make use of the content. Much like anything that has words in it, the most important thing is being able to read it. The main things we could do to ensure that users would be comfortable reading our content were font selection, font sizing and spacing [12].

Table 2. Why they chose the website that they like: font easier to read. New website: 5; Old website: 0.
  • 97. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 18 Though the numbers are significantly smaller than for the other reasons, the importance of readability is still shown by the fact that some chose the new website over the old one because of readability issues. As shown in Figure 4, much like the design, the new website provides enough contrast to be easy on the user's eyes. After all, people need to be able to understand what they have found on the website, and if pictures cannot explain things to them, then at least there should be words they can read. We worked with the font BebasNeue, which is easily distinguishable and rather modern looking; this helps tie in the rest of the web design. Typography is an important design element that many people overlook, often without a second thought. The best fonts are the simplest ones, as users are trying to read content, not decipher it every time they want to find out about something. The last remaining portion raises questions about the layout of the content: whether the structuring of the content made sense, and whether it followed a good pattern. As mentioned before, we used a fluid layout for the website, and screen resolution is also important. The next graph shows whether people thought our website was accessible or not: 35% of respondents think our website is more accessible, and 10% think the old website is better. Accessibility in this case refers to viewing on different screens and different browsers, and to the overall simplicity of the link structure from one web page to another.
4. STRUCTURE
The main languages of the web are HTML (HyperText Markup Language), CSS (Cascading Style Sheets) and JavaScript. More will be said about these languages later, but note that when a browser works on a web page, these are the languages it has to interpret for the user, as they are the languages used to construct the web pages themselves. Browser interfaces as we know them are rather similar, consisting of an address bar where you type the URL (Uniform Resource Locator) of a web page; other basic functions include bookmarking web pages, moving back and forth between web pages with the back and forward buttons, and the home button, which takes you to the page you have set as the default page. For the average user this might be all they know and need to know about a web browser, but most web developers need to delve a little deeper. It is not surprising, however, that most web developers themselves do not know how a browser works, at least not in the most minute of details.
  • 98. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 19 The browser can be broken down into 7 components, each working together to display the content, as shown in Figure 5. Figure 5. The Browser's Structure [5]
1) The UI Interface: This is the interface that users interact with, consisting of a graphical interface for browser functionality such as bookmarking, zooming on web pages, refreshing web pages and other useful functions. This is the application layer where users interact with the browser, and it is the visible component that all users are aware of.
2) The Browser Engine: The browser engine acts as the bridge between the UI Interface and the Rendering Engine. It takes actions from the user and the results of the Rendering Engine to display a page in the browser window.
3) The Rendering Engine: The vital part of the browser, the Rendering Engine is responsible for displaying the page by parsing the web languages. The rendering engine follows a flow in which it gradually renders the contents of the page. Chrome and Safari use a rendering engine called WebKit, while Firefox uses one called the Gecko engine. The HTML is parsed and the browser begins construction of what is called the DOM tree, for Document Object Model. The DOM is the convention by which we control and manipulate HTML, XHTML and XML documents. It is called a tree because the code is divided into nodes and laid out in a tree structure. Each HTML markup tag has its own place in the hierarchy, and the construction of the DOM tree lays the foundation for what each markup element means and what it does in relation to the browser. Once the DOM tree has been constructed, the styling rules of the web, the CSS code, are used to create another tree called the render tree. This tree will be responsible for laying down the visual
  • 99. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 20 elements that are included with the HTML. The nodes are laid out as rectangles in what we call the CSS box model, where each markup element takes up space like a box. Once the necessary elements have been loaded, it is time to position the elements exactly where they will appear on the screen. The last stage is to paint the tree, which means that the rendering engine uses the UI methods of the underlying operating system to render out each individual node.
4) Networking: HTTP requests and other networking calls are handled by this component, which implements the data communication and networking protocols used to send and receive data.
5) UI Backend: This is the interface to the underlying UI of the operating system; it is used to draw basic graphical elements such as windows, check boxes, radio buttons and combo boxes.
6) JavaScript Interpreter: The JavaScript interpreter is used for parsing and executing JavaScript code.
7) Data Storage: The database system of the browser, in which it saves user-specific data such as cookies and other web-related user information.
5. CONCLUSIONS
The five major browsers in use today are Chrome, Internet Explorer, Firefox, Safari and Opera. Usage statistics [2] make it evident that the three most popular browsers are Chrome, Firefox and Internet Explorer. Since each browser renders pages in slightly different ways, web developers constantly have to be aware of how their websites will turn out in each individual browser, Internet Explorer being a prime example of how what works in one browser does not work in another. Browser differences are not the only factors that websites have to adapt to.
REFERENCES
[1] Elisabeth Freeman and Eric Freeman, Head First HTML with CSS and XHTML (First Edition), O'Reilly Media, Inc, United States of America, December 2005.
[2] Browser Statistics and Trends, Available: http://www.w3schools.com/browsers/browsers_stats.asp
[3] J. Jones. (1991, May 10). Networks (2nd ed.) [Online]. Available: http://www.atm.com
  • 100. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 21 [4] Kayla Knight (2009, June 2nd). Fixed vs. Fluid vs. Elastic Layout: What’s The Right One For You? [Online]. Available: http://coding.smashingmagazine.com/2009/06/02/fixed-vs-fluid-vs-elastic-layout- whats-the-right-one-for-you/ [5] Tali Garsiel& Paul Irish (2011, Aug 5). How Browsers Work: Behind the scenes of modern web browsers. Available:www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Painting [6] Mardiana, M., Araki, K. ,and Omori, Y., MDA and SOA approach to development of web application interface, TENCON 2011 - 2011 IEEE Region 10 Conference, pages 226 – 231, 21-24 Nov. 2011. [7] Rachit Mohan Garg, YaminiSood, BalajiKottana, PallaviTotlani, A Framework Based Approach for the Development of Web Based Applications, World of Computer Science and Information Technology Journal (WCSIT), ISSN: 2221-0741, Vol. 1, No. 1, 1-4, Feb. 2011. [8] GavinKistner. Why tables are bad compared to semantic HTML and CSS: http://phrogz.net/css/WhyTablesAreBadForLayout.html 2010 [9] Patrick Sexton. Descriptive and accurate <title> elements and ALT attributes: http://www.feedthebot.com/titleandalttags.html 2011 [10]Why graphic design is important http://www.slideshare.net/LocalInternetTraffic/importance-of-graphic-design-in-web- development (2008) [11]Julie M. Rinder, Fiserv, The importance of Usability Testing in Website Design (July 2012) https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/12257/Rinder2012.pdf? sequence=1 [12] Sandra Gabriele. The role of typography.http://www.longwoods.com/content/18465 (2006)
  • 101. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 A New Design to Improve the Security Aspects of RSA Cryptosystem Sushma Pradhan and Birendra Kumar Sharma School of Studies in Mathematics, Pt. Ravi Shankar Shukla University, Raipur, Chhattisgarh, India
ABSTRACT
This paper introduces a security improvement to the RSA cryptosystem: it suggests the use of randomized parameters in the encryption process to make RSA resistant to many attacks described in the literature. This improvement makes RSA semantically secure, i.e., an attacker cannot distinguish two encryptions from each other even if the attacker knows (or has chosen) the corresponding plaintexts. This paper also briefly discusses some other attacks on RSA and the suitable choice of RSA parameters to avoid these attacks; another important issue for RSA implementations is how to speed up the RSA encryption and decryption process.
Keywords
RSA cryptosystem, RSA signature, RSA Problem, Public Key Cryptosystems, Private Key Cryptography.
1. INTRODUCTION
A powerful tool for protection is the use of cryptography. Cryptography underlies many of the security mechanisms and builds the science of data encryption and decryption. Cryptography [1] enables us to securely store sensitive data or transmit it across insecure networks such that it cannot be read by anyone except the intended recipient. By using a powerful tool such as encryption we gain privacy, authenticity, integrity, and limited access to data. In cryptography we differentiate between private key cryptographic systems (also known as conventional cryptography systems) and public key cryptographic systems. Private key cryptography, also known as secret-key or symmetric-key encryption, is based on using one shared secret key for encryption and decryption. The development of fast computers and communication technologies has allowed us to define many modern private key cryptographic systems, e.g. the Feistel cipher [2] in the 1960s, the Data Encryption Standard (DES), Triple DES (3DES), the Advanced Encryption Standard
  • 102. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 (AES), the International Data Encryption Algorithm (IDEA), Blowfish, RC5, CAST, etc. The problem with private key cryptography is key management: a system of n communicating parties would have to manage n(n-1)/2 shared secret keys. This means that to allow 1000 users to communicate securely, the system must manage 499500 different shared secret keys, so it is not scalable for a large set of users. A new concept in cryptography, called public-key cryptography, was introduced in 1976 by Diffie and Hellman [2] and is based on using two keys (a public and a private key). The use of public key cryptography solved many weaknesses and problems of private key cryptography, and many public key cryptographic systems were specified (e.g. RSA [3], ElGamal [4], Diffie-Hellman key exchange [2], elliptic curves [5], etc.). The security of such public key cryptosystems is often based on the apparent difficulty of some mathematical number-theory problems (also called "one-way functions") like the discrete logarithm problem over finite fields, the discrete logarithm problem on elliptic curves, the integer factorization problem or the Diffie-Hellman problem, etc. [1]. One of the first defined and most often used public key cryptosystems is RSA. The RSA cryptosystem is known as the de facto standard for public-key encryption and signatures worldwide, and it has been patented in the U.S. and Canada. Several standards organizations have written standards that use the RSA cryptosystem for encryption and digital signatures [6]. The RSA cryptosystem was named after its inventors R. Rivest, A. Shamir, and L. Adleman and is one of the most widely used public-key cryptosystems. Due to the wide use of the RSA cryptosystem, it is critical to ensure a high level of security for RSA. In this paper, we introduce a new design to improve the security of the RSA cryptosystem; this is achieved by using a randomized parameter that makes the encrypted message more difficult for an adversary to break, thus making RSA more secure. This paper is organized as follows. In the next section the mathematical preliminaries of RSA are briefly described. In Section 3, we describe our new scheme with the security improvement that can protect us against the given attacks. In Section 4, we give a comparison between the basic RSA cryptosystem and our new scheme. Finally, we give a short conclusion in Section 5.
  • 103. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3
2. BASIC PRELIMINARIES
The security of the RSA cryptosystem is based on the intractability of the RSA problem. This means that if in the future the RSA problem is solved in general, then the RSA cryptosystem will no longer be secure.
Definition 2.1. The RSA problem (RSAP) is the following: given a positive integer n that is a product of two distinct odd primes p and q, a positive integer e such that gcd(e, (p − 1)(q − 1)) = 1, and an integer c, find an integer m such that m^e ≡ c (mod n). This means that the RSA problem consists of finding e-th roots modulo a composite integer n.
Definition 2.2. For n > 1, let φ(n) denote the number of integers in the interval [1, n] which are relatively prime to n. The function φ is called the Euler phi function (or the Euler totient function).
The RSAP has been studied for many years, but an efficient solution has still not been found; it is therefore considered difficult if the parameters are carefully chosen. If the factors of n are known, however, the RSA problem can easily be solved: an adversary can then compute Euler's function φ(n) = (p−1)(q−1) and the private key d, and once d is obtained the adversary can decrypt any encrypted text. It is also widely believed that the RSA and integer factorization problems are computationally equivalent, although no proof of this is known.
Remark: The problem of computing the RSA decryption exponent d from the public key (n, e) and the problem of factoring n are computationally equivalent [6]. This implies that when generating RSA keys, it is important that the primes p and q be selected of sufficient size such that factoring n = p*q is computationally infeasible.
The RSA public key and signature scheme is often used in modern communications technologies; it is one of the first defined public key cryptosystems that enable secure communication over public, insecure communication channels. The following algorithms describe RSA key generation and the RSA cryptosystem (basic version).
  • 104. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4
Algorithm 2.1 Key generation for RSA public-key encryption
Each user A creates an RSA public key and the corresponding private key. User A should do the following:
1. Generate two large random (and distinct) primes p and q, each roughly the same size.
2. Compute n = p*q and φ(n) = (p − 1)(q − 1).
3. Select a random integer e, 1 < e < φ(n), such that gcd(e, φ(n)) = 1.
4. Use the Euclidean algorithm to compute the unique integer d, 1 < d < φ(n), such that e*d ≡ 1 (mod φ(n)).
5. User A's public key is (n, e) and A's private key is d.
Definition 2.3. The integers e and d in RSA key generation are called the encryption exponent and the decryption exponent, respectively, while n is called the modulus.
Algorithm 2.2. RSA public-key encryption and decryption (basic version)
User B encrypts a message m for user A, which A decrypts.
1. Encryption. User B should do the following:
1. Obtain user A's authentic public key (n, e).
2. Represent the message as an integer m in the interval [0, n − 1].
3. Compute c = m^e mod n.
4. Send the encrypted text message c to user A.
2. Decryption. To recover plaintext m from c, user A should do the following:
1. Use the private key d to recover m = c^d mod n = (m^e)^d mod n.
The original RSA encryption and decryption do not contain any randomized parameter, making the RSA cryptosystem deterministic, which means that an attacker can distinguish between two encryptions; based on this, many of the attacks listed below can be performed on the basic version of RSA.
3. NEW SCHEME
The key generation remains unchanged from the original RSA, see above. The following algorithm describes the modified encryption and decryption.
  • 105. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 5
Algorithm 3.1. RSA public-key encryption and decryption (modified version)
User B encrypts a message m for user A, which A decrypts.
1. Encryption. User B should do the following:
1. Obtain user A's authentic public key (n, e).
2. Represent the message as an integer m in the interval [0, n − 1].
3. Select a random integer k, 1 < k < n, such that gcd(k, n) = 1.
4. Compute c1 = k^e mod n.
5. Compute c2 = m^e * k mod n.
6. Send the encrypted text message (c1, c2) to user A.
2. Decryption. To recover plaintext m from c2, user A should do the following:
1. Use the private key d and compute k = c1^d mod n.
2. Use the Euclidean algorithm to calculate the unique integer s, 1 < s < n, such that s*k ≡ 1 (mod n).
3. Compute c2*s ≡ (m^e * k)*s ≡ m^e * (k*s) ≡ m^e (mod n).
4. To recover m, use the private key d and compute m = (m^e)^d mod n.
The following example illustrates the use of the modified RSA cryptosystem.
Example (RSA Encryption/Decryption)
Key Generation: Assume p = 2357, q = 2551, n = p*q = 6012707.
1. Encryption. User B should do the following:
1. User A's authentic public key: e = 3674911.
2. Message m = 31.
3. Random k = 525.
4. Compute c1 = 525^3674911 mod 6012707 = 20639.
5. Compute c2 = 31^3674911 * 525 mod 6012707 = 2314247.
6. Send (20639, 2314247) to user A.
2. Decryption. To recover plaintext m from (c1, c2), user A should do the following:
1. Use user A's private key d = 422191 and compute k = 20639^422191 mod 6012707 = 525.
  • 106. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6
2. Use the extended Euclidean algorithm on (525, 6012707) to obtain s = 3516002.
3. Compute c2*s = 2314247*3516002 mod 6012707 = 2913413.
4. Recover m = 2913413^422191 mod 6012707 = 31.
RSA encryption/decryption is much slower than commonly used symmetric key encryption algorithms such as the well-known DES; this is the reason why, in practice, RSA encryption is commonly used to encrypt symmetric keys or small amounts of data. There are many software solutions and hardware implementations for speeding up the RSA encryption/decryption process; for more information about speeding up RSA software implementations see [6]. Because the basic version of the RSA cryptosystem has no randomization component, an attacker can successfully launch many kinds of attack; we now discuss some of these attacks.
1. Known plain-text attack: A known-plaintext attack is one where the adversary has a quantity of plaintext and corresponding ciphertext [6]. Given such a sorted set S = {{p1, c1}, {p2, c2}, ..., {pr, cr}} (where pi ∈ P, the plaintext set, ci ∈ C, the ciphertext set, and r < φ(n) is the order of Z*n), an adversary can determine the plaintext px if the corresponding cx is in S. The modified version of RSA described above uses k as a randomizing parameter; this can protect the encrypted text against known plain-text attacks.
2. Small public/private exponent e/d attack: To reduce decryption time, one may wish to use a small value of the private exponent d, or to reduce encryption time using a small public exponent e, but this can result in a total break of the RSA cryptosystem, as Coppersmith [10] and M. Wiener [11] showed.
3. Johan Håstad and Don Coppersmith attack: If the same clear text message is sent encrypted to several recipients, and the receivers share the same exponent e but have different p, q, and n, then it is easy to recover the original clear text message via the Chinese remainder theorem [6]. Johan Håstad [7] described this attack and Don Coppersmith [8] improved it.
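To make the broadcast attack concrete, here is a minimal sketch in Python; the toy primes, the message value and the function names are illustrative assumptions for this note, not parameters taken from the paper. A message sent with public exponent e = 3 to three recipients with pairwise coprime moduli is recovered by combining the three ciphertexts with the Chinese remainder theorem and taking an integer cube root, which works because m^3 is smaller than the product of the moduli.

    def crt(residues, moduli):
        # Chinese remainder theorem: combine c_i mod n_i into x mod (n_1*n_2*...).
        N = 1
        for n in moduli:
            N *= n
        x = 0
        for c, n in zip(residues, moduli):
            Ni = N // n
            x += c * Ni * pow(Ni, -1, n)   # modular inverse (Python 3.8+)
        return x % N

    def integer_cube_root(x):
        # Exact integer cube root by binary search (floats would lose precision).
        lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
        while lo < hi:
            mid = (lo + hi) // 2
            if mid ** 3 < x:
                lo = mid + 1
            else:
                hi = mid
        return lo

    e = 3
    moduli = [1013 * 1019, 1031 * 1049, 1061 * 1091]   # toy pairwise coprime moduli
    m = 424242                                         # m is smaller than every modulus
    ciphertexts = [pow(m, e, n) for n in moduli]       # same plaintext sent to all three
    recovered = integer_cube_root(crt(ciphertexts, moduli))
    assert recovered == m

With realistic 2048-bit moduli the same few lines still apply, which is why broadcasting one unpadded message under a small public exponent is dangerous.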
  • 107. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 7
4. Common modulus attack: If the same message m is also encrypted twice using the same modulus n, then one can recover the message m as follows. Let c1 ≡ m^e1 (mod n) and c2 ≡ m^e2 (mod n) be the ciphertexts corresponding to message m, where gcd(e1, e2) = 1; then the attacker recovers the original message as m ≡ c1^a * c2^b mod n, where a*e1 + b*e2 = 1. Using the extended greatest common divisor (GCD) algorithm one can determine a and b and then calculate m without knowing the private key d. This is known in the literature as the common modulus attack and requires O((log K)^2) operations, where K is the maximum size of a or b.
5. Timing attack: One attack on RSA implementations is the timing attack; Kocher [9] demonstrated that an attacker can determine a private key by keeping track of how long a computer takes to decrypt a message.
6. Adaptive chosen ciphertext attacks (ACC attacks): In 1998, Daniel Bleichenbacher [12] described the first practical adaptive chosen ciphertext attack against RSA-encrypted messages using the PKCS#1 v1 [13] padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Socket Layer protocol (SSL) [14] and to recover session keys; it is important to mention that this protocol is still often used on the internet to secure emails and e-payments. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS#1 that are not vulnerable to these attacks.
7. Attacks on the factorization problem: Some powerful attacks on the RSA cryptosystem are attacks on the factorization problem; the factoring algorithms that solve the factorization problem come in two classes: special purpose and general purpose algorithms. The efficiency of special purpose algorithms depends on the unknown factors, whereas the efficiency of the latter depends on the size of the number to be factored. Special purpose algorithms are best for factoring numbers with small factors, but the numbers used for the modulus in RSA do not have any small factors. Therefore, general purpose factoring algorithms are the more important ones in the context of cryptographic systems and their security. A major requirement to avoid factorization attacks on the RSA cryptosystem is that p and q should be about the same bit length and
  • 108. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 8 sufficiently large. For a moderate security level p and q should each be at least 1024 bits long, which results in a 2048-bit modulus n. Furthermore, p and q should be random primes and not have some special binary bit structure. The following table summarizes the running time of some of the well-known integer factoring algorithms, where p denotes the smallest prime factor of n and e ≈ 2.718 is Euler's number.
Table 1. Factorization algorithms
Algorithm | Runtime estimation
1. Pollard's Rho [15] | O(√p)
2. Pollard's p − 1 [16] | O(p*), where p* is the largest prime factor of p − 1
3. Williams' p + 1 [17] | O(p*), where p* is the largest prime factor of p + 1
4. Elliptic Curve Method (ECM) [18] | O(e^((1+o(1)) (2 ln p ln ln p)^(1/2)))
5. Quadratic Sieve (QS) [19] | O(e^((1+o(1)) (ln N ln ln N)^(1/2)))
6. Number Field Sieve (NFS) [20] | O(e^((1.92+o(1)) (ln N)^(1/3) (ln ln N)^(2/3)))
In 2010, the largest number factored by a general-purpose factoring algorithm was 768 bits long [21], using a distributed implementation; thus some experts believe that 1024-bit keys may become breakable in the near future, so it is currently recommended to use 2048-bit keys for mid-term security and 4096-bit keys for long-term security. The RSA security improvement described in this paper can protect us against the following attacks:
Table 2. Improved RSA is immune against the following attacks
Attack | Justification
1. Known plain-text attack | Not possible, as described above
  • 109. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 9
2. Small public exponent e | Not possible due to the use of the random integer k
3. Håstad and Coppersmith attack | Not possible because every message has a unique ki
4. Common modulus attack | Not possible because every message has a unique ki
5. Timing attack | Using k in the encryption and decryption process makes it difficult to distinguish between the time for k and the time for the public key e or private key d
6. ACC attacks | One can use the randomized integer k instead of secure padding
This makes the RSA cryptosystem more secure compared with the basic version. The modification makes the RSA cryptosystem semantically secure, which means an attacker cannot distinguish two encryptions from each other even if the attacker knows (or has chosen) the corresponding plaintexts.
4. COMPARISON WITH STANDARD RSA CRYPTOSYSTEM
We can compare the new scheme to the RSA cryptosystem. For the latter, the natural security parameter is n, the logarithm of the RSA modulus. The public and secret keys of RSA have size O(n), and both encryption and decryption require time O(n^3) (using ordinary multiplication algorithms). For our new scheme, the natural security parameter is the dimension k. The keys for the new system are relatively large: size O(k^3) for the public key and O(k^2) for the secret key. However, the time required for encryption is only O(n) and no multiplications are needed. Decryption requires time O(k^3), comparable to RSA (again using ordinary multiplication algorithms).
5. CONCLUSION
In this paper, we briefly discussed the improved security of the RSA public-key cryptosystem. This improvement uses a randomized parameter to change every encrypted message block, such that even if the same message is sent more than once the encrypted message blocks will look different. The major advantage gained in the security improvement described above is making the RSA system immune against many well-known attacks on the basic RSA cryptosystem, thus making RSA encryption more secure. This is
  • 110. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 essential because RSA is implemented in many security standards and protocols and a weak RSA may result in a whole compromised system. Although the security improvement make RSA more secure nevertheless it should be noted that the RSA modulus n bit length should be at least 2048 to ensure a moderate security and to avoid powerful attacks on the discrete logarithm and factorization problem. This security consideration and other mentioned in literature should be used to define an improved version of the RSA. REFERENCES [1] D. Kahn., The Code breakers: The comprehensive History of Secret Communication from Ancient to the Internet, Scribner, 1967. [2] W. Diffie and M.Hellman, New directions in cryptography, IEEE Transactions on Information Theory, vol. 22 (1976), 644-654. [3] R. L. Rivest, A. Shamir, and L. Adelman, A method for obtaining digital signatures and public key cryptosystems” Commun. Of the ACM, Vol.21 (1978), 120-126. [4] T. ElGamal, A public-key cryptosystem and a signature scheme based on discrete logarithms, IEEE Transactions on Information Theory, Vol. 31 (1985), 469-472. [5] N. Koblitz, Elliptic curve cryptosystems, Mathematics of Computation, Vol.48 (1987), 203-209. [6] A. Menezes, P. van Oorscot and S. Vanstone, Handbook of Applied Cryptography, CRC Press, ISBN: 0-8493-8523-7, 1999 [7] Hstad and M. Nslund, The Security of all RSA and Discrete Log Bits, Journal of the ACM, Vol.51, No.2 (2004), 187-230. [8] Don Coppersmith, Small Solutions to Polynomial Equations, and Low Exponent RSA Vulnerabilities, Journal of Cryptology, Vol. 10, No. 4, (Dec. 1997). [9] P. Kocher, Timing attacks on implementations of Diffie-Hellman, RSA, DSS and other systems. Advances in Cryptology, Vol. 1109 (1996), 104-113. [10] Don Coppersmith, Matthew K. Franklin, Jacques Patarin, Michael K. Reiter, Low Exponent RSA with Related Messages, EUROCRYPT (1996), 1-9. [11] M. Wiener., Cryptanalysis of short RSA secret exponents, IEEE Transactions on Information Theory, Vol. 36(1990), 553- 558. [12] Daniel Bleichenbacher., Chosen ciphertext attacks against protocols based on the RSA encryption standard PKCS #1, Advances in Cryptology CRYPTO ’98 Lecture Notes in Computer Science, Vol. 1462 (1998). [13] PKCS#1: RSA Cryptography Standard, website: http://www.rsa.com/rsalabs/node.asp?id=2125 [14] The Transport Layer Security (TLS) Protocol, version 1.2, website: http://tools.ietf.org/html/rfc5246. [15] J. M. Pollard, A Monte Carlo method for factorization, BIT Numerical Mathematics, Vol. 15, No. 3 (1975), 331-334. [16] J. M. Pollard, Theorems of Factorization and Primality Testing”, Proceedings of the Cambridge Philosophical Society, Vol.76, No. 3 (1974), 521-528.
  • 111. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 [17] H. C. Williams., A p+1 method of factoring, Mathematics of Computation, Vol. 39, No.159 (1982), 225-234. [18] B. Dixon, A.K. Lenstra, Massively parallel elliptic curve factoring, Advances in Cryptology-EUROCRYPT’ 92 Lecture Notes in Computer Science, Vol. 658, (1993), 183- 193. [19] C. Pomerance, The quadratic sieve factoring algorithm, Advances in Cryptology Lecture Notes in Computer Science, Vol.209 (1985), 169-182. [20] J. Buchmann, J. Loho, J. Zayer, An implementation of the general number field sieve, Advances in Cryptology-CRYPTO’ 93 Lecture Notes in Computer Science, Vol.773 (1994), 159-165. [21] RSA Laboratories, the RSA Factoring Challenge http://www.rsa.com/rsalabs/node.asp-id=2092
  • 112. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 1 A Hybrid Model of Multimodal Approach for Multiple Biometrics Recognition P. Prabhusundhar Assistant Professor, Department of Information Technology, Gobi Arts & Science College (Autonomous), Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India. V.K. Narendira Kumar Assistant Professor, Department of Information Technology, Gobi Arts & Science College (Autonomous), Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India. B. Srinivasan Associate Professor, PG & Research Department of Computer Science, Gobi Arts & Science College (Autonomous), Gobichettipalayam – 638 453, Erode District, Tamil Nadu, India. ABSTRACT A single biometric identifier in making a personal identification is often not able to meet the desired performance requirements. Biometric identification based on multiple biometrics represents an emerging trend. Automated biometric systems for human identification measure a “signature” of the human body, compare the resulting characteristic to a database, and render an application dependent decision. These biometric systems for personal authentication and identification are based upon physiological or behavioral features which are typically distinctive, although time varying, such as Face recognition, Iris recognition, Fingerprint verification, Palm print verification in making a personal identification. Multi-biometric systems, which consolidate information from multiple biometric sources, are gaining popularity because they are able to overcome limitations such as non-universality, noisy sensor data, large intra-user variations and susceptibility to spoof attacks that are commonly encountered in uni-biometric systems. In this paper, it addresses the concept issues and the applications strategies of multi-biometric systems. Keywords Biometrics, Fingerprint, Iris, Palm print, Face recognition and Sensors. 1. INTRODUCTION A Biometric is defined as a unique, measurable, biological characteristic or trait for automatically recognizing or verifying the identity of a human being. Statistically analyzing these biological characteristics has become known as the science of biometrics. These days, biometric technologies are
  • 113. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 2 typically used to analyze human characteristics for security purposes. Five of the most common physical biometric patterns analyzed for security purposes are the fingerprint, hand, eye, face, and voice. Biometric fusion is the process of combining information from multiple biometric readings, either before, during or after a decision has been made regarding identification or authentication from a single biometric. The data information from those multiple modals can be combined in several levels: sensor, feature, score and decision level fusions. Security is not enforced by focusing on a single parameter. Instead of solving a one-dimensional problem, a secure environment requires multiple dimensions of critical check points. Secure authentication is provided by multiple parameters. One parameter is a security token an individual uniquely possesses, such as a physical key or a smart card. Another parameter is an item an individual uniquely knows, such as a PIN. An additional parameter is an individual's unique biological characteristic, such as DNA or an iris code [8]. Some of the challenges commonly encountered by biometric systems are listed here: a) Noise in sensed data: The biometric data being presented to the system may be contaminated by noise due to imperfect acquisition conditions or subtle variations in the biometric itself. b) Non-universality: The biometric system may not be able to acquire meaningful biometric data from a subset of individuals resulting in a failure-to-enroll (FTE) error. c) Upper bound on identification accuracy: The matching performance of a unibiometric system cannot be indefinitely improved by tuning the feature extraction and matching modules. There is an implicit upper bound on the number of distinguishable patterns (i.e., the number of distinct biometric feature sets) that can be represented using a template. d) Spoof attacks: Behavioral traits such as voice and signature are vulnerable to spoof attacks by an impostor attempting to mimic the traits corresponding to legitimately enrolled subjects. Some of the limitations of a unibiometric system can be addressed by designing a system that consolidates multiple sources of biometric information. This can be accomplished by having multiple traits of an individual or multiple feature extraction and matching algorithms operating on the same biometric. Such systems, known as multibiometric systems, can improve the matching accuracy of a biometric system while increasing population coverage and deterring spoof attacks. This paper presents an overview of multibiometric systems.
  • 114. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 3 2. MULTIPLE BIOMETRICS Multiple Biometrics refers to the use of a combination of two or more biometric modalities in a verification / identification system. Identification based on multiple biometrics represents an emerging trend. The most compelling reason to combine different modalities is to improve the recognition rate. This can be done when biometric features of different biometrics are statistically independent. There are other reasons to combine two or more biometrics. One is that different biometric modalities might be more appropriate for the different applications. Another reason is simply customer preference [5]. A variety of factors should be considered when designing a multiple biometric system. These include the choice and number of biometric traits; the level in the biometric system at which information provided by multiple traits should be integrated; the methodology adopted to integrate the information; and the cost versus matching performance trade-off [8]. Multiple Biometric systems capture two or more biometric data. Fusion techniques are applied to combine and analyze the data in order to produce a better recognition rate. Such technologies can not only overcome the restriction and shortcomings from single modal systems, but also probably produce lower error rate in recognizing persons [7]. To integrate fully biometric identification systems will be a lengthy process, but the technology has the potential to change the way the world works, no more passwords and smart cards, just using your body as your key. However, biometrics has been usefully applied for matters of lower importance, time monitoring systems and industry authentication systems. As the progress of technology increases, it is assured that biometrics can be effectively applied to important systems. There is no doubt that biometrics is the next stage of ubiquitous security technology in our increasingly paranoid, authoritarian society. However, there is still much to be done: customers are scared off by high failure-to-enroll and false non-match rates as well as incompatibilities. Furthermore, system security as a whole needs more care to be taken of. Future improvements in acquisition technology and algorithms as well as the availability of industry standards will certainly assure a bright future for biometrics. Will this be the end of traditional password or token-based systems certainly not biometrics is not the perfect solution either; it is just a good trade-off between security and ease of use. 2.1 Face Recognition Face recognition analyzes facial characteristics. It requires a digital camera to capture one or more facial images of the subject for recognition.
  • 115. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 4 With a facial recognition system, one can measure unique features of ears, nose, eyes, and mouth from different individuals, and then match the features with those stored in the template of systems to recognize subjects under test. Popular face recognition applications include surveillance at airports, major athletic events, and casinos. The technology involved has become relatively mature now, but it has shortcomings, especially when one attempts to identify individuals in different environmental settings involving light, pose, and background variations. Also, some user-based influences must be taken into consideration, for example, mustache, hair, skin tone, facial expression, cosmetics, and surgery and glasses. Still there is a possibility that a fraudulent user could simply replace a photo of the authorized person’s to obtain access permission. Some major vendors include Viisage Technology, Inc. and AcSys Biometrics Corporation. 2.2 Fingerprint Recognition The patterns of fingerprints can be found on a fingertip. Whorls, arches, loops, patterns of ridges, furrows and minutiae are the measurable minutiae features, which can be extracted from fingerprints. The matching process involves comparing the 2-D features with those in the template. There are a variety of approaches of fingerprint recognition, some of which can detect if a live finger is presented, and some cannot. A main advantage of fingerprint recognition is that it can keep a very low error rate. However, some people do not have distinctive fingerprints for verification and 15% of people cannot use their fingerprints due to wetness or dryness of fingers. Also, an oily latent image left on scanner from previous user may cause problems. Furthermore, there are also legal issues associated with fingerprints and many people may be unwilling to have their thumbprints documented. The most popular applications of fingerprint recognition are network security, physical access entry, criminal investigation, etc. So far, there are many vendors that make fingerprint scanners; one of the leaders in this area is Identix, Inc. 2.3 Palm Print Recognition Palm print recognition measures and analyzes Palm print images to determine the identity of a subject under test. Specific measurements include location of joints, shape and size of palm. Palm print recognition is relatively simple; therefore, such systems are inexpensive and easy to use. And there are not negative effects on its accuracy with individual anomalies, such as dry skin. In addition, it can be integrated with other biometric systems. Another advantage of the technology is that it can
  • 116. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 5 accommodate a wide range of applications, including time and attendance recording, where it has been proved extremely popular. Since Palm print geometry is not very distinctive, it cannot be used to identify a subject from a very large population. Further, Palm print geometry information is changeable during the growth period of children. A major vendor for this technology is Recognition Systems, Inc [6]. Figure 1. Examples of some of the biometric traits used for authenticating an individual
  • 117. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 6 2.4 Iris Recognition Iris biometrics involves analyzing features found in the colored ring of tissue that surrounds the pupil. Complex iris patterns can contain many distinctive features such as ridges, crypts, rings, and freckles. Undoubtedly, iris scanning is less intrusive than other eye-related biometrics. A conventional camera element is employed to obtain iris information. It requires no close contact between user and camera. In addition, irises of identical twins are not same, even thought people can seldom identify them. Meanwhile, iris biometrics works well when people wear glasses. The most recent iris systems have become more users friendly and cost effective. However, it requests a careful balance of light, focus, resolution and contrast in order to extract features from images. Some popular applications for iris biometrics can be employee verification, and immigration process at airports or seaports. A major vendor for iris recognition technology is Iridian Technologies, Inc. 3. CHALLENGES TO MULTI-BIOMETRIC SYSTEM Based on applications and facts presented in the previous sections, followings are the challenges in designing the multi modal systems. Successful pursuit of these biometric challenges will generate significant advances to improve safety and security in future missions.  The sensors used for acquiring the data should show consistency in performance under variety of operational environment. The sensor should be fast in collecting quality images from a distance and should have low cost with no failures to enroll.  The information obtained from different biometric sources can be combined at five different levels such as sensor level, feature level, score level, rank level and decision level. Therefore selecting the best level of fusion will have the direct impact on performance and cost involved in developing a system.  There are Numbers of techniques available for fusion in multi-biometric system; the multiple source of information is available. Hence it is challenging to find the optimal solution for the application provided.  In multi-biometric systems the information acquired from different sources can be processed either in sequence or parallel. Hence it is challenging to decide about the processing architecture to be employed in designing the multi-biometric system.
  • 118. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 7 4. IMPLEMENTATION In general, the use of the terms multimodal or multi-biometric indicates the presence and use of more than one biometric aspect (modality, sensor, instance and/or algorithm) in some form of combined use for making a specific biometric verification/identification decision. The goal of multi- biometrics is to reduce one or more of the following: • False accept rate (FAR) • False reject rate (FRR) • Failure to enroll rate (FTE) • Susceptibility to artifacts or mimics Multi modal biometric systems take input from single or multiple sensors measuring two or more different modalities of biometric characteristics. For example a system with fingerprint and face recognition would be considered “multimodal” even if the “OR” rule was being applied, allowing users to be verified using either of the modalities.  Multi algorithmic biometric systems Multi algorithmic biometric systems take a single sample from a single sensor and process that sample with two or more different algorithms.  Multi-instance biometric systems Multi-instance biometric systems use one sensor or possibly more sensors to capture samples of two or more different instances of the same biometric characteristics. Example is capturing images from multiple fingers.  Multi-sensorial biometric systems Multi-sensorial biometric systems sample the same instance of a biometric trait with two or more distinctly different sensors. Processing of the multiple samples can be done with one algorithm or combination of algorithms. Example face recognition application could use both a visible light camera and an infrared camera coupled with specific frequency. 4.1 Fusion in Multimodal biometric systems A Mechanism that can combine the classification results from each biometric channel is called as biometric fusion. We need to design this fusion. Multimodal biometric fusion combines measurements from different biometric traits to enhance the strengths. Fusion at matching score, rank and decision level has been extensively studied in the literature. Various levels of fusion are: Sensor level, feature level, matching score level and decision level [6].
  • 119. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 8 4.1.1 Sensor level Fusion In sensor Fusion we combine the biometric traits coming from sensors like Thumbprint scanner, Video Camera, Iris Scanner etc, to form a composite biometric trait and process. 4.1.2 Feature Level Fusion In feature level fusion signal coming from different biometric channels are first preprocessed, and feature vectors are extracted separately, using specific fusion algorithm we combine these feature vectors to form a composite feature vector. This composite feature vector is then used for classification process. 4.1.3 Matching Score Level Here, rather than combining the feature vector, we process them separately and individual matching score is found, then depending on the accuracy of each biometric channel we can fuse the matching level to find composite matching score which will be used for classification. 4.1.4 Decision level Fusion Each modality is first pre-classified independently. The final classification is based on the fusion of the outputs of the different modalities. Multimodal biometric system can implement any of these fusion strategies or combination of them to improve the performance of the system. 5. EXPERIMENTAL RESULTS Performance statistics are computed from the real and fraud scores. Real scores are those that result from comparing elements in the target and query sets of the same subject. Fraud scores are those resulting from comparisons of different subjects. Use each fusion score as a threshold and compute the false-accept rate (FAR) and false-reject rate (FRR) by selecting those fraud scores and genuine scores, respectively, on the wrong side of this threshold and divide by the total number of scores used in the test. A mapping table of the threshold values and the corresponding error rates (FAR and FRR) are stored. The complement of the FRR (1 – FRR) is the genuine accept-rate (GAR). The GAR and the FAR are plotted against each other to yield a ROC curve, a common system performance measure. We choose a desired operational point on the ROC curve and use the FAR of that point to determine the corresponding threshold from the mapping table.
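A minimal sketch of this threshold sweep is given below in Python; the score lists and names are hypothetical and are not the scores from the experiments reported here. For each candidate threshold, impostor (fraud) scores at or above the threshold count as false accepts and genuine scores below it count as false rejects, giving FAR, FRR and GAR = 1 − FRR, from which the mapping table is built.

    def error_rates(genuine, impostor, thresholds):
        # Return a list of (threshold, FAR, FRR, GAR) tuples.
        # Higher scores are assumed to mean a better match.
        table = []
        for t in thresholds:
            far = sum(s >= t for s in impostor) / len(impostor)   # impostors wrongly accepted
            frr = sum(s < t for s in genuine) / len(genuine)      # genuine users wrongly rejected
            table.append((t, far, frr, 1.0 - frr))                # GAR = 1 - FRR
        return table

    # Hypothetical fused similarity scores in [0, 1].
    genuine = [0.91, 0.84, 0.88, 0.79, 0.95, 0.87]
    impostor = [0.22, 0.35, 0.41, 0.18, 0.52, 0.30]
    mapping = error_rates(genuine, impostor, [i / 100 for i in range(0, 101, 5)])
    # Pick the smallest threshold whose FAR does not exceed the desired operating point.
    target_far = 0.01
    threshold = min((t for t, far, _, _ in mapping if far <= target_far), default=1.0)

Plotting GAR against FAR over such a table yields the ROC curve described above.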
  • 120. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 9 Figure 2. Min-Max Normalization with different fusions For example, at a FAR of 0.1% the simple sum fusion with min-max normalization has a GAR of 94.9%, which is considerably better than that of face, 75.3%, and fingerprint, 83.0%. Also, using any of the normalization techniques, rather than not normalizing the data at all, proves beneficial. The simplest normalization technique, min-max, yields the best performance in this example. Figure 2 illustrates the results of min-max normalization for a spectrum of fusion methods. The simple sum fusion method yields the best performance over the range of FARs. Interestingly, the genuine accept rate for the sum and product probability rules falls off dramatically at lower FARs. Comparing the GAR across the spectrum of normalization and fusion techniques at FARs of 1% and 0.1%, at 1% FAR the sum-of-probabilities fusion works best; however, this result does not hold at a FAR of 0.1%. The simple sum rule generally performs well over the range of normalization techniques. These results demonstrate the utility of using multimodal biometric systems for achieving better matching performance. They also indicate that the method chosen for fusion has a significant impact on the resulting performance. In operational biometric systems, application requirements drive the selection of tolerable error rates, and in both single-modal and multimodal biometric systems implementers are forced to make a trade-off between usability and security. Clearly the use of these fusion and normalization techniques enhances the performance significantly over the single-modal face or fingerprint classifiers.
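The min-max normalization and simple-sum fusion discussed above can be sketched in a few lines; the following Python fragment uses hypothetical scores and function names and is not the study's code. Each matcher's scores are rescaled to [0, 1] using that matcher's observed minimum and maximum, and the normalized scores are then summed per sample to give the fused score.

    def min_max_normalize(scores):
        # Rescale a matcher's scores to [0, 1] using its observed min and max.
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return [0.0 for _ in scores]  # degenerate case: all scores identical
        return [(s - lo) / (hi - lo) for s in scores]

    def simple_sum_fusion(*modalities):
        # Normalize each modality independently, then sum the scores per sample.
        normalized = [min_max_normalize(m) for m in modalities]
        return [sum(sample) for sample in zip(*normalized)]

    # Hypothetical raw matcher outputs for the same three probe samples.
    face_scores = [412.0, 356.0, 198.0]   # similarity from a face matcher
    finger_scores = [0.82, 0.40, 0.63]    # similarity from a fingerprint matcher
    fused = simple_sum_fusion(face_scores, finger_scores)  # higher means stronger match

Because each matcher is rescaled independently, modalities with very different native score ranges contribute comparably to the fused score, which is the point of normalizing before fusion.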
  • 121. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 10 6. PERFORMANCE OF MULTIMODAL BIOMETRICS Multimodal Biometric systems are often evaluated solely on the basis of recognition system performance. But it is important to note that other factors are involved in the deployment of a bio-metric system. One factor is the quality and ruggedness of the sensors used. Clearly the quality of the sensors used will affect the performances of the associated recognition algorithms. What should be evaluated is therefore the sensor/algorithm combination, but this is difficult because often the same sensors are not used in both the enrolment and test phases. In practice therefore the evaluation is made on the basis of the recognition algorithm's resistance to the use of various types of sensor (interoperability problem). Another key factor in determining the acceptability of a biometric solution is the quality of the associated communication inter-face. In addition to ease of use, acquisition speed and processing speed are key factors, which are in many cases not evaluated in practice. In the case of a verification system, two error rates are evaluated which vary in opposite directions: the false rejection rate FRR (rejection of a legitimate user called “the client”) and the false acceptance rate FAR (acceptance of an impostor). The decision of acceptance or rejection of a person is thus taken by comparing the answer of the system to a threshold (called the decision threshold). The values of FAR and FRR are thus dependent on this threshold which can be chosen so as to reduce the global error of the system. The decision threshold must be adjusted according to the desired characteristics for the application considered. High security applications require a low FAR which has the effect of increasing the FRR, while Low security applications are less demanding in terms of FAR. EER denotes Equal Error Rate (FAR=FRR). This threshold must be calculated afresh for each application, to adapt it to the specific population concerned. This is done in general using a small database recorded for this purpose. Different biometric application types make different trade-offs between the false match rate and false non-match rate (FMR and FNMR). Lack of understanding of the error rates is a primary source of confusion in assessing system accuracy in vendor and user communities alike. Performance capabilities have been traditionally shown in the form of ROC (receiver- or relative-operating characteristic) plots, in which the probability of a false-acceptance is plotted versus the probability of a false-rejection for varying decision thresholds. Unfortunately, with ROC plots, curves corresponding to well-performing systems tend to bunch together near the
  • 122. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 11 lower left corner, impeding a clear visualization of competitive systems. More recently, a variant of an ROC plot, the detection error tradeoff (DET) plot has been used, which plots the same tradeoff using a normal deviate scale. Figure 3. Example of Verification Performance Comparison for Same Hypothetical Systems, A and B, for both (a) ROC and (b) DET plots Although the complete DET curve is needed to fully describe system error tradeoffs, it is desirable to report performance using a single number. Often the equal-error-rate (EER), the point on the DET curve where the FA rate and FR rate are equal, is used as this single summary number. However, the suitability of any system or techniques for an application must be determined by taking into account the various costs and impacts of the errors and other factors such as implementations and lifetime support costs and end-user acceptance issues. There is a tradeoff between the probability of correct detect and identify rate and the false alarm rate. If we increase the probability of correct detect and identify rate, the false alarm rate will increase. A Watch list Receiver Operating Characteristic curve is used to show the relationship between the probabilities of correct detects and identify rate and the false alarm rate. In practice, most applications that operate in the watch list task can be grouped into five operational areas: a) Extremely low false alarm: In this application, any alarm requires immediate action. This could lead to public disturbance and confusion. An alarm and subsequent action may give away the fact that surveillance is being performed and how, and may minimize the possibility of catching a future suspect. b) Extremely high probability of detect and identify: In this application, we are mostly concerned with detecting someone on the
  • 123. International Journal of Computer Science and Business Informatics IJCSBI.ORG ISSN: 1694-2108 | Vol. 3, No. 1. JULY 2013 12 watch list; false alarms are a secondary concern and will be dealt with according to pre-defined procedures. c) Low false alarm and detect/identify: In this application we are more concerned with lower false alarms and can deal with low detect/identify. d) High false alarm and detect/identify: In this application we are more concerned with higher detect/identify performance and can deal with a high false alarm rate as well. e) No threshold: User wants all results with confidence measures on each for investigation case building. 7. CONCLUSION A Multimodal Biometrics technique, which combines multiple biometrics in making a personal identification, can be used to overcome the limitations of individual biometrics. We developed a multimodal biometrics system which integrates decisions made by Face recognition, Iris recognition, Fingerprint verification, Palm print verification. Multi-biometric systems alleviate a few of the problems observed in uni-modal biometric systems. Besides improving matching performance, they also address the problems of non- universality and spoofing.With the widespread deployment of biometric systems in several civilian and government applications, it is only a matter of time before multimodal bio-metric systems begin to impact the way in which identity is established in the 21st century. Multiple Biometric technologies could make a huge positive impact into society, if it is correctly utilized to increase the robustness of security systems across the world. This would help to cope with the rising levels of fraud, crime and terrorism. REFERENCES [1] John Daugman, “How iris recognition works” IEEE Transactions on Circuits and Systems for Video Technology, 14(1):21–30, 2004. Page No. 103-109. [2] Chang, “New multi-biometric approaches for improved person identification,” PhD Dissertation, Department of Computer Science and Engineering, University of Notre Dame, 2004. Page No. 153-159. [3] C.Hesher, A.Srivastava, G.Erlebacher, “A novel technique for face recognition using range images” in the Proceedings of Seventh International Symposium on Signal Processing and Its Application, 2003. Page No. 58-69. [4] Barral and A. Tria, “Fake fingers in fingerprint recognition: Glycerin supersedes gelatin”, In Formal to Practical Security. Springer, 2009. Page No. 83-92. [5] Bergman, “Multi-biometric match-on-card alliance formed” Biometric Technology Today, vol. 13, no. 5, 2005. Page No. 1-9.
[6] F. Yang and M. Baofeng, "Two Models Multimodal Biometric Fusion Based on Fingerprint, Palm-print and Hand-Geometry", DOI 1-4244-1120-3/07, IEEE, 2007.
[7] Teddy Ko, "Multimodal Biometric Identification for Large User Population Using Fingerprint, Face and Iris Recognition", Proceedings of the 34th Applied Imagery and Pattern Recognition Workshop (AIPR05), 2005.
[8] A. K. Jain and R. Bolle, "Biometrics: Personal Identification in Networked Society", Norwell, 1999, pp. 23-36.
[9] C. Soutar, D. Roberge, A. Stoianov, R. Gilroy and B. V. K. V. Kumar, "Biometric Encryption, Enrollment and Verification Procedures", Proc. SPIE 3386, pp. 24-35, 1998.
CBR Based Performance Analysis of OLSR Routing Protocol in MANETs
Jogendra Kumar
Department of Computer Science & Engineering
G. B. Pant Engineering College Pauri Garhwal
Uttarakhand, India
ABSTRACT
A mobile ad-hoc network (MANET) is an autonomous system with its own rules and regulations; MANETs manage themselves by configuring the network automatically. In this paper, we analyse and implement TC and HELLO messages using the multipoint relay (MPR) mechanism of OLSR. The routing performance is then checked using the Qualnet 5.0.2 simulator. To evaluate the OLSR (Optimized Link State Routing) protocol, we consider performance metrics such as hello messages sent, hello messages received, TC messages generated, TC messages relayed and TC messages received, under Constant Bit Rate (CBR) traffic with the random waypoint mobility model.
Keywords
Ad-hoc Network, MANETs, OLSR, Routing Protocol, Qualnet 5.0.2, Simulator.
1. INTRODUCTION
A MANET consists of mobile nodes, each acting as both router and host, equipped with wireless communication devices such as transmitters, receivers and smart antennas. These antennas can be of any kind, and nodes can be fixed or mobile. Nodes are free to move arbitrarily in any direction, and links may be asymmetric: communication between two nodes may be good in one direction but poor or unavailable in the reverse direction. A node can be a mobile phone, laptop, personal digital assistant or personal computer. Since a quick communication infrastructure is highly desirable, a MANET is a quick remedy for any disaster situation [1][3][7][17]. The mobile nodes in a wireless network communicate with each other within radio range; the network is self-organizing and configures itself automatically. The mobile nodes form a network without a fixed infrastructure or central management. The topology of the network changes continuously as mobile nodes join and leave, because a mobile ad-hoc network is dynamic in nature. MANET [1][2][5][7][9][11][17][18][22] stands for Mobile Ad hoc Network. It is a decentralized autonomous wireless system consisting of free nodes, with no central administration [2]. Figure 1 shows the topology changes caused by the dynamic, mobile nature of a MANET.
Figure 1. The dynamic scenario of network topology with mobility
2. OLSR (OPTIMIZED LINK STATE ROUTING) IN MANETs
OLSR [8][17][19][23] is a proactive routing protocol: it stores and updates its routing table information permanently. OLSR maintains the routing table so that a route is available at all times whenever communication is needed. Because of this behaviour OLSR is classified as a proactive routing protocol and can be deployed in any ad hoc network. In OLSR, not all nodes in the network broadcast route packets; only Multipoint Relay (MPR) [14][16][18] nodes broadcast them. The MPR nodes are selected from among the one-hop neighbours of a source node. Each node in the network keeps a list of MPR node information and stores that information in its routing table. The MPR selector set is obtained from the HELLO packets exchanged between neighbouring nodes within radio range. Routes are thus established before any source node intends to send a message to a particular destination. Each and every node in the network keeps a routing table and updates its information periodically. For this reason the routing overhead of OLSR is lower than that of reactive routing protocols, and it provides a shortest route to the destination in the network. There is no need to build new routes on demand, since every node has already built them, so using an existing route does not add significant routing overhead. OLSR therefore reduces the route discovery delay.
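The paper relies on the MPR mechanism but does not spell out how the relay set is chosen. The Python sketch below follows the greedy flooding-reduction heuristic described in the OLSR specification [20]: first pick neighbours that are the only path to some two-hop node, then keep adding the neighbour that covers the most still-uncovered two-hop nodes. The data structures and function name are illustrative assumptions, not code from the paper or from Qualnet.

```python
def select_mprs(one_hop, two_hop_of):
    """Greedy MPR selection sketch.

    one_hop:    set of symmetric one-hop neighbours of this node
    two_hop_of: dict mapping each one-hop neighbour to the set of strict
                two-hop neighbours reachable through it
    Returns an MPR set covering all two-hop neighbours (not necessarily minimal).
    """
    # All strict two-hop neighbours that must be covered
    uncovered = set().union(*two_hop_of.values()) - one_hop
    mprs = set()

    # Rule 1: a neighbour that is the only route to some two-hop node is always an MPR
    for n2 in list(uncovered):
        reachers = [n1 for n1 in one_hop if n2 in two_hop_of[n1]]
        if len(reachers) == 1:
            mprs.add(reachers[0])
    if mprs:
        uncovered -= set().union(*(two_hop_of[m] for m in mprs))

    # Rule 2: greedily add the neighbour covering the most still-uncovered two-hop nodes
    while uncovered:
        best = max(one_hop - mprs, key=lambda n1: len(two_hop_of[n1] & uncovered))
        if not two_hop_of[best] & uncovered:
            break  # remaining two-hop nodes are unreachable through any neighbour
        mprs.add(best)
        uncovered -= two_hop_of[best]
    return mprs

# Example neighbourhood (hypothetical): B and D must relay to reach E, F and G
one_hop = {"B", "C", "D"}
two_hop_of = {"B": {"E", "F"}, "C": {"F"}, "D": {"G"}}
print(select_mprs(one_hop, two_hop_of))   # e.g. {'B', 'D'}
```

Restricting TC flooding to such a relay set is what keeps OLSR's control overhead manageable as the network grows.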
Figure 2 shows a broadcast packet travelling from the centre node A to all of its neighbouring nodes; distance is counted in hops.
Figure 2. Showing the multipoint relays (hop count = 2)
Figure 3 shows the nodes in the network sending HELLO messages to their neighbours. These messages are sent at a predetermined interval in OLSR to determine the link status [1][2][16].
Figure 3. HELLO messages in a MANET using OLSR (symmetric and asymmetric links between nodes A and B)
The HELLO messages contain the entire neighbour information stored in the routing table. This enables a mobile node to build a table with information about its multi-hop neighbours. A node chooses a minimal number of MPR nodes once symmetric connections have been established. It broadcasts topology control (TC) [16][18][22] messages containing link-state information at a predetermined TC interval. TC messages are also used to calculate the routing table information, which is updated periodically.
3. RELATED WORKS
Vats et al. [13] analysed the performance of the OLSR routing protocol in MANETs. Evaluating OLSR over networks of different sizes, they showed that it performed well in all aspects. The performance of OLSR was measured through Hello Traffic Sent (bits/sec), Total TC Messages Sent (TTMS), Total TC Messages Forwarded (TTMF), total hello message and TC traffic sent (bits/sec), routing traffic received (pkts/sec), routing traffic sent (pkts/sec) and MPR count, using the OPNET Modeler simulation tool.
Kaur et al. [9] compared MANET routing protocols using the OPNET Modeler simulation tool. OLSR performed best in terms of load and throughput, GRP performed best in terms of delay and routing overhead, and TORA was the worst choice for all four performance parameters considered. OLSR can therefore be regarded as better than GRP and TORA across all traffic volumes, since it achieves the maximum throughput.
The works above proposed several routing protocols and evaluated route performance in terms of TC and hello messages. This paper checks the performance of the OLSR routing protocol under CBR traffic in Qualnet, which executes the scenarios faster and in less time than OPNET.
4. SIMULATION PARAMETERS AND PERFORMANCE METRICS
4.1 SIMULATION PARAMETERS
Table 1. Simulation Parameters
Area: 700 m * 700 m
Simulation Time: 260 sec
Channel Frequency: 2.4 GHz
Path Loss Model: Two Ray Model
Shadowing Model: Constant
Number of Nodes: 50 nodes
Routing Protocol: OLSR
PHY Model: PHY802.11b
Antenna Model: Omnidirectional
Mobility Model: Random Waypoint Model
Traffic Source: CBR
Data Rate: 2 Mbps
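For the CBR source in Table 1, the packet sending interval follows directly from the data rate once a packet size is fixed. The short Python sketch below works this out; the 512-byte packet size is an assumption chosen only for illustration, since the paper does not state the packet size used.

```python
DATA_RATE_BPS = 2_000_000      # 2 Mbps, from Table 1
PACKET_SIZE_BYTES = 512        # assumed packet size; not specified in the paper

packets_per_second = DATA_RATE_BPS / (PACKET_SIZE_BYTES * 8)
interval_s = 1.0 / packets_per_second

print(f"CBR source: {packets_per_second:.1f} packets/s, "
      f"one packet roughly every {interval_s * 1000:.2f} ms")
# -> about 488 packets/s, one packet roughly every 2.05 ms
```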
4.2 PERFORMANCE METRICS
Hello Messages Received: total number of hello messages received by a node.
Hello Messages Sent: total number of hello messages sent by a node.
TC Messages Received: total number of TC messages received by a node.
TC Messages Generated: total number of TC messages generated by a node.
TC Messages Relayed: total number of TC messages relayed by a node.
4.3 SIMULATION TOOLS
Qualnet 5.0.2 [24] is an extended version of GloMoSim. GloMoSim is a simulation tool for wireless networks, used to design scenarios and routing protocols for mobile ad-hoc networks (MANETs) [1][4][5][16], whereas Qualnet supports both wireless and wired networks. Qualnet is roughly ten times more powerful than GloMoSim: it takes less time to execute scenarios, supports more nodes at the same time, and makes it easier to collect performance results than GloMoSim, NS2, OPNET and similar tools.
4.4 NODES PLACEMENT AND ANIMATION VIEW OF THE OLSR ROUTING PROTOCOL
Figure 4. Showing the node placement scenario of the OLSR routing protocol for 50 nodes
Figure 4 describes the node placement strategy of the random waypoint model. We have taken a 700 m * 700 m wireless network area in which all nodes are placed randomly. The OLSR routing protocol uses CBR traffic applied from source nodes to destination nodes at a constant rate. In this model, all 50 nodes move with constant speed. The overall execution time of this scenario is 260 sec, the data rate is 2 Mbps and the channel frequency is 2.4 GHz. We have used an omnidirectional antenna model so that signals are handled in all directions.
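The node placement and movement described above follow the random waypoint model. The following Python sketch illustrates, under simplified assumptions (constant speed, no pause time, parameter values taken from Table 1 where available and assumed otherwise), how such a mobility trace could be generated; it is not Qualnet code.

```python
import random

AREA = 700.0          # side of the square simulation area in metres (Table 1)
NUM_NODES = 50        # Table 1
SPEED = 10.0          # metres per second; the constant speed value is an assumption
SIM_TIME = 260.0      # seconds (Table 1)
STEP = 1.0            # trace resolution in seconds

def random_waypoint_trace(seed=1):
    """Yield (time, node, x, y) samples for a simple random waypoint model."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, AREA), rng.uniform(0, AREA)) for _ in range(NUM_NODES)]
    dest = [(rng.uniform(0, AREA), rng.uniform(0, AREA)) for _ in range(NUM_NODES)]
    t = 0.0
    while t <= SIM_TIME:
        for i in range(NUM_NODES):
            x, y = pos[i]
            dx, dy = dest[i][0] - x, dest[i][1] - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= SPEED * STEP:          # waypoint reached: pick a new one
                pos[i] = dest[i]
                dest[i] = (rng.uniform(0, AREA), rng.uniform(0, AREA))
            else:                             # keep moving towards the current waypoint
                pos[i] = (x + SPEED * STEP * dx / dist, y + SPEED * STEP * dy / dist)
            yield t, i, pos[i][0], pos[i][1]
        t += STEP

# Example: print the first few samples of the trace
for sample in list(random_waypoint_trace())[:3]:
    print(sample)
```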
4.5 SIMULATION VIEW OF THE OLSR ROUTING PROTOCOL
Figure 5. Showing the simulation view of the OLSR routing protocol for 50 nodes
Figure 5 shows the animation view of the OLSR routing protocol scenario in the Qualnet simulation tool; the performance is collected for the metrics Hello Messages Received, Hello Messages Sent, TC Messages Received, TC Messages Generated and TC Messages Relayed.
5. SIMULATION RESULTS AND DISCUSSION OF THE OLSR ROUTING PROTOCOL
OLSR: Hello Messages Sent
Figure 6 shows the hello messages sent: each node broadcasts hello messages to all of its attached neighbours under the OLSR routing protocol. Only 15 packets were sent in this scenario, at a rate of 4 packets/sec.
Figure 6. Showing the performance result of OLSR: Hello Messages Sent from the nodes
OLSR: Hello Messages Received
Figure 7 shows the hello messages received at the nodes. In OLSR routing, packets are sent at a constant rate but are received at different rates because of interference. Nodes 21 and 29 received 100% of the packets, nodes 8 and 49 received a minimum of about 40%, and all other nodes received approximately 50% of the packets; over the whole scenario, not all packets were received.
Figure 7. Showing the performance result of OLSR: Hello Messages Received at the nodes
Combined performance result of OLSR: Figure 8 shows the combined performance of the hello messages sent and the hello messages received at the various nodes.
Figure 8. Showing the combined performance result of OLSR: Hello Messages Sent and Hello Messages Received at the nodes
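The per-node percentages quoted above are simply each node's received count expressed as a fraction of the hello packets sent to it. A minimal Python sketch of this calculation is shown below; the per-node counters are made-up values standing in for the actual Qualnet statistics.

```python
# Hypothetical per-node counters; in practice these come from the Qualnet statistics output
hello_sent = {8: 15, 21: 15, 29: 15, 49: 15}
hello_received = {8: 6, 21: 15, 29: 15, 49: 6}

def reception_ratio(sent, received):
    """Return the percentage of hello messages each node received."""
    return {node: 100.0 * received.get(node, 0) / sent[node] for node in sent}

for node, pct in sorted(reception_ratio(hello_sent, hello_received).items()):
    print(f"node {node:2d}: {pct:5.1f}% of hello messages received")
```

The same calculation applies to the TC message counters discussed in the following subsections.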
OLSR: TC Messages Generated
Figure 9 shows the performance for TC messages generated by the MPR nodes. An MPR node holds all the information related to its attached nodes, such as sender address, destination address, secret code and MAC address, and this information is updated periodically. Figure 9 shows that nodes 7, 41 and 49 generated no TC messages because these nodes were not selected as MPRs. In the OLSR routing protocol, almost 95% of the TC messages are generated and less than 5% are not generated by the MPR nodes.
Figure 9. Showing the performance result of OLSR: TC Messages Generated at the nodes
OLSR: TC Messages Received
Figure 10 shows the performance for TC messages received at the MPR nodes. Nodes 7, 41 and 49 generated no TC messages but still received packets, because these nodes are attached towards the centre and therefore hold some information about their neighbours. Nodes 21 to 29 received 100% of the TC messages, nodes 12, 20, 27, 35 and 43 received 85%, and the other nodes received less than 40%.
Figure 10. Showing the performance result of OLSR: TC Messages Received at the nodes
OLSR: TC Messages Relayed
Figure 11 shows the performance for TC messages relayed by the MPR nodes. Nodes 7, 41 and 49 relayed no TC messages. Nodes 14, 16 and 38 relayed 100% of the TC messages, nodes 6, 10, 13, 21, 22, 23, 27, 35 and 43 relayed almost 65%, and the other nodes relayed less than 20% of the TC messages.
Figure 11. Showing the performance result of OLSR: TC Messages Relayed vs. nodes
Combined performance result of OLSR: TC Messages Generated, TC Messages Received and TC Messages Relayed: Figure 12 shows the combined performance of OLSR for TC Messages Generated, TC Messages Received and TC Messages Relayed at the nodes.
Figure 12. Showing the combined performance result of OLSR: TC Messages Generated, TC Messages Received and TC Messages Relayed at the nodes
6. CONCLUSIONS AND FUTURE WORKS
This paper discussed mobile ad-hoc networks (MANETs) and evaluated the performance of the Optimized Link State Routing (OLSR) protocol under constant bit rate traffic. The performance was checked using the random waypoint model, in which nodes are placed randomly. In the OLSR routing protocol, hello messages are generated to sense the neighbouring nodes and build the list of MPR selector nodes, while TC messages control the route calculation; this routing information is maintained periodically and broadcasting is minimized through the MPRs. In the OLSR routing protocol, about 95% of packets are sent and received and less than 2% of packets are wasted, so the protocol is well suited to large networks. The MPR-selected nodes give around 90% TC messages generated, 85% TC messages received and 80% TC messages relayed, so the control messaging of the OLSR routing protocol operates at roughly 80%. OLSR therefore gives better performance in both large and small networks, owing to its proactive routing nature, compared with other proactive routing protocols in MANETs. As future work, other routing protocols and different node placement strategies, energy consumption, fixed bit rate and variable bit rate will be analysed by applying different loads and by modifying existing routing protocols.
REFERENCES
[1] A. B. Malany, V. R. S. Dhulipala and R. M. Chandrasekaran, "Throughput and Delay Comparison of MANET Routing Protocols", Intl. Journal of Open Problems Comp. Math., Vol. 2, No. 3, Sep 2009.
[2] Alexander Klein, "Performance Comparison and Evaluation of AODV, OLSR, and SBR in Mobile Ad-Hoc Networks", IEEE Personal Communications, pp. 571-575, 2008.
[3] C. Perkins, E. M. Royer, S. R. Das and M. K. Marina, "Performance Comparison of Two On-demand Routing Protocols for Ad Hoc Networks", IEEE Personal Communications, pp. 16-28, Feb. 2001.
[4] C. Siva Ram Murthy and B. S. Manoj, "Ad Hoc Wireless Networks: Architectures and Protocols", ISBN 978-81-317-0688-6, 2011.
[5] Dilpreet Kaur and Naresh Kumar, "Comparative Analysis of AODV, OLSR, TORA, DSR and DSDV Routing Protocols in Mobile Ad-Hoc Networks", I. J. Computer Network and Information Security, 2013, 3, 39-46.
[6] Elizabeth Royer and C. K. Toh, "A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks", IEEE Personal Communications, 1999.
[7] G. Karthiga, J. Benitha Christinal and Jeban Chandir Moses, "Performance Analysis of Various Ad-Hoc Routing Protocols in Multicast Environment", IJCST, Vol. 2, Issue 1, March 2011, pp. 161-165.
[8] Z. J. Haas and M. R. Pearlman, "The Performance of Query Control Schemes for the Zone Routing Protocol", IEEE/ACM Transactions on Networking, 9(4), pp. 427-438, August 2001.
[9] Harmanpreet Kaur and Jaswinder Singh, "Performance Comparison of OLSR, GRP and TORA using OPNET", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 10, October 2012.
[10] Hong Jiang, "Performance Comparison of Three Routing Protocols for Ad Hoc Networks", Communications of the ACM, Vol. 37, 1994.
[11] X. Hong, K. Xu and M. Gerla, "Scalable Routing Protocols for Mobile Ad-Hoc Networks", IEEE Network Magazine, Vol. 16, Issue 4, pp. 11-21.
[12] Krishna Kumar Chandel, Sanjeev Sharma and Santosh Sahu, "Performance Analysis of Routing Protocols Based on IPV4 and IPV6 for MANET", International Journal of Computer Technology and Electronics Engineering (IJCTEE), Vol. 2, Issue 3, June 2012.
[13] Kuldeep Vats, Monika Sachdeva, Krishan Saluja and Amit Rathee, "Simulation and Performance Analysis of OLSR Routing Protocol Using OPNET", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 2, February 2012.
[14] M. Joa-Ng and I. T. Lu, "A Peer-to-Peer Zone-Based Two-Level Link State Routing for Mobile Ad Hoc Networks", IEEE Journal on Selected Areas in Communications, Special Issue on Ad-Hoc Networks, August 1999, pp. 1415-1425.
[15] M. K. J. Kumar and R. S. Rajesh, "Performance Analysis of MANET Routing Protocols in Different Mobility Models", IJCSNS International Journal of Computer Science and Network Security, Vol. 9, No. 2, February 2009.
[16] M. L. Sharma, Noor Fatima Rizvi, Nipun Sharma, Anu Malhan and Swati Sharma, "Performance Evaluation of MANET Routing Protocols under CBR and FTP Traffic Classes", Int. J. Comp. Tech. Appl., Vol. 2 (3), pp. 393-400.
[17] M. Sreerama Murty and M. Venkat Das, "Performance Evaluation of MANET Routing Protocols using Reference Point Group Mobility and Random Way Point Models", International Journal of Ad hoc, Sensor & Ubiquitous Computing (IJASUC), Vol. 2, No. 1, March 2011, pp. 33-43.
[18] M. R. Pearlman and Z. J. Haas, "Determining the Optimal Configuration for the Zone Routing Protocol", IEEE Journal on Selected Areas in Communications, 1999, Vol. 17, pp. 1395-1414.
[19] QualNet documentation, "QualNet 5.0 Model Library: Advanced Wireless", http://www.scalablenetworks.com/products/Qualnet/download.php#docs.
[20] T. Clausen and P. Jacquet, "RFC 3626: Optimized Link State Routing Protocol (OLSR)", October 2003.
[21] S. A. Ade and P. A. Tijare, "Performance Comparison of AODV, DSDV, OLSR and DSR Routing Protocols in Mobile Ad Hoc Networks", International Journal of
Information Technology and Knowledge Management, July-December 2010, Vol. 2, No. 2, pp. 545-548.
[22] Syed Basha Shaik and S. P. Setty, "Performance Comparison of AODV, DSR and ANODR for Grid Placement Model", International Journal of Computer Applications, Vol. 11, No. 12, pp. 6-9, 2010.
[23] Thakore Mitesh, "Performance Analysis of AODV and OLSR Routing Protocol with Different Topologies", International Journal of Science and Research (IJSR), Vol. 2, Issue 1, January 2013.
[24] The Qualnet 5.0.2 simulator tools. [Online]. Available at www.scalable-networks.com.