SEABIRDS SOLUTION
IEEE 2012 – 2013 SOFTWARE PROJECTS IN VARIOUS DOMAINS
| JAVA | J2ME | J2EE | DOTNET | MATLAB | NS2 |
SBGC, Chennai: 24/83, O Block, MMDA Colony, Arumbakkam, Chennai - 600106
SBGC, Trichy: 4th Floor, Surya Complex, Singarathope Bus Stop, Old Madurai Road, Trichy - 620002

Web: www.ieeeproject.in
E-Mail: ieeeproject@hotmail.com

Trichy: Mobile 09003012150, Phone 0431-4012303
Chennai: Mobile 09944361169
SBGC provides IEEE 2012-2013 projects for all final-year students. We assist students with technical guidance in two categories.


               Category 1: Students with new project ideas / New or Old
               IEEE Papers.


               Category 2: Students selecting from our project list.
When you register for a project we ensure that the project is implemented to your fullest
satisfaction and you have a thorough understanding of every aspect of the project.


SBGC PROVIDES THE LATEST IEEE 2012 / IEEE 2013 PROJECTS FOR STUDENTS OF THE FOLLOWING DEPARTMENTS


B.E, B.TECH, M.TECH, M.E, DIPLOMA, MS, BSC, MSC, BCA, MCA, MBA, BBA, PHD
B.E (ECE, EEE, E&I, ICE, MECH, PROD, CSE, IT, THERMAL, AUTOMOBILE, MECHATRONICS, ROBOTICS)
B.TECH (ECE, MECHATRONICS, E&I, EEE, MECH, CSE, IT, ROBOTICS)
M.TECH (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
M.E (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
DIPLOMA (CE, EEE, E&I, ICE, MECH, PROD, CSE, IT)


MBA (HR, FINANCE, MANAGEMENT, HOTEL MANAGEMENT, SYSTEM MANAGEMENT, PROJECT MANAGEMENT, HOSPITAL MANAGEMENT, SCHOOL MANAGEMENT, MARKETING MANAGEMENT, SAFETY MANAGEMENT)


We also have a training, project, and R&D division to serve students and make them job-oriented professionals.
PROJECT SUPPORT AND DELIVERABLES

Free Course (JAVA & DOT NET)
Project Abstract
IEEE Paper
IEEE Reference Papers, Materials & Books in CD
PPT / Review Material
Project Report (All Diagrams & Screenshots)
Working Procedures
Algorithm Explanations
Project Installation in Laptops
Project Certificate
TECHNOLOGY                                 : JAVA

DOMAIN                                     : IEEE TRANSACTIONS ON NETWORKING


1. Balancing the Trade-Offs between Query Delay and Data Availability in MANETs (2012)
   In this project, we propose schemes to balance the trade-offs between data availability and query delay under different system settings and requirements. Mobile nodes in one partition are not able to access data hosted by nodes in other partitions, which significantly degrades the performance of data access. To deal with this problem, we apply data replication techniques.
2. MeasuRouting: A Framework for Routing Assisted Traffic Monitoring (2012)
   In this paper, we present a theoretical framework for MeasuRouting. Furthermore, as proofs-of-concept, we present synthetic and practical monitoring applications to showcase the utility enhancement achieved with MeasuRouting.
3. Cooperative Profit Sharing in Coalition-Based Resource Allocation in Wireless Networks (2012)
   We model optimal cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings.
4. Bloom Cast: Efficient Full-Text Retrieval over Unstructured P2Ps with Guaranteed Recall (2012)
   In this paper, we propose Bloom Cast, an efficient and effective full-text retrieval scheme for unstructured P2P networks. Bloom Cast is effective because it guarantees a perfect recall rate with high probability. (A basic Bloom filter sketch appears after this table.)
5. On Optimizing Overlay Topologies for Search in Unstructured Peer-to-Peer Networks (2012)
   We propose a novel overlay formation algorithm for unstructured P2P networks. Based on the file-sharing pattern exhibiting the power-law property, our proposal is unique in that it poses rigorous performance guarantees.
6. An MDP-Based Dynamic Optimization Methodology for Wireless Sensor Networks (2012)
   In this paper, we propose an automated Markov Decision Process (MDP)-based methodology to prescribe optimal sensor node operation to meet application requirements and adapt to changing environmental stimuli. Numerical results confirm the optimality of our proposed methodology and reveal that it more closely meets application requirements compared to other feasible policies.
7. Obtaining Provably Legitimate Internet Topologies (2012)
   In this paper, we address the Internet topology problem by providing a framework to generate small, realistic, and policy-aware topologies. We propose HBR, a novel sampling method that exploits the inherent hierarchy of the policy-aware Internet topology. We formally prove that our approach generates connected and legitimate topologies that are compatible with policy-based routing conventions and rules.
8. Extrema Propagation: Fast Distributed Estimation of Sums and Network Sizes (2012)
   This paper introduces Extrema Propagation, a probabilistic technique for distributed estimation of the sum of positive real numbers. The technique relies on the exchange of duplicate-insensitive messages and can be applied in flood and/or epidemic settings where multipath routing occurs; it is tolerant of message loss; it is fast, as the number of message exchange steps can be made just slightly above the theoretical minimum; and it is fully distributed, with no single point of failure and the result produced at every node. (A sketch of the estimator appears after this table.)
9. Latency Equalization as a New Network Service Primitive (2012)
   We propose a Latency Equalization (LEQ) service, which equalizes the perceived latency for all clients.
10. Grouping-Enhanced Resilient Probabilistic En-Route Filtering of Injected False Data in WSNs (2012)
   This paper proposes a scheme referred to as Grouping-enhanced Resilient Probabilistic En-route Filtering (GRPEF). In GRPEF, an efficient distributed algorithm is proposed to group nodes without incurring extra groups, and a multiaxis-division-based approach for deriving location-aware keys is used to overcome the threshold problem and remove the dependence on sink immobility and routing protocols.
11. On Achieving Group-Strategyproof Multicast (2012)
   A multicast scheme is strategyproof if no receiver has an incentive to lie about her true valuation. It is further group-strategyproof if no group of colluding receivers has an incentive to lie. We study multicast schemes that target group strategyproofness, in both directed and undirected networks.
12. Distributed α-Optimal User Association and Cell Load Balancing in Wireless Networks (2012)
   In this paper, we develop a framework for user association in infrastructure-based wireless networks, specifically focused on flow-level cell load balancing under spatially inhomogeneous traffic distributions. Our work encompasses several different user association policies: rate-optimal, throughput-optimal, delay-optimal, and load-equalizing, which we collectively denote α-optimal user association.
13. Opportunistic Flow-Level Latency Estimation Using Consistent NetFlow (2012)
   In this paper, we propose the Consistent NetFlow (CNF) architecture for measuring per-flow delays within routers. CNF utilizes the existing NetFlow architecture, which already reports the first and last timestamps per flow, and it proposes hash-based sampling to ensure that two adjacent routers record the same flows.
14. Leveraging a Compound Graph-Based DHT for Multi-Attribute Range Queries with Performance Analysis (2012)
   Resource discovery is critical to the usability and accessibility of grid computing systems. Distributed Hash Table (DHT) has been applied to grid systems as a distributed mechanism for providing scalable range-query and multi-attribute resource discovery. Multi-DHT-based approaches depend on multiple DHT networks with each network responsible for a single attribute. Single-DHT-based approaches keep the
                            resource information of all attributes in a single node.
                            Both classes of approaches lead to high overhead. In this
                            paper, we propose a Low-Overhead Range-query Multi-
                            attribute (LORM) DHT-based resource discovery
                            approach. Unlike other DHT-based approaches, LORM
                            relies on a single compound graph-based DHT network
                            and distributes resource information among nodes in
                            balance by taking advantage of the compound graph
                            structure. Moreover, it has high capability to handle the
                            large-scale and dynamic characteristics of resources in
                            grids. Experimental results demonstrate the efficiency of
                            LORM in comparison with other resource discovery
                            approaches. LORM dramatically reduces maintenance
                            and resource discovery overhead. In addition, it yields
                            significant improvements in resource location efficiency.
                            We also analyze the performance of the LORM approach
                            rigorously by comparing it with other multi-DHT-based
                            and single-DHT-based approaches with respect to their
                            overhead and efficiency. The analytical results are
                            consistent with experimental results, and prove the
                            superiority of the LORM approach in theory
15. Exploiting Excess Capacity to Improve Robustness of WDM Mesh Networks (2012)
   Excess capacity (EC) is the unused capacity in a network. We propose EC management techniques to improve network performance. Our techniques exploit the EC in two ways. First, a connection preprovisioning
                            algorithm is used to reduce the connection setup time.
                            Second, whenever possible, we use protection schemes
                            that have higher availability and shorter protection
                            switching time. Specifically, depending on the amount of
                            EC available in the network, our proposed EC
                            management        techniques     dynamically      migrate
                            connections between high-availability, high-backup-
                            capacity protection schemes and low-availability, low-
                            backup-capacity protection schemes. Thus, multiple
                            protection schemes can coexist in the network. The four
                            EC management techniques studied in this paper differ in
                            two respects: when the connections are migrated from
one protection scheme to another, and which connections
                          are migrated. Specifically, Lazy techniques migrate
                          connections only when necessary, whereas Proactive
                          techniques migrate connections to free up capacity in
                          advance. Partial Backup Reprovisioning (PBR)
                          techniques try to migrate a minimal set of connections,
                          whereas Global Backup Reprovisioning (GBR)
                          techniques migrate all connections. We develop integer
                          linear program (ILP) formulations and heuristic
                          algorithms for the EC management techniques. We then
                          present numerical examples to illustrate how the EC
                          management techniques improve network performance
                          by exploiting the EC in wavelength-division-
                          multiplexing (WDM) mesh networks
16. Revisiting Dynamic Query Protocols in Unstructured Peer-to-Peer Networks (2012)
   In unstructured peer-to-peer networks, the average response latency and traffic cost of a query are two main performance metrics. Controlled-flooding resource query algorithms are widely used in unstructured networks such
                          as peer-to-peer networks. In this paper, we propose a
                          novel algorithm named Selective Dynamic Query (SDQ).
                          Based on mathematical programming, SDQ calculates
                          the optimal combination of an integer TTL value and a
                          set of neighbors to control the scope of the next query.
                          Our results demonstrate that SDQ provides finer grained
                          control than other algorithms: its response latency is
                          close to the well-known minimum one via Expanding
Ring; at the same time, its traffic cost is also close to the minimum. To the best of our knowledge, this is the first work
                          capable of achieving a best trade-off between response
                          latency and traffic cost.

17. Adaptive Opportunistic Routing for Wireless Ad Hoc Networks (2012)
   A distributed adaptive opportunistic routing scheme for multihop wireless ad hoc networks is proposed. The proposed scheme utilizes a reinforcement learning framework to opportunistically route the packets even in
                            the absence of reliable knowledge about channel
                            statistics and network model. This scheme is shown to be
                            optimal with respect to an expected average per-packet
                            reward criterion. The proposed routing scheme jointly
                            addresses the issues of learning and routing in an
                            opportunistic context, where the network structure is
                            characterized by the transmission success probabilities.
                            In particular, this learning framework leads to a
                            stochastic routing scheme that optimally “explores” and
                            “exploits” the opportunities in the network.
18. Design, Implementation, and Performance of a Load Balancer for SIP Server Clusters (2012)
   This load balancer improves both throughput and response time versus a single node, while exposing a single interface to external clients. The algorithm, Transaction Least-Work-Left (TLWL), achieves its performance by integrating several features: knowledge of the SIP protocol; dynamic estimates of
                           back-end server load; distinguishing transactions from
                           calls; recognizing variability in call length; and
                           exploiting differences in processing costs for different
                           SIP transactions.
19. Router Support for Fine-Grained Latency Measurements (2012)
   An increasing number of datacenter network applications, including automated trading and high-performance computing, have stringent end-to-end latency requirements where even microsecond variations
                           may be intolerable. The resulting fine-grained
                           measurement demands cannot be met effectively by
                           existing technologies, such as SNMP, NetFlow, or active
probing. Instrumenting routers with a hash-based primitive called the Lossy Difference Aggregator (LDA) has been proposed to measure latencies down to tens of microseconds even in the presence of packet loss. (A rough LDA-style sketch appears after this table.) Because LDA does not modify or encapsulate the
                           packet, it can be deployed incrementally without changes
                           along the forwarding path. When compared to Poisson-
                           spaced active probing with similar overheads, LDA
                           mechanism delivers orders of magnitude smaller relative
                           error. Although ubiquitous deployment is ultimately
                           desired, it may be hard to achieve in the shorter term
20. A Framework for Routing Assisted Traffic Monitoring (2012)
   Monitoring transit traffic at one or more points in a network is of interest to network operators for reasons of traffic accounting, debugging or troubleshooting, forensics, and traffic engineering. Previous research in
                           the area has focused on deriving a placement of monitors
                           across the network towards the end of maximizing the
                           monitoring utility of the network operator for a given
                           traffic routing. However, both traffic characteristics and
                           measurement objectives can dynamically change over
                           time, rendering a previously optimal placement of
                           monitors suboptimal. It is not feasible to dynamically
                           redeploy/reconfigure measurement infrastructure to cater
                           to such evolving measurement requirements. This
                           problem is addressed by strategically routing traffic sub-
populations over fixed monitors. This approach is called MeasuRouting. The main challenge for MeasuRouting is
                           to work within the constraints of existing intra-domain
                           traffic engineering operations that are geared for
                           efficiently utilizing bandwidth resources, or meeting
Quality of Service (QoS) constraints, or both. A
                             fundamental feature of intra-domain routing, that makes
                             MeasuRouting feasible, is that intra-domain routing is
often specified for aggregate flows. MeasuRouting can therefore differentially route components of an aggregate flow while ensuring that the aggregate placement is compliant with the original traffic engineering
                             objectives.
21. Independent Directed Acyclic Graphs for Resilient Multipath Routing (2012)
   In order to achieve resilient multipath routing, we introduce the concept of Independent Directed Acyclic Graphs (IDAGs) in this study. Link-independent (node-independent) DAGs satisfy the property that any path
                             from a source to the root on one DAG is link-disjoint
                             (node-disjoint) with any path from the source to the root
                             on the other DAG. Given a network, we develop
                             polynomial time algorithms to compute link-independent
                             and node-independent DAGs. The algorithm developed
                             in this paper: (1) provides multipath routing; (2) utilizes
                             all possible edges; (3) guarantees recovery from single
                             link failure; and (4) achieves all these with at most one
                             bit per packet as overhead when routing is based on
                             destination address and incoming edge. We show the
                             effectiveness of the proposed IDAGs approach by
                             comparing key performance indices to that of the
                             independent trees and multiple pairs of independent trees
                             techniques through extensive simulations
22. A Greedy Link Scheduler for Wireless Networks With Gaussian Multiple-Access and Broadcast Channels (2012)
   Information-theoretic broadcast channels (BCs) and multiple-access channels (MACs) enable a single node to transmit data simultaneously to multiple nodes, and multiple nodes to transmit data simultaneously to a single node, respectively. In this paper, we address the problem of link scheduling in multihop wireless networks
                             containing nodes with BC and MAC capabilities. We
                             first propose an interference model that extends protocol
                             interference models, originally designed for point-to-
                             point channels, to include the possibility of BCs and
                             MACs. Due to the high complexity of optimal link
                             schedulers, we introduce the Multiuser Greedy
                             Maximum Weight algorithm for link scheduling in
                             multihop wireless networks containing BCs and MACs.
                             Given a network graph, we develop new local pooling
                             conditions and show that the performance of our
                             algorithm can be fully characterized using the associated
                             parameter, the multiuser local pooling factor. We provide
                             examples of some network graphs, on which we apply
                             local pooling conditions and derive the multiuser local
pooling factor. We prove optimality of our algorithm in
                             tree networks and show that the exploitation of BCs and
                             MACs improve the throughput performance considerably
                             in multihop wireless networks.

23. A Quantization Theoretic Perspective on Simulcast and Layered Multicast Optimization (2012)
   We consider rate optimization in multicast systems that use several multicast trees on a communication network. The network is shared between different applications. For that reason, we model the available bandwidth for multicast as stochastic. For specific network topologies,
                             we show that the multicast rate optimization problem is
                             equivalent to the optimization of scalar quantization. We
                             use results from rate-distortion theory to provide a bound
                             on the achievable performance for the multicast rate
                             optimization problem. A large number of receivers
                             makes the possibility of adaptation to changing network
                             conditions desirable in a practical system. To this end,
                             we derive an analytical solution to the problem that is
                             asymptotically optimal in the number of multicast trees.
                             We derive local optimality conditions, which we use to
                             describe a general class of iterative algorithms that give
                             locally optimal solutions to the problem. Simulation
                             results are provided for the multicast of an i.i.d. Gaussian
                             process, an i.i.d. Laplacian process, and a video source.
24. Bit Weaving: A Non-Prefix Approach to Compressing Packet Classifiers in TCAMs (2012)
   Ternary Content Addressable Memories (TCAMs) have become the de facto standard in industry for fast packet classification. Unfortunately, TCAMs have limitations of small capacity, high power consumption, high heat
                             generation, and high cost. The well-known range
                             expansion problem exacerbates these limitations as each
                             classifier rule typically has to be converted to multiple
                             TCAM rules. One method for coping with these
                             limitations is to use compression schemes to reduce the
                             number of TCAM rules required to represent a classifier.
                             Unfortunately, all existing compression schemes only
                             produce prefix classifiers. Thus, they all miss the
                             compression opportunities created by non-prefix ternary
                             classifiers.
25. Cooperative Profit Sharing in Coalition-Based Resource Allocation in Wireless Networks (2012)
   We consider a network in which several service providers offer wireless access service to their respective subscribed customers through potentially multi-hop routes. If providers cooperate, i.e., pool their resources, such as spectrum and base stations, and agree to serve
                             each others' customers, their aggregate payoffs, and
                             individual shares, can potentially substantially increase
                             through efficient utilization of resources and statistical
multiplexing. The potential of such cooperation can
                           however be realized only if each provider intelligently
                           determines who it would cooperate with, when it would
                           cooperate, and how it would share its resources during
                           such cooperation. Also, when the providers share their
                           aggregate revenues, developing a rational basis for such
                           sharing is imperative for the stability of the coalitions.
                           We model such cooperation using transferable payoff
                           coalitional game theory. We first consider the scenario
                           that locations of the base stations and the channels that
each provider can use have already been decided a priori.
                           We show that the optimum cooperation strategy, which
                           involves the allocations of the channels and the base
                           stations to mobile customers, can be obtained as
                           solutions of convex optimizations. We next show that the
                           grand coalition is stable in this case, i.e. if all providers
                           cooperate, there is always an operating point that
                           maximizes the providers' aggregate payoff, while
                           offering each such a share that removes any incentive to
                           split from the coalition. Next, we show that when the
                           providers can choose the locations of their base stations
                           and decide which channels to acquire, the above results
                           hold in important special cases. Finally, we examine
                           cooperation when providers do not share their payoffs,
                           but still share their resources so as to enhance individual
                           payoffs. We show that the grand coalition continues to be
                           stable.
26. CSMA/CN: Carrier Sense Multiple Access With Collision Notification (2012)
   A wireless transmitter learns of a packet loss and infers collision only after completing the entire transmission. If the transmitter could detect the collision early [such as with carrier sense multiple access with collision detection
                           (CSMA/CD) in wired networks], it could immediately
                           abort its transmission, freeing the channel for useful
                           communication. There are two main hurdles to realize
                           CSMA/CD in wireless networks. First, a wireless
                           transmitter cannot simultaneously transmit and listen for
                           a collision. Second, any channel activity around the
                           transmitter may not be an indicator of collision at the
                           receiver. This paper attempts to approximate CSMA/CD
                           in wireless networks with a novel scheme called
                           CSMA/CN (collision notification). Under CSMA/CN,
                           the receiver uses PHY-layer information to detect a
                           collision and immediately notifies the transmitter. The
                           collision notification consists of a unique signature, sent
                           on the same channel as the data. The transmitter employs
                           a listener antenna and performs signature correlation to
discern this notification. Once discerned, the transmitter
                           immediately aborts the transmission. We show that the
                           notification signature can be reliably detected at the
                           listener antenna, even in the presence of a strong self-
                           interference from the transmit antenna. A prototype
                           testbed of 10 USRP/GNU Radios demonstrates the
                           feasibility and effectiveness of CSMA/CN.

27. Dynamic Power Allocation Under Arbitrarily Varying Channels—An Online Approach (2012)
   A major problem in wireless networks is coping with limited resources, such as bandwidth and energy. These issues become a major algorithmic challenge in view of the dynamic nature of the wireless domain. We consider in this paper the single-transmitter power assignment
                           problem under time-varying channels, with the objective
                           of maximizing the data throughput. It is assumed that the
                           transmitter has a limited power budget, to be sequentially
                           divided during the lifetime of the battery. We deviate
                           from the classic work in this area, which leads to explicit
                           "water-filling" solutions, by considering a realistic
                           scenario where the channel state quality changes
                           arbitrarily from one transmission to the other. The
                           problem is accordingly tackled within the framework of
                           competitive analysis, which allows for worst case
                           performance guarantees in setups with arbitrarily varying
                           channel conditions. We address both a "discrete" case,
                           where the transmitter can transmit only at a fixed power
                           level, and a "continuous" case, where the transmitter can
                           choose any power level out of a bounded interval. For
                           both cases, we propose online power-allocation
                           algorithms with proven worst-case performance bounds.
                           In addition, we establish lower bounds on the worst-case
                           performance of any online algorithm, and show that our
                           proposed algorithms are optimal.
28. Economic Issues in Shared Infrastructures (2012)
   In designing and managing a shared infrastructure, one must take account of the fact that its participants will
                           make self-interested and strategic decisions about the
                           resources that they are willing to contribute to it and/or
                           the share of its cost that they are willing to bear. Taking
                           proper account of the incentive issues that thereby arise,
                           we design mechanisms that, by eliciting appropriate
                           information from the participants, can obtain for them
                           maximal social welfare, subject to charging payments
                           that are sufficient to cover costs. We show that there are
                           incentivizing roles to be played both by the payments
                           that we ask from the participants and the specification of
                           how resources are to be shared. New in this paper is our
formulation of models for designing optimal
                            management policies, our analysis that demonstrates the
                            inadequacy of simple sharing policies, and our proposals
                            for some better ones. We learn that simple policies may
                            be far from optimal and that efficient policy design is not
                            trivial. However, we find that optimal policies have
                            simple forms in the limit as the number of participants
                            becomes large.
29. On New Approaches of Assessing Network Vulnerability: Hardness and Approximation (2012)
   Society relies heavily on its networked physical infrastructure and information systems. Accurately assessing the vulnerability of these systems against disruptive events is vital for planning and risk management. Existing approaches to vulnerability
                            assessments of large-scale systems mainly focus on
                            investigating inhomogeneous properties of the
                            underlying graph elements. These measures and the
                            associated heuristic solutions are limited in evaluating
                            the vulnerability of large-scale network topologies.
                            Furthermore, these approaches often fail to provide
                            performance guarantees of the proposed solutions. In this
                            paper, we propose a vulnerability measure, pairwise
                            connectivity, and use it to formulate network
                            vulnerability assessment as a graph-theoretical
optimization problem, referred to as β-disruptor. The
                            objective is to identify the minimum set of critical
                            network elements, namely nodes and edges, whose
                            removal results in a specific degradation of the network
                            global pairwise connectivity. We prove the NP-
                            completeness and inapproximability of this problem and
propose a pseudo-approximation algorithm for computing the set of critical nodes and a pseudo-approximation algorithm for computing the set of critical
                            edges. The results of an extensive simulation-based
                            experiment show the feasibility of our proposed
                            vulnerability assessment framework and the efficiency of
                            the proposed approximation algorithms in comparison to
                            other approaches.
30. Quantifying Video-QoE Degradations of Internet Links (2012)
   With the proliferation of multimedia content on the Internet, there is an increasing demand for video streams with high perceptual quality. The capability of present-
                            day Internet links in delivering high-perceptual-quality
                            streaming services, however, is not completely
                            understood. Link-level degradations caused by
                            intradomain routing policies and inter-ISP peering
                            policies are hard to obtain, as Internet service providers
                            often     consider     such     information    proprietary.
Understanding link-level degradations will enable us in
                           designing future protocols, policies, and architectures to
                           meet the rising multimedia demands. This paper presents
                           a trace-driven study to understand quality-of-experience
                           (QoE) capabilities of present-day Internet links using 51
                           diverse ISPs with a major presence in the US, Europe,
                           and Asia-Pacific. We study their links from 38 vantage
                           points in the Internet using both passive tracing and
                           active probing for six days. We provide the first
                           measurements of link-level degradations and case studies
                           of intra-ISP and inter-ISP peering links from a
                           multimedia standpoint. Our study offers surprising
                           insights into intradomain traffic engineering, peering link
                           loading, BGP, and the inefficiencies of using
                           autonomous system (AS)-path lengths as a routing
                           metric. Though our results indicate that Internet routing
                           policies are not optimized for delivering high-perceptual-
                           quality streaming services, we argue that alternative
                           strategies such as overlay networks can help meet QoE
                           demands over the Internet. Streaming services apart, our
                           Internet measurement results can be used as an input to a
                           variety of research problems.

31. Order Matters: Transmission Reordering in Wireless Networks (2012)
   Modern wireless interfaces support a physical-layer capability called Message in Message (MIM). Briefly, MIM allows a receiver to disengage from an ongoing reception and engage onto a stronger incoming signal.
                           Links that otherwise conflict with each other can be
                           made concurrent with MIM. However, the concurrency is
                           not immediate and can be achieved only if conflicting
                           links begin transmission in a specific order. The
                           importance of link order is new in wireless research,
                           motivating MIM-aware revisions to link-scheduling
                           protocols. This paper identifies the opportunity in MIM-
                           aware reordering, characterizes the optimal improvement
                           in throughput, and designs a link-layer protocol for
                           enterprise wireless LANs to achieve it. Testbed and
                           simulation results confirm the performance gains of the
                           proposed system.
32. Static Routing and Wavelength Assignment for Multicast Advance Reservation in All-Optical Wavelength-Routed WDM Networks (2012)
   In this paper, we investigate the static multicast advance reservation (MCAR) problem for all-optical wavelength-routed WDM networks. Under the advance reservation traffic model, connection requests specify their start time to be some time in the future and also specify their holding times. We investigate the static MCAR problem where the set of advance reservation requests is known ahead of time. We prove the MCAR problem is NP-
                             complete, formulate the problem mathematically as an
                             integer linear program (ILP), and develop three efficient
                             heuristics, seqRWA, ISH, and SA, to solve the problem
                             for practical size networks. We also introduce a
                             theoretical lower bound on the number of wavelengths
                             required. To evaluate our heuristics, we first compare
                             their performances to the ILP for small networks, and
                             then simulate them over real-world, large-scale networks.
                             We find the SA heuristic provides close to optimal
                             results compared to the ILP for our smaller networks, and
                             up to a 33% improvement over seqRWA and up to a 22%
                             improvement over ISH on realistic networks. SA
                             provides, on average, solutions 1.5-1.8 times the cost
                             given by our conservative lower bound on large
                             networks.
33. System-Level Optimization in Wireless Networks: Managing Interference and Uncertainty via Robust Optimization (2012)
   We consider a robust-optimization-driven system-level approach to interference management in a cellular broadband system operating in an interference-limited and highly dynamic regime. Here, base stations in neighboring cells (partially) coordinate their transmission schedules in an attempt to avoid simultaneous max-
                             power transmission to their mutual cell edge. Limits on
                             communication overhead and use of the backhaul require
                             base station coordination to occur at a slower timescale
                             than the customer arrival process. The central challenge
                             is to properly structure coordination decisions at the slow
                             timescale, as these subsequently restrict the actions of
                             each base station until the next coordination period.
                             Moreover, because coordination occurs at the slower
                             timescale, the statistics of the arriving customers, e.g.,
the load, are typically only approximately known; thus,
                             this coordination must be done with only approximate
                             knowledge of statistics. We show that performance of
                             existing approaches that assume exact knowledge of
                             these statistics can degrade rapidly as the uncertainty in
                             the arrival process increases. We show that a two-stage
                             robust optimization framework is a natural way to model
                             two-timescale decision problems. We provide tractable
                             formulations for the base-station coordination problem
                             and show that our formulation is robust to fluctuations
                             (uncertainties) in the arriving load. This tolerance to load
                             fluctuation also serves to reduce the need for frequent
                             reoptimization across base stations, thus helping
                             minimize the communication overhead required for
                             system-level interference reduction. Our robust
optimization formulations are flexible, allowing us to
                             control the conservatism of the solution. Our simulations
                             show that we can build in robustness without significant
                             degradation of nominal performance.
34. The Case for Feed-Forward Clock Synchronization (2012)
   Variable latencies due to communication delays or system noise are the central challenge faced by time-keeping algorithms when synchronizing over the
                             network. Using extensive experiments, we explore the
                             robustness of synchronization in the face of both normal
                             and extreme latency variability and compare the
                             feedback approaches of ntpd and ptpd (a software
                             implementation of IEEE-1588) to the feed-forward
                             approach of the RADclock and advocate for the benefits
                             of a feed-forward approach. Noting the current lack of
                             kernel support, we present extensions to existing
                             mechanisms in the Linux and FreeBSD kernels giving
                             full access to all available raw counters, and then
                             evaluate the TSC, HPET, and ACPI counters' suitability
                             as hardware timing sources. We demonstrate how the
                             RADclock achieves the same microsecond accuracy with
                             each counter.
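
The Java sketches below are our own additions for a few of the titles above; each is a simplified, self-contained illustration of a well-known building block, not the authors' implementation. First, for title 4 (Bloom Cast): the scheme is built around Bloom filters, so this sketch shows only a plain Bloom filter for keyword membership. The filter size, hash count, and hash mixing are illustrative choices.

```java
import java.util.BitSet;

// A plain Bloom filter, shown only as the building block behind Bloom Cast (title 4).
// The replication/multicast protocol of the paper is not reproduced here.
public class BloomFilterSketch {
    private final BitSet bits;
    private final int m;   // number of bits
    private final int k;   // number of hash functions

    BloomFilterSketch(int m, int k) {
        this.m = m;
        this.k = k;
        this.bits = new BitSet(m);
    }

    private int hash(String term, int i) {
        // Double hashing: h_i(x) = h1(x) + i * h2(x) mod m.
        int h1 = term.hashCode();
        int h2 = 0x9E3779B9 ^ Integer.rotateLeft(h1, 15);
        return Math.floorMod(h1 + i * h2, m);
    }

    void add(String term) {
        for (int i = 0; i < k; i++) bits.set(hash(term, i));
    }

    boolean mightContain(String term) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(hash(term, i))) return false;   // definitely absent
        }
        return true;                                      // present with high probability
    }

    public static void main(String[] args) {
        BloomFilterSketch filter = new BloomFilterSketch(1 << 16, 5);
        for (String t : new String[]{"peer", "overlay", "retrieval"}) filter.add(t);
        System.out.println(filter.mightContain("overlay"));   // true
        System.out.println(filter.mightContain("gossip"));    // almost certainly false
    }
}
```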
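For title 8 (Extrema Propagation), a minimal sketch of the estimator as we understand it from the abstract: each node draws K exponential variates with rate equal to its local value, nodes merge sketches by element-wise minimum (which makes the exchange duplicate-insensitive), and the sum is estimated from the merged minima. The constant K and the merge loop below are illustrative assumptions, not the paper's protocol.

```java
import java.util.Arrays;
import java.util.Random;

// Illustration of the Extrema Propagation idea (title 8): duplicate-insensitive
// minima of exponential variates yield an estimate of the sum of node values.
public class ExtremaPropagationSketch {

    static final int K = 200;            // variates per node (accuracy knob, our choice)
    static final Random RNG = new Random(42);

    // Draw K exponential samples with rate lambda equal to the node's local value.
    static double[] localSketch(double value) {
        double[] x = new double[K];
        for (int i = 0; i < K; i++) {
            x[i] = -Math.log(1.0 - RNG.nextDouble()) / value;   // Exp(value)
        }
        return x;
    }

    // Merge two sketches by element-wise minimum (order- and duplicate-insensitive).
    static double[] merge(double[] a, double[] b) {
        double[] m = new double[K];
        for (int i = 0; i < K; i++) m[i] = Math.min(a[i], b[i]);
        return m;
    }

    // Estimate the sum of all node values from the merged sketch.
    static double estimateSum(double[] merged) {
        double s = Arrays.stream(merged).sum();
        return (K - 1) / s;              // unbiased estimator of the summed rates
    }

    public static void main(String[] args) {
        double[] values = {3.0, 1.5, 7.2, 0.8, 4.5};   // toy node values
        double[] agg = localSketch(values[0]);
        for (int i = 1; i < values.length; i++) {
            agg = merge(agg, localSketch(values[i]));   // flooding/gossip would do this pairwise
        }
        System.out.printf("true sum = %.2f, estimate = %.2f%n",
                Arrays.stream(values).sum(), estimateSum(agg));
    }
}
```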
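For title 19 (Router Support for Fine-Grained Latency Measurements), a rough sketch of the Lossy Difference Aggregator idea mentioned in the abstract: sender and receiver hash each packet into a bucket, keep per-bucket timestamp sums and packet counts, and average the per-bucket differences wherever the counts agree, so buckets polluted by packet loss are discarded. The bucket count and hashing are illustrative, not the paper's parameters.

```java
import java.util.zip.CRC32;

// Rough sketch of a Lossy Difference Aggregator (LDA)-style delay estimator (title 19).
public class LdaSketch {
    static final int BUCKETS = 64;

    final double[] timestampSum = new double[BUCKETS];
    final long[] packetCount = new long[BUCKETS];

    // Shared hash of the packet identifier decides the bucket on both routers.
    static int bucketOf(long packetId) {
        CRC32 crc = new CRC32();
        for (int i = 0; i < 8; i++) crc.update((int) (packetId >>> (8 * i)) & 0xFF);
        return (int) (crc.getValue() % BUCKETS);
    }

    // Record one packet observation (called at the sender and at the receiver).
    void record(long packetId, double timestampMicros) {
        int b = bucketOf(packetId);
        timestampSum[b] += timestampMicros;
        packetCount[b]++;
    }

    // Average delay over buckets whose counts agree; mismatched buckets are dropped.
    static double averageDelay(LdaSketch sender, LdaSketch receiver) {
        double diff = 0;
        long matched = 0;
        for (int b = 0; b < BUCKETS; b++) {
            if (sender.packetCount[b] == receiver.packetCount[b] && sender.packetCount[b] > 0) {
                diff += receiver.timestampSum[b] - sender.timestampSum[b];
                matched += sender.packetCount[b];
            }
        }
        return matched == 0 ? Double.NaN : diff / matched;
    }

    public static void main(String[] args) {
        LdaSketch tx = new LdaSketch(), rx = new LdaSketch();
        for (long id = 0; id < 10_000; id++) {
            double sent = id * 10.0;
            tx.record(id, sent);
            if (id % 997 != 0) rx.record(id, sent + 25.0);   // ~25 us delay, occasional loss
        }
        System.out.printf("estimated one-way delay: %.2f us%n", averageDelay(tx, rx));
    }
}
```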


TECHNOLOGY                 : JAVA

DOMAIN                     : IEEE TRANSACTIONS ON NETWORK SECURITY



1. Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs (2012)
   We have designed and implemented TARF, a robust trust-aware routing framework for dynamic wireless sensor networks (WSNs). Without tight time synchronization or known geographic information, TARF provides trustworthy and energy-efficient routes.
                             Most importantly, TARF proves effective against those
                             harmful attacks developed out of identity deception; the
                             resilience of TARF is verified through extensive
                             evaluation with both simulation and empirical
                             experiments on large-scale WSNs under various
                             scenarios including mobile and RF-shielding network
                             conditions.
2. Risk-Aware Mitigation for MANET Routing Attacks (2012)
   In this paper, we propose a risk-aware response mechanism to systematically cope with the identified routing attacks. Our risk-aware approach is based on an extended Dempster-Shafer mathematical theory of evidence, introducing a notion of importance factors. (A sketch of Dempster's rule of combination appears after this table.)
3. Survivability Experiment and Attack Characterization for RFID (2012)
   In this paper, we study survivability issues for RFID. We first present an RFID survivability experiment to define a foundation to measure the degree of survivability of an RFID system under varying attacks. Then we model a series of malicious scenarios using stochastic process
                              algebras and study the different effects of those attacks
                              on the ability of the RFID system to provide critical
                              services even when parts of the system have been
                              damaged.
4. Detecting and Resolving Firewall Policy Anomalies (2012)
   In this paper, we present an innovative policy anomaly management framework for firewalls, adopting a rule-based segmentation technique to identify policy anomalies and derive effective anomaly resolutions. In particular, we articulate a grid-based representation technique, providing an intuitive cognitive sense about policy anomaly.
5. Automatic Reconfiguration for Large-Scale Reliable Storage Systems (2012)
   In this paper, we present a complete solution for dynamically changing system membership in a large-scale Byzantine-fault-tolerant system. We present a service that tracks system membership and periodically notifies other system nodes of changes.
6. Detecting Anomalous Insiders in Collaborative Information Systems (2012)
   In this paper, we introduce the community anomaly detection system (CADS), an unsupervised learning framework to detect insider threats based on the access logs of collaborative environments. The framework is based on the observation that typical CIS users tend to form community structures based on the subjects accessed.
7. An Extended Visual Cryptography Algorithm for General Access Structures (2012)
   Conventional visual secret sharing schemes generate noise-like random pixels on shares to hide secret images. This suffers from a management problem. In this paper, we propose a general approach to solve the above-
                              mentioned problems; the approach can be used for binary
secret images in non-computer-aided decryption environments. (A basic (2, 2) visual secret sharing sketch appears after this table.)
8. Mitigating Distributed Denial of Service Attacks in Multiparty Applications in the Presence of Clock Drift (2012)
   In this paper, we extend port-hopping to support multiparty applications by proposing the BIGWHEEL algorithm, for each application server to communicate with multiple clients in a port-hopping manner without the need for group synchronization. Furthermore, we present an adaptive algorithm, HOPERAA, for enabling
                              hopping in the presence of bounded asynchrony, namely,
                              when the communicating parties have clocks with clock
drifts. (A basic port-hopping sketch appears after this table.)
9. On the Security and Efficiency of Content Distribution via Network Coding (2012)
   Content distribution via network coding has received a lot of attention lately. However, direct application of network coding may be insecure. In particular, attackers can inject "bogus" data to corrupt the content distribution
                             process so as to hinder the information dispersal or even
                             deplete the network resource. Therefore, content
                             verification is an important and practical issue when
                             network coding is employed.
10. Packet-Hiding Methods for Preventing Selective Jamming Attacks (2012)
   In this paper, we address the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance.
11. Stochastic Model of Multivirus Dynamics (2012)
12. Peering Equilibrium Multipath Routing: A Game Theory Framework for Internet Peering Settlements (2012)
   Our scheme relies on game-theoretic modeling, with a non-cooperative potential game considering both routing and congestion costs. We compare different PEMP policies to BGP Multipath schemes by emulating a realistic peering scenario.
13. Modeling and Detection of Camouflaging Worm (2012)
   Our scheme uses the Power Spectral Density (PSD) distribution of the scan traffic volume and its corresponding Spectral Flatness Measure (SFM) to distinguish the C-Worm traffic from background traffic.
                             The performance data clearly demonstrates that our
                             scheme can effectively detect the C-Worm
                             propagation.two heuristic algorithms for the two sub
                             problems.
14. Analysis of a Botnet Takeover (2012)
   We present the design of an advanced hybrid peer-to-peer botnet. Compared with current botnets, the proposed
botnet is harder to shut down, monitor, and hijack. It provides individualized encryption and
                             control traffic dispersion.
15. Efficient Network Modification to Improve QoS Stability at Failures (2012)
   As real-time traffic such as video or voice increases on the Internet, ISPs are required to provide stable quality as well as connectivity at failures. For ISPs, how to effectively improve the stability of these qualities at
                             failures with the minimum investment cost is an
                             important issue, and they need to effectively select a
                             limited number of locations to add link facilities.
16. Detecting Spam Zombies by Monitoring Outgoing Messages (2012)
   Compromised machines are one of the key security threats on the Internet; they are often used to launch various security attacks such as spamming and spreading malware, DDoS, and identity theft. Given that spamming
                             provides a key economic incentive for attackers to recruit
                             the large number of compromised machines, we focus on
                             the detection of the compromised machines in a network
that are involved in the spamming activities, commonly
                            known as spam zombies. We develop an effective spam
                            zombie detection system named SPOT by monitoring
                            outgoing messages of a network. SPOT is designed based
                            on a powerful statistical tool called Sequential
                            Probability Ratio Test, which has bounded false positive
                            and false negative error rates. In addition, we also
                            evaluate the performance of the developed SPOT system
                            using a two-month e-mail trace collected in a large US
                            campus network. Our evaluation studies show that SPOT
                            is an effective and efficient system in automatically
                            detecting compromised machines in a network. For
                            example, among the 440 internal IP addresses observed
                            in the e-mail trace, SPOT identifies 132 of them as being
                            associated with compromised machines. Out of the 132
                            IP addresses identified by SPOT, 126 can be either
                            independently confirmed (110) or highly likely (16) to be
                            compromised. Moreover, only seven internal IP
                            addresses associated with compromised machines in the
                            trace are missed by SPOT. In addition, we also compare
                            the performance of SPOT with two other spam zombie
                            detection algorithms based on the number and percentage
                            of spam messages originated or forwarded by internal
                            machines, respectively, and show that SPOT outperforms
                            these two detection algorithms.
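As a rough illustration of the Sequential Probability Ratio Test that SPOT builds on (not the authors' implementation), the Java sketch below accumulates a log-likelihood ratio over a machine's outgoing messages and stops once it crosses thresholds derived from the target false positive rate alpha and false negative rate beta; the spam probabilities p0 and p1 are assumed, illustrative parameters.

// Hypothetical SPRT sketch: each outgoing message is labeled spam/non-spam by a
// content filter, and the cumulative log-likelihood ratio decides between
//   H1: the machine is compromised (spam probability p1)
//   H0: the machine is normal      (spam probability p0)
public class SprtDetector {
    private final double lnSpam, lnHam, upper, lower;
    private double llr = 0.0;

    public SprtDetector(double p0, double p1, double alpha, double beta) {
        lnSpam = Math.log(p1 / p0);              // contribution of a spam message
        lnHam  = Math.log((1 - p1) / (1 - p0));  // contribution of a non-spam message
        upper  = Math.log((1 - beta) / alpha);   // crossing this accepts H1 (compromised)
        lower  = Math.log(beta / (1 - alpha));   // crossing this accepts H0 (normal)
    }

    /** Returns "COMPROMISED", "NORMAL", or "PENDING" after observing one message. */
    public String observe(boolean isSpam) {
        llr += isSpam ? lnSpam : lnHam;
        if (llr >= upper) return "COMPROMISED";
        if (llr <= lower) return "NORMAL";
        return "PENDING";
    }
}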
17. A Hybrid Approach to    Real-world entities are not always represented by the 2012
    Private        Record   same set of features in different data sets. Therefore,
     Matching                 matching records of the same real-world entity
                              distributed across these data sets is a challenging task. If
                            the data sets contain private information, the problem
                            becomes even more difficult. Existing solutions to this
                            problem generally follow two approaches: sanitization
                            techniques and cryptographic techniques. We propose a
                            hybrid technique that combines these two approaches and
                            enables users to trade off between privacy, accuracy, and
                            cost. Our main contribution is the use of a blocking phase
                            that operates over sanitized data to filter out in a privacy-
                            preserving manner pairs of records that do not satisfy the
                            matching condition. We also provide a formal definition
                            of privacy and prove that the participants of our protocols
                            learn nothing other than their share of the result and what
                            can be inferred from their share of the result, their input
                            and sanitized views of the input data sets (which are
                            considered public information). Our method incurs
                            considerably lower costs than cryptographic techniques
and yields significantly more accurate matching results
                           compared to sanitization techniques, even when privacy
                           requirements are high.
18. ES-MPICH2:       A     An increasing number of commodity clusters are 2012
    Message    Passing     connected to each other by public networks, which have
    Interface     with     become a potential threat to security sensitive parallel
    Enhanced   Security    applications running on the clusters. To address this
                              security issue, we developed a Message Passing Interface
                              (MPI) implementation to preserve confidentiality of
                           messages communicated among nodes of clusters in an
                           unsecured network. We focus on MPI rather than other
                           protocols, because MPI is one of the most popular
                           communication protocols for parallel computing on
                           clusters. Our MPI implementation—called ES-
                           MPICH2—was built based on MPICH2 developed by the
                           Argonne National Laboratory. Like MPICH2, ES-
                           MPICH2 aims at supporting a large variety of
                           computation and communication platforms like
                           commodity clusters and high-speed networks. We
                           integrated encryption and decryption algorithms into the
                             MPICH2 library with the standard MPI interface and,
                           thus, data confidentiality of MPI applications can be
                           readily preserved without a need to change the source
                           codes of the MPI applications. MPI-application
                           programmers can fully configure any confidentiality
                             services in MPICH2, because a secured configuration
                           file in ES-MPICH2 offers the programmers flexibility in
                           choosing any cryptographic schemes and keys
                           seamlessly incorporated in ES-MPICH2. We used the
                           Sandia Micro Benchmark and Intel MPI Benchmark
                           suites to evaluate and compare the performance of ES-
                           MPICH2 with the original MPICH2 version. Our
                           experiments show that overhead incurred by the
                           confidentiality services in ES-MPICH2 is marginal for
                           small messages. The security overhead in ES-MPICH2
                           becomes more pronounced with larger messages. Our
                           results also show that security overhead can be
                           significantly reduced in ES-MPICH2 by high-
                           performance clusters.
19. Ensuring Distributed   Cloud computing enables highly scalable services to be 2012
    Accountability   for   easily consumed over the Internet on an as-needed basis.
    Data Sharing in the    A major feature of the cloud services is that users’ data
    Cloud                  are usually processed remotely in unknown machines
                           that users do not own or operate. While enjoying the
                           convenience brought by this new emerging technology,
                           users’ fears of losing control of their own data
(particularly, financial and health data) can become a
                             significant barrier to the wide adoption of cloud services.
                             To address this problem, here, we propose a novel highly
                             decentralized information accountability framework to
                             keep track of the actual usage of the users’ data in the
                             cloud. In particular, we propose an object-centered
                             approach that enables enclosing our logging mechanism
                             together with users’ data and policies. We leverage the
                             JAR programmable capabilities to both create a dynamic
                             and traveling object, and to ensure that any access to
                             users’ data will trigger authentication and automated
                             logging local to the JARs. To strengthen user’s control,
                             we also provide distributed auditing mechanisms. We
                             provide extensive experimental studies that demonstrate
                             the efficiency and effectiveness of the proposed
                             approaches.
20. BECAN:              A    Injecting false data attack is a well known serious threat 2012
    Bandwidth-Efficient      to wireless sensor network, for which an adversary
    Cooperative              reports bogus information to sink causing error decision
    Authentication           at upper level and energy waste in en-route nodes. In this
    Scheme for Filtering     paper, we propose a novel bandwidth-efficient
    Injected False Data in   cooperative authentication (BECAN) scheme for filtering
    Wireless       Sensor    injected false data. Based on the random graph
     Networks                 characteristics of sensor node deployment and the
                              cooperative bit-compressed authentication technique, the
                             proposed BECAN scheme can save energy by early
                             detecting and filtering the majority of injected false data
                             with minor extra overheads at the en-route nodes. In
                             addition, only a very small fraction of injected false data
                             needs to be checked by the sink, which thus largely
                             reduces the burden of the sink. Both theoretical and
                             simulation results are given to demonstrate the
                             effectiveness of the proposed scheme in terms of high
                             filtering probability and energy saving.
21. A Flexible Approach      There is an increasing need for fault tolerance 2012
    to Improving System      capabilities in logic devices brought about by the scaling
    Reliability      with    of transistors to ever smaller geometries. This paper
    Virtual Lockstep         presents a hypervisor-based replication approach that can
                             be applied to commodity hardware to allow for virtually
                             lockstepped execution. It offers many of the benefits of
                             hardware-based lockstep while being cheaper and easier
                             to implement and more flexible in the configurations
                             supported. A novel form of processor state fingerprinting
                             is also presented, which can significantly reduce the fault
                             detection latency. This further improves reliability by
                             triggering rollback recovery before errors are recorded to
a checkpoint. The mechanisms are validated using a full
                            prototype and the benchmarks considered indicate an
                            average performance overhead of approximately 14
                            percent with the possibility for significant optimization.
                            Finally, a unique method of using virtual lockstep for
                            fault injection testing is presented and used to show that
                            significant detection latency reduction is achievable by
                             comparing only a small amount of data across replicas.
22. A      Learning-Based   Despite the conventional wisdom that proactive security 2012
    Approach to Reactive    is superior to reactive security, we show that reactive
    Security                security can be competitive with proactive security as
                            long as the reactive defender learns from past attacks
                            instead of myopically overreacting to the last attack. Our
                            game-theoretic model follows common practice in the
                            security literature by making worst case assumptions
                            about the attacker: we grant the attacker complete
                            knowledge of the defender's strategy and do not require
                            the attacker to act rationally. In this model, we bound the
                            competitive ratio between a reactive defense algorithm
                            (which is inspired by online learning theory) and the best
                            fixed proactive defense. Additionally, we show that,
                            unlike proactive defenses, this reactive strategy is robust
                            to a lack of information about the attacker's incentives
                             and knowledge.
23. Automated Security                                                                      2012
    Test Generation with
    Formal Threat Models
24. Automatic               Byzantine-fault-tolerant replication enhances the 2012
    Reconfiguration  for    availability and reliability of Internet services that store
    Large-Scale Reliable    critical state and preserve it despite attacks or software
    Storage Systems         errors. However, existing Byzantine-fault-tolerant
                            storage systems either assume a static set of replicas, or
have limitations in how they handle reconfigurations
                         (e.g., in terms of the scalability of the solutions or the
                         consistency levels they provide). This can be problematic
                         in long-lived, large-scale systems where system
                         membership is likely to change during the system
                         lifetime. In this paper, we present a complete solution for
                         dynamically changing system membership in a large-
                         scale Byzantine-fault-tolerant system. We present a
                         service that tracks system membership and periodically
                         notifies other system nodes of membership changes. The
                         membership service runs mostly automatically, to avoid
                         human configuration errors; is itself Byzantine-fault-
                         tolerant and reconfigurable; and provides applications
                         with a sequence of consistent views of the system
                         membership. We demonstrate the utility of this
                         membership service by using it in a novel distributed
                         hash table called dBQS that provides atomic semantics
                         even across changes in replica sets. dBQS is interesting
                         in its own right because its storage algorithms extend
                         existing Byzantine quorum protocols to handle changes
                         in the replica set, and because it differs from previous
                         DHTs by providing Byzantine fault tolerance and
                         offering strong semantics. We implemented the
                         membership service and dBQS. Our results show that the
                         approach works well, in practice: the membership service
                         is able to manage a large system and the cost to change
                         the system membership is low.
25. JS-Reduce Defending Web queries, credit card transactions, and medical 2012
    Your     Data   from records are examples of transaction data flowing in
    Sequential           corporate data stores, and often revealing associations
    Background           between individuals and sensitive information. The serial
    Knowledge Attacks    release of these data to partner institutions or data
                         analysis centers in a nonaggregated form is a common
                         situation. In this paper, we show that correlations among
                         sensitive values associated to the same individuals in
                         different releases can be easily used to violate users'
                         privacy by adversaries observing multiple data releases,
                         even if state-of-the-art privacy protection techniques are
                         applied. We show how the above sequential background
                         knowledge can be actually obtained by an adversary, and
                         used to identify with high confidence the sensitive values
                         of an individual. Our proposed defense algorithm is
                         based on Jensen-Shannon divergence; experiments show
                         its superiority with respect to other applicable solutions.
                         To the best of our knowledge, this is the first work that
                         systematically investigates the role of sequential
background knowledge in serial release of transaction
                             data.
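For reference, the Jensen-Shannon divergence on which the defense is based can be computed for two discrete distributions as the average of their Kullback-Leibler divergences to the mixture distribution. The Java sketch below is a generic illustration of that formula, not the JS-Reduce algorithm itself.

// Illustrative Jensen-Shannon divergence between two discrete distributions p and q
// (same support, non-negative entries summing to 1).
public final class JensenShannon {
    private static double kl(double[] a, double[] b) {
        double d = 0.0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > 0) d += a[i] * Math.log(a[i] / b[i]);   // KL(a || b)
        }
        return d;
    }

    public static double divergence(double[] p, double[] q) {
        double[] m = new double[p.length];
        for (int i = 0; i < p.length; i++) m[i] = 0.5 * (p[i] + q[i]);  // mixture
        return 0.5 * kl(p, m) + 0.5 * kl(q, m);
    }
}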
26. Mitigating DistributedNetwork-based applications commonly open some 2012
    Denial of Service     known communication port(s), making themselves easy
    Attacks in Multiparty targets for (distributed) Denial of Service (DoS) attacks.
    Applications in the   Earlier solutions for this problem are based on port-
    Presence of Clock     hopping between pairs of processes which are
    Drifts                synchronous or exchange acknowledgments. However,
                          acknowledgments, if lost, can cause a port to be open for
                          longer time and thus be vulnerable, while time servers
                          can become targets to DoS attack themselves. Here, we
                          extend port-hopping to support multiparty applications,
                          by proposing the BIGWHEEL algorithm, for each
                          application server to communicate with multiple clients
                          in a port-hopping manner without the need for group
                          synchronization. Furthermore, we present an adaptive
                          algorithm, HOPERAA, for enabling hopping in the
                          presence of bounded asynchrony, namely, when the
                          communicating parties have clocks with clock drifts. The
                          solutions are simple, based on each client interacting
                          with the server independently of the other clients,
                          without the need of acknowledgments or time server(s).
                          Further, they do not rely on the application having a
                          fixed port open in the beginning, neither do they require
                           the clients to get a "first-contact" port from a third party.
                           We show analytically the properties of the algorithms
                           and also study experimentally their success rates,
                           confirming the relation with the analytical bounds.
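To make the port-hopping idea concrete, the hedged Java sketch below derives the port for each time slot from a pseudo-random function of a shared secret seed and the slot index, so client and server agree on the current port without exchanging acknowledgments; the port range, slot length, and stand-in PRF are illustrative assumptions, not the BIGWHEEL or HOPERAA algorithms themselves.

// Hedged port-hopping sketch: both endpoints compute the same port per time slot.
import java.util.Random;

public class PortHopping {
    private static final int BASE_PORT = 20000;   // assumed start of hopping range
    private static final int RANGE     = 10000;   // assumed number of usable ports
    private static final long SLOT_MS  = 5000;    // assumed hopping period

    public static int portForSlot(long sharedSeed, long slotIndex) {
        // A keyed PRF would be used in practice; seeded Random is only a stand-in.
        Random prf = new Random(sharedSeed ^ (slotIndex * 0x9E3779B97F4A7C15L));
        return BASE_PORT + prf.nextInt(RANGE);
    }

    public static int currentPort(long sharedSeed, long nowMillis) {
        return portForSlot(sharedSeed, nowMillis / SLOT_MS);
    }
}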
27. On the Security and Content distribution via network coding has received a 2012
    Efficiency of Content lot of attention lately. However, direct application of
    Distribution      via network coding may be insecure. In particular, attackers
     Network Coding        can inject "bogus" data to corrupt the content distribution
                          process so as to hinder the information dispersal or even
                          deplete the network resource. Therefore, content
                          verification is an important and practical issue when
                          network coding is employed. When random linear
                          network coding is used, it is infeasible for the source of
                          the content to sign all the data, and hence, the traditional
                          "hash-and-sign” methods are no longer applicable.
                          Recently, a new on-the-fly verification technique has
                          been proposed by Krohn et al. (IEEE S&P '04), which
                          employs a classical homomorphic hash function.
                           However, this technique is difficult to apply to network
                           coding because of its high computational and
                           communication overhead. We explore this issue further
                           by carefully analyzing different types of overhead, and
                           propose methods that help reduce both the computational
                           and communication cost while providing provable security
                           at the same time.
28. Security of Bertino-     Recently, Bertino, Shang and Wagstaff proposed a time- 2012
    Shang-Wagstaff Time-     bound hierarchical key management scheme for secure
    Bound     Hierarchical   broadcasting. Their scheme is built on elliptic curve
    Key      Management      cryptography and implemented with tamper-resistant
    Scheme for Secure        devices. In this paper, we present two collusion attacks
    Broadcasting             on Bertino-Shang-Wagstaff scheme. The first attack does
                             not need to compromise any decryption device, while the
                              second attack requires compromising only a single
                              decryption device. Both attacks are feasible and effective.
29. Survivability            Radio Frequency Identification (RFID) has been 2012
    Experiment         and   developed as an important technique for many high
    Attack                   security and high integrity settings. In this paper, we
    Characterization   for   study survivability issues for RFID. We first present an
    RFID                     RFID survivability experiment to define a foundation to
                             measure the degree of survivability of an RFID system
                             under varying attacks. Then we model a series of
                             malicious scenarios using stochastic process algebras and
                             study the different effects of those attacks on the ability
                             of the RFID system to provide critical services even
                             when parts of the system have been damaged. Our
                              simulation model relates its statistics to the attack
                             strategies and security recovery. The model helps system
                             designers and security specialists to identify the most
                             devastating attacks given the attacker's capacities and the
                             system's recovery abilities. The goal is to improve the
                             system survivability given possible attacks. Our model is
                             the first of its kind to formally represent and simulate
                             attacks on RFID systems and to quantitatively measure
                             the degree of survivability of an RFID system under
                             those attacks.
30. Persuasive        Cued   This paper presents an integrated evaluation of the 2012
    Click-Points Design,     Persuasive Cued Click-Points graphical password
    Implementation, and      scheme, including usability and security evaluations, and
    Evaluation     of    a   implementation considerations. An important usability
    Knowledge-      Based    goal for knowledge-based authentication systems is to
    Authentication           support users in selecting passwords of higher security,
    Mechanism                in the sense of being from an expanded effective security
                             space. We use persuasion to influence user choice in
                             click-based graphical passwords, encouraging users to
                             select more random, and hence more difficult to guess,
                             click-points.
31. Resilient             Modern computer systems are built on a foundation of 2012
       Authenticated         software components from a variety of vendors. While
       Execution of Critical critical applications may undergo extensive testing and
       Applications       in evaluation procedures, the heterogeneity of software
       Untrusted             sources threatens the integrity of the execution
       Environments          environment for these trusted programs. For instance, if
                             an attacker can combine an application exploit with a
                             privilege escalation vulnerability, the operating system
                             (OS) can become corrupted. Alternatively, a malicious or
                             faulty device driver running with kernel privileges could
                             threaten the application. While the importance of
                             ensuring application integrity has been studied in prior
                             work, proposed solutions immediately terminate the
                              application once corruption is detected. Although this
                             approach is sufficient for some cases, it is undesirable for
                             many critical applications. In order to overcome this
                             shortcoming, we have explored techniques for leveraging
                             a trusted virtual machine monitor (VMM) to observe the
                             application and potentially repair damage that occurs. In
                             this paper, we describe our system design, which
                             leverages efficient coding and authentication schemes,
                             and we present the details of our prototype
                             implementation to quantify the overhead of our
                             approach. Our work shows that it is feasible to build a
                             resilient execution environment, even in the presence of a
                             corrupted OS kernel, with a reasonable amount of storage
                             and performance overhead.
TECHNOLOGY                       : JAVA
DOMAIN                           : IEEE TRANSACTIONS ON DATA MINING
S.NO    TITLES               ABSTRACT                                               YEAR
   1.   A Survival Modeling    In this paper, we propose a survival modeling approach     2012
        Approach to            to promoting ranking diversity for biomedical
        Biomedical Search      information retrieval. The proposed approach is
        Result Diversification concerned with finding relevant documents that can
        Using Wikipedia        deliver more different aspects of a query. First, two
                               probabilistic models derived from the survival analysis
                               theory are proposed for measuring aspect novelty.
   2.   A Fuzzy Approach for In this paper, we propose a new fuzzy clustering 2012
        Multitype Relational approach for multitype relational data (FC-MR). In FC-
        Data Clustering      MR, different types of objects are clustered
                             simultaneously. An object is assigned a large
membership with respect to a cluster if its related objects
                               in this cluster have high rankings.
3.   Anonimos: An LP-          We present Anonimos, a Linear Programming-based               2012
     Based Approach for        technique for anonymization of edge weights that
     Anonymizing               preserves linear properties of graphs. Such properties
     Weighted       Social     form the foundation of many important graph-theoretic
     Network Graphs            algorithms such as shortest paths problem, k-nearest
                                neighbors, minimum cost spanning tree, and maximizing
                               information spread.
4.   A Methodology for         In this paper, we tackle discrimination prevention in data    2012
     Direct and Indirect       mining and propose new techniques applicable for direct
      Discrimination            or indirect discrimination prevention, individually or
      Prevention in Data        both at the same time. We discuss how to clean training
      Mining                    datasets and outsourced datasets in such a way that direct
                                and/or indirect discriminatory decision rules are
                                converted to legitimate (non-discriminatory)
                                classification rules.
5.   Mining Web Graphs         In this paper, aiming at providing a general framework        2012
     for Recommendations       on mining Web graphs for recommendations, (1) we first
                               propose a novel diffusion method which propagates
                               similarities between different nodes and generates
                               recommendations; (2) then we illustrate how to
                               generalize different recommendation problems into our
                               graph diffusion framework.
6.   Prediction of User’s      Predicting a user's behavior while surfing the Internet      2012
     Web-Browsing              can be applied effectively in various critical applications.
     Behavior: Application     Such applications have traditional tradeoffs between
     of Markov Model           modeling complexity and prediction accuracy. In this
                               paper, we analyze and study the Markov model and all-Kth
                               Markov model in Web prediction. We propose a new
                               modified Markov model to alleviate the issue of
                               scalability in the number of paths.
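As a simplified illustration of the underlying idea (a plain first-order Markov model rather than the modified or all-Kth models studied in the paper), the Java sketch below counts page-to-page transitions from training sessions and predicts the most likely next page.

// First-order Markov next-page predictor: illustrative only.
import java.util.HashMap;
import java.util.Map;

public class MarkovPredictor {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    /** Records one observed session as an ordered sequence of page identifiers. */
    public void train(String[] session) {
        for (int i = 0; i + 1 < session.length; i++) {
            counts.computeIfAbsent(session[i], k -> new HashMap<>())
                  .merge(session[i + 1], 1, Integer::sum);
        }
    }

    /** Predicts the most frequently following page after 'current', or null if unseen. */
    public String predict(String current) {
        Map<String, Integer> next = counts.get(current);
        if (next == null) return null;
        return next.entrySet().stream()
                   .max(Map.Entry.comparingByValue())
                   .map(Map.Entry::getKey)
                   .orElse(null);
    }
}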
7.   Prototype     Selection   This paper provides a survey of the prototype selection       2012
     for Nearest Neighbor      methods proposed in the literature from a theoretical and
     Classification:           empirical point of view. Considering a theoretical point
     Taxonomy           and    of view, we propose a taxonomy based on the main
     Empirical Study           characteristics presented in prototype selection and we
                               analyze their advantages and drawbacks, the nearest
                               neighbor classifier suffers from several drawbacks such
                               as high storage requirements, low efficiency in
                               classification response, and low noise Tolerance.
8.   Query Planning for        We present a low-cost, scalable technique to answer           2012
     Continuous                continuous aggregation queries using a network of
     Aggregation Queries       aggregators of dynamic data items. In such a network of
     over a Network of         data aggregators, each data aggregator serves a set of
     Data Aggregators          data items at specific coherencies.
9.   Revealing    Density-   In this paper, we introduce a novel density-based            2012
     Based      Clustering   network clustering method, called gSkeletonClu (graph-
     Structure from the      skeleton based clustering). By projecting an undirected
     Core-Connected Tree     network to its core-connected maximal spanning tree, the
     of a Network            clustering problem can be converted to detect core
                             connectivity components on the tree.
10. Scalable Learning of     This study of collective behavior is to understand how       2012
    Collective Behavior      individuals behave in a social networking environment.
                             Oceans of data generated by social media like Facebook,
                             Twitter, Flickr, and YouTube present opportunities and
                             challenges to study collective behavior on a large scale.
                             In this work, we aim to learn to predict collective
                              behavior in social media.
11. Weakly      Supervised   This paper proposes a novel probabilistic modeling           2012
    Joint Sentiment-Topic    framework called joint sentiment-topic (JST) model
    Detection from Text      based on latent Dirichlet allocation (LDA), which detects
                             sentiment and topic simultaneously from text. A
                             reparameterized version of the JST model called
                             Reverse-JST, obtained by reversing the sequence of
                             sentiment and topic generation in the modeling process,
                             is also studied.
12. A Framework for          Due to a wide range of potential applications, research on   2012
     Personal      Mobile     mobile commerce has received a lot of interest from
     Commerce      Pattern    both industry and academia. Among them, one of
    Mining and Prediction    the active topic areas is the mining and prediction of
                             users’ mobile commerce behaviors such as their
                             movements and purchase transactions. In this paper, we
                             propose a novel framework, called Mobile Commerce
                             Explorer (MCE), for mining and prediction of mobile
                             users’ movements and purchase transactions under the
                             context of mobile commerce. The MCE framework
                             consists of three major components: 1) Similarity
                             Inference Model (SIM) for measuring the similarities
                             among stores and items, which are two basic mobile
                             commerce entities considered in this paper; 2) Personal
                             Mobile Commerce Pattern Mine (PMCP-Mine)
                             algorithm for efficient discovery of mobile users’
                             Personal Mobile Commerce Patterns (PMCPs); and 3)
                             Mobile Commerce Behavior Predictor (MCBP) for
                             prediction of possible mobile user behaviors. To our best
                             knowledge, this is the first work that facilitates mining
                             and prediction of mobile users’ commerce behaviors in
                             order to recommend stores and items previously
                             unknown to a user. We perform an extensive
                             experimental evaluation by simulation and show that our
                             proposals produce excellent results.
13. Efficient    Extended Extended Boolean retrieval (EBR) models were proposed 2012
    Boolean Retrieval     nearly three decades ago, but have had little practical
                          impact, despite their significant advantages compared to
                          either ranked keyword or pure Boolean retrieval. In
                           particular, EBR models produce meaningful rankings;
                          their query model allows the representation of complex
                          concepts in an and-or format; and they are scrutable, in
                          that the score assigned to a document depends solely on
                          the content of that document, unaffected by any
                          collection statistics or other external factors. These
                          characteristics make EBR models attractive in domains
                          typified by medical and legal searching, where the
                          emphasis is on iterative development of reproducible
                          complex queries of dozens or even hundreds of terms.
                          However, EBR is much more computationally expensive
                          than the alternatives. We consider the implementation of
                          the p-norm approach to EBR, and demonstrate that ideas
                          used in the max-score and wand exact optimization
                           techniques for ranked keyword retrieval can be adapted to
                          allow selective bypass of documents via a low-cost
                          screening process for this and similar retrieval models.
                          We also propose term independent bounds that are able
                          to further reduce the number of score calculations for
                          short, simple queries under the extended Boolean
                          retrieval model. Together, these methods yield an overall
                           saving from 50 to 80 percent of the evaluation cost on test
                          queries drawn from biomedical search.
14. Improving Aggregate   Recommender systems are becoming increasingly               2012
    Recommendation        important to individual users and businesses for
    Diversity     Using   providing personalized recommendations. However,
    Ranking-Based         while the majority of algorithms proposed in
    Techniques            recommender systems literature have focused on
                          improving recommendation accuracy (as exemplified by
                          the recent Netflix Prize competition), other important
                          aspects of recommendation quality, such as the diversity
                          of recommendations, have often been overlooked. In this
                          paper, we introduce and explore a number of item
                          ranking techniques that can generate recommendations
                          that have substantially higher aggregate diversity across
                          all users while maintaining comparable levels of
                          recommendation accuracy. Comprehensive empirical
                          evaluation consistently shows the diversity gains of the
proposed techniques using several real-world rating
                           datasets and different rating prediction algorithms.
15. BibPro: A Citation Dramatic increase in the number of academic 2012
    Parser   Based    on publications has led to growing demand for efficient
    Sequence Alignment   organization of the resources to meet researchers' needs.
                         As a result, a number of network services have compiled
                         databases from the public resources scattered over the
                         Internet. However, publications by different conferences
                         and journals adopt different citation styles. It is an
                         interesting problem to accurately extract metadata from a
                         citation string which is formatted in one of thousands of
                         different styles. It has attracted a great deal of attention in
                         research in recent years. In this paper, based on the
                         notion of sequence alignment, we present a citation
                         parser called BibPro that extracts components of a
                         citation string. To demonstrate the efficacy of BibPro, we
                         conducted experiments on three benchmark data sets.
                         The results show that BibPro achieved over 90 percent
                         accuracy on each benchmark. Even with citations and
                         associated metadata retrieved from the web as training
                         data, our experiments show that BibPro still achieves a
                          reasonable performance.
16. Extending    Attribute Data quantity is the main issue in the small data set 2012
    Information for Small problem, because usually insufficient data will not lead
    Data Set Classification to a robust classification performance. How to extract
                            more effective information from a small data set is thus
                            of considerable interest. This paper proposes a new
                            attribute construction approach which converts the
                            original data attributes into a higher dimensional feature
                            space to extract more attribute information by a
                            similarity-based algorithm using the classification-
                            oriented fuzzy membership function. Seven data sets
                            with different attribute sizes are employed to examine the
                            performance of the proposed method. The results show
                            that the proposed method has a superior classification
                            performance when compared to principal component
                            analysis (PCA), kernel principal component analysis
                            (KPCA), and kernel independent component analysis
                            (KICA) with a Gaussian kernel in the support vector
                             machine (SVM) classifier.
17. Horizontal           Preparing a data set for analysis is generally the most 2012
    Aggregations in SQL time consuming task in a data mining project, requiring
    to Prepare Data Sets many complex SQL queries, joining tables, and
for   Data    Mining aggregating columns. Existing SQL aggregations have
    Analysis             limitations to prepare data sets because they return one
                         column per aggregated group. In general, a significant
                         manual effort is required to build data sets, where a
                         horizontal layout is required. We propose simple, yet
                         powerful, methods to generate SQL code to return
                         aggregated columns in a horizontal tabular layout,
                         returning a set of numbers instead of one number per
                         row. This new class of functions is called horizontal
                         aggregations. Horizontal aggregations build data sets
                         with a horizontal denormalized layout (e.g., point-
                          dimension, observation-variable, instance-feature), which
                         is the standard layout required by most data mining
                         algorithms. We propose three fundamental methods to
                         evaluate horizontal aggregations: CASE: Exploiting the
                         programming CASE construct; SPJ: Based on standard
                         relational algebra operators (SPJ queries); PIVOT: Using
                         the PIVOT operator, which is offered by some DBMSs.
                         Experiments with large tables compare the proposed
                         query evaluation methods. Our CASE method has similar
                         speed to the PIVOT operator and it is much faster than
                         the SPJ method. In general, the CASE and PIVOT
                         methods exhibit linear scalability, whereas the SPJ
                          method does not.
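To illustrate the CASE method in a hedged way, the Java sketch below generates a horizontal-aggregation query with one SUM(CASE ...) column per distinct value of the pivoting dimension, so each key ends up on a single denormalized row; the table and column names are assumptions for illustration, not taken from the paper.

// Builds a CASE-based horizontal aggregation query for a fact table (key, dim, measure).
import java.util.List;

public class HorizontalAggregation {
    public static String buildCaseQuery(String table, String key,
                                        String dim, String measure,
                                        List<String> dimValues) {
        StringBuilder sql = new StringBuilder("SELECT ").append(key);
        for (String v : dimValues) {
            // one aggregated column per distinct dimension value
            sql.append(",\n  SUM(CASE WHEN ").append(dim).append(" = '").append(v)
               .append("' THEN ").append(measure).append(" ELSE 0 END) AS ")
               .append(dim).append('_').append(v);
        }
        sql.append("\nFROM ").append(table).append("\nGROUP BY ").append(key);
        return sql.toString();
    }

    public static void main(String[] args) {
        // Hypothetical sales table pivoted by month into a horizontal layout.
        System.out.println(buildCaseQuery("sales", "store_id", "month", "amount",
                                          List.of("jan", "feb", "mar")));
    }
}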
18. Enabling Multilevel Privacy Preserving Data Mining (PPDM) addresses the 2012
    Trust    in Privacy problem of developing accurate models about aggregated
    Preserving     Data data without access to precise information in individual
     Mining              data records. A widely studied perturbation-based PPDM
                        approach introduces random perturbation to individual
                        values to preserve privacy before data are published.
                        Previous solutions of this approach are limited in their
                        tacit assumption of single-level trust on data miners. In
                        this work, we relax this assumption and expand the scope
                        of perturbation-based PPDM to Multilevel Trust (MLT-
                        PPDM). In our setting, the more trusted a data miner is,
                        the less perturbed copy of the data it can access. Under
                        this setting, a malicious data miner may have access to
                        differently perturbed copies of the same data through
                        various means, and may combine these diverse copies to
                        jointly infer additional information about the original
                        data that the data owner does not intend to release.
                        Preventing such diversity attacks is the key challenge of
                        providing MLT-PPDM services. We address this
                        challenge by properly correlating perturbation across
                        copies at different trust levels. We prove that our solution
is robust against diversity attacks with respect to our
                              privacy goal. That is, for data miners who have access to
                              an arbitrary collection of the perturbed copies, our
                         solution prevents them from jointly reconstructing the
                              original data more accurately than the best effort using
                              any individual copy in the collection. Our solution allows
                              a data owner to generate perturbed copies of its data for
                         arbitrary trust levels on demand. This feature offers data
                              owners maximum flexibility.
19. Using Rule Ontology       Inferential rules are as essential to the Semantic Web 2012
    in Repeated Rule          applications as ontology. Therefore, rule acquisition is
    Acquisition       from    also an important issue, and the Web that implies
    Similar Web Sites         inferential rules can be a major source of rule acquisition.
                              We expect that it will be easier to acquire rules from a
                              site by using similar rules of other sites in the same
                              domain rather than starting from scratch. We proposed an
                              automatic rule acquisition procedure using a rule
                              ontology RuleToOnto, which represents information
                              about the rule components and their structures. The rule
                              acquisition procedure consists of the rule component
                              identification step and the rule composition step. We
                              developed an A* algorithm for the rule composition, and we
                              performed experiments demonstrating that our ontology-
                              based rule acquisition approach works in a real-world
                              application.
20. Efficient Processing of   There is a growing need for systems that react 2012
    Uncertain Events in       automatically to events. While some events are generated
    Rule-Based Systems        externally and deliver data across distributed systems,
                              others need to be derived by the system itself based on
                              available information. Event derivation is hampered by
                              uncertainty attributed to causes such as unreliable data
                              sources or the inability to determine with certainty
                              whether an event has actually occurred, given available
                              information. Two main challenges exist when designing
                              a solution for event derivation under uncertainty. First,
                              event derivation should scale under heavy loads of
                              incoming events. Second, the associated probabilities
                              must be correctly captured and represented. We present a
                              solution to both problems by introducing a novel generic
                              and formal mechanism and framework for managing
                              event derivation under uncertainty. We also provide
                              empirical evidence demonstrating the scalability and
                              accuracy of our approach.
21. Feature    Selection      Data and knowledge management systems employ 2012
    Based   on    Class-      feature selection algorithms for removing irrelevant,
    Dependent Densities       redundant, and noisy information from the data. There
for High-Dimensional are two well-known approaches to feature selection,
    Binary Data          feature ranking (FR) and feature subset selection (FSS).
                         In this paper, we propose a new FR algorithm, termed as
                         class-dependent density-based feature elimination
                         (CDFE), for binary data sets. Our theoretical analysis
                         shows that CDFE computes the weights, used for feature
                         ranking, more efficiently as compared to the mutual
                         information measure. Effectively, rankings obtained from
                          the two criteria approximate each other. CDFE uses
                         a filtrapper approach to select a final subset. For data sets
                         having hundreds of thousands of features, feature
                         selection with FR algorithms is simple and
                         computationally efficient but redundant information may
                         not be removed. On the other hand, FSS algorithms
                         analyze the data for redundancies but may become
                         computationally impractical on high-dimensional data
                         sets. We address these problems by combining FR and
                         FSS methods in the form of a two-stage feature selection
                         algorithm. When introduced as a preprocessing step to
                         the FSS algorithms, CDFE not only presents them with a
                         feature subset, good in terms of classification, but also
                         relieves them from heavy computations. Two FSS
                         algorithms are employed in the second stage to test the
                         two-stage feature selection idea. We carry out
                         experiments with two different classifiers (naive Bayes'
                         and kernel ridge regression) on three different real-life
                          data sets (NOVA, HIVA, and GINA) of the "Agnostic
                          Learning versus Prior Knowledge" challenge. As a stand-
                         alone method, CDFE shows up to about 92 percent
                         reduction in the feature set size. When combined with the
                         FSS algorithms in two-stages, CDFE significantly
                         improves their classification accuracy and exhibits up to
                         97 percent reduction in the feature set size. We also
                         compared CDFE against the winning entries of the
                          challenge and found that it outperforms the best results
                         on NOVA and HIVA while obtaining a third position in
                         case of GINA.
22. Ranking        Model    With the explosive emergence of vertical search 2012
    Adaptation       for    domains, applying the broad-based ranking model
    Domain-Specific         directly to different domains is no longer desirable due to
    Search                  domain differences, while building a unique ranking
                            model for each domain is both laborious for labeling data
                            and time-consuming for training models. In this paper,
                            we address these difficulties by proposing a
                            regularization based algorithm called ranking adaptation
SVM (RA-SVM), through which we can adapt an
                            existing ranking model to a new domain, so that the
                            amount of labeled data and the training cost is reduced
                            while the performance is still guaranteed. Our algorithm
                            only requires the prediction from the existing ranking
                            models, rather than their internal representations or the
                            data from auxiliary domains. In addition, we assume that
                            documents similar in the domain-specific feature space
                            should have consistent rankings, and add some
                            constraints to control the margin and slack variables of
                            RA-SVM adaptively. Finally, ranking adaptability
                            measurement is proposed to quantitatively estimate if an
                            existing ranking model can be adapted to a new domain.
                            Experiments performed over Letor and two large scale
                            datasets crawled from a commercial search engine
                            demonstrate the applicabilities of the proposed ranking
                            adaptation algorithms and the ranking adaptability
                            measurement.
23. Slicing:   A   New      Several anonymization techniques, such as generalization 2012
    Approach to Privacy     and bucketization, have been designed for privacy
    Preserving     Data     preserving microdata publishing. Recent work has shown
     Publishing              that generalization loses considerable amount of
                            information, especially for high-dimensional data.
                            Bucketization, on the other hand, does not prevent
                            membership disclosure and does not apply for data that
                             do not have a clear separation between quasi-identifying
                            attributes and sensitive attributes. In this paper, we
                            present a novel technique called slicing, which partitions
                            the data both horizontally and vertically. We show that
                            slicing preserves better data utility than generalization
                            and can be used for membership disclosure protection.
                            Another important advantage of slicing is that it can
                            handle high-dimensional data. We show how slicing can
                            be used for attribute disclosure protection and develop an
                            efficient algorithm for computing the sliced data that
                            obey the ℓ-diversity requirement. Our workload
                            experiments confirm that slicing preserves better utility
                            than generalization and is more effective than
                            bucketization in workloads involving the sensitive
                            attribute. Our experiments also demonstrate that slicing
                            can be used to prevent membership disclosure.
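For concreteness, the sketch below checks the simplest (distinct-values) form of the ℓ-diversity requirement mentioned above for one bucket of records; it is an assumed illustration of the condition, not the slicing algorithm itself.

// A bucket satisfies distinct l-diversity if it contains at least l distinct sensitive values.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LDiversityCheck {
    public static boolean satisfiesDistinctLDiversity(List<String> sensitiveValues, int l) {
        Set<String> distinct = new HashSet<>(sensitiveValues);
        return distinct.size() >= l;
    }

    public static void main(String[] args) {
        // Hypothetical bucket of sensitive attribute values.
        List<String> bucket = List.of("flu", "flu", "bronchitis", "gastritis");
        System.out.println(satisfiesDistinctLDiversity(bucket, 3));   // true
        System.out.println(satisfiesDistinctLDiversity(bucket, 4));   // false
    }
}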
24. Improving Aggregate     Recommender systems are becoming increasingly 2012
    Recommendation          important to individual users and businesses for
    Diversity       Using   providing personalized recommendations. However,
    Ranking-Based           while the majority of algorithms proposed in
    Techniques- projects    recommender systems literature have focused on
improving recommendation accuracy, other important
                            aspects of recommendation quality, such as the diversity
                            of recommendations, have often been overlooked. In this
                            paper, we introduce and explore a number of item
                            ranking techniques that can generate recommendations
                            that have substantially higher aggregate diversity across
                            all users while maintaining comparable levels of
                            recommendation accuracy. Comprehensive empirical
                            evaluation consistently shows the diversity gains of the
                            proposed techniques using several real-world rating
                            datasets and different rating prediction algorithms.
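
One of the simplest ranking-based ideas in this line of work can be sketched as follows; this is a hedged illustration, not the paper's exact technique, and all names are hypothetical: among items whose predicted rating clears an accuracy threshold, recommend the least popular ones first, which raises aggregate diversity across users at a small accuracy cost.

import java.util.*;

// Sketch of popularity-based re-ranking: among items predicted above a
// rating threshold, recommend the least popular ones first to raise
// aggregate (catalog-wide) diversity.
public class DiversityRerank {

    record Candidate(String item, double predictedRating, int popularity) {}

    static List<String> topN(List<Candidate> candidates, double threshold, int n) {
        return candidates.stream()
                .filter(c -> c.predictedRating() >= threshold)            // keep accuracy acceptable
                .sorted(Comparator.comparingInt(Candidate::popularity))   // least popular first
                .limit(n)
                .map(Candidate::item)
                .toList();
    }

    public static void main(String[] args) {
        List<Candidate> cands = List.of(
                new Candidate("A", 4.9, 12000),
                new Candidate("B", 4.6, 150),
                new Candidate("C", 4.4, 30),
                new Candidate("D", 3.2, 5));
        System.out.println(topN(cands, 4.0, 2)); // prints [C, B]
    }
}
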
25. Horizontal              Preparing a data set for analysis is generally the most 2012
    Aggregations in SQL     time consuming task in a data mining project, requiring
    to Prepare Data Sets    many complex SQL queries, joining tables and
    for   Data    Mining    aggregating columns. Existing SQL aggregations have
    Analysis                limitations to prepare data sets because they return one
                            column per aggregated group. In general, a significant
                            manual effort is required to
                            build data sets, where a horizontal layout is required. We
                            propose simple, yet powerful, methods to generate SQL
                            code to return aggregated columns in a horizontal tabular
                            layout, returning a set of numbers instead of one number
                            per row. This new class of functions is called horizontal
                            aggregations.
                            Horizontal aggregations build data sets with a horizontal
                            denormalized layout (e.g. point-dimension, observation-
                            variable, instance-feature), which is the standard layout
                            required by most data mining algorithms. We propose
                            three fundamental methods to evaluate horizontal
                            aggregations: CASE: Exploiting
                            the programming CASE construct; SPJ: Based on
                            standard relational algebra operators (SPJ queries);
                            PIVOT: Using the PIVOT operator, which is offered by
                            some DBMSs. Experiments with large tables compare
                            the proposed query evaluation methods. Our CASE
                            method has similar speed to the PIVOT operator and it is
                            much faster than the SPJ method. In general, the CASE
                            and PIVOT methods exhibit linear scalability,
                            whereas the SPJ method does not.
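
A minimal sketch of the CASE-based evaluation method described above, assuming a hypothetical sales table: for each value of the transposing column a SUM(CASE ...) expression produces one aggregated column, so every group comes back as a single row in horizontal layout. The Java class below only builds and prints the SQL string.

// Sketch of the CASE-based horizontal aggregation: one SUM(CASE ...) column
// is generated per value of the transposing column, so each group returns
// as a single row with one aggregated column per value.
public class HorizontalAggSketch {

    static String buildCaseQuery(String table, String groupBy,
                                 String transposeBy, String measure,
                                 String[] transposeValues) {
        StringBuilder sql = new StringBuilder("SELECT ").append(groupBy);
        for (String v : transposeValues) {
            sql.append(",\n  SUM(CASE WHEN ").append(transposeBy)
               .append(" = '").append(v).append("' THEN ").append(measure)
               .append(" ELSE 0 END) AS ").append(measure).append("_").append(v);
        }
        sql.append("\nFROM ").append(table)
           .append("\nGROUP BY ").append(groupBy);
        return sql.toString();
    }

    public static void main(String[] args) {
        // hypothetical sales table: one aggregated column per quarter
        System.out.println(buildCaseQuery(
                "sales", "store_id", "quarter", "amount",
                new String[]{"Q1", "Q2", "Q3", "Q4"}));
    }
}
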
26. Scalable Learning of    The study of collective behavior seeks to understand how 2012
    Collective Behavior -   individuals behave in a social networking environment.
    projects 2012              Oceans of data generated by social media like Facebook,
                               Twitter, Flickr, and YouTube present opportunities and
                            challenges to study collective behavior on a large scale.
                            In this work, we aim to learn to predict collective
                            behavior in social media. In particular, given information
about some individuals, how can we infer the behavior of
                             unobserved individuals in the same network? A social-
                             dimension-based approach has been shown effective in
                             addressing the heterogeneity of connections presented in
                             social media. However, the networks in social media are
                             normally of colossal size, involving hundreds of
                             thousands of actors. The scale of these networks entails
                             scalable learning of models for collective behavior
                             prediction. To address the scalability issue, we propose
                             an edge-centric clustering scheme to extract sparse social
                             dimensions. With sparse social dimensions, the proposed
                             approach can efficiently handle networks of millions of
                             actors while demonstrating a comparable prediction
                             performance to other non-scalable methods.
27. Outsourced Similarity    This paper considers a cloud computing setting in which 2012
    Search on Metric Data    similarity querying of metric data is outsourced to a
    Assets – projects        service provider. The data is to be revealed only to
                             trusted users, not to the service provider or anyone else.
                             Users query the server for the most similar data objects
                             to a query example. Outsourcing offers the data owner
                             scalability and a low initial investment. The need for
                             privacy may be due to the data being sensitive (e.g., in
                             medicine), valuable (e.g., in astronomy), or otherwise
                             confidential. Given this setting, the paper presents
                             techniques that transform the data prior to supplying it to
                             the service provider for similarity queries on the
                             transformed data. Our techniques provide interesting
                             trade-offs between query cost and accuracy. They are
                             then further extended to offer an intuitive privacy
                             guarantee. Empirical studies with real data demonstrate
                             that the techniques are capable of offering privacy while
                             enabling efficient and accurate processing of similarity
                             queries.
28. A Framework for          A Time Series Clique (TSC) consists of multiple time 2012
    Similarity Search of     series which are related to each other by natural relations.
    Time Series Cliques      The natural relations that are found between the time
    with Natural Relations   series depend on the application domains. For example, a
                             TSC can consist of time series which are trajectories in
                             video that have spatial relations. In conventional time
                             series retrieval, such natural relations between the time
                             series are not considered. In this paper, we formalize the
                             problem of similarity search over a TSC database. We
                             develop a novel framework for efficient similarity search
                             on TSC data. The framework addresses the following
                             issues. First, it provides a compact representation for
                             TSC data. Second, it uses a multidimensional relation
vector to capture the natural relations between the
                             multiple time series in a TSC. Lastly, the framework
                             defines a novel similarity measure that uses the compact
                             representation and the relation vector. We conduct an
                             extensive performance study, using both real-life and
                             synthetic data sets. From the performance study, we
                             show that our proposed framework is both effective and
                             efficient for TSC retrieval.
29. A             Genetic    Several systems that rely on consistent data to offer high- 2012
    Programming              quality services, such as digital libraries and e-commerce
    Approach to Record       brokers, may be affected by the existence of duplicates,
    Deduplication            quasi replicas, or near-duplicate entries in their
                             repositories. Because of that, there have been significant
                             investments from private and government organizations
                             for developing methods for removing replicas from their
                             data repositories. This is due to the fact that clean and
                             replica-free repositories not only allow the retrieval of
                             higher quality information but also lead to more concise
                             data and to potential savings in computational time and
                             resources to process this data. In this paper, we propose a
                             genetic programming approach to record deduplication
                             that combines several different pieces of evidence
                             extracted from the data content to find a deduplication
                             function that is able to identify whether two entries in a
                             repository are replicas or not. As shown by our
                             experiments, our approach outperforms an existing state-
                             of-the-art method found in the literature. Moreover, the
                             suggested functions are computationally less demanding
                             since they use fewer pieces of evidence. In addition, our genetic
                             programming approach is capable of automatically
                             adapting these functions to a given fixed replica
                             identification boundary, freeing the user from the burden
                             of having to choose and tune this parameter.
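
As a hedged illustration of the kind of deduplication function being evolved (the actual functions are discovered by genetic programming, not fixed by hand), the sketch below combines two pieces of field-level evidence and compares the result with a replica-identification boundary. The weights, fields, and names are assumptions.

import java.util.*;

// Toy stand-in for an evolved deduplication function: several pieces of
// evidence (field similarities) are combined and compared against a
// replica-identification boundary.
public class DedupSketch {

    // Jaccard similarity over whitespace tokens, one possible piece of evidence
    static double jaccard(String a, String b) {
        Set<String> sa = new HashSet<>(Arrays.asList(a.toLowerCase().split("\\s+")));
        Set<String> sb = new HashSet<>(Arrays.asList(b.toLowerCase().split("\\s+")));
        Set<String> inter = new HashSet<>(sa); inter.retainAll(sb);
        Set<String> union = new HashSet<>(sa); union.addAll(sb);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    // A fixed combination; a GP system would evolve this expression instead.
    static boolean isReplica(String[] rec1, String[] rec2, double boundary) {
        double titleSim = jaccard(rec1[0], rec2[0]);
        double authorSim = jaccard(rec1[1], rec2[1]);
        double score = 0.7 * titleSim + 0.3 * authorSim;
        return score >= boundary;
    }

    public static void main(String[] args) {
        String[] a = {"Data Mining Concepts and Techniques", "Han Kamber"};
        String[] b = {"Data mining concepts and techniques", "J. Han M. Kamber"};
        System.out.println(isReplica(a, b, 0.5));
    }
}
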
30. A        Probabilistic   Databases enable users to precisely express their 2012
    Scheme for Keyword-      informational needs using structured queries. However,
    Based    Incremental     database query construction is a laborious and error-
    Query Construction       prone process, which cannot be performed well by most
                             end users. Keyword search alleviates the usability
                             problem at the price of query expressiveness. As
                             keyword search algorithms do not differentiate between
                             the possible informational needs represented by a
                             keyword query, users may not receive adequate results.
                             This paper presents IQP - a novel approach to bridge the
                             gap between usability of keyword search and
                             expressiveness of database queries. IQP enables a user to
                             start with an arbitrary keyword query and incrementally
refine it into a structured query through an interactive
                          interface. The enabling techniques of IQP include: 1) a
                          probabilistic framework for incremental query
                          construction; 2) a probabilistic model to assess the
                          possible informational needs represented by a keyword
                          query; 3) an algorithm to obtain the optimal query
                          construction process. This paper presents the detailed
                          design of IQP, and demonstrates its effectiveness and
                          scalability through experiments over real-world data and
                          a user study.
31. Anónimos: An LP- The increasing popularity of social networks has initiated 2012
    Based Approach for a fertile research area in information extraction and data
    Anonymizing           mining. Anonymization of these social graphs is
    Weighted       Social important to facilitate publishing these data sets for
    Network Graphs        analysis by external entities. Prior work has concentrated
                          mostly on node identity anonymization and structural
                          anonymization. But with the growing interest in
                          analyzing social networks as a weighted network, edge
                          weight anonymization is also gaining importance. We
                          present Anónimos, a Linear Programming-based
                          technique for anonymization of edge weights that
                          preserves linear properties of graphs. Such properties
                          form the foundation of many important graph-theoretic
                          algorithms such as shortest paths problem, k-nearest
                          neighbors, minimum cost spanning tree, and maximizing
                          information spread. As a proof of concept, we apply
                          Anónimos to the shortest paths problem and its
                          extensions, prove the correctness, analyze complexity,
                          and experimentally evaluate it using real social network
                          data sets. Our experiments demonstrate that Anónimos
                          anonymizes the weights, improves k-anonymity of the
                          weights, and also scrambles the relative ordering of the
                          edges sorted by weights, thereby providing robust and
                          effective anonymization of the sensitive edge-weights.
                          We also demonstrate the composability of different
                          models generated using Anónimos, a property that allows
                          a single anonymized graph to preserve multiple linear
                          properties.
32. Answering     General Time is an important dimension of relevance for a large 2012
    Time-Sensitive        number of searches, such as over blogs and news
    Queries               archives. So far, research on searching over such
                          collections has largely focused on locating topically
                          similar documents for a query. Unfortunately, topic
                          similarity alone is not always sufficient for document
                          ranking. In this paper, we observe that, for an important
class of queries that we call time-sensitive queries, the
                           publication time of the documents in a news archive is
                           important and should be considered in conjunction with
                           the topic similarity to derive the final document ranking.
                           Earlier work has focused on improving retrieval for
                           “recency” queries that target recent documents. We
                           propose a more general framework for handling time-
                           sensitive queries and we automatically identify the
                           important time intervals that are likely to be of interest
                           for a query. Then, we build scoring techniques that
                           seamlessly integrate the temporal aspect into the overall
                           ranking mechanism. We present an extensive
                           experimental evaluation using a variety of news article
                           data sets, including TREC data as well as real web data
                           analyzed using the Amazon Mechanical Turk. We
                           examine several techniques for detecting the important
                           time intervals for a query over a news archive and for
                           incorporating this information in the retrieval process.
                           We show that our techniques are robust and significantly
                           improve result quality for time-sensitive queries
                           compared to state-of-the-art retrieval techniques.
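
A minimal sketch, under the assumption that an interval of interest has already been identified for the query: the final score blends topical similarity with a simple temporal score that rewards documents published inside that interval. The blending weight and all names are illustrative, not the paper's exact scoring function.

import java.time.LocalDate;
import java.util.*;

// Minimal sketch: blend topical similarity with a temporal score that
// rewards documents published inside an interval identified as relevant
// for the query.
public class TimeSensitiveScoring {

    record Doc(String id, double topicSim, LocalDate published) {}

    static double score(Doc d, LocalDate from, LocalDate to, double lambda) {
        double temporal = (!d.published().isBefore(from) && !d.published().isAfter(to)) ? 1.0 : 0.0;
        return lambda * d.topicSim() + (1 - lambda) * temporal;
    }

    public static void main(String[] args) {
        LocalDate from = LocalDate.of(2011, 3, 1), to = LocalDate.of(2011, 3, 31);
        List<Doc> docs = List.of(
                new Doc("d1", 0.80, LocalDate.of(2009, 6, 1)),
                new Doc("d2", 0.65, LocalDate.of(2011, 3, 15)));
        docs.stream()
            .sorted(Comparator.comparingDouble((Doc d) -> -score(d, from, to, 0.5)))
            .forEach(d -> System.out.printf("%s %.2f%n", d.id(), score(d, from, to, 0.5)));
    }
}
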
33. Clustering       with All clustering methods have to assume some cluster 2012
    Multiviewpoint-Based relationship among the data objects that they are applied
    Similarity Measure     on. Similarity between a pair of objects can be defined
                           either explicitly or implicitly. In this paper, we introduce
                           a novel multiviewpoint-based similarity measure and two
                           related clustering methods. The major difference between
                           a traditional dissimilarity/similarity measure and ours is
                           that the former uses only a single viewpoint, which is the
                           origin, while the latter utilizes many different viewpoints,
                            which are objects assumed not to be in the same cluster
                           with the two objects being measured. Using multiple
                           viewpoints, more informative assessment of similarity
                           could be achieved. Theoretical analysis and empirical
                           study are conducted to support this claim. Two criterion
                           functions for document clustering are proposed based on
                           this new measure. We compare them with several well-
                           known clustering algorithms that use other popular
                           similarity measures on various document collections to
                           verify the advantages of our proposal.
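
A small Java sketch of the multiviewpoint idea, assuming the viewpoints are given: similarity between two documents is the dot product of their difference vectors taken relative to each viewpoint, averaged over all viewpoints, instead of a single product measured from the origin.

// Sketch of a multiviewpoint-based similarity: instead of measuring the two
// documents from the origin only, the dot product is taken relative to every
// viewpoint (an object assumed to lie outside their cluster) and averaged.
public class MultiViewpointSimilarity {

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static double[] minus(double[] a, double[] b) {
        double[] r = new double[a.length];
        for (int i = 0; i < a.length; i++) r[i] = a[i] - b[i];
        return r;
    }

    // di, dj: documents being compared; viewpoints: objects outside their cluster
    static double mvs(double[] di, double[] dj, double[][] viewpoints) {
        double sum = 0;
        for (double[] dh : viewpoints) sum += dot(minus(di, dh), minus(dj, dh));
        return sum / viewpoints.length;
    }

    public static void main(String[] args) {
        double[] di = {1, 0}, dj = {0.9, 0.1};
        double[][] viewpoints = {{0, 1}, {-0.2, 0.8}};
        System.out.println(mvs(di, dj, viewpoints));   // multiviewpoint similarity
        System.out.println(dot(di, dj));               // single-viewpoint (origin) similarity
    }
}
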
34. Cluster-Oriented       This paper presents a novel cluster-oriented ensemble 2012
    Ensemble Classifier classifier. The proposed ensemble classifier is based on
    Impact of Multicluster original concepts such as learning of cluster boundaries
    Characterization   on by the base classifiers and mapping of cluster
    Ensemble Classifier confidences to class decision using a fusion classifier.
    Learning                  The categorized data set is characterized into multiple
                             clusters and fed to a number of distinctive base
                             classifiers. The base classifiers learn cluster boundaries
                             and produce cluster confidence vectors. A second level
                             fusion classifier combines the cluster confidences and
                             maps to class decisions. The proposed ensemble
                             classifier modifies the learning domain for the base
                             classifiers and facilitates efficient learning. The proposed
                             approach is evaluated on benchmark data sets from UCI
                             machine learning repository to identify the impact of
                             multicluster boundaries on classifier learning and
                             classification accuracy. The experimental results and
                             two-tailed sign test demonstrate the superiority of the
                             proposed cluster-oriented ensemble classifier over
                             existing ensemble classifiers published in the literature.
35. Effective      Pattern   Many data mining techniques have been proposed for 2012
    Discovery   for Text     mining useful patterns in text documents. However, how
    Mining                   to effectively use and update discovered patterns is still
                             an open research issue, especially in the domain of text
                             mining. Since most existing text mining methods adopted
                             term-based approaches, they all suffer from the problems
                             of polysemy and synonymy. Over the years, people have
                             often held the hypothesis that pattern (or phrase)-based
                             approaches should perform better than the term-based
                             ones, but many experiments do not support this
                             hypothesis. This paper presents an innovative and
                             effective pattern discovery technique which includes the
                             processes of pattern deploying and pattern evolving, to
                             improve the effectiveness of using and updating
                             discovered patterns for finding relevant and interesting
                             information. Substantial experiments on RCV1 data
                             collection and TREC topics demonstrate that the
                             proposed solution achieves encouraging performance.
36. Efficient Fuzzy Type-    In a traditional keyword-search system over XML data, a 2012
    Ahead Search in XML      user composes a keyword query, submits it to the system,
    Data                     and retrieves relevant answers. In the case where the user
                             has limited knowledge about the data, often the user feels
                             “left in the dark” when issuing queries, and has to use a
                             try-and-see approach for finding information. In this
                             paper, we study fuzzy type-ahead search in XML data, a
                             new information-access paradigm in which the system
                             searches XML data on the fly as the user types in query
                             keywords. It allows users to explore data as they type,
                             even in the presence of minor errors of their keywords.
                             Our proposed method has the following features: 1)
                             Search as you type: It extends Autocomplete by
supporting queries with multiple keywords in XML data.
                            2) Fuzzy: It can find high-quality answers that have
                            keywords matching query keywords approximately. 3)
                            Efficient: Our effective index structures and searching
                            algorithms can achieve a very high interactive speed. We
                            study research challenges in this new search framework.
                            We propose effective index structures and top-k
                            algorithms to achieve a high interactive speed. We
                            examine effective ranking functions and early
                            termination techniques to progressively identify the top-k
                            relevant answers. We have implemented our method on
                            real data sets, and the experimental results show that our
                            method achieves high search efficiency and result
                            quality.
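
A toy sketch of fuzzy prefix matching, not the paper's index structures: for each keystroke, a word is suggested if its prefix of the same length is within a small edit distance of what the user typed. A real system would answer this from trie-based indexes rather than a linear scan.

import java.util.*;

// Sketch of fuzzy type-ahead matching: as the user types, keep the words
// whose prefix of the same length is within a small edit distance of the
// typed keyword.
public class FuzzyTypeAhead {

    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    static List<String> suggest(String typed, Collection<String> words, int maxEdits) {
        List<String> out = new ArrayList<>();
        for (String w : words) {
            String prefix = w.substring(0, Math.min(typed.length(), w.length()));
            if (editDistance(typed.toLowerCase(), prefix.toLowerCase()) <= maxEdits) out.add(w);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> dict = List.of("database", "datamining", "datbase", "network");
        System.out.println(suggest("datab", dict, 1)); // tolerates one typo in the prefix
    }
}
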
37. Feature     Selection   Data and knowledge management systems employ 2012
    Based    on    Class-   feature selection algorithms for removing irrelevant,
    Dependent Densities     redundant, and noisy information from the data. There
    for High-Dimensional    are two well-known approaches to feature selection,
    Binary Data             feature ranking (FR) and feature subset selection (FSS).
                            In this paper, we propose a new FR algorithm, termed as
                            class-dependent density-based feature elimination
                            (CDFE), for binary data sets. Our theoretical analysis
                            shows that CDFE computes the weights, used for feature
                            ranking, more efficiently as compared to the mutual
                            information measure. Effectively, rankings obtained from
                            the two criteria approximate each other. CDFE uses
                            a filtrapper approach to select a final subset. For data sets
                            having hundreds of thousands of features, feature
                            selection with FR algorithms is simple and
                            computationally efficient but redundant information may
                            not be removed. On the other hand, FSS algorithms
                            analyze the data for redundancies but may become
                            computationally impractical on high-dimensional data
                            sets. We address these problems by combining FR and
                            FSS methods in the form of a two-stage feature selection
                            algorithm. When introduced as a preprocessing step to
                            the FSS algorithms, CDFE not only presents them with a
                            feature subset, good in terms of classification, but also
                            relieves them from heavy computations. Two FSS
                            algorithms are employed in the second stage to test the
                            two-stage feature selection idea. We carry out
                            experiments with two different classifiers (naive Bayes
                            and kernel ridge regression) on three different real-life
                            data sets (NOVA, HIVA, and GINA) of the “Agnostic
                            Learning versus Prior Knowledge” challenge. As a stand-
alone method, CDFE shows up to about 92 percent
                              reduction in the feature set size. When combined with the
                              FSS algorithms in two-stages, CDFE significantly
                              improves their classification accuracy and exhibits up to
                              97 percent reduction in the feature set size. We also
                              compared CDFE against the winning entries of the
                            challenge and found that it outperforms the best results
                              on NOVA and HIVA while obtaining a third position in
                              case of GINA.
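
A simplified stand-in for a class-dependent density weight on binary data (the exact CDFE weight differs): each feature is scored by how far apart its class-conditional probabilities of taking value 1 are, and features are ranked by that score. All names are hypothetical.

import java.util.*;

// Simplified stand-in for class-dependent density-based feature ranking on
// binary data: score each feature by the absolute difference between its
// class-conditional probabilities of being 1, then rank features by score.
public class FeatureRankingSketch {

    // X[n][d] binary features, y[n] in {0, 1}
    static Integer[] rankFeatures(int[][] X, int[] y) {
        int d = X[0].length;
        double[] score = new double[d];
        for (int j = 0; j < d; j++) {
            double ones0 = 0, ones1 = 0, n0 = 0, n1 = 0;
            for (int i = 0; i < X.length; i++) {
                if (y[i] == 0) { n0++; ones0 += X[i][j]; }
                else           { n1++; ones1 += X[i][j]; }
            }
            score[j] = Math.abs(ones0 / Math.max(n0, 1) - ones1 / Math.max(n1, 1));
        }
        Integer[] order = new Integer[d];
        for (int j = 0; j < d; j++) order[j] = j;
        Arrays.sort(order, (a, b) -> Double.compare(score[b], score[a]));
        return order; // most discriminative feature first
    }

    public static void main(String[] args) {
        int[][] X = {{1, 0, 1}, {1, 1, 1}, {0, 0, 1}, {0, 1, 1}};
        int[] y = {1, 1, 0, 0};
        System.out.println(Arrays.toString(rankFeatures(X, y))); // feature 0 ranked first
    }
}
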
38. Feedback      Matching    There is a need to promote drastically increased levels of 2012
    Framework           for   interoperability of product data across a broad spectrum
    Semantic                  of stakeholders, while ensuring that the semantics of
    Interoperability     of   product knowledge are preserved, and when necessary,
    Product Data              translated. In order to achieve this, multiple methods
                              have been proposed to determine semantic maps across
                              concepts from different representations. Previous
                              research has focused on developing different individual
                              matching methods, i.e., ones that compute mapping
                              based on a single matching measure. These efforts
                              assume that some weighted combination can be used to
                              obtain the overall maps. We analyze the problem of
                              combination of multiple individual methods to determine
                              requirements specific to product development and
                              propose a solution approach called FEedback Matching
                              Framework with Implicit Training (FEMFIT). FEMFIT
                              provides the ability to combine the different matching
                              approaches using ranking Support Vector Machine
                              (ranking SVM). The method accounts for nonlinear
                              relations between the individual matchers. It overcomes
                              the need to explicitly train the algorithm before it is used,
                              and further reduces the decision-making load on the
                              domain expert by implicitly capturing the expert's
                            decisions without requiring them to input real numbers on
                              similarity. We apply FEMFIT to a subset of product
                              constraints across a commercial system and the ISO
                            standard. We observe that FEMFIT demonstrates better
                              accuracy (average correctness of the results) and stability
                              (deviation from the average) in comparison with other
                              existing combination methods commonly assumed to be
                              valid in this domain.
39. Fractal-Based Intrinsic   Dimensionality reduction is an important step in 2012
    Dimension Estimation      knowledge discovery in databases. Intrinsic dimension
    and Its Application in    indicates the number of variables necessary to describe a
    Dimensionality            data set. Two methods, box-counting dimension and
    Reduction                 correlation dimension, are commonly used for intrinsic
                              dimension estimation. However, the robustness of these
two methods has not been rigorously studied. This paper
                            demonstrates that correlation dimension is more robust
                            with respect to data sample size. In addition, instead of
                            using a user-selected distance d, we propose a new
                            approach to capture all log-log pairs of a data set to more
                            precisely estimate the correlation dimension. Systematic
                            experiments are conducted to study factors that influence
                            the computation of correlation dimension, including
                            sample size, the number of redundant variables, and the
                            portion of log-log plot used for calculation. Experiments
                            on real-world data sets confirm the effectiveness of
                            intrinsic dimension estimation with our improved
                            method. Furthermore, a new supervised dimensionality
                            reduction method based on intrinsic dimension
                            estimation was introduced and validated.
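
A compact sketch of correlation dimension estimation as a worked example: compute the correlation sum C(r), the fraction of point pairs closer than r, for several radii, and estimate the slope of log C(r) against log r by least squares. The radii and the synthetic 2D sample are arbitrary choices for illustration, not the paper's improved procedure.

import java.util.*;

// Sketch of correlation dimension estimation: the slope of log C(r) versus
// log r approximates the intrinsic dimension of the data.
public class CorrelationDimensionSketch {

    static double correlationSum(double[][] pts, double r) {
        int count = 0, pairs = 0;
        for (int i = 0; i < pts.length; i++)
            for (int j = i + 1; j < pts.length; j++) {
                pairs++;
                double d2 = 0;
                for (int k = 0; k < pts[i].length; k++) {
                    double diff = pts[i][k] - pts[j][k];
                    d2 += diff * diff;
                }
                if (Math.sqrt(d2) < r) count++;
            }
        return (double) count / pairs;
    }

    static double slope(double[] x, double[] y) {     // simple least-squares slope
        double mx = Arrays.stream(x).average().orElse(0), my = Arrays.stream(y).average().orElse(0);
        double num = 0, den = 0;
        for (int i = 0; i < x.length; i++) { num += (x[i] - mx) * (y[i] - my); den += (x[i] - mx) * (x[i] - mx); }
        return num / den;
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        double[][] pts = new double[500][2];           // uniform points in the unit square, true dimension 2
        for (double[] p : pts) { p[0] = rnd.nextDouble(); p[1] = rnd.nextDouble(); }

        double[] radii = {0.05, 0.1, 0.2, 0.4};
        double[] logR = new double[radii.length], logC = new double[radii.length];
        for (int i = 0; i < radii.length; i++) {
            logR[i] = Math.log(radii[i]);
            logC[i] = Math.log(correlationSum(pts, radii[i]));
        }
        System.out.printf("estimated intrinsic dimension ~ %.2f%n", slope(logR, logC));
    }
}
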
40. Horizontal              Preparing a data set for analysis is generally the most 2012
    Aggregations in SQL     time consuming task in a data mining project, requiring
    to Prepare Data Sets    many complex SQL queries, joining tables, and
    for   Data    Mining    aggregating columns. Existing SQL aggregations have
    Analysis                limitations to prepare data sets because they return one
                            column per aggregated group. In general, a significant
                            manual effort is required to build data sets, where a
                            horizontal layout is required. We propose simple, yet
                            powerful, methods to generate SQL code to return
                            aggregated columns in a horizontal tabular layout,
                            returning a set of numbers instead of one number per
                            row. This new class of functions is called horizontal
                            aggregations. Horizontal aggregations build data sets
                            with a horizontal denormalized layout (e.g., point-
                            dimension, observation-variable, instance-feature), which
                            is the standard layout required by most data mining
                            algorithms. We propose three fundamental methods to
                            evaluate horizontal aggregations: CASE: Exploiting the
                            programming CASE construct; SPJ: Based on standard
                            relational algebra operators (SPJ queries); PIVOT: Using
                            the PIVOT operator, which is offered by some DBMSs.
                            Experiments with large tables compare the proposed
                            query evaluation methods. Our CASE method has similar
                            speed to the PIVOT operator and it is much faster than
                            the SPJ method. In general, the CASE and PIVOT
                            methods exhibit linear scalability, whereas the SPJ
                            method does not.
41. Low-Rank       Kernel   Traditional clustering techniques are inapplicable to 2012
    Matrix Factorization    problems where the relationships between data points
    for       Large-Scale   evolve over time. Not only is it important for the
    Evolutionary            clustering algorithm to adapt to the recent changes in the
Clustering               evolving data, but it also needs to take the historical
                             relationship between the data points into consideration.
                             In this paper, we propose ECKF, a general framework for
                             evolutionary clustering large-scale data based on low-
                             rank kernel matrix factorization. To the best of our
                             knowledge, this is the first work that clusters large
                             evolutionary data sets by the amalgamation of low-rank
                             matrix approximation methods and matrix factorization-
                             based clustering. Since the low-rank approximation
                             provides a compact representation of the original matrix,
                             and especially, the near-optimal low-rank approximation
                             can preserve the sparsity of the original data, ECKF
                             gains computational efficiency and hence is applicable to
                             large evolutionary data sets. Moreover, matrix
                             factorization-based methods have been shown to
                             effectively cluster high-dimensional data in text mining
                             and multimedia data analysis. From a theoretical
                             standpoint, we mathematically prove the convergence
                             and correctness of ECKF, and provide detailed analysis
                             of its computational efficiency (both time and space).
                             Through extensive experiments performed on synthetic
                             and real data sets, we show that ECKF outperforms the
                             existing methods in evolutionary clustering.
42. Mining         Online    Posting reviews online has become an increasingly 2012
    Reviews for Predicting   popular way for people to express opinions and
    Sales Performance A      sentiments toward the products bought or services
    Case Study in the        received. Analyzing the large volume of online reviews
    Movie Domain             available would produce useful actionable knowledge
                             that could be of economic value to vendors and other
                             interested parties. In this paper, we conduct a case study
                             in the movie domain, and tackle the problem of mining
                             reviews for predicting product sales performance. Our
                             analysis shows that both the sentiments expressed in the
                             reviews and the quality of the reviews have a significant
                             impact on the future sales performance of products in
                             question. For the sentiment factor, we propose Sentiment
                             PLSA (S-PLSA), in which a review is considered as a
                             document generated by a number of hidden sentiment
                             factors, in order to capture the complex nature of
                             sentiments. Training an S-PLSA model enables us to
                             obtain a succinct summary of the sentiment information
                             embedded in the reviews. Based on S-PLSA, we
                             propose ARSA, an Autoregressive Sentiment-Aware
                             model for sales prediction. We then seek to further
                             improve the accuracy of prediction by considering the
                             quality factor, with a focus on predicting the quality of a
review in the absence of user-supplied indicators, and
                             present ARSQA, an Autoregressive Sentiment and
                             Quality Aware model, to utilize sentiments and quality
                             for predicting product sales performance. Extensive
                             experiments conducted on a large movie data set confirm
                             the effectiveness of the proposed approach.
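
A hedged sketch of the autoregressive, sentiment-aware idea: the next value of sales is a weighted sum of recent sales plus a term driven by review sentiment. The coefficients below are assumed rather than fitted; the paper learns them from data, and the real model uses sentiment factors from S-PLSA rather than a single score.

// Sketch of an autoregressive, sentiment-aware prediction: tomorrow's sales
// are a linear combination of the last p days of sales plus a sentiment
// signal extracted from that day's reviews. Coefficients are assumed here.
public class SentimentAwareForecastSketch {

    static double predict(double[] pastSales, double[] arCoeffs,
                          double sentiment, double sentimentCoeff) {
        double next = 0;
        for (int i = 0; i < arCoeffs.length; i++) {
            // pastSales[pastSales.length - 1 - i] is the value i days back
            next += arCoeffs[i] * pastSales[pastSales.length - 1 - i];
        }
        return next + sentimentCoeff * sentiment;
    }

    public static void main(String[] args) {
        double[] sales = {100, 120, 130};   // last three days of box office (arbitrary units)
        double[] ar = {0.6, 0.3};           // weights for lag 1 and lag 2
        double sentiment = 0.8;             // aggregate review sentiment in [0, 1]
        System.out.println(predict(sales, ar, sentiment, 25.0));
    }
}
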
   43. Privacy    Preserving Privacy preservation is important for machine learning 2012
       Decision         Tree and data mining, but measures designed to protect private
       Learning        Using information often result in a trade-off: reduced utility of
       Unrealized Data Sets  the training samples. This paper introduces a privacy
                             preserving approach that can be applied to decision tree
                             learning, without concomitant loss of accuracy. It
                             describes an approach to the preservation of the privacy
                             of collected data samples in cases where information
                             from the sample database has been partially lost. This
                             approach converts the original sample data sets into a
                             group of unreal data sets, from which the original
                             samples cannot be reconstructed without the entire group
                             of unreal data sets. Meanwhile, an accurate decision tree
                             can be built directly from those unreal data sets. This
                             novel approach can be applied directly to the data storage
                             as soon as the first sample is collected. The approach is
                             compatible with other privacy preserving approaches,
                             such as cryptography, for extra protection.
TECHNOLOGY                          : JAVA

DOMAIN                              : IEEE TRANSACTIONS ON MOBILE COMPUTING



S.NO    TITLES                    ABSTRACT                                                YEAR
   1.   The         Boomerang     We present the boomerang protocol to efficiently retain 2012
        Protocol: Tying Data to   information at a particular geographic location in a
        Geographic Locations in   sparse network of highly mobile nodes without using
        Mobile     Disconnected   infrastructure networks. To retain information around
        Networks                  certain physical location, each mobile device passing
                                  that location will carry the information for a short
                                  while.
   2.   Nature-Inspired  Self-    In this paper, we present new models and algorithms 2012
        Organization, Control,    for control and optimization of a class of next
        and Optimization in       generation communication networks: Hierarchical
        Heterogeneous Wireless    Heterogeneous Wireless Networks (HHWNs), under
        Networks                  real-world physical constraints. Two biology-inspired
                                  techniques, a Flocking Algorithm (FA) and a Particle
Swarm Optimizer (PSO), are investigated in this
                                context.
3.   A      Cost     Analysis   In this paper, we have developed an analytical framework 2012
     Framework for NEMO         to measure the costs of the basic protocol for Network
     Prefix Delegation-Based    Mobility (NEMO), and four representative prefix
      Schemes                    delegation-based schemes. Our results show that the cost
                                of packet delivery through the partially optimized route
                                dominates over other costs.
4.   OMAN: A Mobile Ad          In this paper, we present a high-level view of the 2012
     Hoc Network Design         OMAN architecture, review specific mathematical
     System                     models used in the network representation, and show
                                how OMAN is used to evaluate tradeoffs in MANET
                                design. Specifically, we cover three case studies of
                                optimization: 1) robust power control under uncertain
                                channel information for a single physical-layer
                                snapshot; 2) scheduling with the availability of
                                directional radiation patterns; and 3) optimizing
                                topology through movement planning of relay nodes.
5.   Energy-Efficient           In this paper, we formulate the resource allocation 2012
     Cooperative       Video    problem for general multihop multicast network flows
     Distribution       with    and derive the optimal solution that minimizes the total
     Statistical        QoS     energy consumption while guaranteeing a statistical
     Provisions         over    end-to-end delay bound on each network path.
     Wireless Networks
6.   Leveraging          the    In this paper, we consider the implications of spectrum 2012
     Algebraic Connectivity     heterogeneity on connectivity and routing in a
     of a Cognitive Network     Cognitive Radio Ad Hoc Network (CRAHN). We
     for Routing Design         study the Laplacian spectrum of the CRAHN graph
                                when the activity of primary users is considered. We
                                introduce the cognitive algebraic connectivity, i.e., the
                                second smallest eigenvalue of the Laplacian of a graph,
                                in a cognitive scenario.
7.   Efficient        Virtual   In this paper, we will study a directional virtual 2012
     Backbone Construction      backbone (VB) in the network where directional
     with     Routing   Cost    antennas are used. When constructing a VB, we will
     Constraint in Wireless     take routing and broadcasting into account since they
     Networks          Using    are two common operations in wireless networks.
     Directional Antennas       Hence, we will study a VB with guaranteed routing
      costs, named α Minimum rOuting Cost Directional VB
      (α-MOC-DVB).
8.   Stateless    Multicast     In this paper, we have developed a stateless receiver- 2012
     Protocol for Ad Hoc        based multicast (RBMulticast) protocol that simply
     Networks.                  uses a list of the multicast members' (e.g., sinks')
                                addresses, embedded in packet headers, to enable
                                receivers to decide the best way to forward the
                                multicast traffic.
9.  Detection of Selfish CCA tuning can be exploited by selfish nodes to obtain 2012
    Manipulation of Carrier an unfair share of the available bandwidth.
    Sensing    in    802.11 Specifically, a selfish entity can manipulate the CCA
    Networks                threshold to ignore ongoing transmissions; this
                            increases the probability of accessing the medium and
                            provides the entity a higher, unfair share of the
                            bandwidth.
10. Handling Selfishness in In a mobile ad hoc network, the mobility and resource 2012
    Replica Allocation over constraints of mobile nodes may lead to network
    a Mobile Ad Hoc partitioning or performance degradation. Several data
    Network                 replication techniques have been proposed to minimize
                            performance degradation. Most of them assume that all
                            mobile nodes collaborate fully in terms of sharing their
                            memory space. In reality, however, some nodes may
                            selfishly decide only to cooperate partially, or not at
                            all, with other nodes. These selfish nodes could then
                            reduce the overall data accessibility in the network. In
                            this paper, we examine the impact of selfish nodes in a
                            mobile ad hoc network from the perspective of replica
                            allocation. We term this selfish replica allocation. In
                            particular, we develop a selfish node detection
                            algorithm that considers partial selfishness and novel
                            replica allocation techniques to properly cope with
                            selfish replica allocation. The conducted simulations
                            demonstrate that the proposed approach outperforms
                            traditional cooperative replica allocation techniques in
                            terms of data accessibility,
                            communication cost, and average query delay.
11. Acknowledgment-Based We propose a broadcast algorithm suitable for a wide 2012
    Broadcast Protocol for range of vehicular scenarios, which only employs local
    Reliable and Efficient information acquired via periodic beacon messages,
    Data Dissemination in containing acknowledgments of the circulated
    Vehicular    Ad    Hoc broadcast messages. Each vehicle decides whether it
    Networks                belongs to a connected dominating set (CDS). Vehicles
                            in the CDS use a shorter waiting period before possible
                            retransmission. At time-out expiration, a vehicle
                            retransmits if it is aware of at least one neighbor in
                            need of the message. To address intermittent
                            connectivity and appearance of new neighbors, the
                            evaluation timer can be restarted. Our algorithm
                            resolves propagation at road intersections without any
                            need to even recognize intersections. It is inherently
                            adaptable to different mobility regimes, without the
                            need to classify network or vehicle speeds. In a
                            thorough simulation-based performance evaluation, our
                            algorithm is shown to provide higher reliability and
message efficiency than existing approaches for
                            nonsafety applications.
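
A minimal sketch of the per-vehicle decision described above, with hypothetical names: beacons carry acknowledgments, CDS members use the shorter waiting period, and at time-out a vehicle retransmits only if some neighbor has not yet acknowledged the message.

import java.util.*;

// Sketch of the retransmission decision: beacons carry acknowledgments, and
// at time-out a vehicle rebroadcasts only if some neighbor is still missing
// the message; vehicles in the connected dominating set wait less.
public class AckBroadcastSketch {

    static long waitingTime(boolean inCds, long shortMs, long longMs) {
        return inCds ? shortMs : longMs;
    }

    static boolean shouldRetransmit(Set<String> neighbors, Set<String> ackedBy) {
        for (String n : neighbors)
            if (!ackedBy.contains(n)) return true;   // someone still needs the message
        return false;
    }

    public static void main(String[] args) {
        Set<String> neighbors = Set.of("v2", "v3", "v4");
        Set<String> acked = Set.of("v2", "v3");      // learned from received beacons
        System.out.println(waitingTime(true, 20, 100) + " ms before deciding");
        System.out.println("retransmit = " + shouldRetransmit(neighbors, acked)); // true: v4 is missing it
    }
}
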
12. Toward Reliable Data   This paper addresses the problem of delivering data 2012
    Delivery for Highly    packets for highly dynamic mobile ad hoc networks in
    Dynamic Mobile Ad      a reliable and timely manner. Most existing ad hoc
    Hoc Networks           routing protocols are susceptible to node mobility,
                           especially for large-scale networks. Driven by this
                           issue, we propose an efficient Position-based
                           Opportunistic Routing (POR) protocol which takes
                           advantage of the stateless property of geographic
                           routing and the broadcast nature of wireless medium.
                           When a data packet is sent out, some of the neighbor
                           nodes that have overheard the transmission will serve
                            as forwarding candidates, and take turns to forward the
                           packet if it is not relayed by the specific best forwarder
                           within a certain period of time. By utilizing such in-
                           the-air backup, communication is maintained without
                           being interrupted. The additional latency incurred by
                           local route recovery is greatly reduced and the
                           duplicate relaying caused by packet reroute is also
                            decreased. In the case of a communication hole, a Virtual
                           Destination-based Void Handling (VDVH) scheme is
                           further proposed to work together with POR. Both
                           theoretical analysis and simulation results show that
                           POR achieves excellent performance even under high
                           node mobility with acceptable overhead and the new
                           void handling scheme also works well.
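
A small sketch of forwarding-candidate selection under the usual geographic-routing assumption that node positions are known; it is an illustration of the candidate-ordering idea, not the POR protocol itself: neighbors closer to the destination than the sender are kept and ordered by remaining distance, so the first entry acts as the best forwarder and the rest as backups.

import java.util.*;

// Sketch of forwarding-candidate selection in position-based opportunistic
// routing: neighbors that are closer to the destination than the sender are
// kept, ordered by remaining distance to the destination; the first is the
// best forwarder and the rest act as backups with increasing hold-off delays.
public class PorCandidateSketch {

    record Node(String id, double x, double y) {}

    static double dist(Node a, Node b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }

    static List<Node> candidates(Node sender, Node dest, List<Node> neighbors) {
        double senderToDest = dist(sender, dest);
        return neighbors.stream()
                .filter(n -> dist(n, dest) < senderToDest)                        // positive progress only
                .sorted(Comparator.comparingDouble((Node n) -> dist(n, dest)))    // best forwarder first
                .toList();
    }

    public static void main(String[] args) {
        Node sender = new Node("S", 0, 0), dest = new Node("D", 100, 0);
        List<Node> neighbors = List.of(
                new Node("A", 30, 5), new Node("B", 60, -10), new Node("C", -20, 0));
        System.out.println(candidates(sender, dest, neighbors)); // B then A; C moves away from D
    }
}
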
13. Protecting    Location While many protocols for sensor network security 2012
    Privacy    in   Sensor provide confidentiality for the content of messages,
    Networks against a contextual information usually remains exposed. Such
    Global Eavesdropper    contextual information can be exploited by an
                           adversary to derive sensitive information such as the
                           locations of monitored objects and data sinks in the
                           field. Attacks on these components can significantly
                           undermine any network application. Existing
                           techniques defend against the leakage of location information
                           to a limited adversary who can only observe
                           network traffic in a small region. However, a stronger
                           adversary, the global eavesdropper, is realistic and can
                           defeat these existing techniques. This paper first
                           formalizes the location privacy issues in sensor
                           networks under this strong adversary model and
                           computes a lower bound on the communication
                           overhead needed for achieving a given level of location
                           privacy. The paper then proposes two techniques to
provide location privacy to monitored objects (source-
                              location privacy)—periodic collection and source
                              simulation—and two techniques to provide location
                              privacy to data sinks (sink-location privacy)—sink
                              simulation and backbone flooding. These techniques
                              provide trade-offs between privacy, communication
                              cost, and latency. Through analysis and simulation, we
                              demonstrate that the proposed techniques are efficient
                              and effective for source and sink-location privacy in
                              sensor networks.
14. local broadcast          There are two main approaches, static and dynamic, to 2012
    algorithms in wireless broadcast algorithms in wireless ad hoc networks. In
    ad      hoc   networks the static approach, local algorithms determine the
    reducing the number of status (forwarding/nonforwarding) of each node
    transmissions          proactively based on local topology information and a
                           globally known priority function. In this paper, we first
                           show that local broadcast algorithms based on the static
                           approach cannot achieve a good approximation factor
                           to the optimum solution (an NP-hard problem).
                           However, we show that a constant approximation
                           factor is achievable if (relative) position information is
                           available. In the dynamic approach, local algorithms
                           determine the status of each node “on-the-fly” based on
                           local topology information and broadcast state
                           information. Using the dynamic approach, it was
                           recently shown that local broadcast algorithms can
                           achieve a constant approximation factor to the
                           optimum solution when (approximate) position
                           information is available. However, using position
                           information can simplify the problem. Also, in some
                           applications it may not be practical to have position
                           information. Therefore, we wish to know whether local
                           broadcast algorithms based on the dynamic approach
                           can achieve a constant approximation factor without
                           using position information. We answer this question in
                           the positive – we design a local broadcast algorithm in
                           which the status of each node is decided “on-the-fly”
                           and prove that the algorithm can achieve both full
                           delivery and a constant approximation to the optimum
                           solution.
15. Compressed-Sensing-    This article presents the design of a networked system 2012
    Enabled         Video for joint compression, rate control and error correction
    Streaming for Wireless of video over resource-constrained embedded devices
    Multimedia     Sensor based on the theory of compressed sensing. The
Networks               objective of this work is to design a cross-layer system
                           that jointly controls the video encoding rate, the
                           transmission rate, and the channel coding rate to
                           maximize the received video quality. First, compressed
                           sensing based video encoding for transmission over
                           wireless multimedia sensor networks (WMSNs) is
                           studied. It is shown that compressed sensing can
                           overcome many of the current problems of video over
                           WMSNs, primarily encoder complexity and low
                           resiliency to channel errors. A rate controller is then
                           developed with the objective of maintaining fairness
                           among video streams while maximizing the received
                           video quality. It is shown that the rate of compressed
                           sensed video can be predictably controlled by varying
                           only the compressed sensing sampling rate. It is then
                           shown that the developed rate controller can be
                           interpreted as the iterative solution to a convex
                           optimization problem representing the optimization of
                           the rate allocation across the network. The error
                           resiliency properties of compressed sensed images and
                           videos are then studied, and an optimal error detection
                           and correction scheme is presented for video
                           transmission over lossy channels. Finally, the entire
                           system is evaluated through simulation and testbed
                           evaluation. The rate controller is shown to outperform
                           existing TCP-friendly rate control schemes in terms of
                           both fairness and received video quality. Testbed
                           results also show that the rates converge to stable
                           values in real channels.
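Note: the control knob described above is the compressed-sensing sampling rate. The sketch below is only an illustration of that idea, assuming a hypothetical controller class and a simple proportional update; the class, field names, and constants are not from the paper.

    // Illustrative only: adjusts the CS sampling rate so the encoded video rate
    // tracks a target transmission rate. Names and constants are hypothetical.
    public class CsRateController {
        private double samplingRate = 0.25;      // fraction of pixels sampled per frame
        private static final double MIN_RATE = 0.05, MAX_RATE = 1.0;
        private static final double GAIN = 0.1;  // controller step size

        // bitsPerSample: average encoded bits per CS measurement (assumed known)
        public double update(double targetBps, double frameRate,
                             int pixelsPerFrame, double bitsPerSample) {
            double currentBps = samplingRate * pixelsPerFrame * bitsPerSample * frameRate;
            double error = (targetBps - currentBps) / targetBps;   // relative rate error
            samplingRate = clamp(samplingRate * (1.0 + GAIN * error));
            return samplingRate;
        }

        private static double clamp(double r) {
            return Math.max(MIN_RATE, Math.min(MAX_RATE, r));
        }
    }

Because the encoded rate scales predictably with the sampling rate, even this simple feedback loop converges toward the target rate; the paper's controller additionally coordinates rates across streams for fairness.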
16. Hop-by-Hop Routing in Wireless Mesh Network (WMN) has become an 2012
    Wireless         Mesh important edge network to provide Internet access to
    Networks          with remote areas and wireless connections in a
    Bandwidth Guarantees   metropolitan scale. In this paper, we study the problem
                           of identifying the maximum available bandwidth path,
                           a fundamental issue in supporting quality-of-service in
                           WMNs. Due to interference among links, bandwidth, a
                           well-known bottleneck metric in wired networks, is
                           neither concave nor additive in wireless networks. We
                           propose a new path weight which captures the available
                           path bandwidth information. We formally prove that
                           our hop-by-hop routing protocol based on the new path
                           weight satisfies the consistency and loop-freeness
                           requirements. The consistency property guarantees that
                           each node makes a proper packet forwarding decision,
                           so that a data packet does traverse over the intended
                           path. Our extensive simulation experiments also show
that our proposed path weight outperforms existing
                               path metrics in identifying high-throughput paths
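Note: because bandwidth is neither concave nor additive under interference, the available bandwidth of a path cannot be taken as the smallest single-link bandwidth. The sketch below shows one common approximation, in which any run of consecutive links within carrier-sensing range is assumed to share airtime; it illustrates the issue only and is not necessarily the exact path weight proposed in the paper.

    // Illustrative approximation: links within `window` consecutive hops are assumed
    // to interfere, so they share airtime; the path bandwidth is the minimum over
    // all such windows of the harmonic combination of the link bandwidths.
    public static double approxPathBandwidth(double[] linkBw, int window) {
        double best = Double.MAX_VALUE;
        for (int start = 0; start < linkBw.length; start++) {
            double inverseSum = 0.0;
            int end = Math.min(linkBw.length, start + window);
            for (int i = start; i < end; i++) {
                inverseSum += 1.0 / linkBw[i];   // each interfering link needs its share of airtime
            }
            best = Math.min(best, 1.0 / inverseSum);
        }
        return best;
    }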
17. Handling Selfishness in    In a mobile ad hoc network, the mobility and resource 2012
    Replica Allocation over    constraints of mobile nodes may lead to network
    a Mobile Ad Hoc            partitioning or performance degradation. Several data
    Network-         Mobile    replication techniques have been proposed to minimize
    Computing,      projects   performance degradation. Most of them assume that all
    2012                       mobile nodes collaborate fully in terms of sharing their
                               memory space. In reality, however, some nodes may
                               selfishly decide only to cooperate partially, or not at
                               all, with other nodes. These selfish nodes could then
                               reduce the overall data accessibility in the network. In
                               this paper, we examine the impact of selfish nodes in a
                               mobile ad hoc network from the perspective of replica
                               allocation. We term this selfish replica allocation. In
                               particular, we develop a selfish node detection
                               algorithm that considers partial selfishness and novel
                               replica allocation techniques to properly cope with
                               selfish replica allocation. The conducted simulations
                               demonstrate the proposed approach outperforms
                               traditional cooperative replica allocation techniques in
                               terms of data accessibility, communication cost, and
                               average query delay.
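Note: the following toy scoring rule illustrates how partial selfishness can be quantified from observed replica-sharing behaviour; it is invented for illustration, and the paper's detection algorithm differs in detail.

    // Illustrative "degree of selfishness": the fraction of a node's advertised
    // replica memory that it actually serves to others over a measurement window.
    public class SelfishnessDetector {
        public static double selfishnessScore(long advertisedReplicaSpace,
                                               long spaceActuallyServed) {
            if (advertisedReplicaSpace <= 0) return 1.0;           // shares nothing: fully selfish
            double shared = Math.min(spaceActuallyServed, advertisedReplicaSpace);
            return 1.0 - shared / (double) advertisedReplicaSpace; // 0 = cooperative, 1 = selfish
        }

        // Nodes whose score exceeds a threshold are excluded from replica allocation.
        public static boolean isSelfish(double score, double threshold) {
            return score > threshold;
        }
    }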
18. Toward Reliable Data       This paper addresses the problem of delivering data 2012
    Delivery for Highly        packets for highly dynamic mobile ad hoc networks in
    Dynamic Mobile Ad          a reliable and timely manner. Most existing ad hoc
    Hoc Networks- Mobile       routing protocols are susceptible to node mobility,
    Computing,    projects     especially for large-scale networks. Driven by this
    2012                       issue, we propose an efficient Position-based
                               Opportunistic Routing (POR) protocol which takes
                               advantage of the stateless property of geographic
                               routing and the broadcast nature of wireless medium.
                               When a data packet is sent out, some of the neighbor
                               nodes that have overheard the transmission will serve
                               as forwarding candidates, and take turn to forward the
                               packet if it is not relayed by the specific best forwarder
                               within a certain period of time. By utilizing such in-
                               the-air backup, communication is maintained without
                               being interrupted. The additional latency incurred by
                               local route recovery is greatly reduced and the
                               duplicate relaying caused by packet reroute is also
                               decreased. In the case of communication hole, a Virtual
                               Destination-based Void Handling (VDVH) scheme is
                               further proposed to work together with POR. Both
                               theoretical analysis and simulation results show that
                               POR achieves excellent performance even under high
node mobility with acceptable overhead and the new
                              void handling scheme also works well
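Note: the "in-the-air backup" works because each overhearing candidate arms a timer according to its rank and forwards only if the packet has not already been relayed. The sketch below captures that timer logic; the class name, slot length, and callbacks are assumptions, not the paper's code.

    import java.util.Timer;
    import java.util.TimerTask;

    // Illustrative forwarding-candidate logic for opportunistic routing: the best
    // forwarder sends immediately; candidate i waits i * SLOT_MS and forwards only
    // if it has not overheard the packet being relayed in the meantime.
    public class ForwardingCandidate {
        private static final long SLOT_MS = 5;
        private volatile boolean packetAlreadyRelayed = false;

        public void onOverhear(int myRank, Runnable forwardPacket) {
            new Timer(true).schedule(new TimerTask() {
                @Override public void run() {
                    if (!packetAlreadyRelayed) {
                        forwardPacket.run();     // act as backup forwarder
                    }
                }
            }, myRank * SLOT_MS);
        }

        public void onRelayOverheard() {         // cancel duty once someone else relayed
            packetAlreadyRelayed = true;
        }
    }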
19. Fast Data Collection in   We investigate the following fundamental question – 2012
    Tree-Based     Wireless   how fast can information be collected from a wireless
    Sensor        Networks-   sensor network organized as tree? To address this, we
    Mobile     Computing,     explore and evaluate a number of different techniques
    projects 2012             using realistic simulation models under the many-to-
                              one communication paradigm known as convergecast.
                              We first consider time scheduling on a single frequency
                              channel with the aim of minimizing the number of time
                              slots required (schedule length) to complete a
                              convergecast. Next, we combine scheduling with
                              transmission power control to mitigate the effects of
                              interference, and show that while power control helps
                              in reducing the schedule length under a single
                              frequency, scheduling transmissions using multiple
                              frequencies is more efficient. We give lower bounds on
                              the schedule length when interference is completely
                              eliminated, and propose algorithms that achieve these
                              bounds. We also evaluate the performance of various
                              channel assignment methods and find empirically that
                              for moderate size networks of about 100 nodes, the use
                              of multi-frequency scheduling can suffice to eliminate
                              most of the interference. Then, the data collection rate
                              no longer remains limited by interference but by the
                              topology of the routing tree. To this end, we construct
                              degree-constrained spanning trees and capacitated
                              minimal spanning trees, and show significant
                              improvement in scheduling performance over different
                              deployment densities. Lastly, we evaluate the impact of
                              different interference and channel models on the
                              schedule length.
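Note: for raw-data convergecast on a tree, the lower bound referred to above is usually stated as follows in this line of work (notation assumed for orientation, not copied from the paper):

    % N  : total number of source nodes in the tree
    % n_k: number of nodes in the largest top-level subtree (branch) of the sink
    \[
      \text{schedule length} \;\ge\; \max\bigl(2 n_k - 1,\; N\bigr)
    \]
    % i.e. once interference is eliminated, the sink's busiest branch and the
    % sink's own receive slots are what limit how fast all N packets arrive.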
20. Protecting    Location    We consider the location privacy issue in sensor       2012
    Privacy    in   Sensor    networks under a strong global eavesdropper adversary
    Networks Against a        model. We propose two techniques to provide location
    Global Eavesdropper       privacy to monitored objects (source-location privacy),
    JAVA                      periodic collection and source simulation, and two
                              techniques to provide location privacy to data sinks
                              (sink-location privacy), sink simulation and backbone
                              flooding.
21. Energy-Efficient          For real-time video broadcast where multiple users are 2012
    Cooperative      Video    interested in the same content, mobile-to-mobile
    Distribution      with    cooperation can be utilized to improve delivery
    Statistical       QoS     efficiency and reduce network utilization. Under such
    Provisions        over    cooperation, however, real-time video transmission
    Wireless Networks –       requires end-to-end delay bounds. Due to the inherently
    projects 2012             stochastic nature of wireless fading channels,
deterministic delay bounds are prohibitively difficult to
                              guarantee. For a scalable video structure, an alternative
                              is to provide statistical guarantees using the concept of
                              effective capacity/bandwidth by deriving quality of
                              service exponents for each video layer. Using this
                              concept, we formulate the resource allocation problem
                              for general multihop multicast network flows and
                              derive the optimal solution that minimizes the total
                              energy consumption while guaranteeing a statistical
                              end-to-end delay bound on each network path. A
                              method is described to compute the optimal resource
                              allocation at each node in a distributed fashion.
                              Furthermore,       we      propose    low     complexity
                              approximation algorithms for energy-efficient flow
                              selection from the set of directed acyclic graphs
                              forming the candidate network flows. The flow
                              selection and resource allocation process is adapted for
                              each video frame according to the channel conditions
                              on the network links. Considering different network
                              topologies, results demonstrate that the proposed
                              resource allocation and flow selection algorithms
                              provide notable performance gains with small
                              optimality gaps at a low computational cost.
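Note: the statistical delay guarantee rests on the standard notion of effective capacity. For a cumulative service process S(t) and QoS exponent theta it is defined as follows (standard definition; the symbols are assumptions here, not taken from the paper):

    % S(t)  : cumulative service offered by the fading channel in [0, t]
    % theta : QoS exponent; larger theta corresponds to a stricter delay guarantee
    \[
      E_C(\theta) \;=\; -\lim_{t \to \infty} \frac{1}{\theta\, t}
          \log \mathbb{E}\!\left[ e^{-\theta S(t)} \right]
    \]
    % A flow whose arrival rate stays below E_C(theta) sees its delay-bound
    % violation probability decay roughly exponentially in theta times the bound.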
22. A Novel MAC Scheme        This paper proposes a novel medium access control 2012
    for        Multichannel   (MAC) scheme for multichannel cognitive radio (CR)
    Cognitive Radio Ad Hoc    ad hoc networks, which achieves high throughput of
    Networks                  CR system while protecting primary users (PUs)
                              effectively. In designing the MAC scheme, we consider
                              that the PU signal may cover only a part of the network
                               and the nodes can have different sensing results for
                              the same PU even on the same channel. By allowing
                              the nodes to use the channel on which the PU exists as
                              long as their transmissions do not disturb the PU, the
                              proposed MAC scheme fully utilizes the spectrum
                              access opportunity. To mitigate the hidden PU problem
                              inherent to multichannel CR networks where the PU
                              signal is detectable only to some nodes, the proposed
                              MAC scheme adjusts the sensing priorities of channels
                              at each node with the PU detection information of other
                              nodes and also limits the transmission power of a CR
                              node to the maximum allowable power for
                              guaranteeing the quality of service requirement of PU.
                              The performance of the proposed MAC scheme is
                              evaluated by using simulation. The simulation results
                              show that the CR system with the proposed MAC
accomplishes good performance in throughput and
                               packet delay, while protecting PUs properly
23. A Statistical Mechanics-   Characterizing the performance of ad hoc networks is 2012
    Based Framework to         one of the most intricate open challenges; conventional
    Analyze       Ad    Hoc    ideas based on information-theoretic techniques and
    Networks with Random       inequalities have not yet been able to successfully
    Access                     tackle this problem in its generality. Motivated thus, we
                               promote the totally asymmetric simple exclusion
                               process (TASEP), a particle flow model in statistical
                               mechanics, as a useful analytical tool to study ad hoc
                               networks with random access. Employing the TASEP
                               framework, we first investigate the average end-to-end
                               delay and throughput performance of a linear multihop
                               flow of packets. Additionally, we analytically derive
                               the distribution of delays incurred by packets at each
                               node, as well as the joint distributions of the delays
                               across adjacent hops along the flow. We then consider
                               more complex wireless network models comprising
                               intersecting flows, and propose the partial mean-field
                               approximation (PMFA), a method that helps tightly
                               approximate the throughput performance of the system.
                               We finally demonstrate via a simple example that the
                               PMFA procedure is quite general in that it may be used
                               to accurately evaluate the performance of ad hoc
                               networks with arbitrary topologies.
24. Acknowledgment-Based       We propose a broadcast algorithm suitable for a wide 2012
    Broadcast Protocol for     range of vehicular scenarios, which only employs local
    Reliable and Efficient     information acquired via periodic beacon messages,
    Data Dissemination in      containing acknowledgments of the circulated
    Vehicular   Ad    Hoc      broadcast messages. Each vehicle decides whether it
    Networks                   belongs to a connected dominating set (CDS). Vehicles
                               in the CDS use a shorter waiting period before possible
                               retransmission. At time-out expiration, a vehicle
                               retransmits if it is aware of at least one neighbor in
                               need of the message. To address intermittent
                               connectivity and appearance of new neighbors, the
                               evaluation timer can be restarted. Our algorithm
                               resolves propagation at road intersections without any
                               need to even recognize intersections. It is inherently
                               adaptable to different mobility regimes, without the
                               need to classify network or vehicle speeds. In a
                               thorough simulation-based performance evaluation, our
                               algorithm is shown to provide higher reliability and
                               message efficiency than existing approaches for
                               nonsafety applications.
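Note: a compact illustration of the retransmission rule described above. A vehicle retransmits at timeout only if its beacon-learned neighbor table shows at least one neighbor that has not acknowledged the message; the data structures and timing values below are hypothetical.

    import java.util.Map;
    import java.util.Set;

    // Illustrative time-out decision for acknowledgment-based broadcast:
    // neighborAcks maps each neighbor id to the set of message ids it has
    // acknowledged in its periodic beacons.
    public class AckBroadcastDecision {
        public static boolean shouldRetransmit(String messageId,
                                                Map<String, Set<String>> neighborAcks) {
            for (Set<String> acked : neighborAcks.values()) {
                if (!acked.contains(messageId)) {
                    return true;      // at least one neighbor still needs the message
                }
            }
            return false;             // all known neighbors have acknowledged it
        }

        // Vehicles in the connected dominating set use the shorter waiting period.
        public static long waitingPeriodMs(boolean inCds) {
            return inCds ? 20 : 60;   // illustrative values only
        }
    }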
25. Characterizing the        Cellular text messaging services are increasingly being 2012
    Security Implications of  relied upon to disseminate critical information during
    Third-Party Emergency     emergencies. Accordingly, a wide range of
    Alert Systems over        organizations including colleges and universities now
    Cellular Text Messaging   partner with third-party providers that promise to
    Services                  improve physical security by rapidly delivering such
                         messages. Unfortunately, these products do not work as
                         advertised due to limitations of cellular infrastructure
                         and therefore provide a false sense of security to their
                         users. In this paper, we perform the first extensive
                         investigation and characterization of the limitations of
                         an Emergency Alert System (EAS) using text messages
                          as a security incident response mechanism. We show
                          that emergency alert systems built on text messaging
                          not only cannot meet the 10-minute delivery requirement
                          mandated by the WARN Act, but can also potentially cause
                          other voice and SMS traffic to be blocked at rates
                         upward of 80 percent. We then show that our results
                         are representative of reality by comparing them to a
                         number of documented but not previously understood
                         failures. Finally, we analyze a targeted messaging
                         mechanism as a means of efficiently using currently
                         deployed infrastructure and third-party EAS. In so
                         doing, we demonstrate that this increasingly deployed
                         security infrastructure does not achieve its stated
                         requirements for large populations.
26. Converge Cast: On the In this paper, we define an ad hoc network where 2012
    Capacity and Delay multiple sources transmit packets to one destination as
    Tradeoffs            Converge-Cast network. We will study the capacity
                         delay tradeoffs assuming that n wireless nodes are
                         deployed in a unit square. For each session (the session
                         is a dataflow from k different source nodes to 1
                         destination node), k nodes are randomly selected as
                         active sources and each transmits one packet to a
                         particular destination node, which is also randomly
                         selected. We first consider the stationary case, where
                         capacity is mainly discussed and delay is entirely
                         dependent on the average number of hops. We find that
                         the per-node capacity is Θ (1/√(n log n)) (given
                         nonnegative functions f(n) and g(n): f(n) = O(g(n))
                         means there exist positive constants c and m such that
                         f(n) ≤ cg(n) for all n ≥ m; f(n)= Ω (g(n)) means there
                         exist positive constants c and m such that f(n) ≥ cg(n)
                         for all n ≥ m; f(n) = Θ (g(n)) means that both f(n) = Ω
                         (g(n)) and f(n) = O(g(n)) hold), which is the same as
                         that of unicast, presented in (Gupta and Kumar, 2000).
Then, node mobility is introduced to increase network
                            capacity, for which our study is performed in two steps.
                            The first step is to establish the delay in single-session
                            transmission. We find that the delay is Θ (n log k)
                            under 1-hop strategy, and Θ (n log k/m) under 2-hop
                            redundant strategy, where m denotes the number of
                            replicas for each packet. The second step is to find
                            delay and capacity in multisession transmission. We
                            reveal that the per-node capacity and delay for 2-hop
                            nonredundancy strategy are Θ (1) and Θ (n log k),
                            respectively. The optimal delay is Θ (√(n log k)+k)
                            with redundancy, corresponding to a capacity of Θ
                          (√((1/n log k) + (k/n log k))). Therefore, we obtain that
                            the capacity delay tradeoff satisfies delay/rate ≥ Θ (n
                            log k) for both strategies.
27. Cooperative Download We consider a complex (i.e., nonlinear) road scenario 2012
    in            Vehicular where users aboard vehicles equipped with
    Environments            communication       interfaces    are    interested    in
                            downloading large files from road-side Access Points
                            (APs). We investigate the possibility of exploiting
                            opportunistic encounters among mobile nodes so to
                            augment the transfer rate experienced by vehicular
                            downloaders. To that end, we devise solutions for the
                            selection of carriers and data chunks at the APs, and
                            evaluate them in real-world road topologies, under
                            different AP deployment strategies. Through extensive
                            simulations, we show that carry&forward transfers can
                            significantly increase the download rate of vehicular
                            users in urban/suburban environments, and that such a
                            result holds throughout diverse mobility scenarios, AP
                            placements and network loads
28. Detection of Selfish Recently, tuning the clear channel assessment (CCA) 2012
    Manipulation of Carrier threshold in conjunction with power control has been
    Sensing    in   802.11 considered for improving the performance of WLANs.
     Networks                However, we show that CCA tuning can be exploited
                            by selfish nodes to obtain an unfair share of the
                            available bandwidth. Specifically, a selfish entity can
                            manipulate the CCA threshold to ignore ongoing
                            transmissions; this increases the probability of
                            accessing the medium and provides the entity a higher,
                            unfair share of the bandwidth. We experiment on our
                            802.11 testbed to characterize the effects of CCA
                            tuning on both isolated links and in 802.11 WLAN
                            configurations.     We      focus     on     AP-client(s)
configurations, proposing a novel approach to detect
                              this misbehavior. A misbehaving client is unlikely to
                              recognize low power receptions as legitimate packets;
                              by intelligently sending low power probe messages, an
                              AP can efficiently detect a misbehaving node. Our key
                              contributions are: 1) We are the first to quantify the
                              impact of selfish CCA tuning via extensive
                              experimentation on various 802.11 configurations. 2)
                              We propose a lightweight scheme for detecting selfish
                              nodes that inappropriately increase their CCAs. 3) We
                              extensively evaluate our system on our testbed; its
                              accuracy is 95 percent while the false positive rate is
                               less than 5 percent.
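Note: the detection idea is that a client which has raised its CCA threshold will not decode low-power receptions, so an AP can probe at reduced power and flag clients that stop responding. The sketch below is schematic; the radio abstraction, probe counts, and threshold are made up for illustration.

    // Illustrative detector: send `probes` low-power probe frames to a client and
    // flag it as a suspected CCA manipulator if too few are acknowledged.
    public class CcaMisbehaviorDetector {
        public interface Radio {                       // hypothetical radio abstraction
            boolean sendLowPowerProbe(String clientMac);
        }

        public static boolean isSuspect(Radio radio, String clientMac,
                                        int probes, double minAckRatio) {
            int acked = 0;
            for (int i = 0; i < probes; i++) {
                if (radio.sendLowPowerProbe(clientMac)) {
                    acked++;
                }
            }
            double ratio = acked / (double) probes;
            return ratio < minAckRatio;                // e.g. minAckRatio = 0.5
        }
    }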
29. Distributed Throughput    We consider throughput-optimal power allocation in 2012
    Maximization        in    multi-hop wireless networks. The study of this problem
    Wireless Networks via     has been limited due to the non-convexity of the
    Random          Power     underlying optimization problems, that prohibits an
    Allocation                efficient solution even in a centralized setting. We take
                              a randomization approach to deal with this difficulty.
                              To this end, we generalize the randomization
                              framework originally proposed for input queued
                              switches to an SINR rate-based interference model.
                              Further, we develop distributed power allocation and
                              comparison algorithms that satisfy these conditions,
                              thereby achieving (nearly) 100% throughput. We
                              illustrate the performance of our proposed power
                              allocation solution through numerical investigation and
                              present several extensions for the considered problem.
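Note: the randomization framework referred to above follows the familiar "pick-and-compare" pattern: draw a random candidate allocation and keep it only if it improves a queue-weighted rate objective. The sketch below is a simplified, centralized version of that pattern; the SINR rate model and parameters are placeholders, not the paper's distributed algorithm.

    import java.util.Random;

    // Simplified pick-and-compare power allocation: at each slot, draw a random
    // power vector and keep it only if it increases the queue-weighted sum rate.
    public class RandomPowerAllocation {
        private final Random rng = new Random();

        public interface RateModel {                             // hypothetical rate model
            double rateOfLink(int link, double[] powerVector);
        }

        public double[] step(double[] currentPower, double[] queueBacklog,
                             double maxPower, RateModel model) {
            double[] candidate = new double[currentPower.length];
            for (int i = 0; i < candidate.length; i++) {
                candidate[i] = rng.nextDouble() * maxPower;      // random candidate power
            }
            return weightedRate(candidate, queueBacklog, model)
                 > weightedRate(currentPower, queueBacklog, model) ? candidate : currentPower;
        }

        private double weightedRate(double[] power, double[] backlog, RateModel model) {
            double sum = 0.0;
            for (int i = 0; i < power.length; i++) {
                sum += backlog[i] * model.rateOfLink(i, power);  // SINR-based rate (abstracted)
            }
            return sum;
        }
    }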
30. Efficient  Rendezvous     Recent research shows that significant energy saving 2012
    Algorithms          for   can be achieved in mobility-enabled wireless sensor
     Mobility-Enabled          networks (WSNs), where mobile base stations visit
     Wireless         Sensor   sensor nodes and collect data via short-range communications.
    Networks                  However, a major performance bottleneck of such
                              WSNs is the significantly increased latency in data
                              collection due to the low movement speed of mobile
                              base stations. To address this issue, we propose a
                              rendezvous-based data collection approach in which a
                              subset of nodes serve as rendezvous points that buffer
                               and aggregate data originating from sources and transfer
                               it to the base station when it arrives. This approach
                              combines the advantages of controlled mobility and in-
                              network data caching and can achieve a desirable
                              balance between network energy saving and data
                              collection delay. We propose efficient rendezvous
                              design algorithms with provable performance bounds
                              for mobile base stations with variable and fixed tracks,
respectively. The effectiveness of our approach is
                               validated through both theoretical analysis and
                               extensive simulations.
31. Efficient        Virtual   Directional antennas can divide the transmission range 2012
    Backbone Construction      into several sectors. Thus, through switching off
    with     Routing   Cost    sectors in unnecessary directions in wireless networks,
    Constraint in Wireless     we can save bandwidth and energy consumption. In
    Networks          Using    this paper, we will study a directional virtual backbone
    Directional Antennas       (VB) in the network where directional antennas are
                               used. When constructing a VB, we will take routing
                               and broadcasting into account since they are two
                               common operations in wireless networks. Hence, we
                               will study a VB with guaranteed routing costs, named α
                               Minimum rOuting Cost Directional VB (α-MOC-
                               DVB). Besides the properties of regular VBs, α-MOC-
                                DVB also has a special constraint: for any pair of
                                nodes, there exists at least one path whose intermediate
                                directions all belong to α-MOC-DVB and whose number
                                of intermediate directions is smaller than α times
                                that on the shortest path. We prove
                               that construction of a minimum α-MOC-DVB is an
                               NP-hard problem in a general directed graph. A
                               heuristic algorithm is proposed and theoretical analysis
                               is also discussed in the paper. Extensive simulations
                               demonstrate that our α-MOC-DVB is much more
                               efficient in the sense of VB size and routing costs
                               compared to other VBs.
32. Energy-Efficient           Distributed Information SHaring (DISH) is a new 2012
    Strategies        for      cooperative approach to designing multichannel MAC
    Cooperative                protocols. It aids nodes in their decision making
    Multichannel     MAC       processes by compensating for their missing
    Protocols                  information via information sharing through
                               neighboring nodes. This approach was recently shown
                               to significantly boost the throughput of multichannel
                               MAC protocols. However, a critical issue for ad hoc
                               communication devices, viz. energy efficiency, has yet
                               to be addressed. In this paper, we address this issue by
                               developing simple solutions that reduce the energy
                               consumption without compromising the throughput
                               performance and meanwhile maximize cost efficiency.
                               We propose two energy-efficient strategies: in-situ
                               energy conscious DISH, which uses existing nodes
                               only, and altruistic DISH, which requires additional
                               nodes called altruists. We compare five protocols with
                               respect to these strategies and identify altruistic DISH
                               to be the right choice in general: it 1) conserves 40-80
percent of energy, 2) maintains the throughput
                             advantage, and 3) more than doubles the cost efficiency
                             compared to protocols without this strategy. On the
                             other hand, our study also shows that in-situ energy
                             conscious DISH is suitable only in certain limited
                             scenarios.
33. Estimating Parameters    We propose a method for estimating parameters of 2012
    of           Multiple    multiple target objects by using networked binary
    Heterogeneous Target     sensors whose locations are unknown. These target
    Objects         Using    objects may have different parameters, such as size and
     Composite      Sensor    perimeter length. Each sensor, which is incapable of
    Nodes                    monitoring the target object's parameters, sends only
                             binary data describing whether or not it detects target
                             objects coming into, moving around, or leaving the
                             sensing area at every moment. We previously
                             developed a parameter estimation method for a single
                             target object. However, a straight-forward extension of
                             this method is not applicable for estimating multiple
                             heterogeneous target objects. This is because a
                             networked binary sensor at an unknown location
                             cannot provide information that distinguishes
                             individual target objects, but it can provide information
                             on the total perimeter length and size of multiple target
                             objects. Therefore, we propose composite sensor nodes
                             with multiple sensors in a predetermined layout for
                             obtaining additional information for estimating the
                             parameter of each target object. As an example of a
                             composite sensor node, we consider a two-sensor
                             composite sensor node, which consists of two sensors,
                             one at each of the two end points of a line segment of
                             known length. For the two-sensor composite sensor
                              node, measures are derived, such as those describing how
                              the two sensors jointly detect target objects. These derived measures are the
                             basis for identifying the shape of each target object
                             among a given set of categories (for example, disks and
                             rectangles) and estimating parameters such as the
                             radius and lengths of two sides of each target object.
                             Numerical examples demonstrate that networked
                             composite sensor nodes consisting of two binary
                             sensors enable us to estimate the parameters of target
                             objects.
34. Fast          Capture-   The technology of Radio Frequency IDentification 2012
    Recapture Approach for   (RFID) enables many applications that rely on passive,
    Mitigating the Problem   battery-less wireless devices. If a RFID reader needs to
    of Missing RFID Tags     gather the ID from multiple tags in its range, then it
needs to run an anticollision protocol. Due to errors on
                            the wireless link, a single reader session, which
                            contains one full execution of the anticollision
                            protocol, may not be sufficient to retrieve the ID of all
                            tags. This problem can be mitigated by running
                             multiple, redundant reader sessions and using the
                             statistical relationship between these sessions. On the
                            other hand, each session is time consuming and
                            therefore the number of sessions should be kept
                            minimal. We optimize the process of running multiple
                            reader sessions, by allowing only some of the tags
                            already discovered to reply in subsequent reader
                            sessions. The estimation procedure is integrated with
                            an actual tree-based anticollision protocol, and
                             numerical results show that the reliable tag resolution
                             algorithm attains a high speed of protocol execution
                             while not sacrificing the reliability of the estimators
                            used to assess the probability of missing tags.
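Note: the capture-recapture idea can be pictured with the classical two-session (Lincoln-Petersen) estimator shown below. The paper integrates a more elaborate estimator with the anticollision protocol itself, so this snippet is orientation only.

    // Classical Lincoln-Petersen capture-recapture estimate of the total tag
    // population from two reader sessions; used here only to illustrate the idea.
    public final class CaptureRecapture {
        // n1: tags read in session 1, n2: tags read in session 2,
        // m : tags read in both sessions ("recaptured")
        public static double estimateTotalTags(int n1, int n2, int m) {
            if (m == 0) {
                return Double.POSITIVE_INFINITY;   // no overlap: this estimator gives no bound
            }
            return (double) n1 * n2 / m;
        }

        // Estimated number of tags still missing after the two sessions.
        public static double estimateMissing(int n1, int n2, int m, int distinctSeen) {
            return Math.max(0.0, estimateTotalTags(n1, n2, m) - distinctSeen);
        }
    }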
35. Fast Data Collection in We investigate the following fundamental question- 2012
    Tree-Based     Wireless how fast can information be collected from a wireless
    Sensor Networks         sensor network organized as tree? To address this, we
                            explore and evaluate a number of different techniques
                            using realistic simulation models under the many-to-
                            one communication paradigm known as convergecast.
                            We first consider time scheduling on a single frequency
                            channel with the aim of minimizing the number of time
                            slots required (schedule length) to complete a
                            convergecast. Next, we combine scheduling with
                            transmission power control to mitigate the effects of
                            interference, and show that while power control helps
                            in reducing the schedule length under a single
                            frequency, scheduling transmissions using multiple
                            frequencies is more efficient. We give lower bounds on
                            the schedule length when interference is completely
                            eliminated, and propose algorithms that achieve these
                            bounds. We also evaluate the performance of various
                            channel assignment methods and find empirically that
                            for moderate size networks of about 100 nodes, the use
                            of multifrequency scheduling can suffice to eliminate
                            most of the interference. Then, the data collection rate
                            no longer remains limited by interference but by the
                            topology of the routing tree. To this end, we construct
                            degree-constrained spanning trees and capacitated
                            minimal spanning trees, and show significant
                            improvement in scheduling performance over different
                            deployment densities. Lastly, we evaluate the impact of
different interference and channel models on the
                               schedule length.
36. Fault Localization Using   Faulty components in a network need to be localized 2012
    Passive      End-to-End    and repaired to sustain the health of the network. In this
    Measurements         and   paper, we propose a novel approach that carefully
    Sequential Testing for     combines active and passive measurements to localize
    Wireless          Sensor   faults in wireless sensor networks. More specifically,
    Networks                   we formulate a problem of optimal sequential testing
                               guided by end-to-end data. This problem determines an
                               optimal testing sequence of network components based
                               on end-to-end data in sensor networks to minimize
                               expected testing cost. We prove that this problem is
                               NP-hard, and propose a recursive approach to solve it.
                               This approach leads to a polynomial-time optimal
                               algorithm for line topologies while requiring
                               exponential running time for general topologies. We
                               further develop two polynomial-time heuristic schemes
                               that are applicable to general topologies. Extensive
                               simulation shows that our heuristic schemes only
                               require testing a very small set of network components
                               to localize and repair all faults in the network. Our
                               approach is superior to using active and passive
                               measurements in isolation. It also outperforms the
                               state-of-the-art approaches that localize and repair all
                               faults in a network.
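Note: a common polynomial-time baseline for sequential testing problems of this kind is to probe components in decreasing order of (failure probability / test cost). It is sketched below purely as a baseline for orientation, not as the paper's optimal or heuristic algorithms.

    import java.util.Arrays;
    import java.util.Comparator;

    // Baseline heuristic for sequential fault testing: probe components in
    // decreasing order of failureProbability / testCost until a fault is found.
    public class SequentialTesting {
        public static class Component {
            final String id;
            final double failureProbability;   // inferred from end-to-end data
            final double testCost;
            public Component(String id, double p, double c) {
                this.id = id; this.failureProbability = p; this.testCost = c;
            }
        }

        public interface Tester { boolean isFaulty(Component c); }  // active measurement

        public static Component findFirstFault(Component[] components, Tester tester) {
            Arrays.sort(components, Comparator.comparingDouble(
                    (Component c) -> c.failureProbability / c.testCost).reversed());
            for (Component c : components) {
                if (tester.isFaulty(c)) {
                    return c;                   // repair, then re-run on the remaining components
                }
            }
            return null;                        // no faulty component found
        }
    }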
37. FESCIM Fair, Efficient,    In multihop cellular networks, the mobile nodes usually 2012
    and Secure Cooperation     relay others' packets for enhancing the network
    Incentive  Mechanism       performance and deployment. However, selfish nodes
    for Multihop Cellular      usually do not cooperate but make use of the
    Networks                   cooperative nodes to relay their packets, which has a
                               negative effect on the network fairness and
                               performance. In this paper, we propose a fair and
                               efficient incentive mechanism to stimulate the node
                               cooperation. Our mechanism applies a fair charging
                               policy by charging the source and destination nodes
                               when both of them benefit from the communication. To
                               implement this charging policy efficiently, hashing
                               operations are used in the ACK packets to reduce the
                               number of public-key-cryptography operations.
                               Moreover, reducing the overhead of the payment
                               checks is essential for the efficient implementation of
                               the incentive mechanism due to the large number of
                               payment transactions. Instead of generating a check per
                               message, a small-size check can be generated per route,
                               and a check submission scheme is proposed to reduce
                               the number of submitted checks and protect against
collusion attacks. Extensive analysis and simulations
                              demonstrate that our mechanism can secure the
                              payment and significantly reduce the checks' overhead,
                              and the fair charging policy can be implemented almost
                              computationally free by using hashing operations.
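Note: the efficiency gain comes from replacing per-packet public-key operations with hash operations in the ACKs. The sketch below shows a standard hash-chain style acknowledgment with assumed message fields; it is not the paper's exact protocol.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Illustrative hash-chain acknowledgment: the destination commits to h^n(seed)
    // once (signed), then releases pre-images one per received packet; intermediate
    // nodes verify each ACK with a single hash instead of a signature check.
    public class HashChainAck {
        public static byte[] sha256(byte[] in) throws Exception {
            return MessageDigest.getInstance("SHA-256").digest(in);
        }

        // Build the chain h^0..h^n from a secret seed (h^n is the signed commitment).
        public static byte[][] buildChain(String seed, int n) throws Exception {
            byte[][] chain = new byte[n + 1][];
            chain[0] = sha256(seed.getBytes(StandardCharsets.UTF_8));
            for (int i = 1; i <= n; i++) {
                chain[i] = sha256(chain[i - 1]);
            }
            return chain;
        }

        // The ACK for packet i is chain[n - i]; verifying it needs only one hash.
        public static boolean verifyAck(byte[] ack, byte[] previousAck) throws Exception {
            return MessageDigest.isEqual(sha256(ack), previousAck);
        }
    }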
38. Geometry and Motion-      This paper presents positioning algorithms for cellular 2012
    Based       Positioning   network-based vehicle tracking in severe non-line-of-
    Algorithms for Mobile     sight (NLOS) propagation scenarios. The aim of the
    Tracking   in NLOS        algorithms is to enhance positional accuracy of
    Environments              network-based positioning systems when the GPS
                              receiver does not perform well due to the complex
                              propagation environment. A one-step position
                              estimation method and another two-step method are
                              proposed and developed. Constrained optimization is
                              utilized to minimize the cost function which takes
                              account of the NLOS error so that the NLOS effect is
                              significantly reduced. Vehicle velocity and heading
                              direction measurements are exploited in the algorithm
                              development, which may be obtained using a
                              speedometer and a heading sensor, respectively. The
                               developed algorithms are practical and suitable for
                               implementation in vehicle applications.
                               It is observed through simulation that in
                              severe NLOS propagation scenarios, the proposed
                              positioning methods outperform the existing cellular
                              network-based positioning algorithms significantly.
                              Further, when the distance measurement error is
                              modeled as the sum of an exponential bias variable and
                              a Gaussian noise variable, the exact expressions of the
                              CRLB are derived to benchmark the performance of
                              the positioning algorithms.
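Note: the constrained optimization mentioned above can be pictured with a generic NLOS-aware least-squares formulation; this is illustrative only, and the paper's exact cost function and constraints differ.

    % x   : unknown vehicle position,  x_i : position of base station i
    % d_i : measured range,  b_i : non-negative NLOS bias on link i, w_i : weight
    \[
      \min_{x,\; b_i \ge 0} \;\; \sum_{i} w_i \bigl( d_i - \lVert x - x_i \rVert - b_i \bigr)^2
    \]
    % subject to motion constraints derived from the measured speed and heading,
    % e.g. that successive position estimates are consistent with the velocity.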
39. Handling Selfishness in   In a mobile ad hoc network, the mobility and resource 2012
    Replica Allocation over   constraints of mobile nodes may lead to network
    a Mobile Ad Hoc           partitioning or performance degradation. Several data
    Network                   replication techniques have been proposed to minimize
                              performance degradation. Most of them assume that all
                              mobile nodes collaborate fully in terms of sharing their
                              memory space. In reality, however, some nodes may
                              selfishly decide only to cooperate partially, or not at
                              all, with other nodes. These selfish nodes could then
                              reduce the overall data accessibility in the network. In
                              this paper, we examine the impact of selfish nodes in a
                              mobile ad hoc network from the perspective of replica
                              allocation. We term this selfish replica allocation. In
                              particular, we develop a selfish node detection
                              algorithm that considers partial selfishness and novel
replica allocation techniques to properly cope with
                             selfish replica allocation. The conducted simulations
                             demonstrate the proposed approach outperforms
                             traditional cooperative replica allocation techniques in
                             terms of data accessibility, communication cost, and
                             average query delay.
40. Heuristic        Burst   IEEE 802.16 OFDMA systems have gained much 2012
    Construction Algorithm   attention for their ability to support high transmission
    for Improving Downlink   rates and broadband access services. For multiuser
    Capacity    in   IEEE    environments, IEEE 802.16 OFDMA systems require a
    802.16         OFDMA     resource allocation algorithm to use the limited
    Systems                  downlink resource efficiently. The IEEE 802.16
                             standard defines that resource allocation should be
                             performed with a rectangle region of slots, called a
                             burst. However, the standard does not specify how to
                             construct bursts. In this paper, we propose a heuristic
                             burst construction algorithm, called HuB, to improve
                             the downlink capacity in IEEE 802.16 OFDMA
                             systems. To increase the downlink capacity, during
                             burst constructions HuB reduces resource wastage by
                             considering padded slots and unused slots and reduces
                             resource usage by considering the power boosting
                             possibility. For simple burst constructions, HuB makes
                             a HuB-tree, in which a node represents an available
                             downlink resource and edges of a node represent a
                             burst rectangle region. Thus, making child nodes of a
                             parent node is the same as constructing a burst in a
                             given downlink resource. We analyzed the proposed
                             algorithm and performed simulations to compare the
                             performance of the proposed algorithm with existing
                              algorithms. Our simulation results show that HuB
                              achieves improved downlink capacity over existing
                              algorithms.
41. Hop-by-Hop Routing in    Wireless Mesh Network (WMN) has become an 2012
    Wireless         Mesh    important edge network to provide Internet access to
    Networks          with   remote areas and wireless connections in a
    Bandwidth Guarantees     metropolitan scale. In this paper, we study the problem
                             of identifying the maximum available bandwidth path,
                             a fundamental issue in supporting quality-of-service in
                             WMNs. Due to interference among links, bandwidth, a
                             well-known bottleneck metric in wired networks, is
                             neither concave nor additive in wireless networks. We
                             propose a new path weight which captures the available
                             path bandwidth information. We formally prove that
                             our hop-by-hop routing protocol based on the new path
                             weight satisfies the consistency and loop-freeness
requirements. The consistency property guarantees that
                              each node makes a proper packet forwarding decision,
                              so that a data packet does traverse over the intended
                              path. Our extensive simulation experiments also show
                              that our proposed path weight outperforms existing
                              path metrics in identifying high-throughput paths.
42. Jointly Optimal Source-   Emerging media overlay networks for wireless 2012
    Flow, Transmit-Power,     applications aim at delivering Variable Bit Rate (VBR)
    and        Sending-Rate   encoded media contents to nomadic end users by
    Control for Maximum-      exploiting the (fading-impaired and time-varying)
    Throughput Delivery of    access capacity offered by the "last-hop” wireless
    VBR Traffic over Faded    channel. In this application scenario, a still open
    Links                     question concerns the closed-form design of control
                              policies that maximize the average throughput sent
                              over the wireless last hop, under constraints on the
                              maximum connection bandwidth available at the
                              Application (APP) layer, the queue capacity available
                              at the Data Link (DL) layer, and the average and peak
                              energies sustained by the Physical (PHY) layer. The
                              approach we follow relies on the maximization on a
                              per-slot basis of the throughput averaged over the
                              fading statistic and conditioned on the queue state,
                              without resorting to cumbersome iterative algorithms.
                              The resulting optimal controller operates in a cross-
                              layer fashion that involves the APP, DL, and PHY
                              layers of the underlying protocol stack. Finally, we
                              develop the operating conditions allowing the proposed
                              controller also to maximize the unconditional average
                              throughput (i.e., the throughput averaged over both
                              queue and channel-state statistics). The carried out
                              numerical tests give insight into the connection
                              bandwidth-versus-queue delay trade-off achieved by
                              the optimal controller.
43. Moderated      Group      This paper describes the design and implementation of 2012
    Authoring System for      a file system-based distributed authoring system for
    Campus-Wide               campus-wide workgroups. We focus on documents for
    Workgroups                which changes by different group members are harder
                              to automatically reconcile into a single version. Prior
                              approaches relied on using group-aware editors. Others
                              built collaborative middleware that allowed the group
                              members to use traditional authoring tools. These
                              approaches relied on an ability to automatically detect
                              conflicting updates. They also operated on specific
                              document types. Instead, our system relies on users to
                              moderate and reconcile updates by other group
                              members. Our file system-based approach also allows
group members to modify any document type. We
                            maintain one updateable copy of the shared content on
                            each group member's node. We also hoard read-only
                            copies of each of these updateable copies in any
                            interested group member's node. All these copies are
                            propagated to other group members at a rate that is
                            solely dictated by the wireless user availability. The
                            various copies are reconciled using the moderation
                            operation; each group member manually incorporates
                            updates from all the other group members into their
                            own copy. The various document versions eventually
                            converge into a single version through successive
                            moderation operations. The system assists with this
                            convergence process by using the made-with
                            knowledge of all causal file system reads of contents
                            from other replicas. An analysis using a long-term
                            wireless user availability traces from a university
                            shows the strength of our asynchronous and distributed
                            update propagation mechanism. Our user space file
                            system prototype exhibits acceptable file system
                            performance. A subjective evaluation showed that the
                            moderation operation was intuitive for students.
44. Network Connectivity We investigate the communication range of the nodes 2012
    with a Family of Group necessary for network connectivity, which we call
    Mobility Models        bidirectional connectivity, in a simple setting. Unlike in
                            most existing studies, however, the locations or
                           mobilities of the nodes may be correlated through
                           group mobility: nodes are broken into groups, with
                           each group comprising the same number of nodes, and
                           lie on a unit circle. The locations of the nodes in the
                           same group are not mutually independent, but are
                           instead conditionally independent given the location of
                           the group. We examine the distribution of the smallest
                           communication range needed for bidirectional
                           connectivity, called the critical transmission range
                           (CTR), when both the number of groups and the
                           number of nodes in a group are large. We first
                           demonstrate that the CTR exhibits a parametric
                           sensitivity with respect to the space each group
                           occupies on the unit circle. Then, we offer an
                           explanation for the observed sensitivity by identifying
                           what is known as a very strong threshold and
                           asymptotic bounds for CTR.
45. OMAN A Mobile Ad We present a software library that aids in the design of 2012
    Hoc Network Design mobile ad hoc networks (MANET). The OMAN design
System                  engine works by taking a specification of network
                            requirements and objectives, and allocates resources
                            which satisfy the input constraints and maximize the
                            communication performance objective. The tool is used
                            to explore networking design options and challenges,
                            including: power control, adaptive modulation, flow
                            control, scheduling, mobility, uncertainty in channel
                            models, and cross-layer design. The unaddressed niche
                            which OMAN seeks to fill is a general framework for
                            optimization of any network resource, under arbitrary
                            constraints, and with any selection of multiple
                            objectives. While simulation is an important part of
                            measuring the effectiveness of implemented
                            optimization techniques, the novelty and focus of
                            OMAN is on proposing novel network design
                            algorithms, aggregating existing approaches, and
                            providing a general framework for a network designer
                            to test out new proposed resource allocation methods.
                            In this paper, we present a high-level view of the
                            OMAN architecture, review specific mathematical
                            models used in the network representation, and show
                            how OMAN is used to evaluate tradeoffs in MANET
                            design. Specifically, we cover three case studies of
                            optimization. The first case is robust power control
                            under uncertain channel information for a single
                            physical layer snapshot. The second case is scheduling
                            with the availability of directional radiation patterns.
                            The third case is optimizing topology through
                            movement planning of relay nodes.
46. Robust       Topology Topology engineering concerns the problem of 2012
    Engineering          in automatic determination of physical layer parameters to
    Multiradio Multichannel form a network with desired properties. In this paper,
    Wireless Networks       we investigate the joint power control, channel
                            assignment, and radio interface selection for robust
                            provisioning of link bandwidth in infrastructure
                            multiradio multichannel wireless networks in the presence
                            of channel variability and external interference. To
                            characterize the logical relationship between spatial
                            contention constraints and transmit power, we
                            formulate the joint power control and radio-channel
                            assignment as a generalized disjunctive programming
                            problem. The generalized Benders decomposition
                            technique is applied for decomposing the radio-channel
                            assignment (combinatorial constraints) and network
                            resource allocation (continuous constraints) so that the
                            problem can be solved efficiently. The proposed
algorithm is guaranteed to converge to the optimal
                            solution within a finite number of iterations. We have
                            evaluated our scheme using traces collected from two
                            wireless testbeds and simulation studies in Qualnet.
                            Experiments show that the proposed algorithm is
                            superior to existing schemes in providing larger
                            interference margin, and reducing outage and packet
                            loss probabilities.

47. SenseLess: A Database- The 2010 FCC ruling on white spaces proposes relying 2012
    Driven White Spaces on a database of incumbents as the primary means of
    Network                 determining white space availability at any white space
                            device (WSD). While the ruling provides broad
                            guidelines for the database, the specifics of its design,
                            features, implementation, and use are yet to be
                            determined. Furthermore, architecting a network where
                            all WSDs rely on the database raises several systems
                            and networking challenges that have remained
                            unexplored. Also, the ruling treats the database only as
                            a storehouse for incumbents. We believe that the
                            mandated use of the database has an additional
                            opportunity: a means to dynamically manage the RF
                            spectrum. Motivated by this opportunity, in this paper,
                            we present SenseLess, a database-driven white spaces
                            network. As suggested by its very name, in SenseLess,
                            WSDs rely on a database service to determine white
                            spaces availability as opposed to spectrum sensing. The
                            service uses a combination of an up-to-date database
                            of incumbents, sophisticated signal propagation
                            modeling, and an efficient content dissemination
                            mechanism to ensure efficient, scalable, and safe white
                            space network operation. We build, deploy, and
                            evaluate SenseLess and compare our results to ground
                            truth spectrum measurements. We present the unique
                            system design considerations that arise due to operating
                            over the white spaces. We also evaluate its efficiency
                            and scalability. To the best of our knowledge, this is
                            the first paper that identifies and examines the systems
                            and networking challenges that arise from operating a
                            white space network, which is solely dependent on a
                            channel occupancy database.
48. Smooth       Trade-Offs Throughput capacity in mobile ad hoc networks has 2012
    between     Throughput been studied extensively under many different mobility
    and Delay in Mobile Ad models. However, most previous research assumes
    Hoc Networks            global mobility, and the results show that a constant
                            per-node throughput can be achieved at the cost of very
                        high delay. This leaves a large gap,
                        i.e., either low throughput and low delay in static
                        networks or high throughput and high delay in mobile
                        networks. In this paper, employing a practical restricted
                        random mobility model, we try to fill this gap.
                        Specifically, we assume that a network of unit area
                        with n nodes is evenly divided into cells with an area of
                        n^(-2α), each of which is further evenly divided into
                        squares with an area of n^(-2β) (0 ≤ α ≤ β ≤ 1/2). All nodes
                        can only move inside the cell which they are initially
                        distributed in, and at the beginning of each time slot,
                        every node moves from its current square to a
                        uniformly chosen point in a uniformly chosen adjacent
                        square. By proposing a new multihop relay scheme, we
                        present smooth trade-offs between throughput and
                        delay by controlling nodes' mobility. We also consider
                        a network of area n^γ (0 ≤ γ ≤ 1) and find that network
                        size does not affect the results obtained before.
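
As an illustration of the restricted mobility model described above, the sketch below confines a node to its cell (side n^(-α)) and, at each time slot, moves it to a uniform point in an adjacent square (side n^(-β)). The 4-neighborhood and boundary clipping are illustrative simplifications, and the proposed relay scheme itself is not shown.

```java
import java.util.Random;

/**
 * Sketch of the restricted random mobility model: a node stays inside its
 * cell and, each slot, jumps to a uniform point in an adjacent square.
 * The 4-neighborhood clipped at the cell boundary and the parameter values
 * are illustrative assumptions.
 */
public class RestrictedMobilitySketch {
    public static void main(String[] args) {
        Random rng = new Random(7);
        int n = 10000;
        double alpha = 0.25, beta = 0.45;       // 0 <= alpha <= beta <= 1/2
        double cellSide = Math.pow(n, -alpha);  // cell area n^(-2*alpha)
        double sqSide = Math.pow(n, -beta);     // square area n^(-2*beta)
        int squaresPerSide = (int) Math.round(cellSide / sqSide);

        // Track one node: its cell origin and its current square inside that cell.
        double cellX = 0.0, cellY = 0.0;        // the node never leaves this cell
        int sx = rng.nextInt(squaresPerSide), sy = rng.nextInt(squaresPerSide);

        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int slot = 0; slot < 5; slot++) {
            // pick a uniformly chosen adjacent square, clipped at the cell boundary
            int[] m = moves[rng.nextInt(moves.length)];
            sx = Math.min(squaresPerSide - 1, Math.max(0, sx + m[0]));
            sy = Math.min(squaresPerSide - 1, Math.max(0, sy + m[1]));
            // uniform point inside the chosen square
            double x = cellX + (sx + rng.nextDouble()) * sqSide;
            double y = cellY + (sy + rng.nextDouble()) * sqSide;
            System.out.printf("slot %d: (%.5f, %.5f)%n", slot, x, y);
        }
    }
}
```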
49. Spectrum-Aware      Cognitive radio (CR) networks have been proposed as 2012
    Mobility Management a solution to both spectrum inefficiency and spectrum
    in Cognitive Radio scarcity problems. However, they face several
    Cellular Networks   challenges based on the fluctuating nature of the
                        available spectrum, making it more difficult to support
                        seamless communications, especially in CR cellular
                        networks. In this paper, a spectrum-aware mobility
                        management scheme is proposed for CR cellular
                        networks. First, a novel network architecture is
                        introduced to mitigate heterogeneous spectrum
                        availability. Based on this architecture, a unified
                        mobility management framework is developed to
                        support diverse mobility events in CR networks, which
                        consists of spectrum mobility management, user
                        mobility management, and intercell resource allocation.
                        The spectrum mobility management scheme
                        determines a target cell and spectrum band for CR
                        users adaptively depending on time-varying spectrum
                        opportunities, leading to an increase in cell capacity. In
                        the user mobility management scheme, a mobile user
                        selects a proper handoff mechanism so as to minimize
                        switching latency at the cell boundary by considering
                        spatially heterogeneous spectrum availability. Intercell
                        resource allocation helps to improve the performance
                        of both mobility management schemes by efficiently
                        sharing spectrum resources with multiple cells.
                        Simulation results show that the proposed method can
                        achieve better performance than conventional handoff
                        schemes in terms of both cell capacity and
                              mobility support in communications.
   50. Stateless    Multicast Multicast routing protocols typically rely on the a priori
       Protocol for Ad Hoc creation of a multicast tree (or mesh), which requires
       Networks               the individual nodes to maintain state information. In
                              dynamic networks with bursty traffic, where long
                              periods of silence are expected between the bursts of
                              data, this multicast state maintenance adds a large
                              amount of communication, processing, and memory
                              overhead for no benefit to the application. Thus, we
                              have developed a stateless receiver-based multicast
                              (RBMulticast) protocol that simply uses a list of the
                              multicast members' (e.g., sinks') addresses, embedded
                              in packet headers, to enable receivers to decide the best
                              way to forward the multicast traffic. This protocol,
                              called Receiver-Based Multicast, exploits the
                              knowledge of the geographic locations of the nodes to
                              remove the need for costly state maintenance (e.g.,
                              tree/mesh/neighbor table maintenance), making it
                              ideally suited for multicasting in dynamic networks.
                              RBMulticast was implemented in the OPNET
                              simulator and tested using a sensor network
                              implementation. Both simulation and experimental
                              results confirm that RBMulticast provides high success
                              rates and low delay without the burden of state
                              maintenance.
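
A rough sketch of the receiver-based idea: the packet carries the multicast members' coordinates, and each hop splits that list among candidate next hops by geographic proximity. The nearest-neighbor split rule below is an assumption for illustration, not the protocol's exact forwarding logic.

```java
import java.util.*;

/** Toy sketch of receiver-based multicast forwarding: the packet header carries
 *  the sinks' coordinates, and each hop partitions that list among next hops by
 *  geographic proximity. The split rule is an illustrative assumption. */
public class RBMulticastSketch {

    /** Group each destination (carried in the packet header) under the neighbor closest to it. */
    static Map<Integer, List<double[]>> splitByNearestNeighbor(List<double[]> neighbors,
                                                               List<double[]> destinations) {
        Map<Integer, List<double[]>> groups = new HashMap<>();
        for (double[] dst : destinations) {
            int best = 0;
            for (int i = 1; i < neighbors.size(); i++) {
                if (dist(neighbors.get(i), dst) < dist(neighbors.get(best), dst)) best = i;
            }
            groups.computeIfAbsent(best, k -> new ArrayList<>()).add(dst);
        }
        return groups;
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    public static void main(String[] args) {
        List<double[]> neighbors = Arrays.asList(new double[]{1, 0}, new double[]{0, 1});
        List<double[]> sinks = Arrays.asList(new double[]{5, 0.5}, new double[]{0.2, 6}, new double[]{4, 4});
        // One copy of the packet is forwarded per group, keeping only that group's sink
        // addresses in the header; no multicast tree or neighbor-table state is maintained.
        for (Map.Entry<Integer, List<double[]>> e : splitByNearestNeighbor(neighbors, sinks).entrySet()) {
            System.out.print("forward via neighbor " + e.getKey() + " ->");
            for (double[] d : e.getValue()) System.out.print(" (" + d[0] + "," + d[1] + ")");
            System.out.println();
        }
    }
}
```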


        TECHNOLOGY : JAVA

        DOMAIN          : IEEE TRANSACTIONS ON IMAGE PROCESSING


S.NO     TITLES                 DESCRIPTION                                            YEAR
   1.    Image           Super- In this paper, we propose a sparse neighbor selection 2012
         Resolution With Sparse scheme for SR reconstruction. We first predetermine a
         Neighbor Embedding     larger number of neighbors as potential candidates and
                                develop an extended Robust-SL0 algorithm
                                to simultaneously find the neighbors and solve for the
                                reconstruction weights. Recognizing that the k-nearest
                                neighbors (k-NN) for reconstruction should have similar
                                local geometric structures based on clustering, we
                                employ a local statistical feature, namely histograms
                                of oriented gradients (HoG) of low-resolution (LR)
                                image patches, to perform such clustering.
   2.    Scalable   Coding   of This paper proposes a novel scheme of scalable 2012
Encrypted Images        coding for encrypted images. In the encryption phase,
                             the original pixel values are masked by a modulo-256
                             addition with pseudorandom numbers that are derived
                             from a secret key. Then, the data of the quantized
                             subimage and coefficients are regarded as a set of
                             bitstreams.
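
The masking step described above is straightforward to sketch. In the snippet below, the pseudorandom stream is derived from the key with a seeded java.util.Random, which is an illustrative simplification rather than a cryptographically sound choice.

```java
import java.util.Random;

/** Sketch of the modulo-256 masking step for pixel values in [0, 255]. */
public class Modulo256MaskSketch {

    /** Mask (or unmask with unmask=true) pixels using a key-derived pseudorandom stream. */
    static int[] mask(int[] pixels, long secretKey, boolean unmask) {
        Random prng = new Random(secretKey);   // illustrative stand-in for a proper keyed PRNG
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int r = prng.nextInt(256);
            int delta = unmask ? (256 - r) : r; // subtracting r equals adding 256 - r (mod 256)
            out[i] = (pixels[i] + delta) % 256; // modulo-256 addition
        }
        return out;
    }

    public static void main(String[] args) {
        int[] original = {12, 200, 255, 0, 77};
        int[] encrypted = mask(original, 123456789L, false);
        int[] decrypted = mask(encrypted, 123456789L, true);
        System.out.println(java.util.Arrays.toString(encrypted));
        System.out.println(java.util.Arrays.toString(decrypted)); // matches the original
    }
}
```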
3.   PDE-Based               The proposed model is based on using the single 2012
     Enhancement of Color    vectors of the gradient magnitude and the second
     Images in RGB Space     derivatives as a means of relating different color
                             components of the image. This model can be viewed
                             as a generalization of the Bettahar–Stambouli filter to
                             multivalued images. The proposed algorithm is more
                             efficient than the aforementioned filter and some previous
                             works at denoising and deblurring color images
                             without creating false colors.
4.   Abrupt         Motion   The robust tracking of abrupt motion is a challenging 2012
     Tracking         Via    task in computer vision due to its large motion
     Intensively  Adaptive   uncertainty. While various particle filters and
     Markov-Chain Monte      conventional Markov-chain Monte Carlo (MCMC)
     Carlo Sampling          methods have been proposed for visual tracking, these
                             methods often suffer from the well-known local-trap
                             problem or from poor convergence rate. In this paper,
                             we propose a novel sampling-based tracking scheme
                             for the abrupt motion problem in the Bayesian
                             filtering framework. To effectively handle the local-
                             trap problem, we first introduce the stochastic
                             approximation Monte Carlo (SAMC) sampling
                             method into the Bayesian filter tracking framework, in
                             which the filtering distribution is adaptively estimated
                             as the sampling proceeds, and thus, a good
                             approximation to the target distribution is achieved. In
                             addition, we propose a new MCMC sampler with
                             intensive adaptation to further improve the sampling
                             efficiency, which combines a density-grid-based
                             predictive model with the SAMC sampling, to give a
                             proposal adaptation scheme. The proposed method is
                             effective and computationally efficient in addressing
                             the abrupt motion problem. We compare our approach
                             with several alternative tracking algorithms, and
                             extensive experimental results are presented to
                             demonstrate the effectiveness and the efficiency of the
                             proposed method in dealing with various types of
                             abrupt motions.
5.   Vehicle Detection in    We present an automatic vehicle detection system for 2012
     Aerial   Surveillance   aerial surveillance in this paper. In this system, we
     Using       Dynamic     escape from the stereotype and existing frameworks of
Bayesian Networks         vehicle detection in aerial surveillance, which are
                               either region based or sliding window based. We
                               design a pixelwise classification method for vehicle
                               detection. The novelty lies in the fact that, in spite of
                               performing pixelwise classification, relations among
                               neighboring pixels in a region are preserved in the
                               feature extraction process. We consider features
                               including vehicle colors and local features. For vehicle
                               color extraction, we utilize a color transform to
                               separate vehicle colors and nonvehicle colors
                               effectively. For edge detection, we apply moment
                               preserving to adjust the thresholds of the Canny edge
                               detector automatically, which increases the
                               adaptability and the accuracy for detection in various
                               aerial images. Afterward, a dynamic Bayesian
                               network (DBN) is constructed for the classification
                               purpose. We convert regional local features into
                               quantitative observations that can be referenced when
                               applying pixelwise classification via DBN.
                               Experiments were conducted on a wide variety of
                               aerial videos. The results demonstrate flexibility and
                               good generalization abilities of the proposed method
                               on a challenging data set with aerial surveillance
                               images taken at different heights and under different
                               camera angles.
6.   A      Secret-Sharing-    A new blind authentication method based on the secret 2012
     Based Method for          sharing technique with a data repair capability for
     Authentication       of   grayscale document images via the use of the Portable
     Grayscale Document        Network Graphics (PNG) image is proposed. An
     Images via the Use of     authentication signal is generated for each block of a
     the PNG Image With a      grayscale document image, which, together with the
     Data Repair Capability    binarized block content, is transformed into several
                               shares using the Shamir secret sharing scheme. The
                               involved parameters are carefully chosen so that as
                               many shares as possible are generated and embedded
                               into an alpha channel plane. The alpha channel plane
                               is then combined with the original grayscale image to
                               form a PNG image. During the embedding process,
                               the computed share values are mapped into a range of
                               alpha channel values near their maximum value of 255
                               to yield a transparent stego-image with a disguise
                               effect. In the process of image authentication, an
                               image block is marked as tampered if the
                               authentication signal computed from the current block
                               content does not match that extracted from the shares
                               embedded in the alpha channel plane. Data repairing is
then applied to each tampered block by a reverse
                                Shamir scheme after collecting two shares from
                                unmarked blocks. Measures for protecting the security
                                of the data hidden in the alpha channel are also
                                proposed. Good experimental results prove the
                                effectiveness of the proposed method for real
                                applications.
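
For orientation, a minimal (2, n) Shamir sharing sketch over the prime field GF(257): a block's authentication value is split into shares, and any two shares recover it. The field size, threshold, and value range are illustrative choices, not the parameters chosen in the paper.

```java
import java.util.Random;

/** Minimal (2, n) Shamir secret-sharing sketch over GF(257); parameters are illustrative. */
public class ShamirSketch {
    static final int P = 257; // prime larger than any 8-bit value to be shared

    /** Share a secret s (0..255) as n points on a random degree-1 polynomial f(x) = s + a1*x. */
    static int[][] share(int secret, int n, Random rng) {
        int a1 = rng.nextInt(P);               // random coefficient
        int[][] shares = new int[n][2];        // each share is a pair (x, f(x))
        for (int i = 0; i < n; i++) {
            int x = i + 1;                     // x must be nonzero and distinct
            shares[i][0] = x;
            shares[i][1] = (secret + a1 * x) % P;
        }
        return shares;
    }

    /** Recover the secret from any two shares by Lagrange interpolation at x = 0. */
    static int recover(int[] s1, int[] s2) {
        int x1 = s1[0], y1 = s1[1], x2 = s2[0], y2 = s2[1];
        // basis values at 0: l1 = x2 / (x2 - x1), l2 = x1 / (x1 - x2)  (mod P)
        int l1 = (int) ((long) x2 * modInverse(Math.floorMod(x2 - x1, P)) % P);
        int l2 = (int) ((long) x1 * modInverse(Math.floorMod(x1 - x2, P)) % P);
        return (int) Math.floorMod((long) y1 * l1 + (long) y2 * l2, P);
    }

    static int modInverse(int a) {             // Fermat's little theorem: a^(P-2) mod P
        long result = 1, base = a % P;
        for (int e = P - 2; e > 0; e >>= 1) {
            if ((e & 1) == 1) result = result * base % P;
            base = base * base % P;
        }
        return (int) result;
    }

    public static void main(String[] args) {
        int[][] shares = share(173, 5, new Random(1));
        System.out.println(recover(shares[0], shares[3])); // prints 173
    }
}
```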
7.   Learn to Personalized    Increasingly developed social sharing websites, like 2012
     Image Search from the    Flickr and YouTube, allow users to create, share,
     Photo         Sharing    annotate, and comment on media. The large-scale user-
     Websites                 generated meta-data not only facilitate users in sharing
                              and organizing multimedia content, but also provide useful
                                information to improve media retrieval and
                                management. Personalized search serves as one of
                                such examples where the web search experience is
                                improved by generating the returned list according to
                                the modified user search intents. In this paper, we
                                exploit the social annotations and propose a novel
                                framework simultaneously considering the user and
                                query relevance to learn to personalized image search.
                                The basic premise is to embed the user preference and
                                query-related search intent into user-specific topic
                                spaces. Since the users’ original annotation is too
                                sparse for topic modeling, we need to enrich users’
                                annotation pool before user-specific topic spaces
                                construction. The proposed framework contains two
                                components:
8.   A         Discriminative   Action recognition is very important for many 2012
     Model of Motion and        applications such as video surveillance, human-
     Cross Ratio for View-      computer interaction, and so on; view-invariant action
     Invariant        Action    recognition remains an active and difficult problem in this field. In
     Recognition                this paper, a new discriminative model is proposed for
                                video-based view-invariant action recognition. In the
                                discriminative model, motion pattern and view
                                invariants are perfectly fused together to make a better
                                combination of invariance and distinctiveness. We
                                address a series of issues, including interest point
                                detection in image sequence, motion feature extraction
                                and description, and view-invariant calculation. First,
                                motion detection is used to extract motion information
                                from videos, which is much more efficient than
                                traditional background modeling and tracking-based
                                methods. Second, as for feature representation, we
                                 extract a variety of statistical information from motion
                                and view-invariant feature based on cross ratio. Last,
                                in the action modeling, we apply a discriminative
                                probabilistic model, the hidden conditional random field, to
                               model motion patterns and view invariants, by which
                               we could fuse the statistics of motion and projective
                               invariability of cross ratio in one framework.
                               Experimental results demonstrate that our method can
                               improve the ability to distinguish different categories
                               of actions with high robustness to view change in real
                               circumstances.
9.    A     General     Fast   In this paper, we propose a general framework for 2012
      Registration             performance improvement of the current state-of-the-
      Framework           by   art registration algorithms in terms of both accuracy
      Learning Deformation–    and computation time. The key concept involves rapid
      Appearance Correlation   prediction of a deformation field for registration
                               initialization, which is achieved by a statistical
                               correlation model learned between image appearances
                               and deformation fields. This allows us to immediately
                               bring a template image as close as possible to a
                               subject image that we need to register. The task of the
                               registration algorithm is hence reduced to estimating
                               small deformation between the subject image and the
                               initially warped template image, i.e., the intermediate
                               template (IT). Specifically, to obtain a good subject-
                               specific initial deformation, support vector regression
                               is utilized to determine the correlation between image
                               appearances and their respective deformation fields.
                               When registering a new subject onto the template, an
                               initial deformation field is first predicted based on the
                               subject's image appearance for generating an IT. With
                               the IT, only the residual deformation needs to be
                               estimated, presenting much less challenge to the
                               existing registration algorithms. Our learning-based
                               framework affords two important advantages: 1) by
                               requiring only the estimation of the residual
                               deformation between the IT and the subject image, the
                               computation time can be greatly reduced; 2) by
                               leveraging good deformation initialization, local
                               minima giving suboptimal solution could be avoided.
                               Our framework has been extensively evaluated using
                               medical images from different sources, and the results
                               indicate that, on top of accuracy improvement,
                               significant registration speedup can be achieved, as
                               compared with the case where no prediction of initial
                               deformation is performed.
10.   A            Geometric   We present a geometric framework for explicit 2012
      Construction        of   derivation of multivariate sampling functions (sinc) on
      Multivariate     Sinc    multidimensional lattices. The approach leads to a
Functions                generalization of the link between sinc functions and
                               the Lagrange interpolation in the multivariate setting.
                               Our geometric approach also provides a frequency
                               partition of the spectrum that leads to a nonseparable
                               extension of the 1-D Shannon (sinc) wavelets to the
                               multivariate setting. Moreover, we propose a
                               generalization of the Lanczos window function that
                               provides a practical and unbiased approach for signal
                               reconstruction on sampling lattices. While this
                               framework is general for lattices of any dimension, we
                               specifically characterize all 2-D and 3-D lattices and
                               show the detailed derivations for 2-D hexagonal, body-
                               centered cubic (BCC) and face-centered cubic (FCC)
                               lattices. Both visual and numerical comparisons
                               validate the theoretical expectations about superiority
                               of the BCC and FCC lattices over the commonly used
                               Cartesian lattice.
11.   A Novel Algorithm for    The challenges in local-feature-based image matching 2012
      View and Illumination    are variations of view and illumination. Many
      Invariant       Image    methods have been recently proposed to address these
      Matching                 problems by using invariant feature detectors and
                               distinctive descriptors. However, the matching
                               performance is still unstable and inaccurate,
                               particularly when large variation in view or
                               illumination occurs. In this paper, we propose a view
                               and illumination invariant image-matching method.
                               We iteratively estimate the relationship of the relative
                               view and illumination of the images, transform the
                               view of one image to the other, and normalize their
                               illumination for accurate matching. Our method does
                               not aim to increase the invariance of the detector but
                               to improve the accuracy, stability, and reliability of
                               the matching results. The performance of matching is
                               significantly improved and is not affected by the
                               changes of view and illumination in a valid range. The
                               proposed method would fail when the initial view and
                               illumination estimation fails, which gives us a new way
                               to evaluate the traditional detectors. We propose two
                               novel indicators for detector evaluation, namely, valid
                               angle and valid illumination, which reflect the
                               maximum allowable change in view and illumination,
                               respectively. Extensive experimental results show that
                               our method improves the traditional detector
                               significantly, even in large variations, and the two
                               indicators are much more distinctive.
12.   A Spectral and Spatial   This paper presents an algorithm designed to measure 2012
Measure    of    Local the local perceived sharpness in an image. Our method
      Perceived Sharpness in utilizes both spectral and spatial properties of the
      Natural Images          image: For each block, we measure the slope of the
                              magnitude spectrum and the total spatial variation.
                              These measures are then adjusted to account for visual
                              perception, and then, the adjusted measures are
                              combined via a weighted geometric mean. The
                              resulting measure, i.e., S3 (spectral and spatial
                              sharpness), yields a perceived sharpness map in which
                              greater values denote perceptually sharper regions.
                              This map can be collapsed into a single index, which
                              quantifies the overall perceived sharpness of the whole
                              image. We demonstrate the utility of the S3 measure
                              for within-image and across-image sharpness
                              prediction, no-reference image quality assessment of
                              blurred images, and monotonic estimation of the
                              standard deviation of the impulse response used in
                              Gaussian blurring. We further evaluate the accuracy of
                              S3 in local sharpness estimation by comparing S3
                              maps to sharpness maps generated by human subjects.
                              We show that S3 can generate sharpness maps, which
                              are highly correlated with the human-subject maps.
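
The fusion step described above (per-block spectral and spatial measures combined by a weighted geometric mean) can be sketched as follows. The spatial measure here is a simple block total variation, and the spectral-slope measure is left as a stub, so the values are illustrative rather than the published S3 index.

```java
/** Sketch of fusing a per-block spectral measure and spatial measure via a weighted geometric mean. */
public class SharpnessFusionSketch {

    /** Total spatial variation of a grayscale block (mean absolute neighbor difference). */
    static double totalVariation(double[][] block) {
        double tv = 0;
        for (int y = 0; y < block.length; y++)
            for (int x = 0; x < block[0].length; x++) {
                if (x + 1 < block[0].length) tv += Math.abs(block[y][x + 1] - block[y][x]);
                if (y + 1 < block.length)    tv += Math.abs(block[y + 1][x] - block[y][x]);
            }
        return tv / (block.length * block[0].length);
    }

    /** Placeholder for the spectral measure (slope of the block's magnitude spectrum). */
    static double spectralMeasure(double[][] block) {
        return 0.5; // stub: a real implementation would fit the log-log slope of the spectrum
    }

    /** Weighted geometric mean of the two per-block measures. */
    static double sharpness(double[][] block, double weight) {
        return Math.pow(spectralMeasure(block), weight) * Math.pow(totalVariation(block), 1 - weight);
    }

    public static void main(String[] args) {
        double[][] flat = {{0.5, 0.5}, {0.5, 0.5}};
        double[][] edgy = {{0.0, 1.0}, {0.0, 1.0}};
        System.out.printf("flat=%.3f  edgy=%.3f%n", sharpness(flat, 0.5), sharpness(edgy, 0.5));
    }
}
```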
13.   A Unified Feature and The goal of feature selection is to identify the most 2012
      Instance      Selection informative features for compact representation,
      Framework        Using whereas the goal of active learning is to select the
      Optimum Experimental most informative instances for prediction. Previous
      Design                  studies separately address these two problems, despite
                              the fact that selecting features and instances are
                              dual operations over a data matrix. In this paper, we
                              consider the novel problem of simultaneously
                              selecting the most informative features and instances
                              and develop a solution from the perspective of
                              optimum experimental design. That is, by using the
                              selected features as the new representation and the
                              selected instances as training data, the variance of the
                              parameter estimate of a learning function can be
                              minimized. Specifically, we propose a novel approach,
                              which is called Unified criterion for Feature and
                              Instance selection (UFI), to simultaneously identify
                              the most informative features and instances that
                              minimize the trace of the parameter covariance matrix.
                              A greedy algorithm is introduced to efficiently solve
                              the optimization problem. Experimental results on two
                              benchmark data sets demonstrate the effectiveness of
                              our proposed method.
14.   An Algorithm for the Speeded-Up Robust Features is a feature extraction 2012
Contextual Adaption of   algorithm designed for real-time execution, although
      SURF Octave Selection    this is rarely achievable on low-power hardware such
      With Good Matching       as that in mobile robots. One way to reduce the
       Performance: Best        computation is to discard some of the scale-space
      Octaves                  octaves, and previous research has simply discarded
                               the higher octaves. This paper shows that this
                               approach is not always the most sensible and presents
                               an algorithm for choosing which octaves to discard
                               based on the properties of the imagery. Results
                               obtained with this best octaves algorithm show that it
                               is able to achieve a significant reduction in
                               computation without         compromising matching
                               performance.
15.   An Efficient Camera      In the field of machine vision, camera calibration 2012
      Calibration Technique    refers to the experimental determination of a set of
      Offering    Robustness   parameters that describe the image formation process
      and Accuracy Over a      for a given analytical model of the machine vision
      Wide Range of Lens       system. Researchers working with low-cost digital
      Distortion               cameras and off-the-shelf lenses generally favor
                               camera calibration techniques that do not rely on
                               specialized optical equipment, modifications to the
                               hardware, or an a priori knowledge of the vision
                               system. Most of the commonly used calibration
                               techniques are based on the observation of a single 3-
                               D target or multiple planar (2-D) targets with a large
                               number of control points. This paper presents a novel
                               calibration technique that offers improved accuracy,
                               robustness, and efficiency over a wide range of lens
                               distortion. This technique operates by minimizing the
                               error between the reconstructed image points and their
                               experimentally determined counterparts in “distortion
                               free” space. This facilitates the incorporation of the
                               exact lens distortion model. In addition, expressing
                               spatial orientation in terms of unit quaternions greatly
                               enhances the proposed calibration solution by
                               formulating a minimally redundant system of
                               equations that is free of singularities. Extensive
                               performance benchmarking consisting of both
                               computer simulation and experiments confirmed
                               higher accuracy in calibration regardless of the
                               amount of lens distortion present in the optics of the
                               camera. This paper also experimentally confirmed that
                               a comprehensive lens distortion model including
                               higher order radial and tangential distortion terms
                               improves calibration accuracy.
16.   Bayesian    Estimation   Structured illumination microscopy is a recent 2012
for          Optimized imaging technique that aims at going beyond the
      Structured Illumination classical optical resolution by reconstructing high-
      Microscopy              resolution (HR) images from low-resolution (LR)
                              images acquired through modulation of the transfer
                              function of the microscope. The classical
                              implementation has a number of drawbacks, such as
                              requiring a large number of images to be acquired and
                              parameters to be manually set in an ad-hoc manner
                              that have, until now, hampered its wide dissemination.
                              Here, we present a new framework based on a
                              Bayesian inverse problem formulation approach that
                              enables the computation of one HR image from a
                              reduced number of LR images and has no specific
                               constraints on the modulation. Moreover, it permits
                               automatically estimating the optimal reconstruction
                               hyperparameters and computing an uncertainty bound
                               on the estimated values. We demonstrate through
                              numerical evaluations on simulated data and examples
                              on real microscopy data that our approach represents a
                              decisive advance for a wider use of HR microscopy
                              through structured illumination.
17.   Binarization of Low- It is difficult to directly apply existing binarization 2012
      Quality        Barcode approaches to the barcode images captured by mobile
       Images Captured by devices due to their low quality. This paper proposes a
      Mobile Phones Using novel scheme for the binarization of such images. The
      Local    Window      of barcode and background regions are differentiated by
      Adaptive Location and the number of edge pixels in a search window. Unlike
      Size                    existing approaches that center the pixel to be
                              binarized with a window of fixed size, we propose to
                              shift the window center to the nearest edge pixel so
                              that the balance of the number of object and
                              background pixels can be achieved. The window size
                              is adaptive either to the minimum distance to edges or
                              minimum element width in the barcode. The threshold
                              is calculated using the statistics in the window. Our
                              proposed method has demonstrated its capability in
                              handling the nonuniform illumination problem and the
                              size variation of objects. Experimental results
                              conducted on 350 images captured by five mobile
                               phones achieve a recognition rate of about 100% in
                              good lighting conditions, and about 95% and 83% in
                              bad lighting conditions. Comparisons made with nine
                              existing binarization methods demonstrate the
                               advancement of our proposed scheme.
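
A highly simplified sketch of the local-window idea: for each pixel, the window is re-centered on the nearest edge pixel so that object and background pixels are better balanced, and the threshold is taken from statistics inside that window. The window-mean threshold and fixed window size are illustrative simplifications of the adaptive scheme described above.

```java
/** Simplified sketch of binarization with a local window re-centered on the nearest edge pixel. */
public class AdaptiveWindowBinarizationSketch {

    /** Binarize a grayscale image (0..255) using the mean of a window centered at the nearest edge pixel. */
    static boolean[][] binarize(int[][] img, boolean[][] edge, int half) {
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int[] c = nearestEdge(edge, x, y);       // shift the window center to the nearest edge
                double sum = 0; int count = 0;
                for (int yy = Math.max(0, c[1] - half); yy <= Math.min(h - 1, c[1] + half); yy++)
                    for (int xx = Math.max(0, c[0] - half); xx <= Math.min(w - 1, c[0] + half); xx++) {
                        sum += img[yy][xx]; count++;
                    }
                out[y][x] = img[y][x] > sum / count;     // threshold from window statistics
            }
        return out;
    }

    /** Brute-force nearest edge pixel; returns the query point itself if no edge exists. */
    static int[] nearestEdge(boolean[][] edge, int x, int y) {
        int[] best = {x, y}; double bestD = Double.MAX_VALUE;
        for (int yy = 0; yy < edge.length; yy++)
            for (int xx = 0; xx < edge[0].length; xx++)
                if (edge[yy][xx]) {
                    double d = (xx - x) * (xx - x) + (yy - y) * (yy - y);
                    if (d < bestD) { bestD = d; best = new int[]{xx, yy}; }
                }
        return best;
    }

    public static void main(String[] args) {
        int[][] img = {{250, 240, 20, 10}, {245, 235, 25, 15}};
        boolean[][] edge = {{false, true, true, false}, {false, true, true, false}};
        for (boolean[] row : binarize(img, edge, 1)) System.out.println(java.util.Arrays.toString(row));
    }
}
```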
18.   B-Spline       Explicit A new formulation of active contours based on 2012
       Active Surfaces: An   explicit functions has been recently suggested. This
Efficient  Framework     novel framework allows real-time 3-D segmentation
      for Real-Time 3-D        since it reduces the dimensionality of the segmentation
      Region-Based             problem. In this paper, we propose a B-spline
      Segmentation             formulation of this approach, which further improves
                               the computational efficiency of the algorithm. We also
                               show that this framework allows evolving the active
                               contour using local region-based terms, thereby
                               overcoming the limitations of the original method
                               while preserving computational speed. The feasibility
                               of real-time 3-D segmentation is demonstrated using
                                simulated and medical data such as liver computed
                               tomography and cardiac ultrasound images.
19.   Change Detection in      This paper presents an unsupervised distribution-free 2012
      Synthetic     Aperture   change detection approach for synthetic aperture radar
      Radar Images based on    (SAR) images based on an image fusion strategy and a
      Image Fusion and         novel fuzzy clustering algorithm. The image fusion
      Fuzzy Clustering         technique is introduced to generate a difference image
                               by using complementary information from a mean-
                               ratio image and a log-ratio image. In order to restrain
                               the background information and enhance the
                               information of changed regions in the fused difference
                               image, wavelet fusion rules based on an average
                               operator and minimum local area energy are chosen to
                               fuse the wavelet coefficients for a low-frequency band
                               and a high-frequency band, respectively. A
                               reformulated fuzzy local-information C-means
                               clustering algorithm is proposed for classifying
                               changed and unchanged regions in the fused
                               difference image. It incorporates the information about
                               spatial context in a novel fuzzy way for the purpose of
                               enhancing the changed information and of reducing
                               the effect of speckle noise. Experiments on real SAR
                               images show that the image fusion strategy integrates
                               the advantages of the log-ratio operator and the mean-
                               ratio operator and gains a better performance. The
                               change detection results obtained by the improved
                               fuzzy clustering algorithm exhibited lower error than
                                its predecessors.
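
The two operators being fused above are easy to compute on their own; the sketch below produces the mean-ratio and log-ratio difference images from two co-registered SAR intensity images (the wavelet fusion and fuzzy clustering stages are omitted). The 3x3 local mean and the epsilon guard are illustrative choices.

```java
/** Sketch of the mean-ratio and log-ratio difference images used before fusion. */
public class SarDifferenceImagesSketch {
    static final double EPS = 1e-6; // guards against division by zero and log of zero

    /** Mean-ratio operator: 1 - min(m1, m2) / max(m1, m2), using local means of the two images. */
    static double[][] meanRatio(double[][] a, double[][] b) {
        double[][] out = new double[a.length][a[0].length];
        for (int y = 0; y < a.length; y++)
            for (int x = 0; x < a[0].length; x++) {
                double m1 = localMean(a, x, y), m2 = localMean(b, x, y);
                out[y][x] = 1.0 - Math.min(m1, m2) / (Math.max(m1, m2) + EPS);
            }
        return out;
    }

    /** Log-ratio operator: |log((b + eps) / (a + eps))| per pixel. */
    static double[][] logRatio(double[][] a, double[][] b) {
        double[][] out = new double[a.length][a[0].length];
        for (int y = 0; y < a.length; y++)
            for (int x = 0; x < a[0].length; x++)
                out[y][x] = Math.abs(Math.log((b[y][x] + EPS) / (a[y][x] + EPS)));
        return out;
    }

    /** Mean of a 3x3 neighborhood, clipped at the image border. */
    static double localMean(double[][] img, int x, int y) {
        double sum = 0; int count = 0;
        for (int yy = Math.max(0, y - 1); yy <= Math.min(img.length - 1, y + 1); yy++)
            for (int xx = Math.max(0, x - 1); xx <= Math.min(img[0].length - 1, x + 1); xx++) {
                sum += img[yy][xx]; count++;
            }
        return sum / count;
    }

    public static void main(String[] args) {
        double[][] before = {{10, 10}, {10, 10}};
        double[][] after  = {{10, 80}, {10, 80}};
        System.out.println(java.util.Arrays.deepToString(meanRatio(before, after)));
        System.out.println(java.util.Arrays.deepToString(logRatio(before, after)));
    }
}
```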

20.   Color Constancy for Color constancy algorithms are generally based on the 2012
      Multiple Light Sources simplifying assumption that the spectral distribution of
                             a light source is uniform across scenes. However, in
                             reality, this assumption is often violated due to the
                             presence of multiple light sources. In this paper, we
                             will address more realistic scenarios where the
                             uniform light-source assumption is too restrictive.
First, a methodology is proposed to extend existing
                              algorithms by applying color constancy locally to
                              image patches, rather than globally to the entire
                              image. After local (patch-based) illuminant estimation,
                              these estimates are combined into more robust
                              estimations, and a local correction is applied based on
                              a modified diagonal model. Quantitative and
                              qualitative experiments on spectral and real images
                              show that the proposed methodology reduces the
                              influence of two light sources simultaneously present
                              in one scene. If the chromatic difference between
                              these two illuminants is more than 1° , the proposed
                              framework outperforms algorithms based on the
                              uniform light-source assumption (with error-reduction
                              up to approximately 30%). Otherwise, when the
                              chromatic difference is less than 1° and the scene can
                              be considered to contain one (approximately) uniform
                              light source, the performance of the proposed
                              framework is similar to that of global color constancy
                              methods.
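
A minimal sketch of the local (patch-based) strategy: estimate an illuminant per patch with a simple gray-world assumption, then correct the patch with a diagonal (von Kries) scaling. Gray-world as the per-patch estimator and the omission of the robust combination step are illustrative simplifications of the methodology described above.

```java
/** Sketch of patch-wise gray-world illuminant estimation with diagonal (von Kries) correction. */
public class LocalColorConstancySketch {

    /** Correct an image patch in place: estimate the illuminant as the patch's mean RGB,
     *  then scale each channel so the estimated illuminant maps to neutral gray. */
    static void correctPatch(double[][][] patch) {
        double[] mean = new double[3];
        int pixels = patch.length * patch[0].length;
        for (double[][] row : patch)
            for (double[] px : row)
                for (int c = 0; c < 3; c++) mean[c] += px[c] / pixels;
        double gray = (mean[0] + mean[1] + mean[2]) / 3.0;
        for (double[][] row : patch)
            for (double[] px : row)
                for (int c = 0; c < 3; c++) px[c] = Math.min(1.0, px[c] * gray / mean[c]); // diagonal model
    }

    public static void main(String[] args) {
        // A reddish patch (values in [0, 1]); after correction its average becomes neutral.
        double[][][] patch = {
            {{0.8, 0.4, 0.3}, {0.7, 0.35, 0.25}},
            {{0.9, 0.45, 0.35}, {0.6, 0.3, 0.2}}
        };
        correctPatch(patch);
        System.out.println(java.util.Arrays.deepToString(patch));
    }
}
```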
21.   Coupled Bias–Variance   Subspace-based face representation can be viewed as a 2012
      Tradeoff for Cross-     regression problem. From this viewpoint, we first
       Pose Face Recognition   revisit the problem of recognizing faces across pose
                              differences, which is a bottleneck in face recognition.
                              Then, we propose a new approach for cross-pose face
                              recognition using a regressor with a coupled bias-
                              variance tradeoff. We found that striking a coupled
                              balance between bias and variance in regression for
                              different poses could improve the regressor-based
                              cross-pose face representation, i.e., the regressor can
                              be more stable against a pose difference. Based on this
                              basic idea, ridge regression and lasso regression are
                              explored. Experimental results on CMU PIE, the
                              FERET, and the Multi-PIE face databases show that
                              the proposed bias-variance tradeoff can achieve
                              considerable improvement in recognition
                              performance.
22.   Depth From Motion       Space-variantly blurred images of a scene contain 2012
      and Optical Blur With   valuable depth information. In this paper, our
      an Unscented Kalman     objective is to recover the 3-D structure of a scene
      Filter                  from motion blur/optical defocus. In the proposed
                              approach, the difference of blur between two
                              observations is used as a cue for recovering depth,
                              within a recursive state estimation framework. For
                              motion blur, we use an unblurred-blurred image pair.
                              Since the relationship between the observation and the
scale factor of the point spread function associated
                                with the depth at a point is nonlinear, we propose and
                                develop a formulation of unscented Kalman filter for
                                depth estimation. There are no restrictions on the
                                shape of the blur kernel. Furthermore, within the same
                                formulation, we address a special and challenging
                                scenario of depth from defocus with translational
                                jitter. The effectiveness of our approach is evaluated
                                on synthetic as well as real data, and its performance
                                is also compared with contemporary techniques.
23.   Design of Almost          It is a well-known fact that (compact-support) dyadic 2012
      Symmetric Orthogonal      wavelets [based on the two channel filter banks (FBs)]
      Wavelet Filter Bank       cannot be simultaneously orthogonal and symmetric.
      Via            Direct     Although orthogonal wavelets have the energy
      Optimization              preservation property, biorthogonal wavelets are
                                preferred in image processing applications because of
                                their symmetric property. In this paper, a novel
                                method is presented for the design of almost
                                symmetric orthogonal wavelet FB. Orthogonality is
                                structurally imposed by using the unnormalized lattice
                                structure, and this leads to an objective function,
                                which is relatively simple to optimize. The designed
                                filters have good frequency response, flat group delay,
                                almost symmetric filter coefficients, and symmetric
                                 wavelet function.
24.   Design of Interpolation   Traditionally, subpixel interpolation in stereo-vision 2012
      Functions for Subpixel-   systems was designed for the block-matching
      Accuracy        Stereo-   algorithm. During the evaluation of different
      Vision Systems            interpolation strategies, a strong correlation was
                                observed between the type of the stereo algorithm and
                                the subpixel accuracy of the different solutions.
                                Subpixel interpolation should be adapted to each
                                stereo algorithm to achieve maximum accuracy. In
                                consequence, it is more important to propose
                                methodologies for interpolation function generation
                                than specific function shapes. We propose two such
                                methodologies based on data generated by the stereo
                                algorithms. The first proposal uses a histogram to
                                model the environment and applies histogram
                                equalization to an existing solution adapting it to the
                                data. The second proposal employs synthetic images
                                of a known environment and applies function fitting to
                                the resulted data. The resulting function matches the
                                 algorithm and the data as well as possible. An
                                extensive evaluation set is used to validate the
                                findings. Both real and synthetic test cases were
employed in different scenarios. The test results are
                                consistent and show significant improvements
                                compared with traditional solutions.

25.   Entropy-Functional-       In this paper, an entropy-functional-based online 2012
      Based Online Adaptive     adaptive decision fusion (EADF) framework is
      Decision         Fusion   developed for image analysis and computer vision
      Framework          With   applications. In this framework, it is assumed that the
      Application to Wildfire   compound        algorithm     consists     of     several
      Detection in Video        subalgorithms, each of which yields its own decision
                                as a real number centered around zero, representing
                                the confidence level of that particular subalgorithm.
                                Decision values are linearly combined with weights
                                that are updated online according to an active fusion
                                method based on performing entropic projections onto
                                convex sets describing subalgorithms. It is assumed
                                that there is an oracle, who is usually a human
                                operator, providing feedback to the decision fusion
                                method. A video-based wildfire detection system was
                                developed to evaluate the performance of the decision
                                fusion algorithm. In this case, image data arrive
                                sequentially, and the oracle is the security guard of the
                                forest lookout tower, verifying the decision of the
                                combined algorithm. The simulation results are
                                presented.
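
The linear fusion of subalgorithm confidences with online weight updates can be sketched as below. The multiplicative update driven by the oracle's feedback is a generic stand-in; the paper's entropic projections onto convex sets are not reproduced here.

```java
/** Sketch of online linear fusion of subalgorithm decisions with oracle feedback. */
public class OnlineDecisionFusionSketch {
    private final double[] weights;
    private final double rate;

    OnlineDecisionFusionSketch(int subAlgorithms, double learningRate) {
        weights = new double[subAlgorithms];
        java.util.Arrays.fill(weights, 1.0 / subAlgorithms); // start from uniform weights
        rate = learningRate;
    }

    /** Fused decision: weighted sum of confidence values centered around zero. */
    double fuse(double[] decisions) {
        double sum = 0;
        for (int i = 0; i < decisions.length; i++) sum += weights[i] * decisions[i];
        return sum;
    }

    /** Oracle feedback (+1 wildfire, -1 no wildfire): reward agreeing subalgorithms.
     *  This multiplicative update is an illustrative assumption, not the entropic projection. */
    void update(double[] decisions, double oracleLabel) {
        double norm = 0;
        for (int i = 0; i < weights.length; i++) {
            weights[i] *= Math.exp(rate * oracleLabel * decisions[i]);
            norm += weights[i];
        }
        for (int i = 0; i < weights.length; i++) weights[i] /= norm;   // keep weights on the simplex
    }

    public static void main(String[] args) {
        OnlineDecisionFusionSketch fusion = new OnlineDecisionFusionSketch(3, 0.5);
        double[] frameDecisions = {0.8, -0.2, 0.4};   // per-subalgorithm confidence for one frame
        System.out.println("fused = " + fusion.fuse(frameDecisions));
        fusion.update(frameDecisions, +1.0);          // the security guard confirms a wildfire
        System.out.println("fused after feedback = " + fusion.fuse(frameDecisions));
    }
}
```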
26.   Fast         Semantic     Exploring context information for visual recognition 2012
      Diffusion for Large-      has recently received significant research attention.
      Scale Context-Based       This paper proposes a novel and highly efficient
      Image    and   Video      approach, which is named semantic diffusion, to
      Annotation                utilize semantic context for large-scale image and
                                video annotation. Starting from the initial annotation
                                of a large number of semantic concepts (categories),
                                obtained by either machine learning or manual
                                tagging, the proposed approach refines the results
                                using a graph diffusion technique, which recovers the
                                consistency and smoothness of the annotations over a
                                semantic graph. Different from the existing graph-
                                based learning methods that model relations among
                                data samples, the semantic graph captures context by
                                treating the concepts as nodes and the concept
                                affinities as the weights of edges. In particular, our
                                approach is capable of simultaneously improving
                                annotation accuracy and adapting the concept
                                affinities to new test data. The adaptation provides a
                                means to handle domain change between training and
                                test data, which often occurs in practice. Extensive
experiments are conducted to improve concept
                               annotation results using Flickr images and TV
                               program videos. Results show consistent and
                                significant performance gain (10% on both image and
                               video data sets). Source codes of the proposed
                               algorithms are available online.
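
The refinement step in entry 26 above can be pictured as repeatedly pulling each concept score toward the affinity-weighted average of its neighbours in the semantic graph. The Java sketch below shows that basic diffusion loop under assumed concept names, affinities, and damping factor alpha; the paper's method additionally adapts the affinities to the test data, which is omitted here.

// Sketch of refining initial concept scores over a semantic graph:
// each concept's score is pulled toward the affinity-weighted average of
// its neighbours while staying close to its initial detector output.
public class SemanticDiffusionSketch {

    public static double[] diffuse(double[] initial, double[][] affinity,
                                   double alpha, int iterations) {
        int n = initial.length;
        double[] scores = initial.clone();
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double weighted = 0.0, total = 0.0;
                for (int j = 0; j < n; j++) {
                    if (i == j) continue;
                    weighted += affinity[i][j] * scores[j];
                    total += affinity[i][j];
                }
                double neighbourAvg = total > 0 ? weighted / total : scores[i];
                // trade off smoothness over the graph against the initial annotation
                next[i] = (1 - alpha) * initial[i] + alpha * neighbourAvg;
            }
            scores = next;
        }
        return scores;
    }

    public static void main(String[] args) {
        // hypothetical concepts: {beach, sea, indoor}
        double[] initial = {0.6, 0.3, 0.5};
        double[][] affinity = {
            {0.0, 0.9, 0.1},
            {0.9, 0.0, 0.1},
            {0.1, 0.1, 0.0}
        };
        double[] refined = diffuse(initial, affinity, 0.5, 10);
        System.out.println(java.util.Arrays.toString(refined));
    }
}
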
27.   Gradient-Based Image     A major problem in imaging applications such as 2012
      Recovery      Methods    magnetic resonance imaging and synthetic aperture
      From       Incomplete    radar is the task of trying to reconstruct an image with
      Fourier Measurements     the smallest possible set of Fourier samples, every
                               single one of which has a potential time and/or power
                               cost. The theory of compressive sensing (CS) points to
                               ways of exploiting inherent sparsity in such images in
                               order to achieve accurate recovery using sub-Nyquist
                               sampling schemes. Traditional CS approaches to this
                               problem consist of solving total-variation (TV)
                               minimization programs with Fourier measurement
                               constraints or other variations thereof. This paper
                               takes a different approach. Since the horizontal and
                               vertical differences of a medical image are each more
                               sparse or compressible than the corresponding TV
                               image, CS methods will be more successful in
                               recovering these differences individually. We develop
                               an algorithm called GradientRec that uses a CS
                               algorithm to recover the horizontal and vertical
                               gradients and then estimates the original image from
                               these gradients. We present two methods of solving
                               the latter inverse problem, i.e., one based on least-
                               square optimization and the other based on a
                               generalized Poisson solver. After a thorough
                               derivation of our complete algorithm, we present the
                               results of various experiments that compare the
                               effectiveness of the proposed method against other
                               leading methods.
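
The last stage of GradientRec (entry 27) estimates an image from its recovered gradients. As a hedged, one-dimensional stand-in for the paper's least-squares and generalized Poisson solvers, the sketch below simply rebuilds a signal from its finite differences by cumulative summation given one boundary value.

// Toy 1-D illustration of the "image from gradients" step: a signal is
// rebuilt from its finite differences by cumulative summation, given the
// first sample. The paper solves the 2-D analogue with least squares or
// a generalized Poisson solver; this sketch only shows the core idea.
public class GradientIntegrationSketch {

    public static double[] integrate(double first, double[] differences) {
        double[] signal = new double[differences.length + 1];
        signal[0] = first;
        for (int i = 0; i < differences.length; i++) {
            signal[i + 1] = signal[i] + differences[i]; // undo the forward difference
        }
        return signal;
    }

    public static void main(String[] args) {
        double[] original = {2.0, 2.5, 4.0, 3.0};
        double[] diff = new double[original.length - 1];
        for (int i = 0; i < diff.length; i++) {
            diff[i] = original[i + 1] - original[i];
        }
        System.out.println(java.util.Arrays.toString(integrate(original[0], diff)));
        // prints the original signal: [2.0, 2.5, 4.0, 3.0]
    }
}
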
28.   Groupwise Registration   Groupwise registration is concerned with bringing a 2012
      of Multimodal Images     group of images into the best spatial alignment. If
      by an Efficient Joint    images in the group are from different modalities, then
      Entropy Minimization     the intensity correspondences across the images can be
      Scheme                   modeled by the joint density function (JDF) of the
                               cooccurring image intensities. We propose a so-called
                               treecode registration method for groupwise alignment
                               of multimodal images that uses a hierarchical
                               intensity-space subdivision scheme through which an
                               efficient yet sufficiently accurate estimation of the
                               (high-dimensional) JDF based on the Parzen kernel
                               method is computed. To simultaneously align a group
of images, a gradient-based joint entropy minimization
                             was employed that also uses the same hierarchical
                             intensity-space subdivision scheme. If the Hilbert
                             kernel is used for the JDF estimation, then the
                             treecode method requires no data-dependent
                             bandwidth selection and is thus fully automatic. The
                             treecode method was compared with the ensemble
                             clustering (EC) method on four different publicly
                             available multimodal image data sets and on a
                             synthetic monomodal image data set. The obtained
                             results indicate that the treecode method has similar
                             and, for two data sets, even superior performances
                             compared to the EC method in terms of registration
                             error and success rate. The obtained good registration
                             performances can be mostly attributed to the
                             sufficiently accurate estimation of the JDF, which is
                             computed through the hierarchical intensity-space
                             subdivision scheme, that captures all the important
                             features needed to detect the correct intensity
                             correspondences across a multimodal group of images
                             undergoing registration.
29.   Higher Degree Total We introduce novel image regularization penalties to 2012
      Variation      (HDTV) overcome the practical problems associated with the
      Regularization     for classical total variation (TV) scheme. Motivated by
      Image Recovery         novel reinterpretations of the classical TV regularizer,
                             we derive two families of functionals involving higher
                             degree partial image derivatives; we term these
                             families as isotropic and anisotropic higher degree TV
                             (HDTV) penalties, respectively. The isotropic penalty
                             is the mixed norm of the directional image derivatives,
                             while the anisotropic penalty is the separable norm of
                             directional derivatives. These functionals inherit the
                             desirable properties of standard TV schemes such as
                             invariance to rotations and translations, preservation
                             of discontinuities, and convexity. The use of mixed
                             norms in isotropic penalties encourages the joint
                             sparsity of the directional derivatives at each pixel,
                             thus encouraging isotropic smoothing. In contrast, the
                             fully separable norm in the anisotropic penalty ensures
                             the preservation of discontinuities, while continuing to
                              smooth along the line-like features; this scheme thus
                              enhances the line-like image characteristics analogous
                             to standard TV. We also introduce efficient majorize-
                             minimize algorithms to solve the resulting
                             optimization problems. The numerical comparison of
                             the proposed scheme with classical TV penalty,
current second-degree methods, and wavelet
                            algorithms clearly demonstrate the performance
                            improvement. Specifically, the proposed algorithms
                            minimize the staircase and ringing artifacts that are
                            common with TV and wavelet schemes, while better
                            preserving the singularities. We also observe that
                            anisotropic HDTV penalty provides consistently
                            improved reconstructions compared with the isotropic
                            HDTV penalty.
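
The contrast in entry 29 between the mixed-norm (isotropic) and fully separable (anisotropic) penalties can be illustrated on plain first-order differences, as in the Java sketch below. This is a deliberate simplification: the actual HDTV functionals are built from higher-degree directional derivatives, so the code only conveys the norm structure, not the paper's penalty.

// Simplified illustration of the two norm choices on first-order
// differences: the "isotropic" penalty takes an L2 norm across directions
// before summing over pixels (mixed norm), while the "anisotropic"
// penalty sums absolute derivatives separately (separable norm).
public class PenaltySketch {

    public static double isotropicPenalty(double[][] img) {
        double sum = 0.0;
        for (int y = 0; y < img.length - 1; y++) {
            for (int x = 0; x < img[0].length - 1; x++) {
                double dx = img[y][x + 1] - img[y][x];
                double dy = img[y + 1][x] - img[y][x];
                sum += Math.sqrt(dx * dx + dy * dy); // L2 across directions, L1 over pixels
            }
        }
        return sum;
    }

    public static double anisotropicPenalty(double[][] img) {
        double sum = 0.0;
        for (int y = 0; y < img.length - 1; y++) {
            for (int x = 0; x < img[0].length - 1; x++) {
                double dx = img[y][x + 1] - img[y][x];
                double dy = img[y + 1][x] - img[y][x];
                sum += Math.abs(dx) + Math.abs(dy);  // fully separable norm
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        double[][] img = {{0, 0, 1}, {0, 1, 1}, {1, 1, 1}};
        System.out.println("isotropic   = " + isotropicPenalty(img));
        System.out.println("anisotropic = " + anisotropicPenalty(img));
    }
}
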
30.   Human Identification This paper presents a new approach to improve the 2012
      Using Finger Images   performance of finger-vein identification systems
                            presented in the literature. The proposed system
                            simultaneously acquires the finger-vein and low-
                            resolution fingerprint images and combines these two
                            evidences using a novel score-level combination
                            strategy. We examine the previously proposed finger-
                            vein identification approaches and develop a new
                             approach that illustrates its superiority over prior
                            published efforts. The utility of low-resolution
                            fingerprint images acquired from a webcam is
                            examined to ascertain the matching performance from
                            such images. We develop and investigate two new
                            score-level combinations, i.e., holistic and nonlinear
                            fusion, and comparatively evaluate them with more
                            popular score-level fusion approaches to ascertain
                            their effectiveness in the proposed system. The
                            rigorous experimental results presented on the
                            database of 6264 images from 156 subjects illustrate
                            significant improvement in the performance, i.e., both
                            from the authentication and recognition experiments.
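
Entry 30 combines a finger-vein score and a low-resolution fingerprint score at the score level. The sketch below shows the simplest such rule, a weighted sum of min-max normalized scores with an accept/reject threshold; the weights, threshold, and raw scores are assumptions, and the paper's holistic and nonlinear fusion rules are considerably richer.

// Minimal weighted-sum score fusion of two matchers (finger-vein and
// low-resolution fingerprint). Weights and threshold are illustrative.
public class ScoreFusionSketch {

    /** Min-max normalization of a raw matcher score into [0, 1]. */
    static double normalize(double score, double min, double max) {
        return (score - min) / (max - min);
    }

    public static void main(String[] args) {
        double veinScore = normalize(72.0, 0.0, 100.0);          // hypothetical raw scores
        double fingerprintScore = normalize(0.61, 0.0, 1.0);

        double wVein = 0.7, wPrint = 0.3;                         // assumed weights
        double fused = wVein * veinScore + wPrint * fingerprintScore;

        double threshold = 0.6;                                   // assumed decision threshold
        System.out.println("fused score = " + fused
                + " -> " + (fused >= threshold ? "accept" : "reject"));
    }
}
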
31.   Image Fusion Using A novel higher order singular value decomposition 2012
      Higher Order Singular (HOSVD)-based image fusion algorithm is proposed.
      Value Decomposition   The key points are given as follows: 1) Since image
                            fusion depends on local information of source images,
                            the proposed algorithm picks out informative image
                            patches of source images to constitute the fused image
                            by processing the divided subtensors rather than the
                            whole tensor; 2) the sum of absolute values of the
                            coefficients (SAVC) from HOSVD of subtensors is
                            employed for activity-level measurement to evaluate
                            the quality of the related image patch; and 3) a novel
                            sigmoid-function-like coefficient-combining scheme
                            is applied to construct the fused result. Experimental
                            results show that the proposed algorithm is an
                            alternative image fusion approach.
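
Entry 31 selects informative source patches by an activity measure, the sum of absolute values of HOSVD coefficients (SAVC). The sketch below shows activity-driven patch selection with plain patch intensities standing in for HOSVD coefficients, and a hard winner-take-all choice standing in for the paper's sigmoid-function-like combining scheme; both substitutions are deliberate simplifications.

// Sketch of activity-level driven patch selection: the patch whose
// coefficients have the larger sum of absolute values is copied into the
// fused image. Plain patch intensities stand in for HOSVD coefficients.
public class PatchFusionSketch {

    static double sumOfAbsoluteValues(double[] coeffs) {
        double s = 0.0;
        for (double c : coeffs) s += Math.abs(c);
        return s;
    }

    /** Picks, per patch position, the source patch with higher activity. */
    static double[] fusePatch(double[] patchA, double[] patchB) {
        return sumOfAbsoluteValues(patchA) >= sumOfAbsoluteValues(patchB)
                ? patchA.clone() : patchB.clone();
    }

    public static void main(String[] args) {
        double[] patchFromImageA = {0.1, 0.2, 0.1, 0.2};   // smooth patch
        double[] patchFromImageB = {0.1, 0.9, 0.8, 0.1};   // more detailed patch
        System.out.println(java.util.Arrays.toString(fusePatch(patchFromImageA, patchFromImageB)));
    }
}
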
32.   Image   Segmentation Active contour models (ACMs) integrated with 2012
      Based on the Poincaré various kinds of external force fields to pull the
      Map Method            contours to the exact boundaries have shown their
                            powerful abilities in object segmentation. However,
                            local minimum problems still exist within these
                            models, particularly the vector field's “equilibrium
                            issues.” Different from traditional ACMs, within this
                            paper, the task of object segmentation is achieved in a
                            novel manner by the Poincaré map method in a
                            defined vector field in view of dynamical systems. An
                            interpolated swirling and attracting flow (ISAF) vector
                            field is first generated for the observed image. Then,
                            the states on the limit cycles of the ISAF are located
                            by the convergence of Newton-Raphson sequences on
                            the given Poincaré sections. Meanwhile, the periods of
                            limit cycles are determined. Consequently, the objects'
                            boundaries are represented by integral equations with
                            the corresponding converged states and periods.
                            Experiments and comparisons with some traditional
                            external force field methods are done to exhibit the
                            superiority of the proposed method in cases of
                            complex concave boundary segmentation, multiple-
                            object segmentation, and initialization flexibility. In
                            addition, it is more computationally efficient than
                            traditional ACMs by solving the problem in some
                            lower dimensional subspace without using level-set
                            methods.

33.   Implicit    Polynomial   This paper presents a simple distance estimation for 2012
      Representation           implicit polynomial fitting. It is computed as the
      Through a Fast Fitting   height of a simplex built between the point and the
      Error Estimation         surface (i.e., a triangle in 2-D or a tetrahedron in 3-D),
                               which is used as a coarse but reliable estimation of the
                               orthogonal distance. The proposed distance can be
                               described as a function of the coefficients of the
                               implicit polynomial. Moreover, it is differentiable and
                                has a smooth behavior. Hence, it can be used in any
                               gradient-based optimization. In this paper, its use in a
                               Levenberg-Marquardt framework is shown, which is
                               particularly devoted for nonlinear least squares
                               problems. The proposed estimation is a generalization
                               of the gradient-based distance estimation, which is
                               widely used in the literature. Experimental results,
                               both in 2-D and 3-D data sets, are provided.
                               Comparisons with state-of-the-art techniques are
                               presented, showing the advantages of the proposed
approach.


34.   Integrating             In this paper, we propose a method to exploit 2012
      Segmentation            segmentation information for elastic image
      Information     for     registration using a Markov-random-field (MRF)-
      Improved MRF-Based      based objective function. MRFs are suitable for
      Elastic      Image      discrete labeling problems, and the labels are defined
      Registration            as the joint occurrence of displacement fields (for
                              registration) and segmentation class probability. The
                              data penalty is a combination of the image intensity
                              (or gradient information) and the mutual dependence
                              of registration and segmentation information. The
                              smoothness is a function of the interaction between
                              the defined labels. Since both terms are a function of
                              registration and segmentation labels, the overall
                              objective function captures their mutual dependence.
                              A multiscale graph-cut approach is used to achieve
                              subpixel registration and reduce the computation time.
                              The user defines the object to be registered in the
                              floating image, which is rigidly registered before
                              applying our method. We test our method on synthetic
                              image data sets with known levels of added noise and
                              simulated deformations, and also on natural and
                              medical images. Compared with other registration
                              methods not using segmentation information, our
                              proposed method exhibits greater robustness to noise
                              and improved registration accuracy.

35.   Iterative Narrowband-   In this paper, an iterative narrow-band-based graph 2012
      Based Graph Cuts        cuts (INBBGC) method is proposed to optimize the
      Optimization      for   geodesic active contours with region forces
      Geodesic       Active   (GACWRF)         model     for   interactive     object
      Contours With Region    segmentation. Based on cut metric on graphs proposed
      Forces (GACWRF)         by Boykov and Kolmogorov, an NBBGC method is
                              devised to compute the local minimization of GAC.
                              An extension to an iterative manner, namely,
                              INBBGC, is developed for less sensitivity to the initial
                              curve. The INBBGC method is similar to graph-cuts-
                               based active contour (GCBAC) presented by Xu, and
                              their differences have been analyzed and discussed.
                              We then integrate the region force into GAC. An
                              improved INBBGC (IINBBGC) method is proposed
                               to optimize the GACWRF model, which can thus
                               effectively deal with concave regions and the
                               segmentation of complicated real-world images. Two
                               region force models, namely mean and probability models, are studied.
                           Therefore, the GCBAC method can be regarded as the
                           special case of our proposed IINBBGC method
                           without region force. Our proposed algorithm has
                           been also analyzed to be similar to the Grabcut
                           method when the Gaussian mixture model region
                           force is adopted, and the band region is extended to
                           the whole image. Thus, our proposed IINBBGC
                           method can be regarded as narrow-band-based
                           Grabcut method or GCBAC with region force method.
                           We apply our proposed IINBBGC algorithm on
                           synthetic and real-world images to emphasize its
                           performance, compared with other segmentation
                           methods, such as GCBAC and Grabcut methods.
36.   PDE-Based            A novel method for color image enhancement is 2012
      Enhancement of Color proposed as an extension of the scalar-diffusion-
      Images in RGB Space  shock-filter coupling model, where noisy and blurred
                           images are denoised and sharpened. The proposed
                           model is based on using the single vectors of the
                           gradient magnitude and the second derivatives as a
                           manner to relate different color components of the
                           image. This model can be viewed as a generalization
                           of the Bettahar-Stambouli filter to multivalued images.
                           The proposed algorithm is more efficient than the
                            aforementioned filter and some previous works at color
                            image denoising and deblurring without creating
                           false colors.
37.   Polyview Fusion: A We propose a simple but effective strategy that aims 2012
      Strategy to Enhance to enhance the performance of existing video
      Video-Denoising      denoising algorithms, i.e., polyview fusion (PVF). The
      Algorithms           idea is to denoise the noisy video as a 3-D volume
                           using a given base 2-D denoising algorithm but
                           applied from multiple views (front, top, and side
                           views). A fusion algorithm is then designed to merge
                           the resulting multiple denoised videos into one, so that
                           the visual quality of the fused video is improved.
                           Extensive tests using a variety of base video-denoising
                           algorithms show that the proposed PVF method leads
                           to surprisingly significant and consistent gain in terms
                           of both peak signal-to-noise ratio (PSNR) and
                           structural similarity (SSIM) performance, particularly
                           at high noise levels, where the improvement over
                           state-of-the-art denoising algorithms is often more
                           than 2 dB in PSNR.
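
The core of PVF (entry 37) is merging several denoised versions of the same video volume, one per viewing direction, into a single output. The sketch below fuses three already denoised and realigned volumes by per-voxel averaging; plain averaging is an assumed placeholder for the paper's fusion rule.

// Sketch of the fusion step: three denoised copies of the same video
// volume (obtained by running a 2-D denoiser over the front, top, and
// side views and realigning the results) are merged voxel by voxel.
public class PolyviewFusionSketch {

    public static double[][][] fuse(double[][][]... denoisedVolumes) {
        int t = denoisedVolumes[0].length;
        int h = denoisedVolumes[0][0].length;
        int w = denoisedVolumes[0][0][0].length;
        double[][][] fused = new double[t][h][w];
        for (int f = 0; f < t; f++)
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    double sum = 0.0;
                    for (double[][][] v : denoisedVolumes) sum += v[f][y][x];
                    fused[f][y][x] = sum / denoisedVolumes.length; // per-voxel average
                }
        return fused;
    }

    public static void main(String[] args) {
        double[][][] a = {{{1.0, 2.0}, {3.0, 4.0}}};
        double[][][] b = {{{1.2, 1.8}, {3.2, 3.8}}};
        double[][][] c = {{{0.8, 2.2}, {2.8, 4.2}}};
        System.out.println(fuse(a, b, c)[0][0][0]); // prints 1.0
    }
}
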

38.   Preconditioning   for We propose a simple preconditioning method for 2012
Edge-Preserving Image accelerating the solution of edge-preserving image
      Super Resolution       super-resolution (SR) problems in which a linear shift-
                             invariant point spread function is employed. Our
                             technique involves reordering the high-resolution
                             (HR) pixels in a similar manner to what is done in
                             preconditioning     methods      for    quadratic     SR
                             formulations. However, due to the edge preserving
                             requirements, the Hessian matrix of the cost function
                             varies during the minimization process. We develop
                             an efficient update scheme for the preconditioner in
                             order to cope with this situation. Unlike some other
                             acceleration strategies that round the displacement
                             values between the low-resolution (LR) images on the
                             HR grid, the proposed method does not sacrifice the
                             optimality of the observation model. In addition, we
                             describe a technique for preconditioning SR problems
                             involving rational magnification factors. The use of
                             such factors is motivated in part by the fact that, under
                             certain circumstances, optimal SR zooms are
                             nonintegers. We show that, by reordering the pixels of
                             the LR images, the structure of the problem to solve is
                             modified in such a way that preconditioners based on
                             circulant operators can be used.
39.   PSF Estimation via This paper proposes an efficient method to estimate 2012
      Gradient       Domain the point spread function (PSF) of a blurred image
      Correlation            using image gradients spatial correlation. A patch-
                             based image degradation model is proposed for
                             estimating the sample covariance matrix of the
                             gradient domain natural image. Based on the fact that
                             the gradients of clean natural images are
                             approximately uncorrelated to each other, we
                             estimated the autocorrelation function of the PSF from
                             the covariance matrix of gradient domain blurred
                             image using the proposed patch-based image
                             degradation model. The PSF is computed using a
                             phase retrieval technique to remove the ambiguity
                             introduced by the absence of the phase. Experimental
                             results show that the proposed method significantly
                             reduces the computational burden in PSF estimation,
                             compared with existing methods, while giving
                              comparable blurring kernel estimates.
40.   Rigid-Motion-Invariant This paper studies the problem of 3-D rigid-motion- 2012
      Classification of 3-D invariant texture discrimination for discrete 3-D
      Textures               textures that are spatially homogeneous by modeling
                             them as stationary Gaussian random fields. The latter
                             property and our formulation of a 3-D rigid motion of
a texture reduce the problem to the study of 3-D
                                rotations of discrete textures. We formally develop the
                                concept of 3-D texture rotations in the 3-D digital
                                domain. We use this novel concept to define a
                                "distance" between 3-D textures that remains invariant
                                under all 3-D rigid motions of the texture. This
                                concept of "distance" can be used for a monoscale or a
                                 multiscale 3-D rigid-motion-invariant testing of the
                                statistical similarity of the 3-D textures. To compute
                                the "distance" between any two rotations R1 and R2 of
                                two given 3-D textures, we use the Kullback-Leibler
                                divergence between 3-D Gaussian Markov random
                                fields fitted to the rotated texture data. Then, the 3-D
                                rigid-motion-invariant texture distance is the integral
                                average, with respect to the Haar measure of the group
                                SO(3), of all of these divergences when rotations R1
                                and R2 vary throughout SO(3). We also present an
                                algorithm enabling the computation of the proposed 3-
                                D rigid-motion-invariant texture distance as well as
                                rules for 3-D rigid-motion-invariant texture
                                discrimination/classification and experimental results
                                demonstrating the capabilities of the proposed 3-D
                                rigid-motion texture discrimination rules when applied
                                in a multiscale setting, even on very general 3-D
                                texture models.

41.   Robust Image Hashing      In this paper, we propose a robust-hash function based 2012
      Based on Random           on random Gabor filtering and dithered lattice vector
      Gabor Filtering and       quantization (LVQ). In order to enhance the
      Dithered Lattice Vector   robustness against rotation manipulations, the
      Quantization              conventional Gabor filter is adapted to be rotation
                                invariant, and the rotation-invariant filter is
                                randomized to facilitate secure feature extraction.
                                Particularly, a novel dithered-LVQ-based quantization
                                scheme is proposed for robust hashing. The dithered-
                                LVQ-based quantization scheme is well suited for
                                robust hashing with several desirable features,
                                including better tradeoff between robustness and
                                discrimination, higher randomness, and secrecy,
                                which are validated by analytical and experimental
                                results. The performance of the proposed hashing
                                algorithm is evaluated over a test image database
                                under various content-preserving manipulations. The
                                proposed hashing algorithm shows superior robustness
                                and discrimination performance compared with other
                                state-of-the-art algorithms, particularly in the
robustness against rotations (of large degrees).
   42.   Snakes     With     an We present a new class of continuously defined 2012
         Ellipse-Reproducing    parametric snakes using a special kind of exponential
         Property               splines as basis functions. We have enforced our bases
                                to have the shortest possible support subject to some
                                design constraints to maximize efficiency. While the
                                resulting snakes are versatile enough to provide a
                                good approximation of any closed curve in the plane,
                                their most important feature is the fact that they admit
                                ellipses within their span. Thus, they can perfectly
                                generate circular and elliptical shapes. These features
                                are appropriate to delineate cross sections of
                                cylindrical-like conduits and to outline bloblike
                                objects. We address the implementation details and
                                illustrate the capabilities of our snake with synthetic
                                and real data.


TECHNOLOGY                           : JAVA

DOMAIN                                :     IEEE       TRANSACTIONS            ON       SOFTWARE
ENGINEERING

S.NO     TITLES                    ABSTRACT                                                  YEAR
   1.    Automated Behavioral We present a technique to test Java refactoring                2012
         Testing of Refactoring engines. It automates test input generation by using a
         Engines                   Java program generator that exhaustively generates
                                   programs for a given scope of Java declarations. The
                                    refactoring under test is applied to each generated
                                   program. The technique uses SAFEREFACTOR, a
                                   tool for detecting behavioral changes, as oracle to
                                   evaluate the correctness of these transformations.
   2.    Towards                   This study contributes to the literature by considering   2012
         Comprehensible            15 different Bayesian Network (BN) classifiers and
         Software            Fault comparing them to other popular machine learning
         Prediction       Models techniques. Furthermore, the applicability of the
         Using          Bayesian Markov blanket principle for feature selection, which
         Network Classifiers       is a natural extension to BN theory, is investigated.
   3.    Using        Dependency In this paper, we present a family of test case             2012
         Structures            for prioritisation techniques that use the dependency
         Prioritisation         of information from a test suite to prioritise that test
         Functional Test Suites    suite. The nature of the techniques preserves the
                                   dependencies in the test ordering.
   4.    Automatically             Dynamic specification mining observes program             2012
         Generating Test Cases executions to infer models of normal program
         for         Specification behavior. What makes us believe that we have seen
Mining                 sufficiently many executions? The TAUTOKO
                             (“Tautoko” is the Māori word for “enhance, enrich.”)
                            typestate miner generates test cases that cover
                            previously unobserved behavior, systematically
                            extending the execution space, and enriching the
                            specification. To our knowledge, this is the first
                            combination of systematic test case generation and
                             typestate mining, a combination with clear benefits:
                            On a sample of 800 defects seeded into six Java
                            subjects, a static typestate verifier fed with enriched
                            models would report significantly more true positives
                            and significantly fewer false positives than the initial
                             models.
5.   Fault Localization for In recent years, there has been significant interest in 2012
     Dynamic           Web fault-localization techniques that are based on
     Applications           statistical analysis of program constructs executed by
                            passing and failing executions. This paper shows how
                            the Tarantula, Ochiai, and Jaccard fault-localization
                            algorithms can be enhanced to localize faults
                            effectively in web applications written in PHP by
                            using an extended domain for conditional and
                            function-call statements and by using a source
                            mapping. We also propose several novel test-
                            generation strategies that are geared toward producing
                            test suites that have maximal fault-localization
                            effectiveness. We implemented various fault-
                            localization techniques and test-generation strategies
                            in Apollo, and evaluated them on several open-source
                            PHP applications. Our results indicate that a variant of
                            the Ochiai algorithm that includes all our
                            enhancements localizes 87.8 percent of all faults to
                            within 1 percent of all executed statements, compared
                            to only 37.4 percent for the unenhanced Ochiai
                            algorithm. We also found that all the test-generation
                            strategies that we considered are capable of generating
                            test     suites   with     maximal     fault-localization
                            effectiveness when given an infinite time budget for
                            test generation. However, on average, a directed
                            strategy based on path-constraint similarity achieves
                            this maximal effectiveness after generating only 6.5
                            tests, compared to 46.8 tests for an undirected test-
                            generation strategy.
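
The entry above ranks statements by suspiciousness scores such as Ochiai, a standard spectrum-based fault-localization metric computed from how often a statement is covered by failing versus passing tests. The Java sketch below evaluates that formula on made-up coverage counts; the paper's PHP-specific enhancements (extended domains for conditionals and calls, source mapping) are not modeled.

// Sketch of the standard Ochiai suspiciousness metric used in
// spectrum-based fault localization:
//   susp(s) = failed(s) / sqrt(totalFailed * (failed(s) + passed(s)))
// The execution counts below are hypothetical.
public class OchiaiSketch {

    static double ochiai(int failedCovering, int passedCovering, int totalFailed) {
        double denominator = Math.sqrt((double) totalFailed * (failedCovering + passedCovering));
        return denominator == 0 ? 0.0 : failedCovering / denominator;
    }

    public static void main(String[] args) {
        int totalFailed = 3; // failing test cases in the suite
        // statement -> (failing tests covering it, passing tests covering it)
        System.out.println("stmt 12: " + ochiai(3, 1, totalFailed)); // highly suspicious
        System.out.println("stmt 47: " + ochiai(1, 9, totalFailed)); // less suspicious
    }
}
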
TECHNOLOGY           : JAVA

DOMAIN               : IEEE TRANSACTIONS ON GRID & CLOUD COMPUTING



S.NO    TITLES                   ABSTRACT                                          YEAR
   1.   Business-OWL             This paper introduces the Business-OWL (BOWL), an 2012
        (BOWL)—A                 ontology rooted in the Web Ontology Language
        Hierarchical     Task    (OWL), and modeled as a Hierarchical Task Network
        Network Ontology for     (HTN) for the dynamic formation of business
         Dynamic       Business   processes.
        Process Decomposition
        and Formulation
   2.   Detecting         And    The advent of emerging computing technologies such         2012
        Resolving     Firewall   as service-oriented architecture and cloud computing
        Policy Anomalies         has enabled us to perform business services more
                                 efficiently and effectively.
   3.   Online System for Grid   In this paper, we present the design and evaluation of     2012
        Resource Monitoring      system architecture for grid resource monitoring and
        and Machine Learning-    prediction. We discuss the key issues for system
        Based Prediction         implementation, including machine learning-based
                                 methodologies for modeling and optimization of
                                 resource prediction models.
   4.   SOAP        Processing   SOAP communications produce considerable network           2011
        Performance        and   traffic, making them unfit for distributed, loosely
        Enhancement              coupled and heterogeneous computing environments
                                 such as the open Internet. They introduce higher
                                 latency and processing delays than other technologies,
                                 like Java RMI & CORBA. WS research has recently
                                 focused on SOAP performance enhancement.
   5.   Weather data sharing     Intelligent agents can play an important role in helping   2011
        system: an agent-based   achieve the ‘data grid’ vision. In this study, the
        distributed       data   authors present a multi-agent-based framework to
         management               implement, manage, share, and query weather data in a
                                  geographically distributed environment, named weather
                                 data sharing system
   6.   pCloud: A Distributed    In this paper we present pCloud, a distributed system      2012
         System for Practical     that constitutes the first attempt towards practical
        PIR                      cPIR. Our approach assumes a disk-based architecture
                                 that retrieves one page with a single query. Using a
                                 striping technique, we distribute the database to a
                                 number of cooperative peers, and leverage their
                                 computational resources to process cPIR queries in
                                 parallel. We implemented pCloud on the PlanetLab
                                 network, and experimented extensively with several
                                  system parameters.
7.   A Gossip Protocol for      We address the problem of dynamic resource 2012
     Dynamic      Resource      management for a large-scale cloud environment. Our
     Management in Large        contribution includes outlining a distributed
     Cloud Environments.        middleware architecture and presenting one of its key
                                elements: a gossip protocol that (1) ensures fair
                                resource allocation among sites/applications, (2)
                                dynamically adapts the allocation to load changes and
                                (3) scales both in the number of physical machines
                                and sites/applications.
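
The gossip idea in the entry above can be conveyed with a toy simulation: in every round two randomly chosen machines exchange state and average their loads, so the allocation drifts toward an equal share without a central coordinator. The node count, round count, and plain averaging rule in the sketch below are illustrative assumptions, not the paper's exact protocol.

// Toy gossip simulation for decentralized load balancing: repeated
// pairwise averaging preserves the total load while driving every
// machine toward the mean (fair) share.
import java.util.Arrays;
import java.util.Random;

public class GossipSketch {

    public static void main(String[] args) {
        double[] load = {90, 10, 40, 60, 20};   // hypothetical per-machine load
        Random rng = new Random(42);

        for (int round = 0; round < 200; round++) {
            int a = rng.nextInt(load.length);
            int b = rng.nextInt(load.length);
            if (a == b) continue;
            double avg = (load[a] + load[b]) / 2.0; // pairwise averaging step
            load[a] = avg;
            load[b] = avg;
        }
        System.out.println(Arrays.toString(load)); // values close to the mean (44.0)
    }
}
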
8.   A     Novel   Process      In this paper, we explore a novel approach to model 2012
     Network Model for          dynamic behaviors of interacting context-aware web
     Interacting  Context-      services. It aims to effectively process and take
     aware Web Services         advantage of contexts and realize behavior adaptation
                                of web services, further to facilitate the development
                                of context-aware application of web services.
9.   Monitoring         and     Recently, several mobile services are changing to 2012
     Detecting    Abnormal      cloud-based      mobile      services     with    richer
     Behavior in Mobile         communications and higher flexibility. We present a
                                new mobile cloud infrastructure that combines mobile
     Cloud Infrastructure       devices and cloud services. This new infrastructure
                                provides virtual mobile instances through cloud
                                computing. To commercialize new services with this
                                infrastructure, service providers should be aware of
                                security issues. Here, we first define new mobile cloud
                                services through mobile cloud infrastructure and
                                discuss possible security threats through the use of
                                several service scenarios. Then, we propose a
                                methodology and architecture for detecting abnormal
                                behavior through the monitoring of both host and
                                network data. To validate our methodology, we
                                injected malicious programs into our mobile cloud test
                                bed and used a machine learning algorithm to detect
                                the abnormal behavior that arose from these programs.

10. Impact of Storage           The volume of worldwide digital content has 2012
    Acquisition Intervals       increased nine-fold within the last five years, and this
    on the Cost-Efficiency      immense growth is predicted to continue in
    of the Private vs. Public   foreseeable future eaching 8ZB already by 2015.
    Storage.                    Traditionally, in order to cope with the growing
                                demand for storage capacity, organizations proactively
                                built and managed their private storage facilities.
                                Recently, with the proliferation of public cloud
                                infrastructure offerings, many organizations, instead,
                                welcomed the alternative of outsourcing their storage
                                needs to the providers of public cloud storage services.
                                The comparative cost-efficiency of these two
alternatives depends on a number of factors, among
                             which are e.g. the prices of the public and private
                             storage, the charging and the storage acquisition
                             intervals, and the predictability of the demand for
                             storage. In this paper, we study how the cost-
                             efficiency of the private vs. public storage depends on
                             the acquisition interval at which the organization re-
                              assesses its storage needs and acquires additional private
                             storage. The analysis in the paper suggests that the
                             shorter the acquisition interval, the more likely it is
                             that the private storage solution is less expensive as
                             compared with the public cloud infrastructure. This
                             phenomenon is also illustrated in the paper
                             numerically using the storage needs encountered by a
                             university back-up and archiving service as an
                             example. Since the acquisition interval is determined
                             by the organization’s ability to foresee the growth of
                             storage demand, by the provisioning schedules of
                             storage equipment providers, and by internal practices
                             of the organization, among other factors, the
                             organization owning a private storage solution may
                             want to control some of these factors in order to attain
                             a shorter acquisition interval and thus make the private
                              storage (more) cost-efficient.
11. Managing A Cloud for     We present a novel execution environment for 2012
    Multi-agent Systems on   multiagent systems building on concepts from cloud
    Ad-hoc Networks          computing and peer-to-peer networks. The novel
                             environment can provide the computing power of a
                             cloud for multi-agent systems in intermittently
                             connected networks. We present the design and
                             implementation of a prototype operating system for
                             managing the environment. The operating system
                             provides the user with a consistent view of a single
                             machine, a single file system, and a unified
                             programming model while providing elasticity and
                             availability.
12. Cloud       Computing    The use of cloud computing has increased rapidly in 2012
    Security: From Single    many organizations. Cloud computing provides many
    to                       benefits in terms of low cost and accessibility of data.
    Multi-Clouds             Ensuring the security of cloud computing is a major
                             factor in the cloud computing environment, as users
                             often store sensitive information with cloud storage
                             providers but these providers may be untrusted.
                             Dealing with “single cloud” providers is predicted to
                             become less popular with customers due to risks of
service availability failure and the possibility of
                            malicious insiders in the single cloud. A movement
                            towards “multi-clouds”, or in other words,
                             “interclouds” or “cloud-of-clouds”
                            has emerged recently. This paper surveys recent
                            research related to single and multi-cloud security and
                            addresses possible solutions. It is found that the
                            research into the use of multicloud providers to
                            maintain security has received less attention from the
                            research community than has the use of single clouds.
                            This work aims to promote the use of multi-clouds
                            due to its ability to reduce security risks that affect the
                            cloud computing user.
13. Optimization       of   In cloud computing, cloud providers can offer cloud 2012
    Resource Provisioning   consumers two provisioning plans for computing
    Cost     in    Cloud    resources, namely reservation and on-demand plans.
    Computing               In general, cost of utilizing computing resources
                            provisioned by reservation plan is cheaper than that
                            provisioned by on-demand plan, since cloud consumer
                            has to pay to provider in advance. With the reservation
                            plan, the consumer can reduce the total resource
                            provisioning cost. However, the best advance
                            reservation of resources is difficult to be achieved due
                            to uncertainty of consumer's future demand and
                            providers' resource prices. To address this problem, an
                            optimal cloud resource provisioning (OCRP)
                            algorithm is proposed by formulating a stochastic
                            programming model. The OCRP algorithm can
                            provision computing resources for being used in
                            multiple provisioning stages as well as a long-term
                            plan, e.g., four stages in a quarter plan and twelve
                            stages in a yearly plan. The demand and price
                            uncertainty is considered in OCRP. In this paper,
                            different approaches to obtain the solution of the
                            OCRP       algorithm      are    considered      including
                            deterministic equivalent formulation, sample-average
                            approximation,        and     Benders     decomposition.
                            Numerical studies are extensively performed in which
                            the results clearly show that with the OCRP algorithm,
                            cloud consumer can successfully minimize total cost
                            of resource provisioning in cloud computing
                             environments.
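
The reservation versus on-demand trade-off in the OCRP entry above can be made concrete with simple cost arithmetic: reserved capacity is cheaper per unit but is paid for even when idle, while demand above the reserved level spills over to the pricier on-demand plan. All prices and demand figures in the sketch below are made up, and the paper's multi-stage stochastic program is far richer than this single comparison.

// Arithmetic sketch of the reservation vs. on-demand trade-off across
// several provisioning stages (e.g., quarters of a yearly plan).
public class ProvisioningCostSketch {

    static double totalCost(int reserved, int[] demandPerStage,
                            double reservedPrice, double onDemandPrice) {
        double cost = reserved * reservedPrice * demandPerStage.length; // paid every stage
        for (int demand : demandPerStage) {
            cost += Math.max(0, demand - reserved) * onDemandPrice;     // overflow on demand
        }
        return cost;
    }

    public static void main(String[] args) {
        int[] demand = {80, 120, 100, 90};       // hypothetical VMs needed per stage
        double reservedPrice = 1.0, onDemandPrice = 2.5;
        for (int reserved : new int[]{0, 80, 100, 120}) {
            System.out.println("reserve " + reserved + " -> total cost "
                    + totalCost(reserved, demand, reservedPrice, onDemandPrice));
        }
    }
}
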
14. A    Secure   Erasure   A cloud storage system, consisting of a collection of 2012
    Code-Based     Cloud    storage servers, provides long-term storage services
    Storage System with     over the Internet. Storing data in a third party’s cloud
    Secure           Data   system causes serious concern over data
Forwarding               confidentiality. General encryption schemes protect
                              data confidentiality, but also limit the functionality of
                               the storage system because only a few operations are
                              supported over encrypted data. Constructing a secure
                              storage system that supports multiple functions is
                              challenging when the storage system is distributed and
                              has no central authority. We propose a threshold proxy
                              re-encryption scheme and integrate it with a
                              decentralized erasure code such that a secure
                              distributed storage system is formulated. The
                              distributed storage system not only supports secure
                              and robust data storage and retrieval, but also lets a
                              user forward his data in the storage servers to another
                              user without retrieving the data back. The main
                              technical contribution is that the proxy re-encryption
                              scheme supports encoding operations over encrypted
                              messages as well as forwarding operations over
                              encoded and encrypted messages. Our method fully
                              integrates encrypting, encoding, and forwarding. We
                              analyze and suggest suitable parameters for the
                              number of copies of a message dispatched to storage
                              servers and the number of storage servers queried by a
                              key server. These parameters allow more flexible
                               adjustment between the number of storage servers.

15. HASBE:               A    Cloud computing has emerged as one of the most 2012
    Hierarchical Attribute-   influential paradigms in the IT industry in recent
    Based Solution for        years. Since this
    Flexible and Scalable     new computing technology requires users to entrust
    Access Control in         their valuable data to cloud providers, there have been
    Cloud Computing           increasing security and privacy concerns on
                              outsourced data. Several schemes employing attribute-
                              based encryption (ABE) have been proposed for
                              access control of outsourced data in cloud computing;
                              however, most of them suffer from inflexibility in
                              implementing complex access control policies. In
                              order to realize scalable, flexible, and fine-grained
                              access control of outsourced data in cloud computing,
                              in this paper, we propose hierarchical attribute-set-
                              based encryption (HASBE) by extending ciphertext-
                              policy attribute-set-based encryption (ASBE) with a
                              hierarchical structure of users. The proposed scheme
                              not only achieves scalability due to its hierarchical
                              structure, but also inherits flexibility and fine-grained
                              access control in supporting compound attributes of
                              ASBE. In addition, HASBE employs multiple value
assignments for access expiration time to deal with
                            user revocation more efficiently than existing
                            schemes. We formally prove the security of HASBE
                            based on security of the ciphertext-policy attribute-
                            based encryption (CP-ABE) scheme by Bethencourt et
                            al. and analyze its performance and computational
                            complexity. We implement our scheme and show that
                            it is both efficient and flexible in dealing with access
                            control for outsourced data in cloud computing with
                            comprehensive experiments.
16. A Distributed Access The large-scale, dynamic, and heterogeneous nature of 2012
    Control    Architecture cloud computing poses numerous security challenges.
    for Cloud Computing     But the cloud's main challenge is to provide a robust
                            authorization      mechanism        that    incorporates
                            multitenancy and virtualization aspects of resources.
                            The authors present a distributed architecture that
                            incorporates principles from security management and
                            software engineering and propose key requirements
                            and a design model for the architecture.

17. Cloud        Computing The use of cloud computing has increased rapidly in 2012
    Security: From Single many organizations. Cloud computing provides many
    to Multi-clouds        benefits in terms of low cost and accessibility of data.
                           Ensuring the security of cloud computing is a major
                           factor in the cloud computing environment, as users
                           often store sensitive information with cloud storage
                           providers but these providers may be untrusted.
                           Dealing with "single cloud" providers is predicted to
                           become less popular with customers due to risks of
                           service availability failure and the possibility of
                           malicious insiders in the single cloud. A movement
                           towards "multi-clouds", or in other words,
                           "interclouds" or "cloud-of-clouds" has emerged
                           recently. This paper surveys recent research related to
                           single and multi-cloud security and addresses possible
                           solutions. It is found that the research into the use of
                           multi-cloud providers to maintain security has
                           received less attention from the research community
                           than has the use of single clouds. This work aims to
                            promote the use of multi-clouds due to their ability to
                            reduce security risks that affect cloud computing
                            users.
18. Scalable and Secure Personal health record (PHR) is an emerging patient- 2012
    Sharing of Personal centric model of health information exchange, which
    Health Records in is often outsourced to be stored at a third party, such
    Cloud        Computing as cloud providers. However, there have been wide
    using Attribute-based    privacy concerns as personal health information could
    Encryption            be exposed to those third party servers and to
                          unauthorized parties. To assure the patients’ control
                          over access to their own PHRs, it is a promising
                          method to encrypt the PHRs before outsourcing. Yet,
                          issues such as risks of privacy exposure, scalability in
                          key management, flexible access and efficient user
                          revocation, have remained the most important
                          challenges      toward       achieving     fine-grained,
                          cryptographically enforced data access control. In this
                          paper, we propose a novel patient-centric framework
                          and a suite of mechanisms for data access control to
                          PHRs stored in semi-trusted servers. To achieve fine-
                          grained and scalable data access control for PHRs, we
                          leverage attribute based encryption (ABE) techniques
                          to encrypt each patient’s PHR file. Different from
                          previous works in secure data outsourcing, we focus
                          on the multiple data owner scenario, and divide the
                          users in the PHR system into multiple security
                          domains that greatly reduces the key management
                          complexity for owners and users. A high degree of
                          patient privacy is guaranteed simultaneously by
                          exploiting multi-authority ABE. Our scheme also
                          enables dynamic modification of access policies or file
                          attributes, supports efficient on-demand user/attribute
                          revocation and break-glass access under emergency
                          scenarios. Extensive analytical and experimental
                          results are presented which show the security,
                           scalability, and efficiency of our proposed scheme.
19. Cloud Data Protection Offering strong data protection to cloud users while 2012
    for the Masses        enabling rich applications is a challenging task. We
                          explore a new cloud platform architecture called Data
                          Protection as a Service, which dramatically reduces
                          the per-application development effort required to
                          offer data protection, while still allowing rapid
                          development and maintenance.
20. Secure and Practical    Cloud Computing has great potential of providing 2012
    Outsourcing of Linear   robust computational power to the society at reduced
    Programming in Cloud    cost. It enables customers with limited computational
    Computing               resources to outsource their large computation
                            workloads to the cloud, and economically enjoy the
                            massive computational power, bandwidth, storage,
                              and even appropriate software that can be shared in a
                            pay-per-use manner. Despite the tremendous benefits,
                            security is the primary obstacle that prevents the wide
                            adoption of this promising computing model,
                            especially for customers when their confidential data
                            are consumed and produced during the computation.
                            Treating the cloud as an intrinsically insecure
                            computing platform from the viewpoint of the cloud
                            customers, we must design mechanisms that not only
                            protect     sensitive   information      by     enabling
                            computations with encrypted data, but also protect
                            customers from malicious behaviors by enabling the
                            validation of the computation result. Such a
                            mechanism of general secure computation outsourcing
                            was recently shown to be feasible in theory, but to
                            design mechanisms that are practically efficient
                            remains a very challenging problem. Focusing on
                            engineering computing and optimization tasks, this
                            paper investigates secure outsourcing of widely
                            applicable linear programming (LP) computations. In
                            order to achieve practical efficiency, our mechanism
                            design explicitly decomposes the LP computation
                            outsourcing into public LP solvers running on the
                            cloud and private LP parameters owned by the
                            customer. The resulting flexibility allows us to explore
                            appropriate security/ efficiency tradeoff via higher-
                            level abstraction of LP computations than the general
                            circuit representation. In particular, by formulating
                            private data owned by the customer for LP problem as
                            a set of matrices and vectors, we are able to develop a
                            set of efficient privacy-preserving problem
                            transformation techniques, which allow customers to
                            transform original LP problem into some arbitrary one
                            while protecting sensitive input/output information.
                            To validate the computation result, we further explore
                            the fundamental duality theorem of LP computation
                            and derive the necessary and sufficient conditions that
                            correct result must satisfy. Such result verification
                            mechanism is extremely efficient and incurs close-to-
                            zero additional cost on both cloud server and
                            customers. Extensive security analysis and experiment
                            results show the immediate practicability of our
                            mechanism design.
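
As a companion to entry 20, the result-verification step can be illustrated without any cryptography. The sketch below is only a minimal illustration, not the authors' mechanism: it assumes the cloud returns both a primal solution x and a dual solution y for a standard-form LP, and the customer accepts the answer only if both are feasible and their objective values coincide (strong duality). All class and variable names are invented for this example.

// Minimal sketch (not the authors' protocol): client-side verification of an
// outsourced LP result using LP duality.
// Primal:  minimize c^T x   subject to  A x >= b, x >= 0
// Dual:    maximize b^T y   subject to  A^T y <= c, y >= 0
// The answer is accepted only if both vectors are feasible and the two
// objective values coincide (strong duality); otherwise it is rejected.
public class LpResultVerifier {

    static final double EPS = 1e-6;

    static boolean verify(double[][] A, double[] b, double[] c,
                          double[] x, double[] y) {
        // 1) Primal feasibility: x >= 0 and A x >= b
        for (double xi : x) if (xi < -EPS) return false;
        for (int i = 0; i < A.length; i++)
            if (dot(A[i], x) < b[i] - EPS) return false;
        // 2) Dual feasibility: y >= 0 and A^T y <= c
        for (double yi : y) if (yi < -EPS) return false;
        for (int j = 0; j < c.length; j++) {
            double col = 0;
            for (int i = 0; i < A.length; i++) col += A[i][j] * y[i];
            if (col > c[j] + EPS) return false;
        }
        // 3) Strong duality: objective values must match at the optimum
        return Math.abs(dot(c, x) - dot(b, y)) <= EPS;
    }

    static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }

    public static void main(String[] args) {
        // Toy instance: minimize x1 + x2 subject to x1 + x2 >= 1, x >= 0
        double[][] A = {{1, 1}};
        double[] b = {1}, c = {1, 1};
        double[] x = {0.5, 0.5};   // claimed primal optimum (objective 1)
        double[] y = {1};          // claimed dual optimum  (objective 1)
        System.out.println(verify(A, b, c, x, y));   // prints true
    }
}

Because checking feasibility and comparing two objective values costs only a few vector operations, this mirrors the paper's claim that result verification adds close to zero overhead for the customer.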
21. Efficient audit service Cloud-based outsourced storage relieves the client’s 2012
    outsourcing for data burden for storage management and maintenance by
    integrity in clouds – providing a comparably low-cost, scalable, location-
    projects 2012              independent platform. However, the fact that clients
                            no longer have physical possession of data indicates
                            that they are facing a potentially formidable risk for
                            missing or corrupted data. To avoid the security risks,
                            audit services are critical to ensure the integrity and
                            availability of outsourced data and to achieve digital
                            forensics and credibility on cloud computing. Provable
                            data possession (PDP), which is a cryptographic
                            technique for verifying the integrity of data without
                            retrieving it at an untrusted server, can be used to
                            realize audit services. In this paper, profiting from the
                            interactive zero-knowledge proof system, we address
                            the construction of an interactive PDP protocol to
                            prevent the fraudulence of prover (soundness
                            property) and the leakage of verified data (zero-
                            knowledge property). We prove that our construction
                             holds these properties based on the computational
                            Diffie–Hellman assumption and the rewindable black-
                            box knowledge extractor. We also propose an efficient
                            mechanism with respect to probabilistic queries and
                            periodic verification to reduce the audit costs per
                             verification and to detect anomalies in a timely manner.
                            In addition, we present an efficient method for
                            selecting an optimal parameter value to minimize
                            computational overheads of cloud audit services. Our
                            experimental results demonstrate the effectiveness of
                             our approach.
23. Secure and privacy      Cloud storage services enable users to remotely access 2012
    preserving    keyword   data in a cloud anytime and anywhere, using any
    searching for cloud     device, in a pay-as-you-go manner. Moving data into a
    storage services –      cloud offers great convenience to users since they do
    projects 2012           not have to care about the large capital investment in
                            both the deployment and management of the hardware
                            infrastructures. However, allowing a cloud service
                            provider (CSP), whose purpose is mainly for making a
                            profit, to take the custody of sensitive data, raises
                            underlying security and privacy issues. To keep user
                            data confidential against an untrusted CSP, a natural
                            way is to apply cryptographic approaches, by
                            disclosing the data decryption key only to authorized
                            users. However, when a user wants to retrieve files
                            containing certain keywords using a thin client, the
                            adopted encryption system should not only support
                            keyword searching over encrypted data, but also
                            provide high performance. In this paper, we
                            investigate the characteristics of cloud storage services
                            and propose a secure and privacy preserving keyword
                            searching (SPKS) scheme, which allows the CSP to
                            participate in the decipherment, and to return only
                            files containing certain keywords specified by the
                            users, so as to reduce both the computational and
                            communication overhead in decryption for users, on
                            the condition of preserving user data privacy and user
                            querying privacy. Performance analysis shows that the
                             SPKS scheme is applicable to a cloud environment.
25. Cooperative Provable        Provable data possession (PDP) is a technique for 2012
    Data Possession for         ensuring the integrity of data in storage outsourcing.
    Integrity Verification in   In this paper, we address the construction of an
    Multi-Cloud Storage         efficient PDP scheme for distributed cloud storage to
                                support the scalability of service and data migration,
                                in which we consider the existence of multiple cloud
                                service providers to cooperatively store and maintain
                                the clients’ data. We present a cooperative
                                PDP (CPDP) scheme based on homomorphic
                                verifiable response and hash index hierarchy. We
                                 prove the security of our scheme based on a multi-prover
                                 zero-knowledge proof system, which can
                                satisfy completeness, knowledge soundness, and zero-
                                knowledge properties. In addition, we articulate
                                performance optimization mechanisms for our
                                scheme, and in particular present an efficient method
                                for selecting optimal parameter values to minimize the
                                computation costs of clients and storage service
                                providers. Our experiments show that our solution
                                introduces lower computation and communication
                                overheads in comparison with non-cooperative
                                 approaches.
27. Bootstrapping               Ontologies have become the de-facto modeling tool of 2012
    Ontologies for Web          choice, employed in many applications and
    Services – projects         prominently in the semantic web. Nevertheless,
    2012                        ontology construction remains a daunting task.
                                Ontological     bootstrapping,     which    aims    at
                                automatically generating concepts and their relations
                                in a given domain, is a promising technique for
                                ontology construction. Bootstrapping an ontology
                                based on a set of predefined textual sources, such as
                                web services, must address the problem of multiple,
                                largely unrelated concepts. In this paper, we propose
                                an ontology bootstrapping process for web services.
                                We exploit the advantage that web services usually
                                consist of both WSDL and free text descriptors. The
                                WSDL descriptor is evaluated using two methods,
                                namely Term Frequency/Inverse Document Frequency
                                (TF/IDF) and web context generation. Our proposed
                                ontology bootstrapping process integrates the results
                                of both methods and applies a third method to validate
                                the concepts using the service free text descriptor,
                                thereby offering a more accurate definition of
                                ontologies.     We     extensively    validated    our
                                bootstrapping method using a large repository of real-
                               world web services and verified the results against
                             existing ontologies. The experimental results indicate
                             high precision. Furthermore, the recall versus
                             precision comparison of the results when each method
                             is separately implemented presents the advantage of
                             our integrated bootstrapping approach.
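
As background for entry 27, the TF/IDF scoring it relies on can be sketched in a few lines of Java. The snippet below computes TF/IDF over invented token lists standing in for terms extracted from WSDL descriptors; it is not the paper's bootstrapping pipeline, and all inputs are hypothetical.

import java.util.*;

// Minimal TF/IDF sketch (not the paper's full pipeline): score terms taken
// from hypothetical WSDL-derived token lists so that terms frequent in one
// service but rare across the corpus stand out as candidate ontology concepts.
public class TfIdfSketch {
    public static void main(String[] args) {
        List<List<String>> docs = List.of(
            List.of("get", "weather", "forecast", "city"),
            List.of("get", "stock", "quote", "symbol"),
            List.of("get", "weather", "alert", "region"));

        // Document frequency of each term across the corpus
        Map<String, Integer> df = new HashMap<>();
        for (List<String> doc : docs)
            for (String term : new HashSet<>(doc))
                df.merge(term, 1, Integer::sum);

        // TF/IDF scores for the terms of the first descriptor
        List<String> doc = docs.get(0);
        Map<String, Long> tf = new HashMap<>();
        for (String term : doc) tf.merge(term, 1L, Long::sum);

        for (Map.Entry<String, Long> e : tf.entrySet()) {
            double tfNorm = (double) e.getValue() / doc.size();
            double idf = Math.log((double) docs.size() / df.get(e.getKey()));
            System.out.printf("%-10s %.3f%n", e.getKey(), tfNorm * idf);
        }
    }
}

Terms such as "get" that occur in every descriptor score zero, while terms specific to one service (for example "forecast") score highest and become candidate concepts.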
28. Data Security and        It is well-known that cloud computing has many 2012
    Privacy     Protection   potential advantages and many enterprise applications
    Issues   in     Cloud    and data are migrating to public or hybrid cloud. But
    Computing                regarding some business-critical applications, the
                             organizations, especially large enterprises, are still
                             reluctant to move them to the cloud. The market share
                             of cloud computing is still far smaller than expected.
                             From the consumers' perspective, cloud
                             computing security concerns, especially data security
                             and privacy protection issues, remain the primary
                             inhibitor for adoption of cloud computing services.
                             This paper provides a concise but all-round analysis
                             on data security and privacy protection issues
                             associated with cloud computing across all stages of
                             data life cycle. Then this paper discusses some current
                             solutions. Finally, this paper describes future research
                             work about data security and privacy protection issues
                             in cloud.
29. Stochastic models of     Cloud computing services are becoming ubiquitous, 2012
    load balancing and       and are starting to serve as the primary source of
    scheduling in cloud      computing power for both enterprises and personal
    computing clusters       computing applications. We consider a stochastic
                             model of a cloud computing cluster, where jobs arrive
                             according to a stochastic process and request virtual
                             machines (VMs), which are specified in terms of
                             resources such as CPU, memory and storage space.
                             While there are many design issues associated with
                             such systems, here we focus only on resource
                             allocation problems, such as the design of algorithms
                             for load balancing among servers, and algorithms for
                             scheduling VM configurations. Given our model of a
                             cloud, we first define its capacity, i.e., the maximum
                             rates at which jobs can be processed in such a system.
                             Then, we show that the widely-used Best-Fit
                             scheduling algorithm is not throughput-optimal, and
                             present alternatives which achieve any arbitrary
                             fraction of the capacity region of the cloud. We then
                             study the delay performance of these alternative
                             algorithms through simulations.
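
Entry 29 above refers to the widely used Best-Fit rule; the sketch below shows that rule in its simplest single-resource form, so the paper's comparison is easier to follow. Server capacities and VM demands are made up for illustration; the real model involves several resource types and stochastic arrivals.

import java.util.Arrays;

// Minimal sketch of the Best-Fit placement rule discussed above: each arriving
// VM request goes to the server that leaves the least spare capacity after
// placement (single-resource view; capacities and demands are illustrative).
public class BestFitPlacement {

    static int placeBestFit(int demand, int[] freeCapacity) {
        int best = -1;
        for (int s = 0; s < freeCapacity.length; s++) {
            if (freeCapacity[s] >= demand
                    && (best == -1 || freeCapacity[s] < freeCapacity[best])) {
                best = s;                      // tighter fit than current candidate
            }
        }
        if (best >= 0) freeCapacity[best] -= demand;
        return best;                           // -1 means the request must wait
    }

    public static void main(String[] args) {
        int[] free = {8, 4, 6};                // free CPU units per server
        int[] demands = {3, 4, 5, 2};          // arriving VM CPU demands
        for (int d : demands)
            System.out.println("demand " + d + " -> server " + placeBestFit(d, free));
        System.out.println("remaining capacity: " + Arrays.toString(free));
    }
}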
30. A comber approach to      Cloud computing is an Internet-based pay-as-you-use 2012
    protect      cloud        service which provides three layered services
     computing   against      (Software as a Service, Platform as a Service and
     XML DDoS and HTTP        Infrastructure as a Service) to its consumers on
     DDoS attack              demand. These on-demand services are provided to
                              consumers in a multitenant environment, but as the
                              facilities grow, complexity and security problems
                              also increase. Here, all resources reside in one place, in
                              data centers. Cloud uses public and private APIs
                              (Application Programming Interface) to provide
                              services to its consumers in multitenant environment.
                              In this environment Distributed Denial of Service
                              attack (DDoS), especially HTTP, XML or REST
                              based DDoS attacks may be very dangerous and may
                              provide very harmful effects for availability of
                              services and all consumers will get affected at the
                              same time. Another reason is that cloud users compose
                              their requests in XML, send them over the HTTP
                              protocol, and build their system interfaces with REST,
                              as in Amazon EC2 or Microsoft Azure. The threat posed
                              by distributed REST attacks is therefore large: such
                              attacks are easy for an attacker to mount but very
                              difficult for security experts to resolve. To counter
                              these attacks, this paper introduces a comber approach
                              for security services called a filtering tree. This
                              filtering tree has five filters to detect and resolve
                              XML and HTTP DDoS attacks.
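
To make the filtering-tree idea of entry 30 concrete, the sketch below chains a few request filters. The paper specifies five particular filters; the three shown here (request rate, payload size, malformed source address) are invented stand-ins, so treat this only as an illustration of the chaining pattern.

import java.util.List;
import java.util.function.Predicate;

// Hedged sketch only: the paper's filtering tree has five specific filters;
// the generic chain below merely illustrates dropping suspicious XML/HTTP
// traffic before it reaches the cloud service. Filter names and limits are
// invented for this example.
public class FilterChainSketch {

    record Request(String sourceIp, int xmlSizeBytes, int requestsLastSecond) {}

    public static void main(String[] args) {
        List<Predicate<Request>> filters = List.of(
            r -> r.requestsLastSecond() <= 100,     // per-source rate limit
            r -> r.xmlSizeBytes() <= 512 * 1024,    // reject oversized XML payloads
            r -> !r.sourceIp().isBlank());          // reject malformed source address

        Request ok  = new Request("10.0.0.7", 2_048, 12);
        Request bad = new Request("10.0.0.9", 4_000_000, 12);

        for (Request r : List.of(ok, bad)) {
            boolean accepted = filters.stream().allMatch(f -> f.test(r));
            System.out.println(r.sourceIp() + " accepted = " + accepted);
        }
    }
}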
31. Resource     allocation   Cloud computing is a platform that hosts applications 2012
    and scheduling in cloud   and services for businesses and users to access
    computing                 computing as a service. In this paper, we identify two
                              scheduling and resource allocation problems in cloud
                              computing. We describe Hadoop MapReduce and its
                              schedulers, and present recent research efforts in this
                              area      including     alternative      schedulers     and
                              enhancements to existing schedulers. The second
                              scheduling problem is the provisioning of virtual
                              machines to resources in the cloud. We present a
                              survey of the different approaches to solve this
                              resource allocation problem. We also include recent
                              research and standards for inter-connecting clouds and
                              discuss the suitability of running scientific
                              applications in the cloud.
32. Application study of      Addressing current problems in the construction of 2012
    online       education    network education resources, we analyse the
    platform based on         characteristics and application range of cloud
    cloud computing           computing and present an integrated solution.
                              On that basis, some critical technologies such as
                              cloud storage, streaming media and cloud security are
                              analyzed in detail. Finally, the paper gives a
                              summary and an outlook.
33. Towards        temporal   Access control is one of the most important security 2012
    access control in cloud   mechanisms in cloud computing. Attribute-based
    computing                 access control provides a flexible approach that allows
                              data owners to integrate data access policies within the
                              encrypted data. However, little work has been done to
                              explore temporal attributes in specifying and
                              enforcing the data owner's policy and the data user's
                              privileges in cloud-based environments. In this paper,
                              we present an efficient temporal access control
                              encryption scheme for cloud services with the help of
                              cryptographic integer comparisons and a proxy-based
                              re-encryption mechanism on the current time. We also
                              provide a dual comparative expression of integer
                              ranges to extend the power of attribute expression for
                              implementing various temporal constraints. We prove
                              the security strength of the proposed scheme and our
                              experimental results not only validate the
                              effectiveness of our scheme, but also show that the
                              proposed integer comparison scheme performs
                               significantly better than the previous bitwise
                               comparison scheme.
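
At its core, the policy check behind entry 33 is an integer comparison of the current time against a validity window. The plaintext sketch below shows only that comparison; the paper performs it cryptographically, with proxy-based re-encryption on the current time, which is not reproduced here.

import java.time.Instant;

// Plaintext sketch only: the paper performs this comparison cryptographically,
// but the underlying policy check is an integer range test of the current time
// against a [notBefore, notAfter] window attached to a user's attribute.
public class TemporalAttributeCheck {

    static boolean withinWindow(long nowEpochSec, long notBefore, long notAfter) {
        // Dual comparison: both bounds must hold for access to be granted
        return notBefore <= nowEpochSec && nowEpochSec <= notAfter;
    }

    public static void main(String[] args) {
        long now = Instant.now().getEpochSecond();
        long notBefore = now - 3_600;    // attribute became valid an hour ago
        long notAfter  = now + 86_400;   // and expires in a day
        System.out.println("access granted: " + withinWindow(now, notBefore, notAfter));
    }
}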
34. Privacy-Preserving        We come up with a digital rights management (DRM) 2012
    DRM       for    Cloud    concept for cloud computing and show how license
    Computing                 management for software within the cloud can be
                              achieved in a privacy-friendly manner. In our
                              scenario, users who buy software from software
                              providers stay anonymous. At the same time, our
                              approach guarantees that software licenses are bound
                              to users and their validity is checked before execution.
                              We employ a software re-encryption scheme so that
                              computing centers which execute users' software are
                               not able to build profiles of their users, not even
                               under a pseudonym. We combine secret
                              sharing and homomorphic encryption. We make sure
                              that malicious users are unable to relay software to
                              others. DRM constitutes an incentive for software
                               providers to take part in a future cloud computing
                              scenario. We make this scenario more attractive for
                              users by preserving their privacy.
35. Pricing and peak aware    The proposed cloud computing scheduling algorithms 2012
    scheduling algorithm      demonstrated the feasibility of interactions between
    for cloud computing       energy distributors and one of their heavy-use
                              customers in a smart grid environment. Specifically,
                              the proposed algorithms take cues from dynamic
                              pricing and schedule jobs/tasks so that energy usage
                              follows what the distributors signal. In addition, a
                              peak threshold can be dynamically assigned such that
                              the energy usage at any given time will not exceed the
                              threshold. The proposed scheduling algorithm proved
                              the feasibility of managing the energy usage of cloud
                              computers in collaboration with the energy distributor.
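
A simplified view of the peak-aware scheduling in entry 35: deferrable jobs are packed into the cheapest price slots whose accumulated energy stays under a peak threshold. Prices, demands and the threshold below are invented, and the greedy rule is only an illustration, not the authors' algorithm.

import java.util.Arrays;
import java.util.Comparator;

// Illustrative sketch (not the authors' algorithm): assign deferrable jobs to
// the cheapest time slots whose accumulated energy stays below a peak
// threshold. Prices, job demands and the threshold are hypothetical inputs.
public class PeakAwareScheduler {
    public static void main(String[] args) {
        double[] price = {0.30, 0.12, 0.10, 0.25};   // price per kWh in each slot
        double peakThreshold = 5.0;                  // max kWh allowed per slot
        double[] load = new double[price.length];    // energy already scheduled
        double[] jobs = {3.0, 2.5, 2.0, 1.5};        // energy demand per job (kWh)

        // Visit slots from cheapest to most expensive
        Integer[] order = {0, 1, 2, 3};
        Arrays.sort(order, Comparator.comparingDouble(s -> price[s]));

        for (double job : jobs) {
            for (int slot : order) {
                if (load[slot] + job <= peakThreshold) {   // respect the peak cap
                    load[slot] += job;
                    System.out.printf("job %.1f kWh -> slot %d (%.2f per kWh)%n",
                                      job, slot, price[slot]);
                    break;
                }
            }
        }
    }
}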
36. Comparison          of   Computer Networks face a constant struggle against 2012
    Network      Intrusion   intruders and attackers. Attacks on distributed systems
    Detection Systems in     grow stronger and more prevalent each and every day.
    cloud       computing    Intrusion detection methods are a key to control and
    environment              potentially eradicate attacks on a system. An Intrusion
                             detection system pertains to the methods used to
                             identify an attack on a computer or computer network.
                             In cloud computing environment the applications are
                             user-centric and the customers should be confident
                             about their applications stored in the cloud server.
                             Network Intrusion Detection System (NIDS) plays an
                             important role in providing the network security. They
                             provide a defence layer which monitors the network
                              traffic for pre-defined suspicious activities or patterns.
                              In this paper, Snort, Tcpdump and Network Flight
                              Recorder, which are among the best-known NIDS for
                              cloud systems, are examined and contrasted.
37. Intelligent and Active   Cloud computing has entered the practical stage, but 2012
    Defense Strategy of      security issues must still be resolved: how to avoid
    Cloud Computing          risks on web pages, how to prevent attacks from
                             hackers, and how to protect user data in the cloud. This
                             paper discusses several security solutions: rating the
                             credit level of web pages; tracing, analyzing and
                             filtering data with large-scale statistical methods; and
                             encrypting user data together with key management.
38. Distributed     Shared   In this paper we discuss the idea of combining 2012
    Memory       as     an   wireless sensor networks and cloud computing starting
    Approach           for   with a state of the art analysis of existing approaches
     Integrating WSNs and     in this field. As a result of the analysis, we propose to
    Cloud Computing          reflect a real wireless sensor network by virtual
                             sensors in the cloud. The main idea is to replicate data
                             stored on the real sensor nodes also in the virtual
                              sensors, without explicitly triggering such updates from
                             the application. We provide a short overview of the
                             resulting architecture before explaining mechanisms to
                             realize it. The means to ensure a certain level of
                             consistency between the real WSN and the virtual
                             sensors in the cloud is distributed shared memory. In
                            order to realize DSM in WSNs we have developed a
                             middleware named tinyDSM, which is briefly
                             introduced here and which provides means for
                             replicating sensor data and ensuring the consistency of
                             the replicates. Even though tinyDSM is a good vehicle
                             for realizing our idea, there are some open issues
                            that need to be addressed when realizing such an
                            architecture. We discuss these challenges in an
                            abstract way to ensure clear separation between the
                            idea and its specific realization.
39. Improving     resource Even though the adoption of cloud computing and 2012
     allocation in multi-tier virtualization has improved resource utilization to a
    cloud systems            great extent, the continued traditional approach of
                             resource allocation in production environments has
                             introduced the problem of over-provisioning of
                             resources for enterprise-class applications hosted in
                             cloud systems. In this paper, we address the problem
                             and propose ways to minimize over-provisioning of IT
                             resources in multi-tier cloud systems by adopting an
                             innovative approach of application performance
                             monitoring and resource allocation at individual tier
                             levels, on the basis of criticality of the business
                             services and availability of the resources at one's
                             disposal.
40. Ensuring Distributed Cloud computing enables highly scalable services to 2012
    Accountability for Data be easily consumed over the Internet on an as-needed
    Sharing in the Cloud    basis. A major feature of the cloud services is that
                            users' data are usually processed remotely in unknown
                            machines that users do not own or operate. While
                            enjoying the convenience brought by this new
                            emerging technology, users' fears of losing control of
                            their own data (particularly, financial and health data)
                            can become a significant barrier to the wide adoption
                            of cloud services. To address this problem, in this
                            paper, we propose a novel highly decentralized
                            information accountability framework to keep track of
                            the actual usage of the users' data in the cloud. In
                            particular, we propose an object-centered approach
                            that enables enclosing our logging mechanism
                            together with users' data and policies. We leverage the
                            JAR programmable capabilities to both create a
                            dynamic and traveling object, and to ensure that any
                            access to users' data will trigger authentication and
                                  automated logging local to the JARs. To strengthen
                                  users' control, we also provide distributed auditing
                                 mechanisms. We provide extensive experimental
                                 studies that demonstrate the efficiency and
                                 effectiveness of the proposed approaches.
   41. Efficient information     Cloud computing as an emerging technology trend is 2012
       retrieval for ranked      expected to reshape the advances in information
       queries    in    cost-    technology. In this paper, we address two fundamental
       effective       cloud     issues in a cloud environment: privacy and efficiency.
       environments              We first review a private keyword-based file retrieval
                                  scheme proposed by Ostrovsky et al. Then, based on
                                 an aggregation and distribution layer (ADL), we
                                 present a scheme, termed efficient information
                                 retrieval for ranked query (EIRQ), to further reduce
                                 querying costs incurred in the cloud. Queries are
                                 classified into multiple ranks, where a higher ranked
                                 query can retrieve a higher percentage of matched
                                 files. Extensive evaluations have been conducted on
                                 an analytical model to examine the effectiveness of
                                 our scheme.
TECHNOLOGY                                 : JAVA

DOMAIN                                     : IEEE TRANSACTIONS ON MULTIMEDIA

S.NO    TITLES              ABSTRACT                                              YEAR
   1.   Movie2Comics:        This paper proposes a scheme that is able to 2012
        Towards a Lively     automatically turn a movie clip into comics. Two
        Video Content        principles are followed in the scheme: 1) optimizing
        Presentation         the information preservation of the movie; and 2)
                            generating outputs following the rules and the styles
                            of comics. The scheme mainly contains three
                            components: script-face mapping, descriptive picture
                            extraction, and cartoonization. The script-face
                            mapping utilizes face tracking and recognition
                            techniques to accomplish the mapping between
                            characters’ faces and their scripts
   2.   Robust Watermarking In this paper, we propose a robust watermarking 2012
        of Compressed and algorithm to watermark JPEG2000 compressed and
        Encrypted JPEG2000 encrypted images. The encryption algorithm we
        Images              propose to use is a stream cipher. While the proposed
                             technique embeds the watermark in the compressed-
                             encrypted domain, the extraction of the watermark can
                             be done in the decrypted domain.
3.   Load-Balancing             Multipath Switching systems (MPS) are intensely
     Multipath    Switching     used in state-of-the-art core routers to provide terabit
     System with Flow Slice     or even petabit switching capacity. One of the most
                                intractable issues in designing MPS is how to load
                                balance traffic across its multiple paths while not
                                disturbing the intraflow packet orders. Previous
                                packet-based solutions either suffer from delay
                                 penalties or lead to O(N^2) hardware complexity,
                                hence do not scale. Flow-based hashing algorithms
                                also perform badly due to the heavy-tailed flow-size
                                distribution. In this paper, we develop a novel scheme,
                                namely, Flow Slice (FS) that cuts off each flow into
                                flow slices at every intraflow interval larger than a
                                slicing threshold and balances the load on a finer
                                granularity. Based on the studies of tens of real
                                Internet traces, we show that setting a slicing threshold
                                of 1-4 ms, the FS scheme achieves comparative load-
                                balancing performance to the optimal one. It also
                                limits the probability of out-of-order packets to a
                                 negligible level (10^-6) on three popular MPSes at the
                                cost of little hardware complexity and an internal
                                speedup up to two. These results are proven by
                                theoretical analyses and also validated through trace-
                                driven prototype simulations
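
Entry 3 above describes the Flow Slice rule: a flow keeps its current path while its packets arrive close together, and may be re-hashed to a new path once an intra-flow gap exceeds the slicing threshold. The Java sketch below captures just that rule with a made-up path-selection policy; load counters and the MPS hardware details are omitted.

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Minimal sketch of the Flow Slice rule described above: a flow keeps its path
// while packets arrive close together; once the gap between two packets of the
// same flow exceeds the slicing threshold, the flow may be re-hashed to a new
// path without risking packet reordering. The path-selection policy is a
// placeholder.
public class FlowSliceBalancer {

    static final long SLICE_THRESHOLD_NANOS = 2_000_000;  // ~2 ms, within the 1-4 ms range above
    final int numPaths;
    final Map<String, Long> lastSeen = new HashMap<>();    // flow id -> last packet time
    final Map<String, Integer> path = new HashMap<>();     // flow id -> current path

    FlowSliceBalancer(int numPaths) { this.numPaths = numPaths; }

    int route(String flowId, long nowNanos) {
        Long last = lastSeen.get(flowId);
        if (last == null || nowNanos - last > SLICE_THRESHOLD_NANOS) {
            // Start of a new flow slice: free to pick a (possibly different) path
            path.put(flowId, Math.floorMod(Objects.hash(flowId, nowNanos), numPaths));
        }
        lastSeen.put(flowId, nowNanos);
        return path.get(flowId);
    }

    public static void main(String[] args) {
        FlowSliceBalancer balancer = new FlowSliceBalancer(4);
        System.out.println(balancer.route("flowA", 0));            // new slice
        System.out.println(balancer.route("flowA", 500_000));      // same slice, same path
        System.out.println(balancer.route("flowA", 10_500_000));   // gap > threshold: new slice
    }
}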
4.   Robust       Face-Name     Automatic face identification of characters in movies 2012
     Graph Matching for         has drawn significant research interests and led to
     Movie          Character   many interesting applications. It is a challenging
     Identification             problem due to the huge variation in the appearance of
                                each character. Although existing methods
                                demonstrate promising results in clean environment,
                                the performances are limited in complex movie scenes
                                due to the noises generated during the face tracking
                                and face clustering process. In this paper we present
                                two schemes of global face-name matching based
                                framework for robust character identification. The
                                contributions of this work include: Complex character
                                 changes are handled by simultaneous graph partitioning
                                 and graph matching. Beyond existing character
                                identification approaches, we further perform an in-
                                depth sensitivity analysis by introducing two types of
                                simulated noises. The proposed schemes demonstrate
                                state-of-the-art performance on movie character
                                identification in various genres of movies.
5.   Learn to Personalized      Increasingly developed social sharing websites, like 2012
     Image Search from the      Flickr and Youtube, allow users to create, share,
     Photo Sharing           annotate and comment on media. The large-scale user-
     Websites – projects     generated metadata not only facilitates users in sharing
     2012                    and organizing multimedia content, but also provides
                             useful information to improve media retrieval and
                             management. Personalized search serves as one such
                             example, where the web search experience is
                        improved by generating the returned list according to
                        the modified user search intents. In this paper, we
                        exploit the social annotations and propose a novel
                        framework simultaneously considering the user and
                        query relevance to learn to personalized image search.
                        The basic premise is to embed the user preference and
                        query-related
                        search intent into user-specific topic spaces. Since the
                        users’ original annotation is too sparse for topic
                        modeling, we need to enrich users’ annotation pool
                         before constructing the user-specific topic spaces. The
                        proposed framework contains two components:
Ad

Recommended

PDF
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
SBGC
 
PDF
Ieee projects 2012 for cse
SBGC
 
PDF
IEEE Projects 2013 For ME Cse Seabirds ( Trichy, Thanjavur, Karur, Perambalur )
SBGC
 
PDF
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Paralleld...
sunda2011
 
PDF
ENHANCED PARTICLE SWARM OPTIMIZATION FOR EFFECTIVE RELAY NODES DEPLOYMENT IN ...
IJCNCJournal
 
PDF
Ce24539543
IJERA Editor
 
PDF
Interference aware resource allocation model for D2D under cellular network
IJECEIAES
 
PDF
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Networknew
sunda2011
 
PPTX
Fundamentals of network performance engineering
Martin Geddes
 
PDF
Analysing Mobile Random Early Detection for Congestion Control in Mobile Ad-h...
IJECEIAES
 
PDF
QoS controlled capacity offload optimization in heterogeneous networks
journalBEEI
 
PDF
Introduction to ΔQ and Network Performance Science (extracts)
Martin Geddes
 
PDF
Throughput optimization in
ingenioustech
 
PDF
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Knowledge...
sunda2011
 
PDF
Networking ieee-project-topics-ocularsystems.in
Ocular Systems
 
PDF
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
IJCNCJournal
 
PDF
Wireless Sensor Network Based Clustering Architecture for Cooperative Communi...
ijtsrd
 
PDF
A Secure Data Transmission Scheme using Asymmetric Semi-Homomorphic Encryptio...
IJAAS Team
 
PDF
TRUST BASED ROUTING METRIC FOR RPL ROUTING PROTOCOL IN THE INTERNET OF THINGS
pijans
 
PDF
AN EFFICIENT INTRUSION DETECTION SYSTEM WITH CUSTOM FEATURES USING FPA-GRADIE...
IJCNCJournal
 
PDF
IEEE BE-BTECH NS2 PROJECT@ DREAMWEB TECHNO SOLUTION
ranjith kumar
 
PDF
AN ENERGY EFFICIENT DISTRIBUTED PROTOCOL FOR ENSURING COVERAGE AND CONNECTIVI...
ijasuc
 
PDF
BT Operate Case Study
Martin Geddes
 
PDF
Rapidly IPv6 multimedia management schemes based LTE-A wireless networks
IJECEIAES
 
PDF
Ijarcet vol-2-issue-7-2292-2296
Editor IJARCET
 
PDF
Intelligent Approach for Seamless Mobility in Multi Network Environment
IDES Editor
 
PDF
Performance Analysis of Bfsk Multi-Hop Communication Systems Over K-μ Fading ...
ijwmn
 
PDF
IEEE Projects 2012-2013 Network Security
SBGC
 
PDF
Final Year IEEE 2011 Java Projects in Trichy SBGC
SBGC
 
PDF
Ieee projects 2011 java cloud computing @ SBGC ( Chennai, Trichy, Karur, Pudu...
SBGC
 

More Related Content

What's hot (19)

PPTX
Fundamentals of network performance engineering
Martin Geddes
 
PDF
Analysing Mobile Random Early Detection for Congestion Control in Mobile Ad-h...
IJECEIAES
 
PDF
QoS controlled capacity offload optimization in heterogeneous networks
journalBEEI
 
PDF
Introduction to ΔQ and Network Performance Science (extracts)
Martin Geddes
 
PDF
Throughput optimization in
ingenioustech
 
PDF
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Knowledge...
sunda2011
 
PDF
Networking ieee-project-topics-ocularsystems.in
Ocular Systems
 
PDF
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
IJCNCJournal
 
PDF
Wireless Sensor Network Based Clustering Architecture for Cooperative Communi...
ijtsrd
 
PDF
A Secure Data Transmission Scheme using Asymmetric Semi-Homomorphic Encryptio...
IJAAS Team
 
PDF
TRUST BASED ROUTING METRIC FOR RPL ROUTING PROTOCOL IN THE INTERNET OF THINGS
pijans
 
PDF
AN EFFICIENT INTRUSION DETECTION SYSTEM WITH CUSTOM FEATURES USING FPA-GRADIE...
IJCNCJournal
 
PDF
IEEE BE-BTECH NS2 PROJECT@ DREAMWEB TECHNO SOLUTION
ranjith kumar
 
PDF
AN ENERGY EFFICIENT DISTRIBUTED PROTOCOL FOR ENSURING COVERAGE AND CONNECTIVI...
ijasuc
 
PDF
BT Operate Case Study
Martin Geddes
 
PDF
Rapidly IPv6 multimedia management schemes based LTE-A wireless networks
IJECEIAES
 
PDF
Ijarcet vol-2-issue-7-2292-2296
Editor IJARCET
 
PDF
Intelligent Approach for Seamless Mobility in Multi Network Environment
IDES Editor
 
PDF
Performance Analysis of Bfsk Multi-Hop Communication Systems Over K-μ Fading ...
ijwmn
 
Fundamentals of network performance engineering
Martin Geddes
 
Analysing Mobile Random Early Detection for Congestion Control in Mobile Ad-h...
IJECEIAES
 
QoS controlled capacity offload optimization in heterogeneous networks
journalBEEI
 
Introduction to ΔQ and Network Performance Science (extracts)
Martin Geddes
 
Throughput optimization in
ingenioustech
 
IEEE Final Year Projects 2011-2012 :: Elysium Technologies Pvt Ltd::Knowledge...
sunda2011
 
Networking ieee-project-topics-ocularsystems.in
Ocular Systems
 
ON THE PERFORMANCE OF INTRUSION DETECTION SYSTEMS WITH HIDDEN MULTILAYER NEUR...
IJCNCJournal
 
Wireless Sensor Network Based Clustering Architecture for Cooperative Communi...
ijtsrd
 
A Secure Data Transmission Scheme using Asymmetric Semi-Homomorphic Encryptio...
IJAAS Team
 
TRUST BASED ROUTING METRIC FOR RPL ROUTING PROTOCOL IN THE INTERNET OF THINGS
pijans
 
AN EFFICIENT INTRUSION DETECTION SYSTEM WITH CUSTOM FEATURES USING FPA-GRADIE...
IJCNCJournal
 
IEEE BE-BTECH NS2 PROJECT@ DREAMWEB TECHNO SOLUTION
ranjith kumar
 
AN ENERGY EFFICIENT DISTRIBUTED PROTOCOL FOR ENSURING COVERAGE AND CONNECTIVI...
ijasuc
 
BT Operate Case Study
Martin Geddes
 
Rapidly IPv6 multimedia management schemes based LTE-A wireless networks
IJECEIAES
 
Ijarcet vol-2-issue-7-2292-2296
Editor IJARCET
 
Intelligent Approach for Seamless Mobility in Multi Network Environment
IDES Editor
 
Performance Analysis of Bfsk Multi-Hop Communication Systems Over K-μ Fading ...
Algorithm Solved IEEE Projects 2012 2013 Java @ Seabirdssolutions

  • 4. TECHNOLOGY : JAVA DOMAIN : IEEE TRANSACTIONS ON NETWORKING S.NO TITLES ABSTRACT YEAR 1. Balancing the Trade- In This Project, we propose schemes to balance the trade- 2012 Offs between Query offs between data availability and query delay under Delay and Data different system settings and requirements. Mobile nodes Availability in in one partition are not able to access data hosted by MANETs nodes in other partitions, and hence significantly degrade the performance of data access. To deal with this problem, We apply data replication techniques. 2. MeasuRouting: A In this paper we present a theoretical framework for 2012 Framework for MeasuRouting. Furthermore, as proofs-of-concept, we Routing Assisted present synthetic and practical monitoring applications to Traffic Monitoring showcase the utility enhancement achieved with MeasuRouting. 3. Cooperative Profit We model Optimal cooperation using the theory of 2012 Sharing in Coalition- transferable payoff coalitional games. We show that the Based Resource optimum cooperation strategy, which involves the Allocation in Wireless acquisition, deployment, and allocation of the channels Networks and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings. 4. Bloom Cast: Efficient In this paper we propose Bloom Cast, an efficient and 2012 Full-Text Retrieval effective full-text retrieval scheme, in unstructured P2P over Unstructured networks. Bloom Cast is effective because it guarantees P2Ps with Guaranteed perfect recall rate with high probability. Recall 5. On Optimizing We propose a novel overlay formation algorithm for 2012 Overlay Topologies unstructured P2P networks. Based on the file sharing for Search in pattern exhibiting the power-law property, our proposal Unstructured Peer-to- is unique in that it poses rigorous performance Peer Networks guarantees. 6. An MDP-Based In this paper, we propose an automated Markov Decision 2012 Dynamic Optimization Process (MDP)-based methodology to prescribe optimal Methodology for sensor node operation to meet application requirements Wireless Sensor and adapt to changing environmental stimuli. Numerical Networks results confirm the optimality of our proposed methodology and reveal that our methodology more closely meets application requirements compared to other feasible policies. 7. Obtaining Provably In this paper, we address The Internet Topology 2012 Legitimate Internet Problems by providing a framework to generate small, Topologies realistic, and policy-aware topologies. We propose HBR,
• 5. a novel sampling method, which exploits the inherent hierarchy of the policy-aware Internet topology. We formally prove that our approach generates connected and legitimate topologies, which are compatible with the policy-based routing conventions and rules.
8. Extrema Propagation: Fast Distributed Estimation of Sums and Network Sizes (2012): This paper introduces Extrema Propagation, a probabilistic technique for distributed estimation of the sum of positive real numbers. The technique relies on the exchange of duplicate-insensitive messages and can be applied in flood and/or epidemic settings, where multipath routing occurs; it is tolerant of message loss; it is fast, as the number of message exchange steps can be made just slightly above the theoretical minimum; and it is fully distributed, with no single point of failure and the result produced at every node. (A small illustrative sketch of the underlying estimator is given after the listing.)
9. Latency Equalization as a New Network Service Primitive (2012): We propose a Latency Equalization (LEQ) service, which equalizes the perceived latency for all clients.
10. Grouping-Enhanced Resilient Probabilistic En-Route Filtering of Injected False Data in WSNs (2012): This paper proposes a scheme, referred to as Grouping-enhanced Resilient Probabilistic En-route Filtering (GRPEF). In GRPEF, an efficient distributed algorithm is proposed to group nodes without incurring extra groups, and a multi-axis division based approach for deriving location-aware keys is used to overcome the threshold problem and remove the dependence on the sink immobility and routing protocols.
11. On Achieving Group-Strategyproof Multicast (2012): A multicast scheme is strategyproof if no receiver has an incentive to lie about her true valuation. It is further group strategyproof if no group of colluding receivers has an incentive to lie. We study multicast schemes that target group strategyproofness, in both directed and undirected networks.
12. Distributed α-Optimal User Association and Cell Load Balancing in Wireless Networks (2012): In this paper, we develop a framework for user association in infrastructure-based wireless networks, specifically focused on flow-level cell load balancing under spatially inhomogeneous traffic distributions. Our work encompasses several different user association policies: rate-optimal, throughput-optimal, delay-optimal, and load-equalizing, which we collectively denote α-optimal user association.
13. Opportunistic Flow-Level Latency Estimation Using Consistent NetFlow (2012): In this paper, we propose the Consistent NetFlow (CNF) architecture for measuring per-flow delays within routers. CNF utilizes the existing NetFlow architecture that already reports the first and last timestamps per flow, and it proposes hash-based sampling to ensure that two adjacent routers record the
• 6. same flows.
14. Leveraging a Compound Graph-Based DHT for Multi-Attribute Range Queries with Performance Analysis (2012): Resource discovery is critical to the usability and accessibility of grid computing systems. Distributed Hash Table (DHT) has been applied to grid systems as a distributed mechanism for providing scalable range-query and multi-attribute resource discovery. Multi-DHT-based approaches depend on multiple DHT networks with each network responsible for a single attribute. Single-DHT-based approaches keep the resource information of all attributes in a single node. Both classes of approaches lead to high overhead. In this paper, we propose a Low-Overhead Range-query Multi-attribute (LORM) DHT-based resource discovery approach. Unlike other DHT-based approaches, LORM relies on a single compound graph-based DHT network and distributes resource information among nodes in balance by taking advantage of the compound graph structure. Moreover, it has high capability to handle the large-scale and dynamic characteristics of resources in grids. Experimental results demonstrate the efficiency of LORM in comparison with other resource discovery approaches. LORM dramatically reduces maintenance and resource discovery overhead. In addition, it yields significant improvements in resource location efficiency. We also analyze the performance of the LORM approach rigorously by comparing it with other multi-DHT-based and single-DHT-based approaches with respect to their overhead and efficiency. The analytical results are consistent with experimental results, and prove the superiority of the LORM approach in theory.
15. Exploiting Excess Capacity to Improve Robustness of WDM Mesh Networks (2012): Excess capacity (EC) is the unused capacity in a network. We propose EC management techniques to improve network performance. Our techniques exploit the EC in two ways. First, a connection preprovisioning algorithm is used to reduce the connection setup time. Second, whenever possible, we use protection schemes that have higher availability and shorter protection switching time. Specifically, depending on the amount of EC available in the network, our proposed EC management techniques dynamically migrate connections between high-availability, high-backup-capacity protection schemes and low-availability, low-backup-capacity protection schemes. Thus, multiple protection schemes can coexist in the network. The four EC management techniques studied in this paper differ in two respects: when the connections are migrated from
  • 7. one protection scheme to another, and which connections are migrated. Specifically, Lazy techniques migrate connections only when necessary, whereas Proactive techniques migrate connections to free up capacity in advance. Partial Backup Reprovisioning (PBR) techniques try to migrate a minimal set of connections, whereas Global Backup Reprovisioning (GBR) techniques migrate all connections. We develop integer linear program (ILP) formulations and heuristic algorithms for the EC management techniques. We then present numerical examples to illustrate how the EC management techniques improve network performance by exploiting the EC in wavelength-division- multiplexing (WDM) mesh networks 16. Revisiting Dynamic In unstructured peer-to-peer networks, the average 2012 Query Protocols in response latency and traffic cost of a query are two main Unstructured Peer-to- performance metrics. Controlled-flooding resource query Peer Networks algorithms are widely used in unstructured networks such as peer-to-peer networks. In this paper, we propose a novel algorithm named Selective Dynamic Query (SDQ). Based on mathematical programming, SDQ calculates the optimal combination of an integer TTL value and a set of neighbors to control the scope of the next query. Our results demonstrate that SDQ provides finer grained control than other algorithms: its response latency is close to the well-known minimum one via Expanding Ring; in the mean time, its traffic cost is also close to the minimum. To our best knowledge, this is the first work capable of achieving a best trade-off between response latency and traffic cost. 17. Adaptive A distributed adaptive opportunistic routing scheme for 2012 Opportunistic Routing multihop wireless ad hoc networks is proposed. The for Wireless Ad Hoc proposed scheme utilizes a reinforcement learning Networks framework to opportunistically route the packets even in the absence of reliable knowledge about channel statistics and network model. This scheme is shown to be optimal with respect to an expected average per-packet reward criterion. The proposed routing scheme jointly addresses the issues of learning and routing in an opportunistic context, where the network structure is characterized by the transmission success probabilities. In particular, this learning framework leads to a stochastic routing scheme that optimally “explores” and “exploits” the opportunities in the network. 18. Design, This load balancer improves both throughput and 2012
  • 8. Implementation, and response time versus a single node, while exposing a Performance of A single interface to external clients. The algorithm Load Balancer for SIP achieves Transaction Least-Work-Left (TLWL), Server Clusters – achieves its performance by integrating several features: projects 2012 knowledge of the SIP protocol; dynamic estimates of back-end server load; distinguishing transactions from calls; recognizing variability in call length; and exploiting differences in processing costs for different SIP transactions. 19. Router Support for An increasing number of datacenter network 2012 Fine-Grained Latency applications, including automated trading and high- Measurements – performance computing, have stringent end-to-end projects 2012 latency requirements where even microsecond variations may be intolerable. The resulting fine-grained measurement demands cannot be met effectively by existing technologies, such as SNMP, NetFlow, or active probing. Instrumenting routers with a hash-based primitive has been proposed that called as Lossy Difference Aggregator (LDA) to measure latencies down to tens of microseconds even in the presence of packet loss. Because LDA does not modify or encapsulate the packet, it can be deployed incrementally without changes along the forwarding path. When compared to Poisson- spaced active probing with similar overheads, LDA mechanism delivers orders of magnitude smaller relative error. Although ubiquitous deployment is ultimately desired, it may be hard to achieve in the shorter term 20. A Framework for Monitoring transit traffic at one or more points in a 2012 Routing Assisted network is of interest to network operators for reasons of Traffic Monitoring – traffic accounting, debugging or troubleshooting, projects 2012 forensics, and traffic engineering. Previous research in the area has focused on deriving a placement of monitors across the network towards the end of maximizing the monitoring utility of the network operator for a given traffic routing. However, both traffic characteristics and measurement objectives can dynamically change over time, rendering a previously optimal placement of monitors suboptimal. It is not feasible to dynamically redeploy/reconfigure measurement infrastructure to cater to such evolving measurement requirements. This problem is addressed by strategically routing traffic sub- populations over fixed monitors. This approach is MeasuRouting. The main challenge for MeasuRouting is to work within the constraints of existing intra-domain traffic engineering operations that are geared for efficiently utilizing bandwidth resources, or meeting
  • 9. Quality of Service (QoS) constraints, or both. A fundamental feature of intra-domain routing, that makes MeasuRouting feasible, is that intra-domain routing is often specified for aggregate flows. MeasuRouting, can therefore, differentially route components of an aggregate flow while ensuring that the aggregate placement is compliant to original traffic engineering objectives. 21. Independent In order to achieve resilient multipath routing we 2012 Directed Acyclic introduce the concept of Independent Directed Acyclic Graphs for Resilient Graphs (IDAGs) in this study. Link-independent (Node- Multipath Routing independent) DAGs satisfy the property that any path from a source to the root on one DAG is link-disjoint (node-disjoint) with any path from the source to the root on the other DAG. Given a network, we develop polynomial time algorithms to compute link-independent and node-independent DAGs. The algorithm developed in this paper: (1) provides multipath routing; (2) utilizes all possible edges; (3) guarantees recovery from single link failure; and (4) achieves all these with at most one bit per packet as overhead when routing is based on destination address and incoming edge. We show the effectiveness of the proposed IDAGs approach by comparing key performance indices to that of the independent trees and multiple pairs of independent trees techniques through extensive simulations 22. A Greedy Link Information-theoretic broadcast channels (BCs) and 2012 Scheduler for Wireless multiple-access channels (MACs) enable a single node to Networks With transmit data simultaneously to multiple nodes, and Gaussian Multiple- multiple nodes to transmit data simultaneously to a single Access and Broadcast node, respectively. In this paper, we address the problem Channels of link scheduling in multihop wireless networks containing nodes with BC and MAC capabilities. We first propose an interference model that extends protocol interference models, originally designed for point-to- point channels, to include the possibility of BCs and MACs. Due to the high complexity of optimal link schedulers, we introduce the Multiuser Greedy Maximum Weight algorithm for link scheduling in multihop wireless networks containing BCs and MACs. Given a network graph, we develop new local pooling conditions and show that the performance of our algorithm can be fully characterized using the associated parameter, the multiuser local pooling factor. We provide examples of some network graphs, on which we apply local pooling conditions and derive the multiuser local
  • 10. pooling factor. We prove optimality of our algorithm in tree networks and show that the exploitation of BCs and MACs improve the throughput performance considerably in multihop wireless networks. 23. A Quantization We consider rate optimization in multicast systems that 2012 Theoretic Perspective use several multicast trees on a communication network. on Simulcast and The network is shared between different applications. For Layered Multicast that reason, we model the available bandwidth for Optimization multicast as stochastic. For specific network topologies, we show that the multicast rate optimization problem is equivalent to the optimization of scalar quantization. We use results from rate-distortion theory to provide a bound on the achievable performance for the multicast rate optimization problem. A large number of receivers makes the possibility of adaptation to changing network conditions desirable in a practical system. To this end, we derive an analytical solution to the problem that is asymptotically optimal in the number of multicast trees. We derive local optimality conditions, which we use to describe a general class of iterative algorithms that give locally optimal solutions to the problem. Simulation results are provided for the multicast of an i.i.d. Gaussian process, an i.i.d. Laplacian process, and a video source. 24. Bit Weaving A Non- Ternary Content Addressable Memories (TCAMs) have 2012 Prefix Approach to become the de facto standard in industry for fast packet Compressing Packet classification. Unfortunately, TCAMs have limitations of Classifiers in TCAMs small capacity, high power consumption, high heat generation, and high cost. The well-known range expansion problem exacerbates these limitations as each classifier rule typically has to be converted to multiple TCAM rules. One method for coping with these limitations is to use compression schemes to reduce the number of TCAM rules required to represent a classifier. Unfortunately, all existing compression schemes only produce prefix classifiers. Thus, they all miss the compression opportunities created by non-prefix ternary classifiers. 25. Cooperative Profit We consider a network in which several service 2012 Sharing in Coalition- providers offer wireless access service to their respective Based Resource subscribed customers through potentially multi-hop Allocation in Wireless routes. If providers cooperate, i.e., pool their resources, Networks such as spectrum and base stations, and agree to serve each others' customers, their aggregate payoffs, and individual shares, can potentially substantially increase through efficient utilization of resources and statistical
  • 11. multiplexing. The potential of such cooperation can however be realized only if each provider intelligently determines who it would cooperate with, when it would cooperate, and how it would share its resources during such cooperation. Also, when the providers share their aggregate revenues, developing a rational basis for such sharing is imperative for the stability of the coalitions. We model such cooperation using transferable payoff coalitional game theory. We first consider the scenario that locations of the base stations and the channels that each provider can use have already been decided apriori. We show that the optimum cooperation strategy, which involves the allocations of the channels and the base stations to mobile customers, can be obtained as solutions of convex optimizations. We next show that the grand coalition is stable in this case, i.e. if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff, while offering each such a share that removes any incentive to split from the coalition. Next, we show that when the providers can choose the locations of their base stations and decide which channels to acquire, the above results hold in important special cases. Finally, we examine cooperation when providers do not share their payoffs, but still share their resources so as to enhance individual payoffs. We show that the grand coalition continues to be stable. 26. CSMACN Carrier A wireless transmitter learns of a packet loss and infers 2012 Sense Multiple Access collision only after completing the entire transmission. If With Collision the transmitter could detect the collision early [such as Notification with carrier sense multiple access with collision detection (CSMA/CD) in wired networks], it could immediately abort its transmission, freeing the channel for useful communication. There are two main hurdles to realize CSMA/CD in wireless networks. First, a wireless transmitter cannot simultaneously transmit and listen for a collision. Second, any channel activity around the transmitter may not be an indicator of collision at the receiver. This paper attempts to approximate CSMA/CD in wireless networks with a novel scheme called CSMA/CN (collision notification). Under CSMA/CN, the receiver uses PHY-layer information to detect a collision and immediately notifies the transmitter. The collision notification consists of a unique signature, sent on the same channel as the data. The transmitter employs a listener antenna and performs signature correlation to
  • 12. discern this notification. Once discerned, the transmitter immediately aborts the transmission. We show that the notification signature can be reliably detected at the listener antenna, even in the presence of a strong self- interference from the transmit antenna. A prototype testbed of 10 USRP/GNU Radios demonstrates the feasibility and effectiveness of CSMA/CN. 27. Dynamic Power A major problem in wireless networks is coping with 2012 Allocation Under limited resources, such as bandwidth and energy. These Arbitrary Varying issues become a major algorithmic challenge in view of Channels—An Online the dynamic nature of the wireless domain. We consider Approach in this paper the single-transmitter power assignment problem under time-varying channels, with the objective of maximizing the data throughput. It is assumed that the transmitter has a limited power budget, to be sequentially divided during the lifetime of the battery. We deviate from the classic work in this area, which leads to explicit "water-filling" solutions, by considering a realistic scenario where the channel state quality changes arbitrarily from one transmission to the other. The problem is accordingly tackled within the framework of competitive analysis, which allows for worst case performance guarantees in setups with arbitrarily varying channel conditions. We address both a "discrete" case, where the transmitter can transmit only at a fixed power level, and a "continuous" case, where the transmitter can choose any power level out of a bounded interval. For both cases, we propose online power-allocation algorithms with proven worst-case performance bounds. In addition, we establish lower bounds on the worst-case performance of any online algorithm, and show that our proposed algorithms are optimal. 28. Economic Issues in In designing and managing a shared infrastructure, one 2012 Shared Infrastructures must take account of the fact that its participants will make self-interested and strategic decisions about the resources that they are willing to contribute to it and/or the share of its cost that they are willing to bear. Taking proper account of the incentive issues that thereby arise, we design mechanisms that, by eliciting appropriate information from the participants, can obtain for them maximal social welfare, subject to charging payments that are sufficient to cover costs. We show that there are incentivizing roles to be played both by the payments that we ask from the participants and the specification of how resources are to be shared. New in this paper is our
  • 13. formulation of models for designing optimal management policies, our analysis that demonstrates the inadequacy of simple sharing policies, and our proposals for some better ones. We learn that simple policies may be far from optimal and that efficient policy design is not trivial. However, we find that optimal policies have simple forms in the limit as the number of participants becomes large. 29. On New Approaches Society relies heavily on its networked physical 2012 of Assessing Network infrastructure and information systems. Accurately Vulnerability assessing the vulnerability of these systems against Hardness and disruptive events is vital for planning and risk Approximation management. Existing approaches to vulnerability assessments of large-scale systems mainly focus on investigating inhomogeneous properties of the underlying graph elements. These measures and the associated heuristic solutions are limited in evaluating the vulnerability of large-scale network topologies. Furthermore, these approaches often fail to provide performance guarantees of the proposed solutions. In this paper, we propose a vulnerability measure, pairwise connectivity, and use it to formulate network vulnerability assessment as a graph-theoretical optimization problem, referred to as -disruptor. The objective is to identify the minimum set of critical network elements, namely nodes and edges, whose removal results in a specific degradation of the network global pairwise connectivity. We prove the NP- completeness and inapproximability of this problem and propose an pseudo-approximation algorithm to computing the set of critical nodes and an pseudo- approximation algorithm for computing the set of critical edges. The results of an extensive simulation-based experiment show the feasibility of our proposed vulnerability assessment framework and the efficiency of the proposed approximation algorithms in comparison to other approaches. 30. Quantifying Video- With the proliferation of multimedia content on the 2012 QoE Degradations of Internet, there is an increasing demand for video streams Internet Links with high perceptual quality. The capability of present- day Internet links in delivering high-perceptual-quality streaming services, however, is not completely understood. Link-level degradations caused by intradomain routing policies and inter-ISP peering policies are hard to obtain, as Internet service providers often consider such information proprietary.
  • 14. Understanding link-level degradations will enable us in designing future protocols, policies, and architectures to meet the rising multimedia demands. This paper presents a trace-driven study to understand quality-of-experience (QoE) capabilities of present-day Internet links using 51 diverse ISPs with a major presence in the US, Europe, and Asia-Pacific. We study their links from 38 vantage points in the Internet using both passive tracing and active probing for six days. We provide the first measurements of link-level degradations and case studies of intra-ISP and inter-ISP peering links from a multimedia standpoint. Our study offers surprising insights into intradomain traffic engineering, peering link loading, BGP, and the inefficiencies of using autonomous system (AS)-path lengths as a routing metric. Though our results indicate that Internet routing policies are not optimized for delivering high-perceptual- quality streaming services, we argue that alternative strategies such as overlay networks can help meet QoE demands over the Internet. Streaming services apart, our Internet measurement results can be used as an input to a variety of research problems. 31. Order Matters Modern wireless interfaces support a physical-layer 2012 Transmission capability called Message in Message (MIM). Briefly, Reordering in MIM allows a receiver to disengage from an ongoing Wireless Networks reception and engage onto a stronger incoming signal. Links that otherwise conflict with each other can be made concurrent with MIM. However, the concurrency is not immediate and can be achieved only if conflicting links begin transmission in a specific order. The importance of link order is new in wireless research, motivating MIM-aware revisions to link-scheduling protocols. This paper identifies the opportunity in MIM- aware reordering, characterizes the optimal improvement in throughput, and designs a link-layer protocol for enterprise wireless LANs to achieve it. Testbed and simulation results confirm the performance gains of the proposed system. 32. Static Routing and In this paper, we investigate the static multicast advance 2012 Wavelength reservation (MCAR) problem for all-optical wavelength- Assignment for routed WDM networks. Under the advanced reservation Multicast Advance traffic model, connection requests specify their start time Reservation in All- to be some time in the future and also specify their Optical Wavelength- holding times. We investigate the static MCAR problem Routed WDM where the set of advance reservation requests is known
  • 15. Networks ahead of time. We prove the MCAR problem is NP- complete, formulate the problem mathematically as an integer linear program (ILP), and develop three efficient heuristics, seqRWA, ISH, and SA, to solve the problem for practical size networks. We also introduce a theoretical lower bound on the number of wavelengths required. To evaluate our heuristics, we first compare their performances to the ILP for small networks, and then simulate them over real-world, large-scale networks. We find the SA heuristic provides close to optimal results compared to the ILP for our smaller networks, and up to a 33% improvement over seqRWA and up to a 22% improvement over ISH on realistic networks. SA provides, on average, solutions 1.5-1.8 times the cost given by our conservative lower bound on large networks. 33. System-Level We consider a robust-optimization-driven system-level 2012 Optimization in approach to interference management in a cellular Wireless Networks broadband system operating in an interference-limited Managing Interference and highly dynamic regime. Here, base stations in and Uncertainty via neighboring cells (partially) coordinate their transmission Robust Optimization schedules in an attempt to avoid simultaneous max- power transmission to their mutual cell edge. Limits on communication overhead and use of the backhaul require base station coordination to occur at a slower timescale than the customer arrival process. The central challenge is to properly structure coordination decisions at the slow timescale, as these subsequently restrict the actions of each base station until the next coordination period. Moreover, because coordination occurs at the slower timescale, the statistics of the arriving customers, e.g., the load, are typically only approximately known-thus, this coordination must be done with only approximate knowledge of statistics. We show that performance of existing approaches that assume exact knowledge of these statistics can degrade rapidly as the uncertainty in the arrival process increases. We show that a two-stage robust optimization framework is a natural way to model two-timescale decision problems. We provide tractable formulations for the base-station coordination problem and show that our formulation is robust to fluctuations (uncertainties) in the arriving load. This tolerance to load fluctuation also serves to reduce the need for frequent reoptimization across base stations, thus helping minimize the communication overhead required for system-level interference reduction. Our robust
  • 16. optimization formulations are flexible, allowing us to control the conservatism of the solution. Our simulations show that we can build in robustness without significant degradation of nominal performance. 34. The Case for Feed- Variable latencies due to communication delays or 2012 Forward Clock system noise is the central challenge faced by time- Synchronization keeping algorithms when synchronizing over the network. Using extensive experiments, we explore the robustness of synchronization in the face of both normal and extreme latency variability and compare the feedback approaches of ntpd and ptpd (a software implementation of IEEE-1588) to the feed-forward approach of the RADclock and advocate for the benefits of a feed-forward approach. Noting the current lack of kernel support, we present extensions to existing mechanisms in the Linux and FreeBSD kernels giving full access to all available raw counters, and then evaluate the TSC, HPET, and ACPI counters' suitability as hardware timing sources. We demonstrate how the RADclock achieves the same microsecond accuracy with each counter. TECHNOLOGY : JAVA DOMAIN : IEEE TRANSACTIONS ON NETWORK SECURITY S.NO TITLES ABSTRACT YEAR 1. Design and We have designed and implemented TARF, a robust 2012 Implementation of trust-aware routing framework for dynamic wireless TARF: A Trust-Aware sensor networks (WSN). Without tight time Routing Framework synchronization or known Geographic information, for WSNs TARF provides trustworthy and energy-efficient route. Most importantly, TARF proves effective against those harmful attacks developed out of identity deception; the resilience of TARF is verified through extensive evaluation with both simulation and empirical experiments on large-scale WSNs under various scenarios including mobile and RF-shielding network conditions. 2. Risk-Aware In this paper, we propose a risk-aware response 2012 Mitigation for mechanism to systematically Cope with the identified MANET Routing routing attacks. Our risk-aware approach is based on an Attacks extended Dempster-Shafer mathematical theory of
  • 17. Evidence introducing a notion of importance factors. 3. Survivability In this paper, we study survivability issues for RFID. We 2012 Experiment and first present an RFID survivability experiment to define a Attack foundation to measure the degree of survivability of an Characterization for RFID system under varying attacks. Then we model a RFID series of malicious scenarios using stochastic process algebras and study the different effects of those attacks on the ability of the RFID system to provide critical services even when parts of the system have been damaged. 4. Detecting and In this paper, we represent an 2012 Resolving Firewall innovative policy anomaly management framework Policy Anomalies for firewalls, adopting a rule-based segmentation technique to identify policy anomalies and derive effective anomaly resolutions. In particular, we articulate a grid-based representation technique, providing an intuitive cognitive sense about policy anomaly. 5. Automatic In this paper, we present a complete solution for 2012 Reconfiguration for dynamically changing system membership in a large- Large-Scale Reliable scale Byzantine-fault-tolerant system. We present a Storage Systems service that tracks system membership and periodically notifies other system nodes of changes. 6. Detecting Anomalous In this paper, we introduce the community anomaly 2012 Insiders in detection system (CADS), an unsupervised learning Collaborative framework to detect insider threats based on the access Information Systems logs of collaborative environments. The framework is based on the observation that typical CIS users tend to form community structures based on the subjects accessed 7. An Extended Visual Conventional visual secret sharing schemes generate 2012 Cryptography noise-like random pixels on shares to hide secret images. Algorithm for General It suffers a management problem. In this paper, we Access Structures propose a general approach to solve the above- mentioned problems; the approach can be used for binary secret images in non computer-aided decryption environments. 8. Mitigating Distributed In this paper, we extend port-hopping to support 2012 Denial of Service multiparty applications, by proposing the BIGWHEEL Attacks in Multiparty algorithm, for each application server to communicate Applications in the with multiple clients in a port-hopping manner without Presence of Clock the need for group synchronization. Furthermore, we Drift present an adaptive algorithm, HOPERAA, for enabling hopping in the presence of bounded asynchrony, namely, when the communicating parties have clocks with clock drifts. 9. On the Security and Content distribution via network coding has received a 2012
  • 18. Efficiency of Content lot of attention lately. However, direct application of Distribution via network coding may be insecure. In particular, attackers Network Coding can inject "bogus” data to corrupt the content distribution process so as to hinder the information dispersal or even deplete the network resource. Therefore, content verification is an important and practical issue when network coding is employed. 10. Packet-Hiding In this paper, we address the problem of selective 2012 Methods for jamming attacks in wireless networks. In these attacks, Preventing Selective the adversary is active only for a short period of time, Jamming Attacks selectively targeting messages of high importance. 11. Stochastic Model of 2012 Multi virus Dynamics 12. Peering Equilibrium Our scheme relies on a game theory modeling, with a 2012 Multipath Routing: A non-cooperative potential game considering both routing Game Theory and congestions costs. We compare different PEMP Framework for policies to BGP Multipath schemes by emulating a Internet Peering realistic peering scenario. Settlements 13. Modeling and Our scheme uses the Power Spectral Density (PSD) 2012 Detection of distribution of the scan traffic volume and its Camouflaging Worm corresponding Spectral Flatness Measure (SFM) to distinguish the C-Worm traffic from background traffic. The performance data clearly demonstrates that our scheme can effectively detect the C-Worm propagation.two heuristic algorithms for the two sub problems. 14. Analysis of a Botnet We present the design of an advanced hybrid peer-to- 2012 Takeover peer botnet. Compared with current botnets, the proposed botnet is harder to be shut down, monitored, and hijacked. It provides individualized encryption and control traffic dispersion. 15. Efficient Network As real-time traffic such as video or voice increases on 2012 Modification to the Internet, ISPs are required to provide stable quality as Improve QoS Stability well as connectivity at failures. For ISPs, how to at Failures effectively improve the stability of these qualities at failures with the minimum investment cost is an important issue, and they need to effectively select a limited number of locations to add link facilities. 16. Detecting Spam Compromised machines are one of the key security 2012 Zombies by threats on the Internet; they are often used to launch Monitoring Outgoing various security attacks such as spamming and spreading Messages malware, DDoS, and identity theft. Given that spamming provides a key economic incentive for attackers to recruit the large number of compromised machines, we focus on the detection of the compromised machines in a network
  • 19. that are involved in the spamming activities, commonly known as spam zombies. We develop an effective spam zombie detection system named SPOT by monitoring outgoing messages of a network. SPOT is designed based on a powerful statistical tool called Sequential Probability Ratio Test, which has bounded false positive and false negative error rates. In addition, we also evaluate the performance of the developed SPOT system using a two-month e-mail trace collected in a large US campus network. Our evaluation studies show that SPOT is an effective and efficient system in automatically detecting compromised machines in a network. For example, among the 440 internal IP addresses observed in the e-mail trace, SPOT identifies 132 of them as being associated with compromised machines. Out of the 132 IP addresses identified by SPOT, 126 can be either independently confirmed (110) or highly likely (16) to be compromised. Moreover, only seven internal IP addresses associated with compromised machines in the trace are missed by SPOT. In addition, we also compare the performance of SPOT with two other spam zombie detection algorithms based on the number and percentage of spam messages originated or forwarded by internal machines, respectively, and show that SPOT outperforms these two detection algorithms. 17. A Hybrid Approach to Real-world entities are not always represented by the 2012 Private Record same set of features in different data sets. Therefore, Matching Network matching records of the same real-world entity Security 2012 Java distributed across these data sets is a challenging task. If the data sets contain private information, the problem becomes even more difficult. Existing solutions to this problem generally follow two approaches: sanitization techniques and cryptographic techniques. We propose a hybrid technique that combines these two approaches and enables users to trade off between privacy, accuracy, and cost. Our main contribution is the use of a blocking phase that operates over sanitized data to filter out in a privacy- preserving manner pairs of records that do not satisfy the matching condition. We also provide a formal definition of privacy and prove that the participants of our protocols learn nothing other than their share of the result and what can be inferred from their share of the result, their input and sanitized views of the input data sets (which are considered public information). Our method incurs considerably lower costs than cryptographic techniques
  • 20. and yields significantly more accurate matching results compared to sanitization techniques, even when privacy requirements are high. 18. ES-MPICH2: A An increasing number of commodity clusters are 2012 Message Passing connected to each other by public networks, which have Interface with become a potential threat to security sensitive parallel Enhanced Security applications running on the clusters. To address this Network Security security issue, we developed a Message Passing Interface 2012 Java (MPI) implementation to preserve confidentiality of messages communicated among nodes of clusters in an unsecured network. We focus on MPI rather than other protocols, because MPI is one of the most popular communication protocols for parallel computing on clusters. Our MPI implementation—called ES- MPICH2—was built based on MPICH2 developed by the Argonne National Laboratory. Like MPICH2, ES- MPICH2 aims at supporting a large variety of computation and communication platforms like commodity clusters and high-speed networks. We integrated encryption and decryption algorithms into the MPICH2 library with the standard MPI interface and; thus, data confidentiality of MPI applications can be readily preserved without a need to change the source codes of the MPI applications. MPI-application programmers can fully configure any confidentiality services in MPICHI2, because a secured configuration file in ES-MPICH2 offers the programmers flexibility in choosing any cryptographic schemes and keys seamlessly incorporated in ES-MPICH2. We used the Sandia Micro Benchmark and Intel MPI Benchmark suites to evaluate and compare the performance of ES- MPICH2 with the original MPICH2 version. Our experiments show that overhead incurred by the confidentiality services in ES-MPICH2 is marginal for small messages. The security overhead in ES-MPICH2 becomes more pronounced with larger messages. Our results also show that security overhead can be significantly reduced in ES-MPICH2 by high- performance clusters. 19. Ensuring Distributed Cloud computing enables highly scalable services to be 2012 Accountability for easily consumed over the Internet on an as-needed basis. Data Sharing in the A major feature of the cloud services is that users’ data Cloud are usually processed remotely in unknown machines that users do not own or operate. While enjoying the convenience brought by this new emerging technology, users’ fears of losing control of their own data
  • 21. (particularly, financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, here, we propose a novel highly decentralized information accountability framework to keep track of the actual usage of the users’ data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users’ data and policies. We leverage the JAR programmable capabilities to both create a dynamic and traveling object, and to ensure that any access to users’ data will trigger authentication and automated logging local to the JARs. To strengthen user’s control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches. 20. BECAN: A Injecting false data attack is a well known serious threat 2012 Bandwidth-Efficient to wireless sensor network, for which an adversary Cooperative reports bogus information to sink causing error decision Authentication at upper level and energy waste in en-route nodes. In this Scheme for Filtering paper, we propose a novel bandwidth-efficient Injected False Data in cooperative authentication (BECAN) scheme for filtering Wireless Sensor injected false data. Based on the random graph Networks – projects characteristics of sensor node deployment and the 2012 cooperative bit-compressed authentication technique, the proposed BECAN scheme can save energy by early detecting and filtering the majority of injected false data with minor extra overheads at the en-route nodes. In addition, only a very small fraction of injected false data needs to be checked by the sink, which thus largely reduces the burden of the sink. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high filtering probability and energy saving. 21. A Flexible Approach There is an increasing need for fault tolerance 2012 to Improving System capabilities in logic devices brought about by the scaling Reliability with of transistors to ever smaller geometries. This paper Virtual Lockstep presents a hypervisor-based replication approach that can be applied to commodity hardware to allow for virtually lockstepped execution. It offers many of the benefits of hardware-based lockstep while being cheaper and easier to implement and more flexible in the configurations supported. A novel form of processor state fingerprinting is also presented, which can significantly reduce the fault detection latency. This further improves reliability by triggering rollback recovery before errors are recorded to
  • 22. a checkpoint. The mechanisms are validated using a full prototype and the benchmarks considered indicate an average performance overhead of approximately 14 percent with the possibility for significant optimization. Finally, a unique method of using virtual lockstep for fault injection testing is presented and used to show that significant detection latency reduction is achievable by comparing only a small amount of data across replicas 22. A Learning-Based Despite the conventional wisdom that proactive security 2012 Approach to Reactive is superior to reactive security, we show that reactive Security security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst case assumptions about the attacker: we grant the attacker complete knowledge of the defender's strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker's incentives and knowledge 23. Automated Security Despite the conventional wisdom that proactive security 2012 Test Generation with is superior to reactive security, we show that reactive Formal Threat Models security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst case assumptions about the attacker: we grant the attacker complete knowledge of the defender's strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker's incentives and knowledge. 24. Automatic Byzantine-fault-tolerant replication enhances the 2012 Reconfiguration for availability and reliability of Internet services that store Large-Scale Reliable critical state and preserve it despite attacks or software Storage Systems errors. However, existing Byzantine-fault-tolerant storage systems either assume a static set of replicas, or
  • 23. have limitations in how they handle reconfigurations (e.g., in terms of the scalability of the solutions or the consistency levels they provide). This can be problematic in long-lived, large-scale systems where system membership is likely to change during the system lifetime. In this paper, we present a complete solution for dynamically changing system membership in a large- scale Byzantine-fault-tolerant system. We present a service that tracks system membership and periodically notifies other system nodes of membership changes. The membership service runs mostly automatically, to avoid human configuration errors; is itself Byzantine-fault- tolerant and reconfigurable; and provides applications with a sequence of consistent views of the system membership. We demonstrate the utility of this membership service by using it in a novel distributed hash table called dBQS that provides atomic semantics even across changes in replica sets. dBQS is interesting in its own right because its storage algorithms extend existing Byzantine quorum protocols to handle changes in the replica set, and because it differs from previous DHTs by providing Byzantine fault tolerance and offering strong semantics. We implemented the membership service and dBQS. Our results show that the approach works well, in practice: the membership service is able to manage a large system and the cost to change the system membership is low. 25. JS-Reduce Defending Web queries, credit card transactions, and medical 2012 Your Data from records are examples of transaction data flowing in Sequential corporate data stores, and often revealing associations Background between individuals and sensitive information. The serial Knowledge Attacks release of these data to partner institutions or data analysis centers in a nonaggregated form is a common situation. In this paper, we show that correlations among sensitive values associated to the same individuals in different releases can be easily used to violate users' privacy by adversaries observing multiple data releases, even if state-of-the-art privacy protection techniques are applied. We show how the above sequential background knowledge can be actually obtained by an adversary, and used to identify with high confidence the sensitive values of an individual. Our proposed defense algorithm is based on Jensen-Shannon divergence; experiments show its superiority with respect to other applicable solutions. To the best of our knowledge, this is the first work that systematically investigates the role of sequential
background knowledge in serial release of transaction data.

26. Mitigating Distributed Denial of Service Attacks in Multiparty Applications in the Presence of Clock Drifts (2012)
Network-based applications commonly open some known communication port(s), making themselves easy targets for (distributed) Denial of Service (DoS) attacks. Earlier solutions for this problem are based on port-hopping between pairs of processes which are synchronous or exchange acknowledgments. However, acknowledgments, if lost, can cause a port to be open for a longer time and thus be vulnerable, while time servers can become targets of DoS attacks themselves. Here, we extend port-hopping to support multiparty applications, by proposing the BIGWHEEL algorithm, which lets each application server communicate with multiple clients in a port-hopping manner without the need for group synchronization. Furthermore, we present an adaptive algorithm, HOPERAA, for enabling hopping in the presence of bounded asynchrony, namely, when the communicating parties have clocks with clock drifts. The solutions are simple, based on each client interacting with the server independently of the other clients, without the need for acknowledgments or time server(s). Further, they do not rely on the application having a fixed port open in the beginning, nor do they require the clients to get a "first-contact" port from a third party. We show analytically the properties of the algorithms and also study their success rates experimentally, confirming the relation with the analytical bounds.

27. On the Security and Efficiency of Content Distribution via Network Coding (2012)
Content distribution via network coding has received a lot of attention lately. However, direct application of network coding may be insecure. In particular, attackers can inject "bogus" data to corrupt the content distribution process so as to hinder the information dispersal or even deplete the network resources. Therefore, content verification is an important and practical issue when network coding is employed. When random linear network coding is used, it is infeasible for the source of the content to sign all the data, and hence, the traditional "hash-and-sign" methods are no longer applicable. Recently, a new on-the-fly verification technique was proposed by Krohn et al. (IEEE S&P '04), which employs a classical homomorphic hash function. However, this technique is difficult to apply to network coding because of its high computational and communication overhead. We explore this issue further
by carefully analyzing the different types of overhead, and propose methods that help reduce both the computational and communication cost while providing provable security at the same time.

28. Security of Bertino-Shang-Wagstaff Time-Bound Hierarchical Key Management Scheme for Secure Broadcasting (2012)
Recently, Bertino, Shang, and Wagstaff proposed a time-bound hierarchical key management scheme for secure broadcasting. Their scheme is built on elliptic curve cryptography and implemented with tamper-resistant devices. In this paper, we present two collusion attacks on the Bertino-Shang-Wagstaff scheme. The first attack does not need to compromise any decryption device, while the second attack requires compromising only a single decryption device. Both attacks are feasible and effective.

29. Survivability Experiment and Attack Characterization for RFID (2012)
Radio Frequency Identification (RFID) has been developed as an important technique for many high security and high integrity settings. In this paper, we study survivability issues for RFID. We first present an RFID survivability experiment to define a foundation for measuring the degree of survivability of an RFID system under varying attacks. Then we model a series of malicious scenarios using stochastic process algebras and study the different effects of those attacks on the ability of the RFID system to provide critical services even when parts of the system have been damaged. Our simulation model relates its statistics to the attack strategies and security recovery. The model helps system designers and security specialists to identify the most devastating attacks given the attacker's capacities and the system's recovery abilities. The goal is to improve the system survivability given possible attacks. Our model is the first of its kind to formally represent and simulate attacks on RFID systems and to quantitatively measure the degree of survivability of an RFID system under those attacks.

30. Persuasive Cued Click-Points: Design, Implementation, and Evaluation of a Knowledge-Based Authentication Mechanism (2012)
This paper presents an integrated evaluation of the Persuasive Cued Click-Points graphical password scheme, including usability and security evaluations, and implementation considerations. An important usability goal for knowledge-based authentication systems is to support users in selecting passwords of higher security, in the sense of being from an expanded effective security space. We use persuasion to influence user choice in click-based graphical passwords, encouraging users to select more random, and hence more difficult to guess, click-points.
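The persuasive element described above is easiest to see in code. Below is a minimal Java sketch, assuming the viewport mechanism commonly associated with Persuasive Cued Click-Points: during password creation only a small, randomly placed region of the image is active, and a click is accepted only if it lands inside that region (the user may shuffle to a new region instead). The class and method names are illustrative assumptions, not the authors' implementation.

```java
import java.awt.Rectangle;
import java.security.SecureRandom;

/**
 * Illustrative sketch of the "persuasive viewport" idea: during password
 * creation, only a small randomly placed viewport of the image is active,
 * nudging users toward less predictable click-points.
 */
public class PersuasiveViewport {

    private static final SecureRandom RNG = new SecureRandom();

    /** Pick a random square viewport of the given size fully inside the image. */
    public static Rectangle randomViewport(int imageWidth, int imageHeight, int viewportSize) {
        int x = RNG.nextInt(Math.max(1, imageWidth - viewportSize));
        int y = RNG.nextInt(Math.max(1, imageHeight - viewportSize));
        return new Rectangle(x, y, viewportSize, viewportSize);
    }

    /** A click is accepted during creation only if it falls inside the viewport. */
    public static boolean acceptClick(Rectangle viewport, int clickX, int clickY) {
        return viewport.contains(clickX, clickY);
    }

    public static void main(String[] args) {
        Rectangle viewport = randomViewport(800, 600, 75);
        System.out.println("Active viewport: " + viewport);
        // The user may press "shuffle" to request a new viewport instead of clicking.
        System.out.println("Click at (400, 300) accepted? " + acceptClick(viewport, 400, 300));
    }
}
```

In a full scheme, one such click-point would be collected per image in the sequence, and the viewport would be shown only at creation time, never at login.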
31. Resilient Authenticated Execution of Critical Applications in Untrusted Environments (2012)
Modern computer systems are built on a foundation of software components from a variety of vendors. While critical applications may undergo extensive testing and evaluation procedures, the heterogeneity of software sources threatens the integrity of the execution environment for these trusted programs. For instance, if an attacker can combine an application exploit with a privilege escalation vulnerability, the operating system (OS) can become corrupted. Alternatively, a malicious or faulty device driver running with kernel privileges could threaten the application. While the importance of ensuring application integrity has been studied in prior work, proposed solutions immediately terminate the application once corruption is detected. Although this approach is sufficient for some cases, it is undesirable for many critical applications. In order to overcome this shortcoming, we have explored techniques for leveraging a trusted virtual machine monitor (VMM) to observe the application and potentially repair damage that occurs. In this paper, we describe our system design, which leverages efficient coding and authentication schemes, and we present the details of our prototype implementation to quantify the overhead of our approach. Our work shows that it is feasible to build a resilient execution environment, even in the presence of a corrupted OS kernel, with a reasonable amount of storage and performance overhead.

TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON DATA MINING
S.NO TITLES ABSTRACT YEAR

1. A Survival Modeling Approach to Biomedical Search Result Diversification Using Wikipedia (2012)
In this paper, we propose a survival modeling approach to promoting ranking diversity for biomedical information retrieval. The proposed approach is concerned with finding relevant documents that can deliver more different aspects of a query. First, two probabilistic models derived from survival analysis theory are proposed for measuring aspect novelty.

2. A Fuzzy Approach for Multitype Relational Data Clustering (2012)
In this paper, we propose a new fuzzy clustering approach for multitype relational data (FC-MR). In FC-MR, different types of objects are clustered simultaneously. An object is assigned a large
membership with respect to a cluster if its related objects in this cluster have high rankings.

3. Anonimos: An LP-Based Approach for Anonymizing Weighted Social Network Graphs (2012)
We present Anonimos, a Linear Programming-based technique for anonymization of edge weights that preserves linear properties of graphs. Such properties form the foundation of many important graph-theoretic algorithms, such as the shortest paths problem, k-nearest neighbors, minimum cost spanning tree, and maximizing information spread.

4. A Methodology for Direct and Indirect Discrimination Prevention in Data Mining (2012)
In this paper, we tackle discrimination prevention in data mining and propose new techniques applicable for direct or indirect discrimination prevention, individually or both at the same time. We discuss how to clean training datasets and outsourced datasets in such a way that direct and/or indirect discriminatory decision rules are converted to legitimate (non-discriminatory) classification rules.

5. Mining Web Graphs for Recommendations (2012)
In this paper, aiming at providing a general framework for mining Web graphs for recommendations, (1) we first propose a novel diffusion method which propagates similarities between different nodes and generates recommendations; (2) then we illustrate how to generalize different recommendation problems into our graph diffusion framework.

6. Prediction of User's Web-Browsing Behavior: Application of Markov Model (2012)
Predicting a user's behavior while surfing the Internet can be applied effectively in various critical applications. Such applications involve traditional tradeoffs between modeling complexity and prediction accuracy. In this paper, we analyze and study the Markov model and the all-Kth Markov model in Web prediction. We propose a new modified Markov model to alleviate the issue of scalability in the number of paths.

7. Prototype Selection for Nearest Neighbor Classification: Taxonomy and Empirical Study (2012)
The nearest neighbor classifier suffers from several drawbacks such as high storage requirements, low efficiency in classification response, and low noise tolerance. This paper provides a survey of the prototype selection methods proposed in the literature from a theoretical and empirical point of view. From the theoretical point of view, we propose a taxonomy based on the main characteristics presented in prototype selection, and we analyze their advantages and drawbacks.

8. Query Planning for Continuous Aggregation Queries over a Network of Data Aggregators (2012)
We present a low-cost, scalable technique to answer continuous aggregation queries using a network of aggregators of dynamic data items. In such a network of data aggregators, each data aggregator serves a set of data items at specific coherencies.
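To make the aggregator selection step concrete, here is a small Java sketch, assuming each aggregator advertises the coherency (maximum tolerable staleness) at which it can serve each data item; the planner then assigns every queried item to some aggregator that meets the query's coherency requirement. The greedy first-fit rule and all names are illustrative assumptions; the paper's planner optimizes this assignment rather than taking the first feasible source.

```java
import java.util.*;

/**
 * Sketch of query planning over a network of data aggregators: each
 * aggregator serves data items at a certain coherency (allowed staleness,
 * in ms), and a continuous aggregation query is split so every item is
 * fetched from an aggregator at least as fresh as the query requires.
 */
public class AggregatorQueryPlanner {

    /** Coherency (ms of allowed staleness) at which each aggregator serves each item. */
    private final Map<String, Map<String, Integer>> aggregatorCoherency = new HashMap<>();

    public void register(String aggregator, String item, int coherencyMs) {
        aggregatorCoherency.computeIfAbsent(aggregator, a -> new HashMap<>())
                           .put(item, coherencyMs);
    }

    /** For every queried item, pick a feasible aggregator meeting the required coherency. */
    public Map<String, String> plan(Set<String> queriedItems, int requiredCoherencyMs) {
        Map<String, String> assignment = new HashMap<>();
        for (String item : queriedItems) {
            for (Map.Entry<String, Map<String, Integer>> e : aggregatorCoherency.entrySet()) {
                Integer served = e.getValue().get(item);
                if (served != null && served <= requiredCoherencyMs) {
                    assignment.put(item, e.getKey());   // first feasible aggregator wins
                    break;
                }
            }
        }
        return assignment;                              // items left out have no feasible source
    }

    public static void main(String[] args) {
        AggregatorQueryPlanner planner = new AggregatorQueryPlanner();
        planner.register("agg1", "stockA", 200);
        planner.register("agg1", "stockB", 500);
        planner.register("agg2", "stockB", 100);
        System.out.println(planner.plan(new HashSet<>(Arrays.asList("stockA", "stockB")), 250));
    }
}
```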
9. Revealing Density-Based Clustering Structure from the Core-Connected Tree of a Network (2012)
In this paper, we introduce a novel density-based network clustering method, called gSkeletonClu (graph-skeleton based clustering). By projecting an undirected network to its core-connected maximal spanning tree, the clustering problem can be converted to detecting core connectivity components on the tree.

10. Scalable Learning of Collective Behavior (2012)
The study of collective behavior is to understand how individuals behave in a social networking environment. Oceans of data generated by social media like Facebook, Twitter, Flickr, and YouTube present opportunities and challenges to study collective behavior on a large scale. In this work, we aim to learn to predict collective behavior in social media.

11. Weakly Supervised Joint Sentiment-Topic Detection from Text (2012)
This paper proposes a novel probabilistic modeling framework called the joint sentiment-topic (JST) model, based on latent Dirichlet allocation (LDA), which detects sentiment and topic simultaneously from text. A reparameterized version of the JST model called Reverse-JST, obtained by reversing the sequence of sentiment and topic generation in the modeling process, is also studied.

12. A Framework for Personal Mobile Commerce Pattern Mining and Prediction (2012)
Due to a wide range of potential applications, research on mobile commerce has received a lot of interest from both industry and academia. Among them, one of the active topic areas is the mining and prediction of users' mobile commerce behaviors such as their movements and purchase transactions. In this paper, we propose a novel framework, called Mobile Commerce Explorer (MCE), for mining and prediction of mobile users' movements and purchase transactions under the context of mobile commerce. The MCE framework consists of three major components: 1) a Similarity Inference Model (SIM) for measuring the similarities among stores and items, which are the two basic mobile commerce entities considered in this paper; 2) a Personal Mobile Commerce Pattern Mine (PMCP-Mine) algorithm for efficient discovery of mobile users' Personal Mobile Commerce Patterns (PMCPs); and 3) a Mobile Commerce Behavior Predictor (MCBP) for prediction of possible mobile user behaviors. To the best of our knowledge, this is the first work that facilitates mining and prediction of mobile users' commerce behaviors in order to recommend stores and items previously unknown to a user. We perform an extensive experimental evaluation by simulation and show that our proposals produce excellent results.
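As a rough illustration of the mine-then-predict pipeline outlined above, the following Java snippet counts first-order transitions between purchases and predicts the most frequent successor of the last purchase. It is a deliberately simplified stand-in for the PMCP-Mine and MCBP components; the store:item encoding and class names are assumptions made only for this example.

```java
import java.util.*;

/**
 * Tiny sketch in the spirit of personal mobile commerce pattern mining:
 * count how often one (store, item) purchase is followed by another, then
 * predict the most frequent successor of the last purchase.
 */
public class PurchasePatternSketch {

    // transition counts: previous purchase -> (next purchase -> frequency)
    private final Map<String, Map<String, Integer>> transitions = new HashMap<>();

    /** Feed one user's purchase sequence, e.g. "storeA:coffee". */
    public void addSequence(List<String> purchases) {
        for (int i = 0; i + 1 < purchases.size(); i++) {
            transitions.computeIfAbsent(purchases.get(i), k -> new HashMap<>())
                       .merge(purchases.get(i + 1), 1, Integer::sum);
        }
    }

    /** Predict the most frequently observed purchase after the given one. */
    public Optional<String> predictNext(String lastPurchase) {
        Map<String, Integer> next = transitions.get(lastPurchase);
        if (next == null) return Optional.empty();
        return next.entrySet().stream()
                   .max(Map.Entry.comparingByValue())
                   .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        PurchasePatternSketch miner = new PurchasePatternSketch();
        miner.addSequence(Arrays.asList("storeA:coffee", "storeB:magazine", "storeA:coffee"));
        miner.addSequence(Arrays.asList("storeA:coffee", "storeB:magazine"));
        System.out.println(miner.predictNext("storeA:coffee")); // Optional[storeB:magazine]
    }
}
```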
  • 29. 13. Efficient Extended Extended Boolean retrieval (EBR) models were proposed 2012 Boolean Retrieval nearly three decades ago, but have had little practical impact, despite their significant advantages compared to either ranked keyword or pure Boolean retrieval. In particular,EBR models produce meaningful rankings; their query model allows the representation of complex concepts in an and-or format; and they are scrutable, in that the score assigned to a document depends solely on the content of that document, unaffected by any collection statistics or other external factors. These characteristics make EBR models attractive in domains typified by medical and legal searching, where the emphasis is on iterative development of reproducible complex queries of dozens or even hundreds of terms. However, EBR is much more computationally expensive than the alternatives. We consider the implementation of the p-norm approach to EBR, and demonstrate that ideas used in the max-score and wand exact optimization techniques for ranked keyword retrieval can be adaptedto allow selective bypass of documents via a low-cost screening process for this and similar retrieval models. We also propose term independent bounds that are able to further reduce the number of score calculations for short, simple queries under the extended Boolean retrieval model. Together, these methods yield an overall saving from 50 to 80percent of the evaluation cost on test queries drawn from biomedical search. 14. Improving Aggregate Recommender systems are becoming increasingly Recommendation important to individual users and businesses for Diversity Using providingpersonalized recommendations. However, Ranking-Based while the majority of algorithms proposed in Techniques recommender systems literature have focused on improving recommendation accuracy (as exemplified by the recent Netflix Prize competition) , other important aspects of recommendation quality, such as the diversity of recommendations, have often been overlooked. In this paper, we introduce and explore a number of item ranking techniques that can generate recommendations that have substantially higher aggregate diversity across all users while maintaining comparable levels of recommendation accuracy. Comprehensive empirical evaluation consistently shows the diversity gains of the
  • 30. proposed techniques using several real-world rating datasets and different rating prediction algorithms. 15. BibPro: A Citation Dramatic increase in the number of academic 2012 Parser Based on publications has led to growing demand for efficient Sequence Alignment organization of the resources to meet researchers' needs. As a result, a number of network services have compiled databases from the public resources scattered over the Internet. However, publications by different conferences and journals adopt different citation styles. It is an interesting problem to accurately extract metadata from a citation string which is formatted in one of thousands of different styles. It has attracted a great deal of attention in research in recent years. In this paper, based on the notion of sequence alignment, we present a citation parser called BibPro that extracts components of a citation string. To demonstrate the efficacy of BibPro, we conducted experiments on three benchmark data sets. The results show that BibPro achieved over 90 percent accuracy on each benchmark. Even with citations and associated metadata retrieved from the web as training data, our experiments show that BibPro still achieves a reasonable performance 16. Extending Attribute Data quantity is the main issue in the small data set 2012 Information for Small problem, because usually insufficient data will not lead Data Set Classification to a robust classification performance. How to extract more effective information from a small data set is thus of considerable interest. This paper proposes a new attribute construction approach which converts the original data attributes into a higher dimensional feature space to extract more attribute information by a similarity-based algorithm using the classification- oriented fuzzy membership function. Seven data sets with different attribute sizes are employed to examine the performance of the proposed method. The results show that the proposed method has a superior classification performance when compared to principal component analysis (PCA), kernel principal component analysis (KPCA), and kernel independent component analysis (KICA) with a Gaussian kernel in the support vector machine (SVM) classifier 17. Horizontal Preparing a data set for analysis is generally the most 2012 Aggregations in SQL time consuming task in a data mining project, requiring to Prepare Data Sets many complex SQL queries, joining tables, and
  • 31. for Data Mining aggregating columns. Existing SQL aggregations have Analysis limitations to prepare data sets because they return one column per aggregated group. In general, a significant manual effort is required to build data sets, where a horizontal layout is required. We propose simple, yet powerful, methods to generate SQL code to return aggregated columns in a horizontal tabular layout, returning a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations build data sets with a horizontal denormalized layout (e.g., point- dimension, observationvariable, instance-feature), which is the standard layout required by most data mining algorithms. We propose three fundamental methods to evaluate horizontal aggregations: CASE: Exploiting the programming CASE construct; SPJ: Based on standard relational algebra operators (SPJ queries); PIVOT: Using the PIVOT operator, which is offered by some DBMSs. Experiments with large tables compare the proposed query evaluation methods. Our CASE method has similar speed to the PIVOT operator and it is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does 18. Enabling Multilevel Privacy Preserving Data Mining (PPDM) addresses the 2012 Trust in Privacy problem of developing accurate models about aggregated Preserving Data data without access to precise information in individual Mining data record. A widely studied perturbation-based PPDM approach introduces random perturbation to individual values to preserve privacy before data are published. Previous solutions of this approach are limited in their tacit assumption of single-level trust on data miners. In this work, we relax this assumption and expand the scope of perturbation-based PPDM to Multilevel Trust (MLT- PPDM). In our setting, the more trusted a data miner is, the less perturbed copy of the data it can access. Under this setting, a malicious data miner may have access to differently perturbed copies of the same data through various means, and may combine these diverse copies to jointly infer additional information about the original data that the data owner does not intend to release. Preventing such diversity attacks is the key challenge of providing MLT-PPDM services. We address this challenge by properly correlating perturbation across copies at different trust levels. We prove that our solution
  • 32. is robust against diversity attacks with respect to our privacy goal. That is, for data miners who have access to an arbitrary collection of the perturbed copies, our solution prevent them from jointly reconstructing the original data more accurately than the best effort using any individual copy in the collection. Our solution allows a data owner to generate perturbed copies of its data for arbitrary trust levels ondemand.This feature offers data owners maximum flexibility. 19. Using Rule Ontology Inferential rules are as essential to the Semantic Web 2012 in Repeated Rule applications as ontology. Therefore, rule acquisition is Acquisition from also an important issue, and the Web that implies Similar Web Sites inferential rules can be a major source of rule acquisition. We expect that it will be easier to acquire rules from a site by using similar rules of other sites in the same domain rather than starting from scratch. We proposed an automatic rule acquisition procedure using a rule ontology RuleToOnto, which represents information about the rule components and their structures. The rule acquisition procedure consists of the rule component identification step and the rule composition step. We developed A* algorithm for the rule composition and we performed experiments demonstrating that our ontology- based rule acquisition approach works in a real-world application. 20. Efficient Processing of There is a growing need for systems that react 2012 Uncertain Events in automatically to events. While some events are generated Rule-Based Systems externally and deliver data across distributed systems, others need to be derived by the system itself based on available information. Event derivation is hampered by uncertainty attributed to causes such as unreliable data sources or the inability to determine with certainty whether an event has actually occurred, given available information. Two main challenges exist when designing a solution for event derivation under uncertainty. First, event derivation should scale under heavy loads of incoming events. Second, the associated probabilities must be correctly captured and represented. We present a solution to both problems by introducing a novel generic and formal mechanism and framework for managing event derivation under uncertainty. We also provide empirical evidence demonstrating the scalability and accuracy of our approach 21. Feature Selection Data and knowledge management systems employ 2012 Based on Class- feature selection algorithms for removing irrelevant, Dependent Densities redundant, and noisy information from the data. There
  • 33. for High-Dimensional are two well-known approaches to feature selection, Binary Data feature ranking (FR) and feature subset selection (FSS). In this paper, we propose a new FR algorithm, termed as class-dependent density-based feature elimination (CDFE), for binary data sets. Our theoretical analysis shows that CDFE computes the weights, used for feature ranking, more efficiently as compared to the mutual information measure. Effectively, rankings obtained from both the two criteria approximate each other. CDFE uses a filtrapper approach to select a final subset. For data sets having hundreds of thousands of features, feature selection with FR algorithms is simple and computationally efficient but redundant information may not be removed. On the other hand, FSS algorithms analyze the data for redundancies but may become computationally impractical on high-dimensional data sets. We address these problems by combining FR and FSS methods in the form of a two-stage feature selection algorithm. When introduced as a preprocessing step to the FSS algorithms, CDFE not only presents them with a feature subset, good in terms of classification, but also relieves them from heavy computations. Two FSS algorithms are employed in the second stage to test the two-stage feature selection idea. We carry out experiments with two different classifiers (naive Bayes' and kernel ridge regression) on three different real-life data sets (NOVA, HIVA, and GINA) of the”Agnostic Learning versus Prior Knowledge” challenge. As a stand- alone method, CDFE shows up to about 92 percent reduction in the feature set size. When combined with the FSS algorithms in two-stages, CDFE significantly improves their classification accuracy and exhibits up to 97 percent reduction in the feature set size. We also compared CDFE against the winning entries of the challenge and f- und that it outperforms the best results on NOVA and HIVA while obtaining a third position in case of GINA. 22. Ranking Model With the explosive emergence of vertical search 2012 Adaptation for domains, applying the broad-based ranking model Domain-Specific directly to different domains is no longer desirable due to Search domain differences, while building a unique ranking model for each domain is both laborious for labeling data and time-consuming for training models. In this paper, we address these difficulties by proposing a regularization based algorithm called ranking adaptation
  • 34. SVM (RA-SVM), through which we can adapt an existing ranking model to a new domain, so that the amount of labeled data and the training cost is reduced while the performance is still guaranteed. Our algorithm only requires the prediction from the existing ranking models, rather than their internal representations or the data from auxiliary domains. In addition, we assume that documents similar in the domain-specific feature space should have consistent rankings, and add some constraints to control the margin and slack variables of RA-SVM adaptively. Finally, ranking adaptability measurement is proposed to quantitatively estimate if an existing ranking model can be adapted to a new domain. Experiments performed over Letor and two large scale datasets crawled from a commercial search engine demonstrate the applicabilities of the proposed ranking adaptation algorithms and the ranking adaptability measurement. 23. Slicing: A New Several anonymization techniques, such as generalization 2012 Approach to Privacy and bucketization, have been designed for privacy Preserving Data preserving microdata publishing. Recent work has shown Publishing that general ization loses considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply for data that do not have a clear separation between quasi- identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing the sliced data that obey the ℓ-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure. 24. Improving Aggregate Recommender systems are becoming increasingly 2012 Recommendation important to individual users and businesses for Diversity Using providing personalized recommendations. However, Ranking-Based while the majority of algorithms proposed in Techniques- projects recommender systems literature have focused on
  • 35. improving recommendation accuracy, other important aspects of recommendation quality, such as the diversity of recommendations, have often been overlooked. In this paper, we introduce and explore a number of item ranking techniques that can generate recommendations that have substantially higher aggregate diversity across all users while maintaining comparable levels of recommendation accuracy. Comprehensive empirical evaluation consistently shows the diversity gains of the proposed techniques using several real-world rating datasets and different rating prediction algorithms. 25. Horizontal Preparing a data set for analysis is generally the most 2012 Aggregations in SQL time consuming task in a data mining project, requiring to Prepare Data Sets many complex SQL queries, joining tables and for Data Mining aggregating columns. Existing SQL aggregations have Analysis limitations to prepare data sets because they return one column per aggregated group. In general, a significant manual effort is required to build data sets, where a horizontal layout is required. We propose simple, yet powerful, methods to generate SQL code to return aggregated columns in a horizontal tabular layout, returning a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations build data sets with a horizontal denormalized layout (e.g. point-dimension, observation- variable, instance-feature), which is the standard layout required by most data mining algorithms. We propose three fundamental methods to evaluate horizontal aggregations: CASE: Exploiting the programming CASE construct; SPJ: Based on standard relational algebra operators (SPJ queries); PIVOT: Using the PIVOT operator, which is offered by some DBMSs. Experiments with large tables compare the proposed query evaluation methods. Our CASE method has similar speed to the PIVOT operator and it is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereasthe SPJ method does not 26. Scalable Learning of This study of collective behavior is to understand how 2012 Collective Behavior - individuals behave in a social networking environment. projects 2012 Oceans of data generated by social media like Face book, Twitter, Flicker, and YouTube present opportunities and challenges to study collective behavior on a large scale. In this work, we aim to learn to predict collective behavior in social media. In particular, given information
  • 36. about some individuals, how can we infer the behavior of unobserved individuals in the same network? A social- dimension-based approach has been shown effective in addressing the heterogeneity of connections presented in social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands of actors. The scale of these networks entails scalable learning of models for collective behavior prediction. To address the scalability issue, we propose an edge-centric clustering scheme to extract sparse social dimensions. With sparse social dimensions, the proposed approach can efficiently handle networks of millions of actors while demonstrating a comparable prediction performance to other non-scalable methods 27. Outsourced Similarity This paper considers a cloud computing setting in which 2012 Search on Metric Data similarity querying of metric data is outsourced to a Assets – projects service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low-initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying it to the service provider for similarity queries on the transformed data. Our techniques provide interesting trade-offs between query cost and accuracy. They are then further extended to offer an intuitive privacy guarantee. Empirical studies with real data demonstrate that the techniques are capable of offering privacy while enabling efficient and accurate processing of similarity queries. 28. A Framework for A Time Series Clique (TSC) consists of multiple time 2012 Similarity Search of series which are related to each other by natural relations. Time Series Cliques The natural relations that are found between the time with Natural Relations series depend on the application domains. For example, a TSC can consist of time series which are trajectories in video that have spatial relations. In conventional time series retrieval, such natural relations between the time series are not considered. In this paper, we formalize the problem of similarity search over a TSC database. We develop a novel framework for efficient similarity search on TSC data. The framework addresses the following issues. First, it provides a compact representation for TSC data. Second, it uses a multidimensional relation
  • 37. vector to capture the natural relations between the multiple time series in a TSC. Lastly, the framework defines a novel similarity measure that uses the compact representation and the relation vector. We conduct an extensive performance study, using both real-life and synthetic data sets. From the performance study, we show that our proposed framework is both effective and efficient for TSC retrieval 29. A Genetic Several systems that rely on consistent data to offer high- 2012 Programming quality services, such as digital libraries and e-commerce Approach to Record brokers, may be affected by the existence of duplicates, Deduplication quasi replicas, or near-duplicate entries in their repositories. Because of that, there have been significant investments from private and government organizations for developing methods for removing replicas from its data repositories. This is due to the fact that clean and replica-free repositories not only allow the retrieval of higher quality information but also lead to more concise data and to potential savings in computational time and resources to process this data. In this paper, we propose a genetic programming approach to record deduplication that combines several different pieces of evidence extracted from the data content to find a deduplication function that is able to identify whether two entries in a repository are replicas or not. As shown by our experiments, our approach outperforms an existing state- of-the-art method found in the literature. Moreover, the suggested functions are computationally less demanding since they use fewer evidence. In addition, our genetic programming approach is capable of automatically adapting these functions to a given fixed replica identification boundary, freeing the user from the burden of having to choose and tune this parameter. 30. A Probabilistic Databases enable users to precisely express their 2012 Scheme for Keyword- informational needs using structured queries. However, Based Incremental database query construction is a laborious and error- Query Construction prone process, which cannot be performed well by most end users. Keyword search alleviates the usability problem at the price of query expressiveness. As keyword search algorithms do not differentiate between the possible informational needs represented by a keyword query, users may not receive adequate results. This paper presents IQP - a novel approach to bridge the gap between usability of keyword search and expressiveness of database queries. IQP enables a user to start with an arbitrary keyword query and incrementally
  • 38. refine it into a structured query through an interactive interface. The enabling techniques of IQP include: 1) a probabilistic framework for incremental query construction; 2) a probabilistic model to assess the possible informational needs represented by a keyword query; 3) an algorithm to obtain the optimal query construction process. This paper presents the detailed design of IQP, and demonstrates its effectiveness and scalability through experiments over real-world data and a user study. 31. Anónimos An LP- The increasing popularity of social networks has initiated 2012 Based Approach for a fertile research area in information extraction and data Anonymizing mining. Anonymization of these social graphs is Weighted Social important to facilitate publishing these data sets for Network Graphs analysis by external entities. Prior work has concentrated mostly on node identity anonymization and structural anonymization. But with the growing interest in analyzing social networks as a weighted network, edge weight anonymization is also gaining importance. We present Anónimos, a Linear Programming-based technique for anonymization of edge weights that preserves linear properties of graphs. Such properties form the foundation of many important graph-theoretic algorithms such as shortest paths problem, k-nearest neighbors, minimum cost spanning tree, and maximizing information spread. As a proof of concept, we apply Anónimos to the shortest paths problem and its extensions, prove the correctness, analyze complexity, and experimentally evaluate it using real social network data sets. Our experiments demonstrate that Anónimos anonymizes the weights, improves k-anonymity of the weights, and also scrambles the relative ordering of the edges sorted by weights, thereby providing robust and effective anonymization of the sensitive edge-weights. We also demonstrate the composability of different models generated using Anónimos, a property that allows a single anonymized graph to preserve multiple linear properties. 32. Answering General Time is an important dimension of relevance for a large 2012 Time-Sensitive number of searches, such as over blogs and news Queries archives. So far, research on searching over such collections has largely focused on locating topically similar documents for a query. Unfortunately, topic similarity alone is not always sufficient for document ranking. In this paper, we observe that, for an important
  • 39. class of queries that we call time-sensitive queries, the publication time of the documents in a news archive is important and should be considered in conjunction with the topic similarity to derive the final document ranking. Earlier work has focused on improving retrieval for “recency” queries that target recent documents. We propose a more general framework for handling time- sensitive queries and we automatically identify the important time intervals that are likely to be of interest for a query. Then, we build scoring techniques that seamlessly integrate the temporal aspect into the overall ranking mechanism. We present an extensive experimental evaluation using a variety of news article data sets, including TREC data as well as real web data analyzed using the Amazon Mechanical Turk. We examine several techniques for detecting the important time intervals for a query over a news archive and for incorporating this information in the retrieval process. We show that our techniques are robust and significantly improve result quality for time-sensitive queries compared to state-of-the-art retrieval techniques. 33. Clustering with All clustering methods have to assume some cluster 2012 Multiviewpoint-Based relationship among the data objects that they are applied Similarity Measure on. Similarity between a pair of objects can be defined either explicitly or implicitly. In this paper, we introduce a novel multiviewpoint-based similarity measure and two related clustering methods. The major difference between a traditional dissimilarity/similarity measure and ours is that the former uses only a single viewpoint, which is the origin, while the latter utilizes many different viewpoints, which are objects assumed to not be in the same cluster with the two objects being measured. Using multiple viewpoints, more informative assessment of similarity could be achieved. Theoretical analysis and empirical study are conducted to support this claim. Two criterion functions for document clustering are proposed based on this new measure. We compare them with several well- known clustering algorithms that use other popular similarity measures on various document collections to verify the advantages of our proposal. 34. Cluster-Oriented This paper presents a novel cluster-oriented ensemble 2012 Ensemble Classifier classifier. The proposed ensemble classifier is based on Impact of Multicluster original concepts such as learning of cluster boundaries Characterization on by the base classifiers and mapping of cluster Ensemble Classifier confidences to class decision using a fusion classifier.
  • 40. Learning The categorized data set is characterized into multiple clusters and fed to a number of distinctive base classifiers. The base classifiers learn cluster boundaries and produce cluster confidence vectors. A second level fusion classifier combines the cluster confidences and maps to class decisions. The proposed ensemble classifier modifies the learning domain for the base classifiers and facilitates efficient learning. The proposed approach is evaluated on benchmark data sets from UCI machine learning repository to identify the impact of multicluster boundaries on classifier learning and classification accuracy. The experimental results and two-tailed sign test demonstrate the superiority of the proposed cluster-oriented ensemble classifier over existing ensemble classifiers published in the literature. 35. Effective Pattern Many data mining techniques have been proposed for 2012 Discovery for Text mining useful patterns in text documents. However, how Mining to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopted term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase)-based approaches should perform better than the term-based ones, but many experiments do not support this hypothesis. This paper presents an innovative and effective pattern discovery technique which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance. 36. Efficient Fuzzy Type- In a traditional keyword-search system over XML data, a 2012 Ahead Search in XML user composes a keyword query, submits it to the system, Data and retrieves relevant answers. In the case where the user has limited knowledge about the data, often the user feels “left in the dark” when issuing queries, and has to use a try-and-see approach for finding information. In this paper, we study fuzzy type-ahead search in XML data, a new information-access paradigm in which the system searches XML data on the fly as the user types in query keywords. It allows users to explore data as they type, even in the presence of minor errors of their keywords. Our proposed method has the following features: 1) Search as you type: It extends Autocomplete by
  • 41. supporting queries with multiple keywords in XML data. 2) Fuzzy: It can find high-quality answers that have keywords matching query keywords approximately. 3) Efficient: Our effective index structures and searching algorithms can achieve a very high interactive speed. We study research challenges in this new search framework. We propose effective index structures and top-k algorithms to achieve a high interactive speed. We examine effective ranking functions and early termination techniques to progressively identify the top-k relevant answers. We have implemented our method on real data sets, and the experimental results show that our method achieves high search efficiency and result quality. 37. Feature Selection Data and knowledge management systems employ 2012 Based on Class- feature selection algorithms for removing irrelevant, Dependent Densities redundant, and noisy information from the data. There for High-Dimensional are two well-known approaches to feature selection, Binary Data feature ranking (FR) and feature subset selection (FSS). In this paper, we propose a new FR algorithm, termed as class-dependent density-based feature elimination (CDFE), for binary data sets. Our theoretical analysis shows that CDFE computes the weights, used for feature ranking, more efficiently as compared to the mutual information measure. Effectively, rankings obtained from both the two criteria approximate each other. CDFE uses a filtrapper approach to select a final subset. For data sets having hundreds of thousands of features, feature selection with FR algorithms is simple and computationally efficient but redundant information may not be removed. On the other hand, FSS algorithms analyze the data for redundancies but may become computationally impractical on high-dimensional data sets. We address these problems by combining FR and FSS methods in the form of a two-stage feature selection algorithm. When introduced as a preprocessing step to the FSS algorithms, CDFE not only presents them with a feature subset, good in terms of classification, but also relieves them from heavy computations. Two FSS algorithms are employed in the second stage to test the two-stage feature selection idea. We carry out experiments with two different classifiers (naive Bayes' and kernel ridge regression) on three different real-life data sets (NOVA, HIVA, and GINA) of the”Agnostic Learning versus Prior Knowledge” challenge. As a stand-
  • 42. alone method, CDFE shows up to about 92 percent reduction in the feature set size. When combined with the FSS algorithms in two-stages, CDFE significantly improves their classification accuracy and exhibits up to 97 percent reduction in the feature set size. We also compared CDFE against the winning entries of the challenge and f- und that it outperforms the best results on NOVA and HIVA while obtaining a third position in case of GINA. 38. Feedback Matching There is a need to promote drastically increased levels of 2012 Framework for interoperability of product data across a broad spectrum Semantic of stakeholders, while ensuring that the semantics of Interoperability of product knowledge are preserved, and when necessary, Product Data translated. In order to achieve this, multiple methods have been proposed to determine semantic maps across concepts from different representations. Previous research has focused on developing different individual matching methods, i.e., ones that compute mapping based on a single matching measure. These efforts assume that some weighted combination can be used to obtain the overall maps. We analyze the problem of combination of multiple individual methods to determine requirements specific to product development and propose a solution approach called FEedback Matching Framework with Implicit Training (FEMFIT). FEMFIT provides the ability to combine the different matching approaches using ranking Support Vector Machine (ranking SVM). The method accounts for nonlinear relations between the individual matchers. It overcomes the need to explicitly train the algorithm before it is used, and further reduces the decision-making load on the domain expert by implicitly capturing the expert's decisions without requiring him to input real numbers on similarity. We apply FEMFIT to a subset of product constraints across a commercial system and the ISO standard. We observe that FEMIT demonstrates better accuracy (average correctness of the results) and stability (deviation from the average) in comparison with other existing combination methods commonly assumed to be valid in this domain. 39. Fractal-Based Intrinsic Dimensionality reduction is an important step in 2012 Dimension Estimation knowledge discovery in databases. Intrinsic dimension and Its Application in indicates the number of variables necessary to describe a Dimensionality data set. Two methods, box-counting dimension and Reduction correlation dimension, are commonly used for intrinsic dimension estimation. However, the robustness of these
  • 43. two methods has not been rigorously studied. This paper demonstrates that correlation dimension is more robust with respect to data sample size. In addition, instead of using a user selected distance d, we propose a new approach to capture all log-log pairs of a data set to more precisely estimate the correlation dimension. Systematic experiments are conducted to study factors that influence the computation of correlation dimension, including sample size, the number of redundant variables, and the portion of log-log plot used for calculation. Experiments on real-world data sets confirm the effectiveness of intrinsic dimension estimation with our improved method. Furthermore, a new supervised dimensionality reduction method based on intrinsic dimension estimation was introduced and validated. 40. Horizontal Preparing a data set for analysis is generally the most 2012 Aggregations in SQL time consuming task in a data mining project, requiring to Prepare Data Sets many complex SQL queries, joining tables, and for Data Mining aggregating columns. Existing SQL aggregations have Analysis limitations to prepare data sets because they return one column per aggregated group. In general, a significant manual effort is required to build data sets, where a horizontal layout is required. We propose simple, yet powerful, methods to generate SQL code to return aggregated columns in a horizontal tabular layout, returning a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations build data sets with a horizontal denormalized layout (e.g., point- dimension, observation-variable, instance-feature), which is the standard layout required by most data mining algorithms. We propose three fundamental methods to evaluate horizontal aggregations: CASE: Exploiting the programming CASE construct; SPJ: Based on standard relational algebra operators (SPJ queries); PIVOT: Using the PIVOT operator, which is offered by some DBMSs. Experiments with large tables compare the proposed query evaluation methods. Our CASE method has similar speed to the PIVOT operator and it is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does not. 41. Low-Rank Kernel Traditional clustering techniques are inapplicable to 2012 Matrix Factorization problems where the relationships between data points for Large-Scale evolve over time. Not only is it important for the Evolutionary clustering algorithm to adapt to the recent changes in the
  • 44. Clustering evolving data, but it also needs to take the historical relationship between the data points into consideration. In this paper, we propose ECKF, a general framework for evolutionary clustering large-scale data based on low- rank kernel matrix factorization. To the best of our knowledge, this is the first work that clusters large evolutionary data sets by the amalgamation of low-rank matrix approximation methods and matrix factorization- based clustering. Since the low-rank approximation provides a compact representation of the original matrix, and especially, the near-optimal low-rank approximation can preserve the sparsity of the original data, ECKF gains computational efficiency and hence is applicable to large evolutionary data sets. Moreover, matrix factorization-based methods have been shown to effectively cluster high-dimensional data in text mining and multimedia data analysis. From a theoretical standpoint, we mathematically prove the convergence and correctness of ECKF, and provide detailed analysis of its computational efficiency (both time and space). Through extensive experiments performed on synthetic and real data sets, we show that ECKF outperforms the existing methods in evolutionary clustering. 42. Mining Online Posting reviews online has become an increasingly 2012 Reviews for Predicting popular way for people to express opinions and Sales Performance A sentiments toward the products bought or services Case Study in the received. Analyzing the large volume of online reviews Movie Domain available would produce useful actionable knowledge that could be of economic values to vendors and other interested parties. In this paper, we conduct a case study in the movie domain, and tackle the problem of mining reviews for predicting product sales performance. Our analysis shows that both the sentiments expressed in the reviews and the quality of the reviews have a significant impact on the future sales performance of products in question. For the sentiment factor, we propose Sentiment PLSA (S-PLSA), in which a review is considered as a document generated by a number of hidden sentiment factors, in order to capture the complex nature of sentiments. Training an S-PLSA model enables us to obtain a succinct summary of the sentiment information embedded in the reviews. Based on S-PLSFA, we propose ARSA, an Autoregressive Sentiment-Aware model for sales prediction. We then seek to further improve the accuracy of prediction by considering the quality factor, with a focus on predicting the quality of a
review in the absence of user-supplied indicators, and present ARSQA, an Autoregressive Sentiment and Quality Aware model, to utilize sentiments and quality for predicting product sales performance. Extensive experiments conducted on a large movie data set confirm the effectiveness of the proposed approach.

43. Privacy Preserving Decision Tree Learning Using Unrealized Data Sets (2012)
Privacy preservation is important for machine learning and data mining, but measures designed to protect private information often result in a trade-off: reduced utility of the training samples. This paper introduces a privacy preserving approach that can be applied to decision tree learning, without concomitant loss of accuracy. It describes an approach to the preservation of the privacy of collected data samples in cases where information from the sample database has been partially lost. This approach converts the original sample data sets into a group of unreal data sets, from which the original samples cannot be reconstructed without the entire group of unreal data sets. Meanwhile, an accurate decision tree can be built directly from those unreal data sets. This novel approach can be applied directly to the data storage as soon as the first sample is collected. The approach is compatible with other privacy preserving approaches, such as cryptography, for extra protection.

TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON MOBILE COMPUTING
S.NO TITLES ABSTRACT YEAR

1. The Boomerang Protocol: Tying Data to Geographic Locations in Mobile Disconnected Networks (2012)
We present the boomerang protocol to efficiently retain information at a particular geographic location in a sparse network of highly mobile nodes without using infrastructure networks. To retain information around a certain physical location, each mobile device passing that location will carry the information for a short while.

2. Nature-Inspired Self-Organization, Control, and Optimization in Heterogeneous Wireless Networks (2012)
In this paper, we present new models and algorithms for control and optimization of a class of next generation communication networks: Hierarchical Heterogeneous Wireless Networks (HHWNs), under real-world physical constraints. Two biology-inspired techniques, a Flocking Algorithm (FA) and a Particle
Swarm Optimizer (PSO), are investigated in this context.

3. A Cost Analysis Framework for NEMO Prefix Delegation-Based Schemes (2012)
In this paper, we have developed an analytical framework to measure the costs of the basic protocol for Network Mobility (NEMO) and of four representative prefix delegation-based schemes. Our results show that the cost of packet delivery through the partially optimized route dominates over other costs.

4. OMAN: A Mobile Ad Hoc Network Design System (2012)
In this paper, we present a high-level view of the OMAN architecture, review specific mathematical models used in the network representation, and show how OMAN is used to evaluate tradeoffs in MANET design. Specifically, we cover three case studies of optimization: 1) robust power control under uncertain channel information for a single physical-layer snapshot; 2) scheduling with the availability of directional radiation patterns; and 3) optimizing topology through movement planning of relay nodes.

5. Energy-Efficient Cooperative Video Distribution with Statistical QoS Provisions over Wireless Networks (2012)
In this paper, we formulate the resource allocation problem for general multihop multicast network flows and derive the optimal solution that minimizes the total energy consumption while guaranteeing a statistical end-to-end delay bound on each network path.

6. Leveraging the Algebraic Connectivity of a Cognitive Network for Routing Design (2012)
In this paper, we consider the implications of spectrum heterogeneity on connectivity and routing in a Cognitive Radio Ad Hoc Network (CRAHN). We study the Laplacian spectrum of the CRAHN graph when the activity of primary users is considered. We introduce the cognitive algebraic connectivity, i.e., the second smallest eigenvalue of the Laplacian of a graph, in a cognitive scenario.

7. Efficient Virtual Backbone Construction with Routing Cost Constraint in Wireless Networks Using Directional Antennas (2012)
In this paper, we study a directional virtual backbone (VB) in networks where directional antennas are used. When constructing a VB, we take routing and broadcasting into account, since they are two common operations in wireless networks. Hence, we study a VB with guaranteed routing costs, named α-Minimum rOuting Cost Directional VB (α-MOC-DVB).

8. Stateless Multicast Protocol for Ad Hoc Networks (2012)
In this paper, we have developed a stateless receiver-based multicast (RBMulticast) protocol that simply uses a list of the multicast members' (e.g., sinks') addresses, embedded in packet headers, to enable receivers to decide the best way to forward the multicast traffic.
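One forwarding step of a receiver-based, stateless multicast in the spirit of RBMulticast might look like the Java sketch below: the sink coordinates travel in the packet header, and each node splits them among its neighbors by geographic proximity, forwarding one copy of the packet per group. The nearest-neighbor grouping rule, the coordinates, and all names are illustrative assumptions rather than the protocol's actual specification.

```java
import java.util.*;

/**
 * Sketch of one stateless multicast forwarding step: destinations are
 * carried in the packet header and are grouped by the neighbor closest
 * to each destination; one packet copy goes to each group.
 */
public class StatelessMulticastStep {

    /** Group packet destinations (x, y pairs) by the neighbor closest to each destination. */
    static Map<String, List<double[]>> splitDestinations(Map<String, double[]> neighbors,
                                                         List<double[]> destinations) {
        Map<String, List<double[]>> groups = new HashMap<>();
        for (double[] dest : destinations) {
            String best = null;
            double bestDist = Double.MAX_VALUE;
            for (Map.Entry<String, double[]> n : neighbors.entrySet()) {
                double d = Math.hypot(n.getValue()[0] - dest[0], n.getValue()[1] - dest[1]);
                if (d < bestDist) { bestDist = d; best = n.getKey(); }
            }
            groups.computeIfAbsent(best, k -> new ArrayList<>()).add(dest);
        }
        // one packet copy would be forwarded per group, with the header rewritten to the sub-list
        return groups;
    }

    public static void main(String[] args) {
        Map<String, double[]> neighbors = new HashMap<>();
        neighbors.put("nodeA", new double[] {1, 0});
        neighbors.put("nodeB", new double[] {0, 1});
        List<double[]> sinks = Arrays.asList(
                new double[] {5, 0}, new double[] {0, 7}, new double[] {4, 1});
        for (Map.Entry<String, List<double[]>> g : splitDestinations(neighbors, sinks).entrySet()) {
            System.out.println(g.getKey() + " forwards to " + g.getValue().size() + " sink(s)");
        }
    }
}
```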
9. Detection of Selfish Manipulation of Carrier Sensing in 802.11 Networks (2012)
CCA tuning can be exploited by selfish nodes to obtain an unfair share of the available bandwidth. Specifically, a selfish entity can manipulate the CCA threshold to ignore ongoing transmissions; this increases the probability of accessing the medium and provides the entity a higher, unfair share of the bandwidth.

10. Handling Selfishness in Replica Allocation over a Mobile Ad Hoc Network (2012)
In a mobile ad hoc network, the mobility and resource constraints of mobile nodes may lead to network partitioning or performance degradation. Several data replication techniques have been proposed to minimize performance degradation. Most of them assume that all mobile nodes collaborate fully in terms of sharing their memory space. In reality, however, some nodes may selfishly decide only to cooperate partially, or not at all, with other nodes. These selfish nodes could then reduce the overall data accessibility in the network. In this paper, we examine the impact of selfish nodes in a mobile ad hoc network from the perspective of replica allocation. We term this selfish replica allocation. In particular, we develop a selfish node detection algorithm that considers partial selfishness and novel replica allocation techniques to properly cope with selfish replica allocation. The conducted simulations demonstrate that the proposed approach outperforms traditional cooperative replica allocation techniques in terms of data accessibility, communication cost, and average query delay.

11. Acknowledgment-Based Broadcast Protocol for Reliable and Efficient Data Dissemination in Vehicular Ad Hoc Networks (2012)
We propose a broadcast algorithm suitable for a wide range of vehicular scenarios, which only employs local information acquired via periodic beacon messages, containing acknowledgments of the circulated broadcast messages. Each vehicle decides whether it belongs to a connected dominating set (CDS). Vehicles in the CDS use a shorter waiting period before possible retransmission. At time-out expiration, a vehicle retransmits if it is aware of at least one neighbor in need of the message. To address intermittent connectivity and appearance of new neighbors, the evaluation timer can be restarted. Our algorithm resolves propagation at road intersections without any need to even recognize intersections. It is inherently adaptable to different mobility regimes, without the need to classify network or vehicle speeds. In a thorough simulation-based performance evaluation, our algorithm is shown to provide higher reliability and message efficiency than existing approaches for nonsafety applications.
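The retransmission rule in the entry above (shorter wait for CDS members, retransmit only if some neighbor still lacks the message) can be sketched in a few lines of Java. This is a simplified illustration under the assumption that CDS membership and per-neighbor acknowledgments are already known from beacons; names are hypothetical.

import java.util.Set;

public class BroadcastRetransmitDecision {
    private final boolean inCds;

    public BroadcastRetransmitDecision(boolean inCds) { this.inCds = inCds; }

    // CDS members wait less before a possible retransmission.
    public long waitingPeriodMs(long baseMs) {
        return inCds ? baseMs / 2 : baseMs;
    }

    // At timeout, retransmit only if at least one known neighbor still lacks the message.
    public boolean shouldRetransmit(Set<String> neighbors, Set<String> neighborsThatAcked) {
        for (String n : neighbors) {
            if (!neighborsThatAcked.contains(n)) return true;
        }
        return false;
    }
}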
12. Toward Reliable Data Delivery for Highly Dynamic Mobile Ad Hoc Networks (2012)
This paper addresses the problem of delivering data packets for highly dynamic mobile ad hoc networks in a reliable and timely manner. Most existing ad hoc routing protocols are susceptible to node mobility, especially for large-scale networks. Driven by this issue, we propose an efficient Position-based Opportunistic Routing (POR) protocol which takes advantage of the stateless property of geographic routing and the broadcast nature of the wireless medium. When a data packet is sent out, some of the neighbor nodes that have overheard the transmission will serve as forwarding candidates, and take turns to forward the packet if it is not relayed by the specific best forwarder within a certain period of time. By utilizing such in-the-air backup, communication is maintained without being interrupted. The additional latency incurred by local route recovery is greatly reduced and the duplicate relaying caused by packet reroute is also decreased. In the case of a communication hole, a Virtual Destination-based Void Handling (VDVH) scheme is further proposed to work together with POR. Both theoretical analysis and simulation results show that POR achieves excellent performance even under high node mobility with acceptable overhead, and the new void handling scheme also works well.

13. Protecting Location Privacy in Sensor Networks against a Global Eavesdropper (2012)
While many protocols for sensor network security provide confidentiality for the content of messages, contextual information usually remains exposed. Such contextual information can be exploited by an adversary to derive sensitive information such as the locations of monitored objects and data sinks in the field. Attacks on these components can significantly undermine any network application. Existing techniques defend against the leakage of location information from a limited adversary who can only observe network traffic in a small region. However, a stronger adversary, the global eavesdropper, is realistic and can defeat these existing techniques. This paper first formalizes the location privacy issues in sensor networks under this strong adversary model and computes a lower bound on the communication overhead needed for achieving a given level of location privacy. The paper then proposes two techniques to provide location privacy to monitored objects (source-location privacy)—periodic collection and source simulation—and two techniques to provide location privacy to data sinks (sink-location privacy)—sink simulation and backbone flooding. These techniques provide trade-offs between privacy, communication cost, and latency. Through analysis and simulation, we demonstrate that the proposed techniques are efficient and effective for source and sink-location privacy in sensor networks.
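As an illustration of the "periodic collection" idea named above (traffic decoupled from sensed events so a global eavesdropper learns nothing from timing), here is a minimal Java sketch. It assumes a timer calls nextPacket() once per collection period; radio and framing details are stubbed out, and names are illustrative.

import java.util.ArrayDeque;
import java.util.Deque;

public class PeriodicCollectionNode {
    private final Deque<byte[]> realReadings = new ArrayDeque<>();
    private final int payloadSize;

    public PeriodicCollectionNode(int payloadSize) { this.payloadSize = payloadSize; }

    public synchronized void onSensorReading(byte[] reading) { realReadings.add(reading); }

    // Called once per collection period by a timer, whether or not data exists.
    public synchronized byte[] nextPacket() {
        byte[] real = realReadings.poll();
        // Send a dummy of identical size when there is no real data, so an
        // eavesdropper cannot distinguish event traffic from cover traffic.
        return (real != null) ? real : new byte[payloadSize];
    }
}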
14. Local Broadcast Algorithms in Wireless Ad Hoc Networks: Reducing the Number of Transmissions (2012)
There are two main approaches, static and dynamic, to broadcast algorithms in wireless ad hoc networks. In the static approach, local algorithms determine the status (forwarding/nonforwarding) of each node proactively based on local topology information and a globally known priority function. In this paper, we first show that local broadcast algorithms based on the static approach cannot achieve a good approximation factor to the optimum solution (an NP-hard problem). However, we show that a constant approximation factor is achievable if (relative) position information is available. In the dynamic approach, local algorithms determine the status of each node "on-the-fly" based on local topology information and broadcast state information. Using the dynamic approach, it was recently shown that local broadcast algorithms can achieve a constant approximation factor to the optimum solution when (approximate) position information is available. However, using position information can simplify the problem. Also, in some applications it may not be practical to have position information. Therefore, we wish to know whether local broadcast algorithms based on the dynamic approach can achieve a constant approximation factor without using position information. We answer this question in the positive: we design a local broadcast algorithm in which the status of each node is decided "on-the-fly" and prove that the algorithm can achieve both full delivery and a constant approximation to the optimum solution.

15. Compressed-Sensing-Enabled Video Streaming for Wireless Multimedia Sensor Networks (2012)
This article presents the design of a networked system for joint compression, rate control and error correction of video over resource-constrained embedded devices based on the theory of compressed sensing. The objective of this work is to design a cross-layer system that jointly controls the video encoding rate, the transmission rate, and the channel coding rate to maximize the received video quality. First, compressed sensing based video encoding for transmission over wireless multimedia sensor networks (WMSNs) is studied. It is shown that compressed sensing can overcome many of the current problems of video over WMSNs, primarily encoder complexity and low resiliency to channel errors. A rate controller is then developed with the objective of maintaining fairness among video streams while maximizing the received video quality. It is shown that the rate of compressed sensed video can be predictably controlled by varying only the compressed sensing sampling rate. It is then shown that the developed rate controller can be interpreted as the iterative solution to a convex optimization problem representing the optimization of the rate allocation across the network. The error resiliency properties of compressed sensed images and videos are then studied, and an optimal error detection and correction scheme is presented for video transmission over lossy channels. Finally, the entire system is evaluated through simulation and testbed evaluation. The rate controller is shown to outperform existing TCP-friendly rate control schemes in terms of both fairness and received video quality. Testbed results also show that the rates converge to stable values in real channels.
16. Hop-by-Hop Routing in Wireless Mesh Networks with Bandwidth Guarantees (2012)
Wireless Mesh Network (WMN) has become an important edge network to provide Internet access to remote areas and wireless connections in a metropolitan scale. In this paper, we study the problem of identifying the maximum available bandwidth path, a fundamental issue in supporting quality-of-service in WMNs. Due to interference among links, bandwidth, a well-known bottleneck metric in wired networks, is neither concave nor additive in wireless networks. We propose a new path weight which captures the available path bandwidth information. We formally prove that our hop-by-hop routing protocol based on the new path weight satisfies the consistency and loop-freeness requirements. The consistency property guarantees that each node makes a proper packet forwarding decision, so that a data packet does traverse the intended path. Our extensive simulation experiments also show that our proposed path weight outperforms existing path metrics in identifying high-throughput paths.
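To make the hop-by-hop decision structure concrete, the Java sketch below chooses a next hop by a plain bottleneck (widest-path) value. This is only an illustration under simplifying assumptions: the paper's actual path weight accounts for interference among nearby links, which a simple minimum of link bandwidths does not.

import java.util.Map;

public class WidestPathForwarder {
    // advertised: neighbor -> bottleneck bandwidth (Mbps) of the best path that
    //             neighbor advertises toward the destination.
    // linkBandwidth: neighbor -> bandwidth of the local link to that neighbor.
    public static String chooseNextHop(Map<String, Double> advertised,
                                       Map<String, Double> linkBandwidth) {
        String best = null;
        double bestWidth = -1.0;
        for (Map.Entry<String, Double> e : advertised.entrySet()) {
            Double local = linkBandwidth.get(e.getKey());
            if (local == null) continue;
            // Path width through this neighbor = min(local link, neighbor's advertised path).
            double width = Math.min(local, e.getValue());
            if (width > bestWidth) { bestWidth = width; best = e.getKey(); }
        }
        return best; // null if no neighbor is usable
    }
}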
17. Handling Selfishness in Replica Allocation over a Mobile Ad Hoc Network – Mobile Computing, projects 2012 (2012)
In a mobile ad hoc network, the mobility and resource constraints of mobile nodes may lead to network partitioning or performance degradation. Several data replication techniques have been proposed to minimize performance degradation. Most of them assume that all mobile nodes collaborate fully in terms of sharing their memory space. In reality, however, some nodes may selfishly decide only to cooperate partially, or not at all, with other nodes. These selfish nodes could then reduce the overall data accessibility in the network. In this paper, we examine the impact of selfish nodes in a mobile ad hoc network from the perspective of replica allocation. We term this selfish replica allocation. In particular, we develop a selfish node detection algorithm that considers partial selfishness and novel replica allocation techniques to properly cope with selfish replica allocation. The conducted simulations demonstrate that the proposed approach outperforms traditional cooperative replica allocation techniques in terms of data accessibility, communication cost, and average query delay.

18. Toward Reliable Data Delivery for Highly Dynamic Mobile Ad Hoc Networks – Mobile Computing, projects 2012 (2012)
This paper addresses the problem of delivering data packets for highly dynamic mobile ad hoc networks in a reliable and timely manner. Most existing ad hoc routing protocols are susceptible to node mobility, especially for large-scale networks. Driven by this issue, we propose an efficient Position-based Opportunistic Routing (POR) protocol which takes advantage of the stateless property of geographic routing and the broadcast nature of the wireless medium. When a data packet is sent out, some of the neighbor nodes that have overheard the transmission will serve as forwarding candidates, and take turns to forward the packet if it is not relayed by the specific best forwarder within a certain period of time. By utilizing such in-the-air backup, communication is maintained without being interrupted. The additional latency incurred by local route recovery is greatly reduced and the duplicate relaying caused by packet reroute is also decreased. In the case of a communication hole, a Virtual Destination-based Void Handling (VDVH) scheme is further proposed to work together with POR. Both theoretical analysis and simulation results show that POR achieves excellent performance even under high node mobility with acceptable overhead, and the new void handling scheme also works well.
19. Fast Data Collection in Tree-Based Wireless Sensor Networks – Mobile Computing, projects 2012 (2012)
We investigate the following fundamental question: how fast can information be collected from a wireless sensor network organized as a tree? To address this, we explore and evaluate a number of different techniques using realistic simulation models under the many-to-one communication paradigm known as convergecast. We first consider time scheduling on a single frequency channel with the aim of minimizing the number of time slots required (schedule length) to complete a convergecast. Next, we combine scheduling with transmission power control to mitigate the effects of interference, and show that while power control helps in reducing the schedule length under a single frequency, scheduling transmissions using multiple frequencies is more efficient. We give lower bounds on the schedule length when interference is completely eliminated, and propose algorithms that achieve these bounds. We also evaluate the performance of various channel assignment methods and find empirically that for moderate size networks of about 100 nodes, the use of multi-frequency scheduling can suffice to eliminate most of the interference. Then, the data collection rate no longer remains limited by interference but by the topology of the routing tree. To this end, we construct degree-constrained spanning trees and capacitated minimal spanning trees, and show significant improvement in scheduling performance over different deployment densities. Lastly, we evaluate the impact of different interference and channel models on the schedule length.
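To make the "schedule length" notion concrete, here is a deliberately naive Java sketch of a single-channel convergecast schedule on a routing tree: each node receives one distinct time slot in BFS order, so the schedule length equals the node count. The paper's algorithms produce far shorter schedules; this and the tree representation are illustrative assumptions only.

import java.util.*;

public class NaiveConvergecastSchedule {
    // children maps each node id to its child ids; root is the sink.
    public static Map<Integer, Integer> assignSlots(Map<Integer, List<Integer>> children, int root) {
        Map<Integer, Integer> slotOf = new LinkedHashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(root);
        int slot = 0;
        while (!queue.isEmpty()) {
            int node = queue.poll();
            if (node != root) slotOf.put(node, slot++); // the sink never transmits upward
            for (int child : children.getOrDefault(node, Collections.emptyList())) queue.add(child);
        }
        return slotOf; // schedule length = slotOf.size()
    }
}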
20. Protecting Location Privacy in Sensor Networks Against a Global Eavesdropper – JAVA (2012)
The location privacy issue in sensor networks under this strong adversary model is considered. Two techniques are proposed to provide location privacy to monitored objects (source-location privacy)—periodic collection and source simulation—and two techniques to provide location privacy to data sinks (sink-location privacy)—sink simulation and backbone flooding.

21. Energy-Efficient Cooperative Video Distribution with Statistical QoS Provisions over Wireless Networks – projects 2012 (2012)
For real-time video broadcast where multiple users are interested in the same content, mobile-to-mobile cooperation can be utilized to improve delivery efficiency and reduce network utilization. Under such cooperation, however, real-time video transmission requires end-to-end delay bounds. Due to the inherently stochastic nature of wireless fading channels, deterministic delay bounds are prohibitively difficult to guarantee. For a scalable video structure, an alternative is to provide statistical guarantees using the concept of effective capacity/bandwidth by deriving quality of service exponents for each video layer. Using this concept, we formulate the resource allocation problem for general multihop multicast network flows and derive the optimal solution that minimizes the total energy consumption while guaranteeing a statistical end-to-end delay bound on each network path. A method is described to compute the optimal resource allocation at each node in a distributed fashion. Furthermore, we propose low complexity approximation algorithms for energy-efficient flow selection from the set of directed acyclic graphs forming the candidate network flows. The flow selection and resource allocation process is adapted for each video frame according to the channel conditions on the network links. Considering different network topologies, results demonstrate that the proposed resource allocation and flow selection algorithms provide notable performance gains with small optimality gaps at a low computational cost.

22. A Novel MAC Scheme for Multichannel Cognitive Radio Ad Hoc Networks (2012)
This paper proposes a novel medium access control (MAC) scheme for multichannel cognitive radio (CR) ad hoc networks, which achieves high throughput of the CR system while protecting primary users (PUs) effectively. In designing the MAC scheme, we consider that the PU signal may cover only a part of the network and the nodes can have different sensing results for the same PU even on the same channel. By allowing the nodes to use the channel on which the PU exists as long as their transmissions do not disturb the PU, the proposed MAC scheme fully utilizes the spectrum access opportunity. To mitigate the hidden PU problem inherent to multichannel CR networks, where the PU signal is detectable only to some nodes, the proposed MAC scheme adjusts the sensing priorities of channels at each node with the PU detection information of other nodes and also limits the transmission power of a CR node to the maximum allowable power for guaranteeing the quality of service requirement of the PU. The performance of the proposed MAC scheme is evaluated by using simulation. The simulation results show that the CR system with the proposed MAC accomplishes good performance in throughput and packet delay, while protecting PUs properly.
23. A Statistical Mechanics-Based Framework to Analyze Ad Hoc Networks with Random Access (2012)
Characterizing the performance of ad hoc networks is one of the most intricate open challenges; conventional ideas based on information-theoretic techniques and inequalities have not yet been able to successfully tackle this problem in its generality. Motivated thus, we promote the totally asymmetric simple exclusion process (TASEP), a particle flow model in statistical mechanics, as a useful analytical tool to study ad hoc networks with random access. Employing the TASEP framework, we first investigate the average end-to-end delay and throughput performance of a linear multihop flow of packets. Additionally, we analytically derive the distribution of delays incurred by packets at each node, as well as the joint distributions of the delays across adjacent hops along the flow. We then consider more complex wireless network models comprising intersecting flows, and propose the partial mean-field approximation (PMFA), a method that helps tightly approximate the throughput performance of the system. We finally demonstrate via a simple example that the PMFA procedure is quite general in that it may be used to accurately evaluate the performance of ad hoc networks with arbitrary topologies.

24. Acknowledgment-Based Broadcast Protocol for Reliable and Efficient Data Dissemination in Vehicular Ad Hoc Networks (2012)
We propose a broadcast algorithm suitable for a wide range of vehicular scenarios, which only employs local information acquired via periodic beacon messages, containing acknowledgments of the circulated broadcast messages. Each vehicle decides whether it belongs to a connected dominating set (CDS). Vehicles in the CDS use a shorter waiting period before possible retransmission. At time-out expiration, a vehicle retransmits if it is aware of at least one neighbor in need of the message. To address intermittent connectivity and appearance of new neighbors, the evaluation timer can be restarted. Our algorithm resolves propagation at road intersections without any need to even recognize intersections. It is inherently adaptable to different mobility regimes, without the need to classify network or vehicle speeds. In a thorough simulation-based performance evaluation, our algorithm is shown to provide higher reliability and message efficiency than existing approaches for nonsafety applications.
25. Characterizing the Security Implications of Third-Party Emergency Alert Systems over Cellular Text Messaging Services (2012)
Cellular text messaging services are increasingly being relied upon to disseminate critical information during emergencies. Accordingly, a wide range of organizations including colleges and universities now partner with third-party providers that promise to improve physical security by rapidly delivering such messages. Unfortunately, these products do not work as advertised due to limitations of cellular infrastructure and therefore provide a false sense of security to their users. In this paper, we perform the first extensive investigation and characterization of the limitations of an Emergency Alert System (EAS) using text messages as a security incident response mechanism. We show that emergency alert systems built on text messaging not only cannot meet the 10 minute delivery requirement mandated by the WARN Act, but also potentially cause other voice and SMS traffic to be blocked at rates upward of 80 percent. We then show that our results are representative of reality by comparing them to a number of documented but not previously understood failures. Finally, we analyze a targeted messaging mechanism as a means of efficiently using currently deployed infrastructure and third-party EAS. In so doing, we demonstrate that this increasingly deployed security infrastructure does not achieve its stated requirements for large populations.

26. Converge Cast: On the Capacity and Delay Tradeoffs (2012)
In this paper, we define an ad hoc network where multiple sources transmit packets to one destination as a Converge-Cast network. We study the capacity delay tradeoffs assuming that n wireless nodes are deployed in a unit square. For each session (the session is a dataflow from k different source nodes to 1 destination node), k nodes are randomly selected as active sources and each transmits one packet to a particular destination node, which is also randomly selected. We first consider the stationary case, where capacity is mainly discussed and delay is entirely dependent on the average number of hops. We find that the per-node capacity is Θ(1/√(n log n)) (given nonnegative functions f(n) and g(n): f(n) = O(g(n)) means there exist positive constants c and m such that f(n) ≤ cg(n) for all n ≥ m; f(n) = Ω(g(n)) means there exist positive constants c and m such that f(n) ≥ cg(n) for all n ≥ m; f(n) = Θ(g(n)) means that both f(n) = Ω(g(n)) and f(n) = O(g(n)) hold), which is the same as that of unicast, presented in (Gupta and Kumar, 2000). Then, node mobility is introduced to increase network capacity, for which our study is performed in two steps. The first step is to establish the delay in single-session transmission. We find that the delay is Θ(n log k) under the 1-hop strategy, and Θ(n log k/m) under the 2-hop redundant strategy, where m denotes the number of replicas for each packet. The second step is to find delay and capacity in multisession transmission. We reveal that the per-node capacity and delay for the 2-hop nonredundancy strategy are Θ(1) and Θ(n log k), respectively. The optimal delay is Θ(√(n log k) + k) with redundancy, corresponding to a capacity of Θ(√((1/(n log k)) + (k/(n log k)))). Therefore, we obtain that the capacity delay tradeoff satisfies delay/rate ≥ Θ(n log k) for both strategies.
27. Cooperative Download in Vehicular Environments (2012)
We consider a complex (i.e., nonlinear) road scenario where users aboard vehicles equipped with communication interfaces are interested in downloading large files from road-side Access Points (APs). We investigate the possibility of exploiting opportunistic encounters among mobile nodes so as to augment the transfer rate experienced by vehicular downloaders. To that end, we devise solutions for the selection of carriers and data chunks at the APs, and evaluate them in real-world road topologies, under different AP deployment strategies. Through extensive simulations, we show that carry&forward transfers can significantly increase the download rate of vehicular users in urban/suburban environments, and that such a result holds throughout diverse mobility scenarios, AP placements and network loads.

28. Detection of Selfish Manipulation of Carrier Sensing in 802.11 Networks (2012)
Recently, tuning the clear channel assessment (CCA) threshold in conjunction with power control has been considered for improving the performance of WLANs. However, we show that CCA tuning can be exploited by selfish nodes to obtain an unfair share of the available bandwidth. Specifically, a selfish entity can manipulate the CCA threshold to ignore ongoing transmissions; this increases the probability of accessing the medium and provides the entity a higher, unfair share of the bandwidth. We experiment on our 802.11 testbed to characterize the effects of CCA tuning on both isolated links and in 802.11 WLAN configurations. We focus on AP-client(s) configurations, proposing a novel approach to detect this misbehavior. A misbehaving client is unlikely to recognize low power receptions as legitimate packets; by intelligently sending low power probe messages, an AP can efficiently detect a misbehaving node. Our key contributions are: 1) We are the first to quantify the impact of selfish CCA tuning via extensive experimentation on various 802.11 configurations. 2) We propose a lightweight scheme for detecting selfish nodes that inappropriately increase their CCAs. 3) We extensively evaluate our system on our testbed; its accuracy is 95 percent while the false positive rate is less than 5 percent.
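The detection idea stated above (a client with a raised CCA threshold tends not to decode low-power probes) can be sketched as follows in Java. The probe count, suspicion threshold, and interface are made-up parameters for illustration, not values from the paper.

public class LowPowerProbeDetector {
    private final int probes;
    private final double suspicionThreshold; // e.g. answering < 50% of probes is suspicious

    public LowPowerProbeDetector(int probes, double suspicionThreshold) {
        this.probes = probes;
        this.suspicionThreshold = suspicionThreshold;
    }

    // ProbeResponder abstracts "send one low-power probe and report whether the client replied".
    public boolean looksSelfish(ProbeResponder client) {
        int answered = 0;
        for (int i = 0; i < probes; i++) {
            if (client.respondsToLowPowerProbe()) answered++;
        }
        return ((double) answered / probes) < suspicionThreshold;
    }

    public interface ProbeResponder {
        boolean respondsToLowPowerProbe();
    }
}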
29. Distributed Throughput Maximization in Wireless Networks via Random Power Allocation (2012)
We consider throughput-optimal power allocation in multi-hop wireless networks. The study of this problem has been limited due to the non-convexity of the underlying optimization problems, which prohibits an efficient solution even in a centralized setting. We take a randomization approach to deal with this difficulty. To this end, we generalize the randomization framework originally proposed for input queued switches to an SINR rate-based interference model. Further, we develop distributed power allocation and comparison algorithms that satisfy these conditions, thereby achieving (nearly) 100% throughput. We illustrate the performance of our proposed power allocation solution through numerical investigation and present several extensions for the considered problem.

30. Efficient Rendezvous Algorithms for Mobility-Enabled Wireless Sensor Networks (2012)
Recent research shows that significant energy saving can be achieved in mobility-enabled wireless sensor networks (WSNs) that visit sensor nodes and collect data from them via short-range communications. However, a major performance bottleneck of such WSNs is the significantly increased latency in data collection due to the low movement speed of mobile base stations. To address this issue, we propose a rendezvous-based data collection approach in which a subset of nodes serve as rendezvous points that buffer and aggregate data originated from sources and transfer it to the base station when it arrives. This approach combines the advantages of controlled mobility and in-network data caching and can achieve a desirable balance between network energy saving and data collection delay. We propose efficient rendezvous design algorithms with provable performance bounds for mobile base stations with variable and fixed tracks, respectively. The effectiveness of our approach is validated through both theoretical analysis and extensive simulations.
31. Efficient Virtual Backbone Construction with Routing Cost Constraint in Wireless Networks Using Directional Antennas (2012)
Directional antennas can divide the transmission range into several sectors. Thus, through switching off sectors in unnecessary directions in wireless networks, we can save bandwidth and energy consumption. In this paper, we will study a directional virtual backbone (VB) in the network where directional antennas are used. When constructing a VB, we will take routing and broadcasting into account since they are two common operations in wireless networks. Hence, we will study a VB with guaranteed routing costs, named α Minimum rOuting Cost Directional VB (α-MOC-DVB). Besides the properties of regular VBs, α-MOC-DVB also has a special constraint: for any pair of nodes, there exists at least one path all intermediate directions on which must belong to α-MOC-DVB, and the number of intermediate directions on the path is smaller than α times that on the shortest path. We prove that construction of a minimum α-MOC-DVB is an NP-hard problem in a general directed graph. A heuristic algorithm is proposed and theoretical analysis is also discussed in the paper. Extensive simulations demonstrate that our α-MOC-DVB is much more efficient in the sense of VB size and routing costs compared to other VBs.

32. Energy-Efficient Strategies for Cooperative Multichannel MAC Protocols (2012)
Distributed Information SHaring (DISH) is a new cooperative approach to designing multichannel MAC protocols. It aids nodes in their decision making processes by compensating for their missing information via information sharing through neighboring nodes. This approach was recently shown to significantly boost the throughput of multichannel MAC protocols. However, a critical issue for ad hoc communication devices, viz. energy efficiency, has yet to be addressed. In this paper, we address this issue by developing simple solutions that reduce the energy consumption without compromising the throughput performance and meanwhile maximize cost efficiency. We propose two energy-efficient strategies: in-situ energy conscious DISH, which uses existing nodes only, and altruistic DISH, which requires additional nodes called altruists. We compare five protocols with respect to these strategies and identify altruistic DISH to be the right choice in general: it 1) conserves 40-80 percent of energy, 2) maintains the throughput advantage, and 3) more than doubles the cost efficiency compared to protocols without this strategy. On the other hand, our study also shows that in-situ energy conscious DISH is suitable only in certain limited scenarios.
33. Estimating Parameters of Multiple Heterogeneous Target Objects Using Composite Sensor Nodes (2012)
We propose a method for estimating parameters of multiple target objects by using networked binary sensors whose locations are unknown. These target objects may have different parameters, such as size and perimeter length. Each sensor, which is incapable of monitoring the target object's parameters, sends only binary data describing whether or not it detects target objects coming into, moving around, or leaving the sensing area at every moment. We previously developed a parameter estimation method for a single target object. However, a straightforward extension of this method is not applicable for estimating multiple heterogeneous target objects. This is because a networked binary sensor at an unknown location cannot provide information that distinguishes individual target objects, but it can provide information on the total perimeter length and size of multiple target objects. Therefore, we propose composite sensor nodes with multiple sensors in a predetermined layout for obtaining additional information for estimating the parameter of each target object. As an example of a composite sensor node, we consider a two-sensor composite sensor node, which consists of two sensors, one at each of the two end points of a line segment of known length. For the two-sensor composite sensor node, measures are derived such as both sensors detecting target objects. These derived measures are the basis for identifying the shape of each target object among a given set of categories (for example, disks and rectangles) and estimating parameters such as the radius and lengths of two sides of each target object. Numerical examples demonstrate that networked composite sensor nodes consisting of two binary sensors enable us to estimate the parameters of target objects.

34. Fast Capture–Recapture Approach for Mitigating the Problem of Missing RFID Tags (2012)
The technology of Radio Frequency IDentification (RFID) enables many applications that rely on passive, battery-less wireless devices. If an RFID reader needs to gather the IDs from multiple tags in its range, then it needs to run an anticollision protocol. Due to errors on the wireless link, a single reader session, which contains one full execution of the anticollision protocol, may not be sufficient to retrieve the ID of all tags. This problem can be mitigated by running multiple, redundant reader sessions and using the statistical relationship between these sessions. On the other hand, each session is time consuming and therefore the number of sessions should be kept minimal. We optimize the process of running multiple reader sessions by allowing only some of the tags already discovered to reply in subsequent reader sessions. The estimation procedure is integrated with an actual tree-based anticollision protocol, and numerical results show that the reliable tag resolution algorithm attains high speed of protocol execution, while not sacrificing the reliability of the estimators used to assess the probability of missing tags.
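As background for the capture–recapture idea named in the title above, the Java sketch below combines two reader sessions with the classical Lincoln–Petersen estimate of the total tag population. This is only an illustration; the paper integrates a refined estimator with a tree-based anticollision protocol rather than using this exact formula.

import java.util.HashSet;
import java.util.Set;

public class CaptureRecaptureEstimate {
    public static double estimateTotalTags(Set<String> session1, Set<String> session2) {
        Set<String> both = new HashSet<>(session1);
        both.retainAll(session2);
        if (both.isEmpty()) return Double.POSITIVE_INFINITY; // no overlap: cannot estimate yet
        // Lincoln-Petersen: N ~ |S1| * |S2| / |S1 ∩ S2|
        return (double) session1.size() * session2.size() / both.size();
    }

    // Tags are likely still missing if the estimate exceeds what the two sessions observed.
    public static boolean likelyTagsMissing(Set<String> s1, Set<String> s2) {
        Set<String> union = new HashSet<>(s1);
        union.addAll(s2);
        return estimateTotalTags(s1, s2) > union.size() + 0.5;
    }
}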
35. Fast Data Collection in Tree-Based Wireless Sensor Networks (2012)
We investigate the following fundamental question: how fast can information be collected from a wireless sensor network organized as a tree? To address this, we explore and evaluate a number of different techniques using realistic simulation models under the many-to-one communication paradigm known as convergecast. We first consider time scheduling on a single frequency channel with the aim of minimizing the number of time slots required (schedule length) to complete a convergecast. Next, we combine scheduling with transmission power control to mitigate the effects of interference, and show that while power control helps in reducing the schedule length under a single frequency, scheduling transmissions using multiple frequencies is more efficient. We give lower bounds on the schedule length when interference is completely eliminated, and propose algorithms that achieve these bounds. We also evaluate the performance of various channel assignment methods and find empirically that for moderate size networks of about 100 nodes, the use of multifrequency scheduling can suffice to eliminate most of the interference. Then, the data collection rate no longer remains limited by interference but by the topology of the routing tree. To this end, we construct degree-constrained spanning trees and capacitated minimal spanning trees, and show significant improvement in scheduling performance over different deployment densities. Lastly, we evaluate the impact of different interference and channel models on the schedule length.
36. Fault Localization Using Passive End-to-End Measurements and Sequential Testing for Wireless Sensor Networks (2012)
Faulty components in a network need to be localized and repaired to sustain the health of the network. In this paper, we propose a novel approach that carefully combines active and passive measurements to localize faults in wireless sensor networks. More specifically, we formulate a problem of optimal sequential testing guided by end-to-end data. This problem determines an optimal testing sequence of network components based on end-to-end data in sensor networks to minimize expected testing cost. We prove that this problem is NP-hard, and propose a recursive approach to solve it. This approach leads to a polynomial-time optimal algorithm for line topologies while requiring exponential running time for general topologies. We further develop two polynomial-time heuristic schemes that are applicable to general topologies. Extensive simulation shows that our heuristic schemes only require testing a very small set of network components to localize and repair all faults in the network. Our approach is superior to using active and passive measurements in isolation. It also outperforms the state-of-the-art approaches that localize and repair all faults in a network.

37. FESCIM: Fair, Efficient, and Secure Cooperation Incentive Mechanism for Multihop Cellular Networks (2012)
In multihop cellular networks, the mobile nodes usually relay others' packets for enhancing the network performance and deployment. However, selfish nodes usually do not cooperate but make use of the cooperative nodes to relay their packets, which has a negative effect on the network fairness and performance. In this paper, we propose a fair and efficient incentive mechanism to stimulate the node cooperation. Our mechanism applies a fair charging policy by charging the source and destination nodes when both of them benefit from the communication. To implement this charging policy efficiently, hashing operations are used in the ACK packets to reduce the number of public-key-cryptography operations. Moreover, reducing the overhead of the payment checks is essential for the efficient implementation of the incentive mechanism due to the large number of payment transactions. Instead of generating a check per message, a small-size check can be generated per route, and a check submission scheme is proposed to reduce the number of submitted checks and protect against collusion attacks. Extensive analysis and simulations demonstrate that our mechanism can secure the payment and significantly reduce the checks' overhead, and the fair charging policy can be implemented almost computationally free by using hashing operations.
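One common way to replace per-packet signatures with hashing, sketched below in Java, is a hash chain whose anchor is signed once while each ACK releases the next preimage, verifiable by relays with a single hash. This is an illustration of the idea mentioned in the FESCIM entry above, not necessarily that paper's exact construction.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashChainAck {
    // chain[length-1] is the signed anchor; earlier values are released one per ACK.
    public static byte[][] buildChain(byte[] seed, int length) throws NoSuchAlgorithmException {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[][] chain = new byte[length][];
        chain[0] = seed;
        for (int i = 1; i < length; i++) chain[i] = sha.digest(chain[i - 1]);
        return chain;
    }

    // A relay checks that hashing the released value yields the previously known one.
    public static boolean verifyAck(byte[] released, byte[] lastKnown) throws NoSuchAlgorithmException {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return MessageDigest.isEqual(sha.digest(released), lastKnown);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[][] chain = buildChain("secret-seed".getBytes(StandardCharsets.UTF_8), 5);
        System.out.println(verifyAck(chain[3], chain[4])); // prints true
    }
}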
38. Geometry and Motion-Based Positioning Algorithms for Mobile Tracking in NLOS Environments (2012)
This paper presents positioning algorithms for cellular network-based vehicle tracking in severe non-line-of-sight (NLOS) propagation scenarios. The aim of the algorithms is to enhance positional accuracy of network-based positioning systems when the GPS receiver does not perform well due to the complex propagation environment. A one-step position estimation method and another two-step method are proposed and developed. Constrained optimization is utilized to minimize the cost function which takes account of the NLOS error so that the NLOS effect is significantly reduced. Vehicle velocity and heading direction measurements are exploited in the algorithm development, which may be obtained using a speedometer and a heading sensor, respectively. The developed algorithms are practical, so that they are suitable for implementation in practice for vehicle applications. It is observed through simulation that in severe NLOS propagation scenarios, the proposed positioning methods outperform the existing cellular network-based positioning algorithms significantly. Further, when the distance measurement error is modeled as the sum of an exponential bias variable and a Gaussian noise variable, the exact expressions of the CRLB are derived to benchmark the performance of the positioning algorithms.

39. Handling Selfishness in Replica Allocation over a Mobile Ad Hoc Network (2012)
In a mobile ad hoc network, the mobility and resource constraints of mobile nodes may lead to network partitioning or performance degradation. Several data replication techniques have been proposed to minimize performance degradation. Most of them assume that all mobile nodes collaborate fully in terms of sharing their memory space. In reality, however, some nodes may selfishly decide only to cooperate partially, or not at all, with other nodes. These selfish nodes could then reduce the overall data accessibility in the network. In this paper, we examine the impact of selfish nodes in a mobile ad hoc network from the perspective of replica allocation. We term this selfish replica allocation. In particular, we develop a selfish node detection algorithm that considers partial selfishness and novel replica allocation techniques to properly cope with selfish replica allocation. The conducted simulations demonstrate that the proposed approach outperforms traditional cooperative replica allocation techniques in terms of data accessibility, communication cost, and average query delay.
40. Heuristic Burst Construction Algorithm for Improving Downlink Capacity in IEEE 802.16 OFDMA Systems (2012)
IEEE 802.16 OFDMA systems have gained much attention for their ability to support high transmission rates and broadband access services. For multiuser environments, IEEE 802.16 OFDMA systems require a resource allocation algorithm to use the limited downlink resource efficiently. The IEEE 802.16 standard defines that resource allocation should be performed with a rectangle region of slots, called a burst. However, the standard does not specify how to construct bursts. In this paper, we propose a heuristic burst construction algorithm, called HuB, to improve the downlink capacity in IEEE 802.16 OFDMA systems. To increase the downlink capacity, during burst constructions HuB reduces resource wastage by considering padded slots and unused slots, and reduces resource usage by considering the power boosting possibility. For simple burst constructions, HuB makes a HuB-tree, in which a node represents an available downlink resource and the edges of a node represent a burst rectangle region. Thus, making child nodes of a parent node is the same as constructing a burst in a given downlink resource. We analyzed the proposed algorithm and performed simulations to compare the performance of the proposed algorithm with existing algorithms. Our simulation results show that HuB provides improved downlink capacity over existing algorithms.

41. Hop-by-Hop Routing in Wireless Mesh Networks with Bandwidth Guarantees (2012)
Wireless Mesh Network (WMN) has become an important edge network to provide Internet access to remote areas and wireless connections in a metropolitan scale. In this paper, we study the problem of identifying the maximum available bandwidth path, a fundamental issue in supporting quality-of-service in WMNs. Due to interference among links, bandwidth, a well-known bottleneck metric in wired networks, is neither concave nor additive in wireless networks. We propose a new path weight which captures the available path bandwidth information. We formally prove that our hop-by-hop routing protocol based on the new path weight satisfies the consistency and loop-freeness requirements. The consistency property guarantees that each node makes a proper packet forwarding decision, so that a data packet does traverse the intended path. Our extensive simulation experiments also show that our proposed path weight outperforms existing path metrics in identifying high-throughput paths.
42. Jointly Optimal Source-Flow, Transmit-Power, and Sending-Rate Control for Maximum-Throughput Delivery of VBR Traffic over Faded Links (2012)
Emerging media overlay networks for wireless applications aim at delivering Variable Bit Rate (VBR) encoded media contents to nomadic end users by exploiting the (fading-impaired and time-varying) access capacity offered by the "last-hop" wireless channel. In this application scenario, a still open question concerns the closed-form design of control policies that maximize the average throughput sent over the wireless last hop, under constraints on the maximum connection bandwidth available at the Application (APP) layer, the queue capacity available at the Data Link (DL) layer, and the average and peak energies sustained by the Physical (PHY) layer. The approach we follow relies on the maximization on a per-slot basis of the throughput averaged over the fading statistic and conditioned on the queue state, without resorting to cumbersome iterative algorithms. The resulting optimal controller operates in a cross-layer fashion that involves the APP, DL, and PHY layers of the underlying protocol stack. Finally, we develop the operating conditions allowing the proposed controller also to maximize the unconditional average throughput (i.e., the throughput averaged over both queue and channel-state statistics). The carried out numerical tests give insight into the connection bandwidth-versus-queue delay trade-off achieved by the optimal controller.

43. Moderated Group Authoring System for Campus-Wide Workgroups (2012)
This paper describes the design and implementation of a file system-based distributed authoring system for campus-wide workgroups. We focus on documents for which changes by different group members are harder to automatically reconcile into a single version. Prior approaches relied on using group-aware editors. Others built collaborative middleware that allowed the group members to use traditional authoring tools. These approaches relied on an ability to automatically detect conflicting updates. They also operated on specific document types. Instead, our system relies on users to moderate and reconcile updates by other group members. Our file system-based approach also allows group members to modify any document type. We maintain one updateable copy of the shared content on each group member's node. We also hoard read-only copies of each of these updateable copies in any interested group member's node. All these copies are propagated to other group members at a rate that is solely dictated by the wireless user availability. The various copies are reconciled using the moderation operation; each group member manually incorporates updates from all the other group members into their own copy. The various document versions eventually converge into a single version through successive moderation operations. The system assists with this convergence process by using the made-with knowledge of all causal file system reads of contents from other replicas. An analysis using long-term wireless user availability traces from a university shows the strength of our asynchronous and distributed update propagation mechanism. Our user space file system prototype exhibits acceptable file system performance. A subjective evaluation showed that the moderation operation was intuitive for students.
44. Network Connectivity with a Family of Group Mobility Models (2012)
We investigate the communication range of the nodes necessary for network connectivity, which we call bidirectional connectivity, in a simple setting. Unlike in most existing studies, however, the locations or mobilities of the nodes may be correlated through group mobility: nodes are broken into groups, with each group comprising the same number of nodes, and lie on a unit circle. The locations of the nodes in the same group are not mutually independent, but are instead conditionally independent given the location of the group. We examine the distribution of the smallest communication range needed for bidirectional connectivity, called the critical transmission range (CTR), when both the number of groups and the number of nodes in a group are large. We first demonstrate that the CTR exhibits a parametric sensitivity with respect to the space each group occupies on the unit circle. Then, we offer an explanation for the observed sensitivity by identifying what is known as a very strong threshold and asymptotic bounds for the CTR.

45. OMAN: A Mobile Ad Hoc Network Design System (2012)
We present a software library that aids in the design of mobile ad hoc networks (MANET). The OMAN design engine works by taking a specification of network requirements and objectives, and allocates resources which satisfy the input constraints and maximize the communication performance objective. The tool is used to explore networking design options and challenges, including: power control, adaptive modulation, flow control, scheduling, mobility, uncertainty in channel models, and cross-layer design. The unaddressed niche which OMAN seeks to fill is the general framework for optimization of any network resource, under arbitrary constraints, and with any selection of multiple objectives. While simulation is an important part of measuring the effectiveness of implemented optimization techniques, the novelty and focus of OMAN is on proposing novel network design algorithms, aggregating existing approaches, and providing a general framework for a network designer to test out new proposed resource allocation methods. In this paper, we present a high-level view of the OMAN architecture, review specific mathematical models used in the network representation, and show how OMAN is used to evaluate tradeoffs in MANET design. Specifically, we cover three case studies of optimization. The first case is robust power control under uncertain channel information for a single physical layer snapshot. The second case is scheduling with the availability of directional radiation patterns. The third case is optimizing topology through movement planning of relay nodes.
46. Robust Topology Engineering in Multiradio Multichannel Wireless Networks (2012)
Topology engineering concerns the problem of automatic determination of physical layer parameters to form a network with desired properties. In this paper, we investigate the joint power control, channel assignment, and radio interface selection for robust provisioning of link bandwidth in infrastructure multiradio multichannel wireless networks in the presence of channel variability and external interference. To characterize the logical relationship between spatial contention constraints and transmit power, we formulate the joint power control and radio-channel assignment as a generalized disjunctive programming problem. The generalized Benders decomposition technique is applied for decomposing the radio-channel assignment (combinatorial constraints) and network resource allocation (continuous constraints) so that the problem can be solved efficiently. The proposed algorithm is guaranteed to converge to the optimal solution within a finite number of iterations. We have evaluated our scheme using traces collected from two wireless testbeds and simulation studies in Qualnet. Experiments show that the proposed algorithm is superior to existing schemes in providing a larger interference margin, and reducing outage and packet loss probabilities.
47. SenseLess: A Database-Driven White Spaces Network (2012)
The 2010 FCC ruling on white spaces proposes relying on a database of incumbents as the primary means of determining white space availability at any white space device (WSD). While the ruling provides broad guidelines for the database, the specifics of its design, features, implementation, and use are yet to be determined. Furthermore, architecting a network where all WSDs rely on the database raises several systems and networking challenges that have remained unexplored. Also, the ruling treats the database only as a storehouse for incumbents. We believe that the mandated use of the database has an additional opportunity: a means to dynamically manage the RF spectrum. Motivated by this opportunity, in this paper, we present SenseLess, a database-driven white spaces network. As suggested by its very name, in SenseLess, WSDs rely on a database service to determine white spaces availability as opposed to spectrum sensing. The service uses a combination of an up-to-date database of incumbents, sophisticated signal propagation modeling, and an efficient content dissemination mechanism to ensure efficient, scalable, and safe white space network operation. We build, deploy, and evaluate SenseLess and compare our results to ground truth spectrum measurements. We present the unique system design considerations that arise due to operating over the white spaces. We also evaluate its efficiency and scalability. To the best of our knowledge, this is the first paper that identifies and examines the systems and networking challenges that arise from operating a white space network which is solely dependent on a channel occupancy database.

48. Smooth Trade-Offs between Throughput and Delay in Mobile Ad Hoc Networks (2012)
Throughput capacity in mobile ad hoc networks has been studied extensively under many different mobility models. However, most previous research assumes global mobility, and the results show that a constant per-node throughput can be achieved at the cost of very high delay.
Thus, there is a big gap here, i.e., either low throughput and low delay in static networks or high throughput and high delay in mobile networks. In this paper, employing a practical restricted random mobility model, we try to fill this gap. Specifically, we assume that a network of unit area with n nodes is evenly divided into cells with an area of n^(-2α), each of which is further evenly divided into squares with an area of n^(-2β) (0 ≤ α ≤ β ≤ 1/2). All nodes can only move inside the cell in which they are initially distributed, and at the beginning of each time slot, every node moves from its current square to a uniformly chosen point in a uniformly chosen adjacent square. By proposing a new multihop relay scheme, we present smooth trade-offs between throughput and delay by controlling nodes' mobility. We also consider a network of area n^γ (0 ≤ γ ≤ 1) and find that network size does not affect the results obtained before.

49. Spectrum-Aware Mobility Management in Cognitive Radio Cellular Networks (2012)
Cognitive radio (CR) networks have been proposed as a solution to both spectrum inefficiency and spectrum scarcity problems. However, they face several challenges based on the fluctuating nature of the available spectrum, making it more difficult to support seamless communications, especially in CR cellular networks. In this paper, a spectrum-aware mobility management scheme is proposed for CR cellular networks. First, a novel network architecture is introduced to mitigate heterogeneous spectrum availability. Based on this architecture, a unified mobility management framework is developed to support diverse mobility events in CR networks, which consists of spectrum mobility management, user mobility management, and intercell resource allocation. The spectrum mobility management scheme determines a target cell and spectrum band for CR users adaptively, dependent on time-varying spectrum opportunities, leading to an increase in cell capacity. In the user mobility management scheme, a mobile user selects a proper handoff mechanism so as to minimize the switching latency at the cell boundary by considering spatially heterogeneous spectrum availability. Intercell resource allocation helps to improve the performance of both mobility management schemes by efficiently sharing spectrum resources with multiple cells. Simulation results show that the proposed method can achieve better performance than conventional handoff schemes in terms of both cell capacity and mobility support in communications.
50. Stateless Multicast Protocol for Ad Hoc Networks
Multicast routing protocols typically rely on the a priori creation of a multicast tree (or mesh), which requires the individual nodes to maintain state information. In dynamic networks with bursty traffic, where long periods of silence are expected between the bursts of data, this multicast state maintenance adds a large amount of communication, processing, and memory overhead for no benefit to the application. Thus, we have developed a stateless receiver-based multicast (RBMulticast) protocol that simply uses a list of the multicast members' (e.g., sinks') addresses, embedded in packet headers, to enable receivers to decide the best way to forward the multicast traffic. This protocol, called Receiver-Based Multicast, exploits the knowledge of the geographic locations of the nodes to remove the need for costly state maintenance (e.g., tree/mesh/neighbor table maintenance), making it ideally suited for multicasting in dynamic networks. RBMulticast was implemented in the OPNET simulator and tested using a sensor network implementation. Both simulation and experimental results confirm that RBMulticast provides high success rates and low delay without the burden of state maintenance.

TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON IMAGE PROCESSING

1. Image Super-Resolution With Sparse Neighbor Embedding (2012)
In this paper, we propose a sparse neighbor selection scheme for SR reconstruction. We first predetermine a larger number of neighbors as potential candidates and develop an extended Robust-SL0 algorithm to simultaneously find the neighbors and to solve the reconstruction weights. Recognizing that the k-nearest neighbors (k-NN) for reconstruction should have similar local geometric structures based on clustering, we employ a local statistical feature, namely histograms of oriented gradients (HoG) of low-resolution (LR) image patches, to perform such clustering.

2. Scalable Coding of Encrypted Images (2012)
This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. Then, the data of the quantized subimage and coefficients are regarded as a set of bit streams.
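The encryption step described in the entry above can be sketched directly in Java: each pixel is masked by modulo-256 addition with a pseudorandom stream derived from a secret key, and unmasked with the same stream. java.util.Random stands in for a proper keyed stream generator purely for illustration; the paper does not prescribe this particular generator.

import java.util.Random;

public class Modulo256Mask {
    public static int[] mask(int[] pixels, long secretKey) {
        Random prng = new Random(secretKey);
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (pixels[i] + prng.nextInt(256)) & 0xFF;       // (p + r) mod 256
        }
        return out;
    }

    public static int[] unmask(int[] cipher, long secretKey) {
        Random prng = new Random(secretKey);                        // same key -> same stream
        int[] out = new int[cipher.length];
        for (int i = 0; i < cipher.length; i++) {
            out[i] = (cipher[i] - prng.nextInt(256) + 256) & 0xFF;  // (c - r) mod 256
        }
        return out;
    }
}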
  • 70. Encrypted Images coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. Then, the data of quantized sub image and coefficients are regarded as a set of bit streams. 3. PDE-Based The proposed model is based on using the single 2012 Enhancement of Color vectors of the gradient magnitude and the second Images in RGB Space derivatives as a manner to relate different color components of the image. This model can be viewed as a generalization of the Bettahar–Stambouli filter to multivalued images. The proposed algorithm is more efficient than the mentioned filter and some previous works at color images denoising and deblurring without creating false colors 4. Abrupt Motion The robust tracking of abrupt motion is a challenging 2012 Tracking Via task in computer vision due to its large motion Intensively Adaptive uncertainty. While various particle filters and Markov-Chain Monte conventional Markov-chain Monte Carlo (MCMC) Carlo Sampling methods have been proposed for visual tracking, these methods often suffer from the well-known local-trap problem or from poor convergence rate. In this paper, we propose a novel sampling-based tracking scheme for the abrupt motion problem in the Bayesian filtering framework. To effectively handle the local- trap problem, we first introduce the stochastic approximation Monte Carlo (SAMC) sampling method into the Bayesian filter tracking framework, in which the filtering distribution is adaptively estimated as the sampling proceeds, and thus, a good approximation to the target distribution is achieved. In addition, we propose a new MCMC sampler with intensive adaptation to further improve the sampling efficiency, which combines a density-grid-based predictive model with the SAMC sampling, to give a proposal adaptation scheme. The proposed method is effective and computationally efficient in addressing the abrupt motion problem. We compare our approach with several alternative tracking algorithms, and extensive experimental results are presented to demonstrate the effectiveness and the efficiency of the proposed method in dealing with various types of abrupt motions. 5. Vehicle Detection in We present an automatic vehicle detection system for 2012 Aerial Surveillance aerial surveillance in this paper. In this system, we Using Dynamic escape from the stereotype and existing frameworks of
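Entry 2's encryption step is concrete enough to sketch: each pixel is masked by modulo-256 addition with a key-derived pseudorandom byte, and decryption subtracts the same stream. The Java sketch below is only illustrative; the class name and the use of java.util.Random seeded with the key are assumptions, not the paper's actual keyed generator.

import java.util.Random;

// Illustrative modulo-256 masking of pixel values with a keyed pseudorandom stream.
public class ModuloMask {
    public static int[][] encrypt(int[][] pixels, long secretKey) {
        Random prng = new Random(secretKey);          // stand-in for a keyed generator (assumption)
        int[][] cipher = new int[pixels.length][];
        for (int i = 0; i < pixels.length; i++) {
            cipher[i] = new int[pixels[i].length];
            for (int j = 0; j < pixels[i].length; j++) {
                cipher[i][j] = (pixels[i][j] + prng.nextInt(256)) % 256;   // modulo-256 masking
            }
        }
        return cipher;
    }

    public static int[][] decrypt(int[][] cipher, long secretKey) {
        Random prng = new Random(secretKey);          // same key regenerates the same mask
        int[][] plain = new int[cipher.length][];
        for (int i = 0; i < cipher.length; i++) {
            plain[i] = new int[cipher[i].length];
            for (int j = 0; j < cipher[i].length; j++) {
                plain[i][j] = ((cipher[i][j] - prng.nextInt(256)) % 256 + 256) % 256;
            }
        }
        return plain;
    }
}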
5. Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks (2012)
We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we escape from the stereotype and existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding window based. We design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and nonvehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy for detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate flexibility and good generalization abilities of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.

6. A Secret-Sharing-Based Method for Authentication of Grayscale Document Images via the Use of the PNG Image With a Data Repair Capability (2012)
A new blind authentication method based on the secret sharing technique with a data repair capability for grayscale document images via the use of the Portable Network Graphics (PNG) image is proposed. An authentication signal is generated for each block of a grayscale document image, which, together with the binarized block content, is transformed into several shares using the Shamir secret sharing scheme. The involved parameters are carefully chosen so that as many shares as possible are generated and embedded into an alpha channel plane. The alpha channel plane is then combined with the original grayscale image to form a PNG image. During the embedding process, the computed share values are mapped into a range of alpha channel values near their maximum value of 255 to yield a transparent stego-image with a disguise effect. In the process of image authentication, an image block is marked as tampered if the authentication signal computed from the current block content does not match that extracted from the shares embedded in the alpha channel plane. Data repairing is then applied to each tampered block by a reverse Shamir scheme after collecting two shares from unmarked blocks. Measures for protecting the security of the data hidden in the alpha channel are also proposed. Good experimental results prove the effectiveness of the proposed method for real applications. (A minimal Shamir secret-sharing sketch follows this block.)

7. Learn to Personalized Image Search from the Photo Sharing Websites – projects 2012
Increasingly developed social sharing websites, like Flickr and Youtube, allow users to create, share, annotate and comment medias. The large-scale user-generated meta-data not only facilitate users in sharing and organizing multimedia content, but provide useful information to improve media retrieval and management. Personalized search serves as one of such examples where the web search experience is improved by generating the returned list according to the modified user search intents. In this paper, we exploit the social annotations and propose a novel framework simultaneously considering the user and query relevance to learn to personalized image search. The basic premise is to embed the user preference and query-related search intent into user-specific topic spaces. Since the users' original annotation is too sparse for topic modeling, we need to enrich users' annotation pool before user-specific topic spaces construction. The proposed framework contains two components:

8. A Discriminative Model of Motion and Cross Ratio for View-Invariant Action Recognition (2012)
Action recognition is very important for many applications such as video surveillance, human-computer interaction, and so on; view-invariant action recognition is hot and difficult as well in this field. In this paper, a new discriminative model is proposed for video-based view-invariant action recognition. In the discriminative model, motion pattern and view invariants are perfectly fused together to make a better combination of invariance and distinctiveness. We address a series of issues, including interest point detection in image sequences, motion feature extraction and description, and view-invariant calculation. First, motion detection is used to extract motion information from videos, which is much more efficient than traditional background modeling and tracking-based methods. Second, as for feature representation, we extract a variety of statistical information from motion and a view-invariant feature based on cross ratio. Last, in the action modeling, we apply a discriminative probabilistic model, the hidden conditional random field, to model motion patterns and view invariants, by which we could fuse the statistics of motion and the projective invariability of cross ratio in one framework. Experimental results demonstrate that our method can improve the ability to distinguish different categories of actions with high robustness to view change in real circumstances.
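The building block in entry 6 is Shamir's (k, n) secret sharing. As a hedged illustration of just that primitive (not the paper's embedding into the PNG alpha channel or its reverse-Shamir repair), the sketch below splits one secret byte into shares over GF(257) and recovers it from any k of them by Lagrange interpolation. The class name and the choice of prime are assumptions made here for the example.

import java.util.Random;

// Minimal (k, n) Shamir secret sharing over GF(257) for a single secret byte.
public class ShamirSketch {
    private static final int P = 257;                 // small prime larger than 255
    private static final Random RND = new Random();

    // Split a secret byte (0..255) into n shares; any k of them recover it.
    public static int[][] split(int secret, int k, int n) {
        int[] coeff = new int[k];
        coeff[0] = secret;
        for (int i = 1; i < k; i++) coeff[i] = RND.nextInt(P);
        int[][] shares = new int[n][2];               // each share is (x, f(x))
        for (int x = 1; x <= n; x++) {
            int y = 0;
            for (int i = k - 1; i >= 0; i--) y = (y * x + coeff[i]) % P;  // Horner evaluation
            shares[x - 1][0] = x;
            shares[x - 1][1] = y;
        }
        return shares;
    }

    // Recover the secret from any k shares by Lagrange interpolation at x = 0.
    public static int recover(int[][] shares) {
        long secret = 0;
        for (int i = 0; i < shares.length; i++) {
            long num = 1, den = 1;
            for (int j = 0; j < shares.length; j++) {
                if (i == j) continue;
                num = num * (P - shares[j][0]) % P;                         // (0 - x_j) mod P
                den = den * ((shares[i][0] - shares[j][0] + P) % P) % P;    // (x_i - x_j) mod P
            }
            long term = shares[i][1] * num % P * modInverse(den) % P;
            secret = (secret + term) % P;
        }
        return (int) secret;
    }

    private static long modInverse(long a) {          // Fermat's little theorem: a^(P-2) mod P
        long result = 1, base = a % P, e = P - 2;
        while (e > 0) {
            if ((e & 1) == 1) result = result * base % P;
            base = base * base % P;
            e >>= 1;
        }
        return result;
    }
}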
9. A General Fast Registration Framework by Learning Deformation–Appearance Correlation (2012)
In this paper, we propose a general framework for performance improvement of the current state-of-the-art registration algorithms in terms of both accuracy and computation time. The key concept involves rapid prediction of a deformation field for registration initialization, which is achieved by a statistical correlation model learned between image appearances and deformation fields. This allows us to immediately bring a template image as close as possible to a subject image that we need to register. The task of the registration algorithm is hence reduced to estimating the small deformation between the subject image and the initially warped template image, i.e., the intermediate template (IT). Specifically, to obtain a good subject-specific initial deformation, support vector regression is utilized to determine the correlation between image appearances and their respective deformation fields. When registering a new subject onto the template, an initial deformation field is first predicted based on the subject's image appearance for generating an IT. With the IT, only the residual deformation needs to be estimated, presenting much less challenge to the existing registration algorithms. Our learning-based framework affords two important advantages: 1) by requiring only the estimation of the residual deformation between the IT and the subject image, the computation time can be greatly reduced; 2) by leveraging good deformation initialization, local minima giving suboptimal solutions could be avoided. Our framework has been extensively evaluated using medical images from different sources, and the results indicate that, on top of accuracy improvement, significant registration speedup can be achieved, as compared with the case where no prediction of initial deformation is performed.

10. A Geometric Construction of Multivariate Sinc Functions (2012)
We present a geometric framework for explicit derivation of multivariate sampling functions (sinc) on multidimensional lattices. The approach leads to a generalization of the link between sinc functions and the Lagrange interpolation in the multivariate setting. Our geometric approach also provides a frequency partition of the spectrum that leads to a nonseparable extension of the 1-D Shannon (sinc) wavelets to the multivariate setting. Moreover, we propose a generalization of the Lanczos window function that provides a practical and unbiased approach for signal reconstruction on sampling lattices. While this framework is general for lattices of any dimension, we specifically characterize all 2-D and 3-D lattices and show the detailed derivations for 2-D hexagonal, body-centered cubic (BCC), and face-centered cubic (FCC) lattices. Both visual and numerical comparisons validate the theoretical expectations about the superiority of the BCC and FCC lattices over the commonly used Cartesian lattice.

11. A Novel Algorithm for View and Illumination Invariant Image Matching (2012)
The challenges in local-feature-based image matching are variations of view and illumination. Many methods have been recently proposed to address these problems by using invariant feature detectors and distinctive descriptors. However, the matching performance is still unstable and inaccurate, particularly when large variation in view or illumination occurs. In this paper, we propose a view and illumination invariant image-matching method. We iteratively estimate the relationship of the relative view and illumination of the images, transform the view of one image to the other, and normalize their illumination for accurate matching. Our method does not aim to increase the invariance of the detector but to improve the accuracy, stability, and reliability of the matching results. The performance of matching is significantly improved and is not affected by the changes of view and illumination in a valid range. The proposed method would fail when the initial view and illumination method fails, which gives us a new sight to evaluate the traditional detectors. We propose two novel indicators for detector evaluation, namely, valid angle and valid illumination, which reflect the maximum allowable change in view and illumination, respectively. Extensive experimental results show that our method improves the traditional detector significantly, even under large variations, and the two indicators are much more distinctive.

12. A Spectral and Spatial Measure of Local Perceived Sharpness in Natural Images (2012)
This paper presents an algorithm designed to measure the local perceived sharpness in an image. Our method utilizes both spectral and spatial properties of the image: For each block, we measure the slope of the magnitude spectrum and the total spatial variation. These measures are then adjusted to account for visual perception, and then, the adjusted measures are combined via a weighted geometric mean. The resulting measure, i.e., S3 (spectral and spatial sharpness), yields a perceived sharpness map in which greater values denote perceptually sharper regions. This map can be collapsed into a single index, which quantifies the overall perceived sharpness of the whole image. We demonstrate the utility of the S3 measure for within-image and across-image sharpness prediction, no-reference image quality assessment of blurred images, and monotonic estimation of the standard deviation of the impulse response used in Gaussian blurring. We further evaluate the accuracy of S3 in local sharpness estimation by comparing S3 maps to sharpness maps generated by human subjects. We show that S3 can generate sharpness maps which are highly correlated with the human-subject maps. (A small sketch of the combination step follows this block.)

13. A Unified Feature and Instance Selection Framework Using Optimum Experimental Design (2012)
The goal of feature selection is to identify the most informative features for compact representation, whereas the goal of active learning is to select the most informative instances for prediction. Previous studies separately address these two problems, despite the fact that selecting features and instances are dual operations over a data matrix. In this paper, we consider the novel problem of simultaneously selecting the most informative features and instances and develop a solution from the perspective of optimum experimental design. That is, by using the selected features as the new representation and the selected instances as training data, the variance of the parameter estimate of a learning function can be minimized. Specifically, we propose a novel approach, which is called Unified criterion for Feature and Instance selection (UFI), to simultaneously identify the most informative features and instances that minimize the trace of the parameter covariance matrix. A greedy algorithm is introduced to efficiently solve the optimization problem. Experimental results on two benchmark data sets demonstrate the effectiveness of our proposed method.
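Entry 12's final combination step is a weighted geometric mean of a per-block spectral measure and a spatial (total-variation) measure. The sketch below illustrates only that combination under assumed names and an example weight; the paper's spectral-slope computation and perceptual adjustments are not reproduced here.

// Illustrative per-block combination of a spectral and a spatial sharpness measure.
public class SharpnessCombine {
    // Total spatial variation of a grayscale block (sum of absolute neighbor differences).
    public static double totalVariation(double[][] block) {
        double tv = 0;
        for (int i = 0; i < block.length; i++) {
            for (int j = 0; j < block[i].length; j++) {
                if (i + 1 < block.length) tv += Math.abs(block[i][j] - block[i + 1][j]);
                if (j + 1 < block[i].length) tv += Math.abs(block[i][j] - block[i][j + 1]);
            }
        }
        return tv;
    }

    // Weighted geometric mean of the (already perception-adjusted) measures;
    // alpha = 0.5 would weight them equally and is only an example value.
    public static double s3(double spectralMeasure, double spatialMeasure, double alpha) {
        return Math.pow(spectralMeasure, alpha) * Math.pow(spatialMeasure, 1.0 - alpha);
    }
}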
14. An Algorithm for the Contextual Adaption of SURF Octave Selection With Good Matching Performance: Best Octaves (2012)
Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance.

15. An Efficient Camera Calibration Technique Offering Robustness and Accuracy Over a Wide Range of Lens Distortion (2012)
In the field of machine vision, camera calibration refers to the experimental determination of a set of parameters that describe the image formation process for a given analytical model of the machine vision system. Researchers working with low-cost digital cameras and off-the-shelf lenses generally favor camera calibration techniques that do not rely on specialized optical equipment, modifications to the hardware, or an a priori knowledge of the vision system. Most of the commonly used calibration techniques are based on the observation of a single 3-D target or multiple planar (2-D) targets with a large number of control points. This paper presents a novel calibration technique that offers improved accuracy, robustness, and efficiency over a wide range of lens distortion. This technique operates by minimizing the error between the reconstructed image points and their experimentally determined counterparts in "distortion free" space. This facilitates the incorporation of the exact lens distortion model. In addition, expressing spatial orientation in terms of unit quaternions greatly enhances the proposed calibration solution by formulating a minimally redundant system of equations that is free of singularities. Extensive performance benchmarking consisting of both computer simulation and experiments confirmed higher accuracy in calibration regardless of the amount of lens distortion present in the optics of the camera. This paper also experimentally confirmed that a comprehensive lens distortion model including higher order radial and tangential distortion terms improves calibration accuracy.

16. Bayesian Estimation for Optimized Structured Illumination Microscopy (2012)
Structured illumination microscopy is a recent imaging technique that aims at going beyond the classical optical resolution by reconstructing high-resolution (HR) images from low-resolution (LR) images acquired through modulation of the transfer function of the microscope. The classical implementation has a number of drawbacks, such as requiring a large number of images to be acquired and parameters to be manually set in an ad-hoc manner, that have, until now, hampered its wide dissemination. Here, we present a new framework based on a Bayesian inverse problem formulation approach that enables the computation of one HR image from a reduced number of LR images and has no specific constraints on the modulation. Moreover, it permits to automatically estimate the optimal reconstruction hyperparameters and to compute an uncertainty bound on the estimated values. We demonstrate through numerical evaluations on simulated data and examples on real microscopy data that our approach represents a decisive advance for a wider use of HR microscopy through structured illumination.

17. Binarization of Low-Quality Barcode Images Captured by Mobile Phones Using Local Window of Adaptive Location and Size (2012)
It is difficult to directly apply existing binarization approaches to the barcode images captured by mobile devices due to their low quality. This paper proposes a novel scheme for the binarization of such images. The barcode and background regions are differentiated by the number of edge pixels in a search window. Unlike existing approaches that center the pixel to be binarized with a window of fixed size, we propose to shift the window center to the nearest edge pixel so that a balance between the number of object and background pixels can be achieved. The window size is adaptive either to the minimum distance to edges or to the minimum element width in the barcode. The threshold is calculated using the statistics in the window. Our proposed method has demonstrated its capability in handling the nonuniform illumination problem and the size variation of objects. Experimental results conducted on 350 images captured by five mobile phones achieve about 100% recognition rate in good lighting conditions, and about 95% and 83% in bad lighting conditions. Comparisons made with nine existing binarization methods demonstrate the advancement of our proposed scheme. (A hedged sketch of the adaptive-window thresholding idea is given below.)
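Entry 17's key idea is that the thresholding window follows the nearest edge pixel rather than the pixel being binarized. The sketch below illustrates that idea only: the brute-force nearest-edge search, the fixed window size, and the use of the plain window mean as the threshold are all simplifying assumptions, not the paper's adaptive sizing rule.

// Illustrative adaptive-location window binarization: the window is centred on
// the nearest edge pixel and the threshold is the mean intensity inside it.
public class AdaptiveWindowBinarizer {
    public static boolean[][] binarize(int[][] gray, boolean[][] edges, int half, int searchRadius) {
        int h = gray.length, w = gray[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int[] c = nearestEdge(edges, x, y, searchRadius);
                double threshold = windowMean(gray, c[0], c[1], half);
                out[y][x] = gray[y][x] > threshold;          // true = background, false = bar
            }
        }
        return out;
    }

    private static int[] nearestEdge(boolean[][] edges, int x, int y, int radius) {
        int bestX = x, bestY = y, bestD = Integer.MAX_VALUE;
        for (int dy = -radius; dy <= radius; dy++) {
            for (int dx = -radius; dx <= radius; dx++) {
                int nx = x + dx, ny = y + dy;
                if (ny < 0 || ny >= edges.length || nx < 0 || nx >= edges[0].length) continue;
                int d = dx * dx + dy * dy;
                if (edges[ny][nx] && d < bestD) { bestD = d; bestX = nx; bestY = ny; }
            }
        }
        return new int[]{bestX, bestY};                      // falls back to (x, y) if no edge is found
    }

    private static double windowMean(int[][] gray, int cx, int cy, int half) {
        long sum = 0; int count = 0;
        for (int dy = -half; dy <= half; dy++) {
            for (int dx = -half; dx <= half; dx++) {
                int nx = cx + dx, ny = cy + dy;
                if (ny < 0 || ny >= gray.length || nx < 0 || nx >= gray[0].length) continue;
                sum += gray[ny][nx]; count++;
            }
        }
        return count == 0 ? 0 : (double) sum / count;
    }
}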
18. B-Spline Explicit Active Surfaces: An Efficient Framework for Real-Time 3-D Region-Based Segmentation (2012)
A new formulation of active contours based on explicit functions has been recently suggested. This novel framework allows real-time 3-D segmentation since it reduces the dimensionality of the segmentation problem. In this paper, we propose a B-spline formulation of this approach, which further improves the computational efficiency of the algorithm. We also show that this framework allows evolving the active contour using local region-based terms, thereby overcoming the limitations of the original method while preserving computational speed. The feasibility of real-time 3-D segmentation is demonstrated using simulated and medical data such as liver computer tomography and cardiac ultrasound images.

19. Change Detection in Synthetic Aperture Radar Images based on Image Fusion and Fuzzy Clustering (2012)
This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.

20. Color Constancy for Multiple Light Sources (2012)
Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we will address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to global color constancy methods. (A minimal patch-based illuminant-estimation sketch follows this block.)

21. Coupled Bias–Variance Tradeoff for Cross-Pose Face Recognition (2012)
Subspace-based face representation can be looked at as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be more stable against a pose difference. With this basic idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance.
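Entry 20 extends existing color constancy algorithms by applying them per image patch instead of globally. As a hedged illustration of that local step, the sketch below computes one illuminant estimate per non-overlapping patch using the simple gray-world assumption as a stand-in base algorithm; the paper itself extends several algorithms and follows this with robust combination and a diagonal-model correction, which are not shown.

// Illustrative patch-based illuminant estimation using the gray-world assumption:
// each patch's illuminant direction is the normalised mean of its RGB values.
public class PatchGrayWorld {
    // rgb[y][x] is a length-3 array; returns one normalised RGB estimate per patch.
    public static double[][][] localEstimates(double[][][] rgb, int patch) {
        int h = rgb.length, w = rgb[0].length;
        int rows = (h + patch - 1) / patch, cols = (w + patch - 1) / patch;
        double[][][] est = new double[rows][cols][3];
        for (int by = 0; by < rows; by++) {
            for (int bx = 0; bx < cols; bx++) {
                double[] sum = new double[3];
                for (int y = by * patch; y < Math.min(h, (by + 1) * patch); y++) {
                    for (int x = bx * patch; x < Math.min(w, (bx + 1) * patch); x++) {
                        for (int c = 0; c < 3; c++) sum[c] += rgb[y][x][c];
                    }
                }
                double norm = Math.sqrt(sum[0] * sum[0] + sum[1] * sum[1] + sum[2] * sum[2]);
                for (int c = 0; c < 3; c++) est[by][bx][c] = norm == 0 ? 0 : sum[c] / norm;
            }
        }
        return est;
    }
}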
22. Depth From Motion and Optical Blur With an Unscented Kalman Filter (2012)
Space-variantly blurred images of a scene contain valuable depth information. In this paper, our objective is to recover the 3-D structure of a scene from motion blur/optical defocus. In the proposed approach, the difference of blur between two observations is used as a cue for recovering depth, within a recursive state estimation framework. For motion blur, we use an unblurred-blurred image pair. Since the relationship between the observation and the scale factor of the point spread function associated with the depth at a point is nonlinear, we propose and develop a formulation of the unscented Kalman filter for depth estimation. There are no restrictions on the shape of the blur kernel. Furthermore, within the same formulation, we address a special and challenging scenario of depth from defocus with translational jitter. The effectiveness of our approach is evaluated on synthetic as well as real data, and its performance is also compared with contemporary techniques.

23. Design of Almost Symmetric Orthogonal Wavelet Filter Bank Via Direct Optimization (2012)
It is a well-known fact that (compact-support) dyadic wavelets [based on the two-channel filter banks (FBs)] cannot be simultaneously orthogonal and symmetric. Although orthogonal wavelets have the energy preservation property, biorthogonal wavelets are preferred in image processing applications because of their symmetric property. In this paper, a novel method is presented for the design of almost symmetric orthogonal wavelet FBs. Orthogonality is structurally imposed by using the unnormalized lattice structure, and this leads to an objective function which is relatively simple to optimize. The designed filters have good frequency response, flat group delay, almost symmetric filter coefficients, and a symmetric wavelet function.

24. Design of Interpolation Functions for Subpixel-Accuracy Stereo-Vision Systems (2012)
Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions.

25. Entropy-Functional-Based Online Adaptive Decision Fusion Framework With Application to Wildfire Detection in Video (2012)
In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing the subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.

26. Fast Semantic Diffusion for Large-Scale Context-Based Image and Video Annotation (2012)
Exploring context information for visual recognition has recently received significant research attention. This paper proposes a novel and highly efficient approach, which is named semantic diffusion, to utilize semantic context for large-scale image and video annotation. Starting from the initial annotation of a large number of semantic concepts (categories), obtained by either machine learning or manual tagging, the proposed approach refines the results using a graph diffusion technique, which recovers the consistency and smoothness of the annotations over a semantic graph. Different from the existing graph-based learning methods that model relations among data samples, the semantic graph captures context by treating the concepts as nodes and the concept affinities as the weights of edges. In particular, our approach is capable of simultaneously improving annotation accuracy and adapting the concept affinities to new test data. The adaptation provides a means to handle domain change between training and test data, which often occurs in practice. Extensive experiments are conducted to improve concept annotation results using Flickr images and TV program videos. Results show consistent and significant performance gain (10 on both image and video data sets). Source codes of the proposed algorithms are available online.
27. Gradient-Based Image Recovery Methods From Incomplete Fourier Measurements (2012)
A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem, i.e., one based on least-square optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.

28. Groupwise Registration of Multimodal Images by an Efficient Joint Entropy Minimization Scheme (2012)
Groupwise registration is concerned with bringing a group of images into the best spatial alignment. If images in the group are from different modalities, then the intensity correspondences across the images can be modeled by the joint density function (JDF) of the co-occurring image intensities. We propose a so-called treecode registration method for groupwise alignment of multimodal images that uses a hierarchical intensity-space subdivision scheme through which an efficient yet sufficiently accurate estimation of the (high-dimensional) JDF based on the Parzen kernel method is computed. To simultaneously align a group of images, a gradient-based joint entropy minimization was employed that also uses the same hierarchical intensity-space subdivision scheme. If the Hilbert kernel is used for the JDF estimation, then the treecode method requires no data-dependent bandwidth selection and is thus fully automatic. The treecode method was compared with the ensemble clustering (EC) method on four different publicly available multimodal image data sets and on a synthetic monomodal image data set. The obtained results indicate that the treecode method has similar and, for two data sets, even superior performance compared to the EC method in terms of registration error and success rate. The obtained good registration performance can be mostly attributed to the sufficiently accurate estimation of the JDF, which is computed through the hierarchical intensity-space subdivision scheme, that captures all the important features needed to detect the correct intensity correspondences across a multimodal group of images undergoing registration.

29. Higher Degree Total Variation (HDTV) Regularization for Image Recovery (2012)
We introduce novel image regularization penalties to overcome the practical problems associated with the classical total variation (TV) scheme. Motivated by novel reinterpretations of the classical TV regularizer, we derive two families of functionals involving higher degree partial image derivatives; we term these families isotropic and anisotropic higher degree TV (HDTV) penalties, respectively. The isotropic penalty is the mixed norm of the directional image derivatives, while the anisotropic penalty is the separable norm of directional derivatives. These functionals inherit the desirable properties of standard TV schemes such as invariance to rotations and translations, preservation of discontinuities, and convexity. The use of mixed norms in isotropic penalties encourages the joint sparsity of the directional derivatives at each pixel, thus encouraging isotropic smoothing. In contrast, the fully separable norm in the anisotropic penalty ensures the preservation of discontinuities, while continuing to smooth along line-like features; this scheme thus enhances the line-like image characteristics analogous to standard TV. We also introduce efficient majorize-minimize algorithms to solve the resulting optimization problems. The numerical comparison of the proposed scheme with the classical TV penalty, current second-degree methods, and wavelet algorithms clearly demonstrates the performance improvement. Specifically, the proposed algorithms minimize the staircase and ringing artifacts that are common with TV and wavelet schemes, while better preserving the singularities. We also observe that the anisotropic HDTV penalty provides consistently improved reconstructions compared with the isotropic HDTV penalty.

30. Human Identification Using Finger Images (2012)
This paper presents a new approach to improve the performance of finger-vein identification systems presented in the literature. The proposed system simultaneously acquires the finger-vein and low-resolution fingerprint images and combines these two evidences using a novel score-level combination strategy. We examine the previously proposed finger-vein identification approaches and develop a new approach that illustrates its superiority over prior published efforts. The utility of low-resolution fingerprint images acquired from a webcam is examined to ascertain the matching performance from such images. We develop and investigate two new score-level combinations, i.e., holistic and nonlinear fusion, and comparatively evaluate them with more popular score-level fusion approaches to ascertain their effectiveness in the proposed system. The rigorous experimental results presented on a database of 6264 images from 156 subjects illustrate significant improvement in performance, i.e., both from the authentication and recognition experiments.

31. Image Fusion Using Higher Order Singular Value Decomposition (2012)
A novel higher order singular value decomposition (HOSVD)-based image fusion algorithm is proposed. The key points are given as follows: 1) Since image fusion depends on local information of source images, the proposed algorithm picks out informative image patches of source images to constitute the fused image by processing the divided subtensors rather than the whole tensor; 2) the sum of absolute values of the coefficients (SAVC) from HOSVD of subtensors is employed for activity-level measurement to evaluate the quality of the related image patch; and 3) a novel sigmoid-function-like coefficient-combining scheme is applied to construct the fused result. Experimental results show that the proposed algorithm is an alternative image fusion approach.
32. Image Segmentation Based on the Poincaré Map Method (2012)
Active contour models (ACMs) integrated with various kinds of external force fields to pull the contours to the exact boundaries have shown their powerful abilities in object segmentation. However, local minimum problems still exist within these models, particularly the vector field's "equilibrium issues." Different from traditional ACMs, within this paper, the task of object segmentation is achieved in a novel manner by the Poincaré map method in a defined vector field in view of dynamical systems. An interpolated swirling and attracting flow (ISAF) vector field is first generated for the observed image. Then, the states on the limit cycles of the ISAF are located by the convergence of Newton-Raphson sequences on the given Poincaré sections. Meanwhile, the periods of the limit cycles are determined. Consequently, the objects' boundaries are represented by integral equations with the corresponding converged states and periods. Experiments and comparisons with some traditional external force field methods are done to exhibit the superiority of the proposed method in cases of complex concave boundary segmentation, multiple-object segmentation, and initialization flexibility. In addition, it is more computationally efficient than traditional ACMs by solving the problem in some lower dimensional subspace without using level-set methods.

33. Implicit Polynomial Representation Through a Fast Fitting Error Estimation (2012)
This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of the coefficients of the implicit polynomial. Moreover, it is differentiable and has a smooth behavior. Hence, it can be used in any gradient-based optimization. In this paper, its use in a Levenberg-Marquardt framework is shown, which is particularly devoted to nonlinear least squares problems. The proposed estimation is a generalization of the gradient-based distance estimation, which is widely used in the literature. Experimental results, both in 2-D and 3-D data sets, are provided. Comparisons with state-of-the-art techniques are presented, showing the advantages of the proposed approach.

34. Integrating Segmentation Information for Improved MRF-Based Elastic Image Registration (2012)
In this paper, we propose a method to exploit segmentation information for elastic image registration using a Markov-random-field (MRF)-based objective function. MRFs are suitable for discrete labeling problems, and the labels are defined as the joint occurrence of displacement fields (for registration) and segmentation class probability. The data penalty is a combination of the image intensity (or gradient information) and the mutual dependence of registration and segmentation information. The smoothness is a function of the interaction between the defined labels. Since both terms are a function of registration and segmentation labels, the overall objective function captures their mutual dependence. A multiscale graph-cut approach is used to achieve subpixel registration and reduce the computation time. The user defines the object to be registered in the floating image, which is rigidly registered before applying our method. We test our method on synthetic image data sets with known levels of added noise and simulated deformations, and also on natural and medical images. Compared with other registration methods not using segmentation information, our proposed method exhibits greater robustness to noise and improved registration accuracy.

35. Iterative Narrowband-Based Graph Cuts Optimization for Geodesic Active Contours With Region Forces (GACWRF) (2012)
In this paper, an iterative narrow-band-based graph cuts (INBBGC) method is proposed to optimize the geodesic active contours with region forces (GACWRF) model for interactive object segmentation. Based on the cut metric on graphs proposed by Boykov and Kolmogorov, an NBBGC method is devised to compute the local minimization of GAC. An extension to an iterative manner, namely, INBBGC, is developed for less sensitivity to the initial curve. The INBBGC method is similar to graph-cuts-based active contour (GCBAC) presented by Xu, and their differences have been analyzed and discussed. We then integrate the region force into GAC. An improved INBBGC (IINBBGC) method is proposed to optimize the GACWRF model, and thus can effectively deal with the concave region and complicated real-world image segmentation. Two region force models, such as mean and probability models, are studied. Therefore, the GCBAC method can be regarded as a special case of our proposed IINBBGC method without region force. Our proposed algorithm has also been analyzed to be similar to the Grabcut method when the Gaussian mixture model region force is adopted and the band region is extended to the whole image. Thus, our proposed IINBBGC method can be regarded as a narrow-band-based Grabcut method or GCBAC with region force method. We apply our proposed IINBBGC algorithm on synthetic and real-world images to emphasize its performance, compared with other segmentation methods, such as the GCBAC and Grabcut methods.

36. PDE-Based Enhancement of Color Images in RGB Space (2012)
A novel method for color image enhancement is proposed as an extension of the scalar-diffusion-shock-filter coupling model, where noisy and blurred images are denoised and sharpened. The proposed model is based on using the single vectors of the gradient magnitude and the second derivatives as a manner to relate different color components of the image. This model can be viewed as a generalization of the Bettahar-Stambouli filter to multivalued images. The proposed algorithm is more efficient than the mentioned filter and some previous works at color image denoising and deblurring without creating false colors.

37. Polyview Fusion: A Strategy to Enhance Video-Denoising Algorithms (2012)
We propose a simple but effective strategy that aims to enhance the performance of existing video denoising algorithms, i.e., polyview fusion (PVF). The idea is to denoise the noisy video as a 3-D volume using a given base 2-D denoising algorithm but applied from multiple views (front, top, and side views). A fusion algorithm is then designed to merge the resulting multiple denoised videos into one, so that the visual quality of the fused video is improved. Extensive tests using a variety of base video-denoising algorithms show that the proposed PVF method leads to surprisingly significant and consistent gain in terms of both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) performance, particularly at high noise levels, where the improvement over state-of-the-art denoising algorithms is often more than 2 dB in PSNR. (A minimal sketch of the multi-view slicing and fusion is shown below.)
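Entry 37's core mechanism (apply a base 2-D denoiser to the video volume from the front, top, and side views, then fuse the three results) can be illustrated with a short sketch. Plain averaging is used below only as a placeholder for the paper's fusion rule, and all class and parameter names are invented for this example.

import java.util.function.UnaryOperator;

// Illustrative polyview denoising: slice the video along three axes, denoise each
// slice with a supplied 2-D denoiser, and fuse the three denoised volumes by averaging.
public class PolyviewFusion {
    // video[t][y][x]; denoiser2d maps one 2-D slice to its denoised version (same size).
    public static double[][][] denoise(double[][][] video, UnaryOperator<double[][]> denoiser2d) {
        int T = video.length, H = video[0].length, W = video[0][0].length;
        double[][][] front = new double[T][H][W];   // slices of constant t
        double[][][] top   = new double[T][H][W];   // slices of constant y
        double[][][] side  = new double[T][H][W];   // slices of constant x

        for (int t = 0; t < T; t++) front[t] = denoiser2d.apply(video[t]);

        for (int y = 0; y < H; y++) {
            double[][] slice = new double[T][W];
            for (int t = 0; t < T; t++) for (int x = 0; x < W; x++) slice[t][x] = video[t][y][x];
            double[][] d = denoiser2d.apply(slice);
            for (int t = 0; t < T; t++) for (int x = 0; x < W; x++) top[t][y][x] = d[t][x];
        }
        for (int x = 0; x < W; x++) {
            double[][] slice = new double[T][H];
            for (int t = 0; t < T; t++) for (int y = 0; y < H; y++) slice[t][y] = video[t][y][x];
            double[][] d = denoiser2d.apply(slice);
            for (int t = 0; t < T; t++) for (int y = 0; y < H; y++) side[t][y][x] = d[t][y];
        }

        double[][][] fused = new double[T][H][W];
        for (int t = 0; t < T; t++)
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++)
                    fused[t][y][x] = (front[t][y][x] + top[t][y][x] + side[t][y][x]) / 3.0;
        return fused;
    }
}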
38. Preconditioning for Edge-Preserving Image Super Resolution (2012)
We propose a simple preconditioning method for accelerating the solution of edge-preserving image super-resolution (SR) problems in which a linear shift-invariant point spread function is employed. Our technique involves reordering the high-resolution (HR) pixels in a similar manner to what is done in preconditioning methods for quadratic SR formulations. However, due to the edge-preserving requirements, the Hessian matrix of the cost function varies during the minimization process. We develop an efficient update scheme for the preconditioner in order to cope with this situation. Unlike some other acceleration strategies that round the displacement values between the low-resolution (LR) images on the HR grid, the proposed method does not sacrifice the optimality of the observation model. In addition, we describe a technique for preconditioning SR problems involving rational magnification factors. The use of such factors is motivated in part by the fact that, under certain circumstances, optimal SR zooms are nonintegers. We show that, by reordering the pixels of the LR images, the structure of the problem to solve is modified in such a way that preconditioners based on circulant operators can be used.

39. PSF Estimation via Gradient Domain Correlation (2012)
This paper proposes an efficient method to estimate the point spread function (PSF) of a blurred image using the spatial correlation of image gradients. A patch-based image degradation model is proposed for estimating the sample covariance matrix of the gradient-domain natural image. Based on the fact that the gradients of clean natural images are approximately uncorrelated to each other, we estimate the autocorrelation function of the PSF from the covariance matrix of the gradient-domain blurred image using the proposed patch-based image degradation model. The PSF is computed using a phase retrieval technique to remove the ambiguity introduced by the absence of the phase. Experimental results show that the proposed method significantly reduces the computational burden in PSF estimation, compared with existing methods, while giving a comparable blurring kernel.

40. Rigid-Motion-Invariant Classification of 3-D Textures (2012)
This paper studies the problem of 3-D rigid-motion-invariant texture discrimination for discrete 3-D textures that are spatially homogeneous by modeling them as stationary Gaussian random fields. The latter property and our formulation of a 3-D rigid motion of a texture reduce the problem to the study of 3-D rotations of discrete textures. We formally develop the concept of 3-D texture rotations in the 3-D digital domain. We use this novel concept to define a "distance" between 3-D textures that remains invariant under all 3-D rigid motions of the texture. This concept of "distance" can be used for a monoscale or a multiscale 3-D rigid-motion-invariant testing of the statistical similarity of the 3-D textures. To compute the "distance" between any two rotations R1 and R2 of two given 3-D textures, we use the Kullback-Leibler divergence between 3-D Gaussian Markov random fields fitted to the rotated texture data. Then, the 3-D rigid-motion-invariant texture distance is the integral average, with respect to the Haar measure of the group SO(3), of all of these divergences when rotations R1 and R2 vary throughout SO(3). We also present an algorithm enabling the computation of the proposed 3-D rigid-motion-invariant texture distance as well as rules for 3-D rigid-motion-invariant texture discrimination/classification and experimental results demonstrating the capabilities of the proposed 3-D rigid-motion texture discrimination rules when applied in a multiscale setting, even on very general 3-D texture models.

41. Robust Image Hashing Based on Random Gabor Filtering and Dithered Lattice Vector Quantization (2012)
In this paper, we propose a robust-hash function based on random Gabor filtering and dithered lattice vector quantization (LVQ). In order to enhance the robustness against rotation manipulations, the conventional Gabor filter is adapted to be rotation invariant, and the rotation-invariant filter is randomized to facilitate secure feature extraction. Particularly, a novel dithered-LVQ-based quantization scheme is proposed for robust hashing. The dithered-LVQ-based quantization scheme is well suited for robust hashing with several desirable features, including a better tradeoff between robustness and discrimination, higher randomness, and secrecy, which are validated by analytical and experimental results. The performance of the proposed hashing algorithm is evaluated over a test image database under various content-preserving manipulations. The proposed hashing algorithm shows superior robustness and discrimination performance compared with other state-of-the-art algorithms, particularly in the robustness against rotations (of large degrees).

42. Snakes With an Ellipse-Reproducing Property (2012)
We present a new class of continuously defined parametric snakes using a special kind of exponential splines as basis functions. We have enforced our bases to have the shortest possible support subject to some design constraints to maximize efficiency. While the resulting snakes are versatile enough to provide a good approximation of any closed curve in the plane, their most important feature is the fact that they admit ellipses within their span. Thus, they can perfectly generate circular and elliptical shapes. These features are appropriate to delineate cross sections of cylindrical-like conduits and to outline bloblike objects. We address the implementation details and illustrate the capabilities of our snake with synthetic and real data.

TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON SOFTWARE ENGINEERING

1. Automated Behavioral Testing of Refactoring Engines (2012)
We present a technique to test Java refactoring engines. It automates test input generation by using a Java program generator that exhaustively generates programs for a given scope of Java declarations. The refactoring under test is applied to each generated program. The technique uses SAFEREFACTOR, a tool for detecting behavioral changes, as an oracle to evaluate the correctness of these transformations.

2. Towards Comprehensible Software Fault Prediction Models Using Bayesian Network Classifiers (2012)
This study contributes to the literature by considering 15 different Bayesian Network (BN) classifiers and comparing them to other popular machine learning techniques. Furthermore, the applicability of the Markov blanket principle for feature selection, which is a natural extension to BN theory, is investigated.

3. Using Dependency Structures for Prioritisation of Functional Test Suites (2012)
In this paper, we present a family of test case prioritisation techniques that use the dependency information from a test suite to prioritise that test suite. The nature of the techniques preserves the dependencies in the test ordering. (A hedged sketch of dependency-respecting ordering is given below.)
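As an illustration of what "preserving the dependencies in the test ordering" can look like mechanically, the sketch below orders tests greedily by a given priority score while never scheduling a test before its declared prerequisites. The scoring and the data structures are assumptions made for this example; the paper's techniques derive the ordering from the dependency structure itself.

import java.util.*;

// Illustrative dependency-respecting prioritisation: a greedy topological order
// that always picks the highest-scoring test whose prerequisites have all run.
public class DependencyAwareOrdering {
    // deps.get(t) = tests that must run before t; higher score = run earlier.
    public static List<String> order(Map<String, Double> score, Map<String, Set<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (String t : score.keySet()) {
            Set<String> d = deps.getOrDefault(t, Collections.emptySet());
            remaining.put(t, d.size());
            for (String p : d) dependents.computeIfAbsent(p, k -> new ArrayList<>()).add(t);
        }
        PriorityQueue<String> ready =
            new PriorityQueue<>(Comparator.comparingDouble((String t) -> -score.get(t)));
        for (String t : score.keySet()) if (remaining.get(t) == 0) ready.add(t);

        List<String> ordered = new ArrayList<>();
        while (!ready.isEmpty()) {
            String t = ready.poll();
            ordered.add(t);
            for (String d : dependents.getOrDefault(t, Collections.emptyList())) {
                remaining.put(d, remaining.get(d) - 1);
                if (remaining.get(d) == 0) ready.add(d);
            }
        }
        return ordered;    // tests caught in dependency cycles are simply left out
    }
}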
4. Automatically Generating Test Cases for Specification Mining (2012)
Dynamic specification mining observes program executions to infer models of normal program behavior. What makes us believe that we have seen sufficiently many executions? The TAUTOKO ("Tautoko" is the Māori word for "enhance, enrich.") typestate miner generates test cases that cover previously unobserved behavior, systematically extending the execution space and enriching the specification. To our knowledge, this is the first combination of systematic test case generation and typestate mining, a combination with clear benefits: On a sample of 800 defects seeded into six Java subjects, a static typestate verifier fed with enriched models would report significantly more true positives and significantly fewer false positives than with the initial models.

5. Fault Localization for Dynamic Web Applications (2012)
In recent years, there has been significant interest in fault-localization techniques that are based on statistical analysis of program constructs executed by passing and failing executions. This paper shows how the Tarantula, Ochiai, and Jaccard fault-localization algorithms can be enhanced to localize faults effectively in web applications written in PHP by using an extended domain for conditional and function-call statements and by using a source mapping. We also propose several novel test-generation strategies that are geared toward producing test suites that have maximal fault-localization effectiveness. We implemented various fault-localization techniques and test-generation strategies in Apollo, and evaluated them on several open-source PHP applications. Our results indicate that a variant of the Ochiai algorithm that includes all our enhancements localizes 87.8 percent of all faults to within 1 percent of all executed statements, compared to only 37.4 percent for the unenhanced Ochiai algorithm. We also found that all the test-generation strategies that we considered are capable of generating test suites with maximal fault-localization effectiveness when given an infinite time budget for test generation. However, on average, a directed strategy based on path-constraint similarity achieves this maximal effectiveness after generating only 6.5 tests, compared to 46.8 tests for an undirected test-generation strategy. (The standard Ochiai score is sketched below for reference.)
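Entry 5 builds on the Ochiai suspiciousness metric. The snippet below shows only the standard formula as commonly stated in the fault-localization literature; it does not model the paper's PHP-specific enhancements, and the class name is invented for this example.

// Ochiai suspiciousness for a statement s:
//   ochiai(s) = failed(s) / sqrt(totalFailed * (failed(s) + passed(s)))
// where failed(s) and passed(s) count the failing/passing executions that cover s.
public class Ochiai {
    public static double suspiciousness(int failedCovering, int passedCovering, int totalFailed) {
        double denom = Math.sqrt((double) totalFailed * (failedCovering + passedCovering));
        return denom == 0 ? 0.0 : failedCovering / denom;
    }
}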
  • 92. TECHNOLOGY : JAVA DOMAIN : IEEE TRANSACTIONS ON GRID & CLOUD COMPUTING S.NO TITLES ABSTRACT YEAR 1. Business-OWL This paper introduces the Business-OWL (BOWL), an 2012 (BOWL)—A ontology rooted in the Web Ontology Language Hierarchical Task (OWL), and modeled as a Hierarchical Task Network Network Ontology for (HTN) for the dynamic formation of business Dynamic Business processes Process Decomposition and Formulation 2. Detecting And The advent of emerging computing technologies such 2012 Resolving Firewall as service-oriented architecture and cloud computing Policy Anomalies has enabled us to perform business services more efficiently and effectively. 3. Online System for Grid In this paper, we present the design and evaluation of 2012 Resource Monitoring system architecture for grid resource monitoring and and Machine Learning- prediction. We discuss the key issues for system Based Prediction implementation, including machine learning-based methodologies for modeling and optimization of resource prediction models. 4. SOAP Processing SOAP communications produce considerable network 2011 Performance and traffic, making them unfit for distributed, loosely Enhancement coupled and heterogeneous computing environments such as the open Internet. They introduce higher latency and processing delays than other technologies, like Java RMI & CORBA. WS research has recently focused on SOAP performance enhancement. 5. Weather data sharing Intelligent agents can play an important role in helping 2011 system: an agent-based achieve the ‘data grid’ vision. In this study, the distributed data authors present a multi-agent-based framework to management implement manage, share and query weather data in a geographical distributed environment, named weather data sharing system 6. pCloud: A Distributed In this paper we present pCloud, a distributed system 2012 System for Practical that constitutes the ?rst attempt towards practical PIR cPIR. Our approach assumes a disk-based architecture that retrieves one page with a single query. Using a striping technique, we distribute the database to a number of cooperative peers, and leverage their computational resources to process cPIR queries in parallel. We implemented pCloud on the PlanetLab network, and experimented extensively with several system parameters. Results
7. A Gossip Protocol for Dynamic Resource Management in Large Cloud Environments (2012): We address the problem of dynamic resource management for a large-scale cloud environment. Our contribution includes outlining a distributed middleware architecture and presenting one of its key elements: a gossip protocol that (1) ensures fair resource allocation among sites/applications, (2) dynamically adapts the allocation to load changes and (3) scales both in the number of physical machines and sites/applications.

8. A Novel Process Network Model for Interacting Context-aware Web Services (2012): In this paper, we explore a novel approach to model dynamic behaviors of interacting context-aware web services. It aims to effectively process and take advantage of contexts and realize behavior adaptation of web services, and further to facilitate the development of context-aware applications of web services.

9. Monitoring and Detecting Abnormal Behavior in Mobile Cloud Infrastructure (2012): Recently, several mobile services are changing to cloud-based mobile services with richer communications and higher flexibility. We present a new mobile cloud infrastructure that combines mobile devices and cloud services. This new infrastructure provides virtual mobile instances through cloud computing. To commercialize new services with this infrastructure, service providers should be aware of security issues. Here, we first define new mobile cloud services through mobile cloud infrastructure and discuss possible security threats through the use of several service scenarios. Then, we propose a methodology and architecture for detecting abnormal behavior through the monitoring of both host and network data. To validate our methodology, we injected malicious programs into our mobile cloud test bed and used a machine learning algorithm to detect the abnormal behavior that arose from these programs.

10. Impact of Storage Acquisition Intervals on the Cost-Efficiency of the Private vs. Public Storage (2012): The volume of worldwide digital content has increased nine-fold within the last five years, and this immense growth is predicted to continue in the foreseeable future, reaching 8 ZB already by 2015. Traditionally, in order to cope with the growing demand for storage capacity, organizations proactively built and managed their private storage facilities. Recently, with the proliferation of public cloud infrastructure offerings, many organizations instead welcomed the alternative of outsourcing their storage needs to the providers of public cloud storage services. The comparative cost-efficiency of these two alternatives depends on a number of factors, among which are e.g. the prices of the public and private storage, the charging and the storage acquisition intervals, and the predictability of the demand for storage. In this paper, we study how the cost-efficiency of the private vs. public storage depends on the acquisition interval at which the organization re-assesses its storage needs and acquires additional private storage. The analysis in the paper suggests that the shorter the acquisition interval, the more likely it is that the private storage solution is less expensive as compared with the public cloud infrastructure. This phenomenon is also illustrated in the paper numerically, using the storage needs encountered by a university back-up and archiving service as an example. Since the acquisition interval is determined by the organization's ability to foresee the growth of storage demand, by the provisioning schedules of storage equipment providers, and by internal practices of the organization, among other factors, the organization owning a private storage solution may want to control some of these factors in order to attain a shorter acquisition interval and thus make the private storage (more) cost-efficient.
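The acquisition-interval effect in entry 10 can be made concrete with a small numerical sketch. The Java snippet below is only a back-of-the-envelope illustration, not the paper's model: it assumes an amortized monthly price for installed private capacity, a pay-per-use price for public storage, and a linear demand forecast, and all of these figures are invented. Under these assumptions, a longer acquisition interval forces capacity to be bought further ahead of demand and to sit idle, which is the trade-off the paper analyzes.

```java
// Back-of-the-envelope comparison of private vs. public storage cost.
// All prices, growth rates and interval lengths are illustrative assumptions.
public class StorageCostSketch {

    static final double PUBLIC_PRICE_PER_TB_MONTH  = 25.0; // assumed $/TB/month, pay per use
    static final double PRIVATE_PRICE_PER_TB_MONTH = 12.0; // assumed amortized $/TB/month
    static final double DEMAND_GROWTH_TB_PER_MONTH = 2.0;  // assumed linear demand growth
    static final int    MONTHS = 36;

    // Public cloud: pay only for the capacity actually used each month.
    static double publicCost() {
        double cost = 0;
        for (int m = 1; m <= MONTHS; m++) {
            cost += m * DEMAND_GROWTH_TB_PER_MONTH * PUBLIC_PRICE_PER_TB_MONTH;
        }
        return cost;
    }

    // Private storage: at every acquisition interval, install enough capacity to
    // cover the demand forecast until the next acquisition, and pay the amortized
    // price for everything installed, whether it is used yet or not.
    static double privateCost(int acquisitionIntervalMonths) {
        double cost = 0, installed = 0;
        for (int m = 1; m <= MONTHS; m++) {
            if ((m - 1) % acquisitionIntervalMonths == 0) {
                double horizonDemand =
                    (m + acquisitionIntervalMonths - 1) * DEMAND_GROWTH_TB_PER_MONTH;
                installed = Math.max(installed, horizonDemand);
            }
            cost += installed * PRIVATE_PRICE_PER_TB_MONTH;
        }
        return cost;
    }

    public static void main(String[] args) {
        System.out.printf("public cloud: %.0f%n", publicCost());
        for (int interval : new int[] {12, 6, 3, 1}) {
            System.out.printf("private, %2d-month interval: %.0f%n",
                              interval, privateCost(interval));
        }
    }
}
```

Running the sketch with these made-up numbers shows the private cost shrinking as the acquisition interval shortens, mirroring the qualitative conclusion stated in the abstract.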
11. Managing A Cloud for Multi-agent Systems on Ad-hoc Networks (2012): We present a novel execution environment for multiagent systems building on concepts from cloud computing and peer-to-peer networks. The novel environment can provide the computing power of a cloud for multi-agent systems in intermittently connected networks. We present the design and implementation of a prototype operating system for managing the environment. The operating system provides the user with a consistent view of a single machine, a single file system, and a unified programming model while providing elasticity and availability.

12. Cloud Computing Security: From Single to Multi-Clouds (2012): The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring the security of cloud computing is a major factor in the cloud computing environment, as users often store sensitive information with cloud storage providers but these providers may be untrusted. Dealing with "single cloud" providers is predicted to become less popular with customers due to risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards "multi-clouds", or in other words "interclouds" or "cloud-of-clouds", has emerged recently. This paper surveys recent research related to single and multi-cloud security and addresses possible solutions. It is found that the research into the use of multi-cloud providers to maintain security has received less attention from the research community than has the use of single clouds. This work aims to promote the use of multi-clouds due to its ability to reduce security risks that affect the cloud computing user.

13. Optimization of Resource Provisioning Cost in Cloud Computing (2012): In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, the cost of utilizing computing resources provisioned by the reservation plan is cheaper than that provisioned by the on-demand plan, since the cloud consumer has to pay the provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to achieve due to uncertainty of the consumer's future demand and of providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for use in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarterly plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered, including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed, and the results clearly show that with the OCRP algorithm the cloud consumer can successfully minimize the total cost of resource provisioning in cloud computing environments.
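The reservation versus on-demand trade-off that OCRP (entry 13) optimizes can be illustrated with a tiny expected-cost search. The Java sketch below is not the paper's stochastic program: it brute-forces the best reservation level over a small, invented demand distribution with invented prices, which is only the intuition behind the full model.

```java
// Toy illustration of the reservation vs. on-demand trade-off.
// Prices and the demand distribution are invented for illustration only.
public class ProvisioningCostSketch {

    static final double RESERVED_PRICE  = 5.0;  // assumed $/VM, paid in advance
    static final double ON_DEMAND_PRICE = 9.0;  // assumed $/VM, paid per use

    // Possible demands (in VMs) and their probabilities.
    static final int[]    DEMAND = {10, 20, 40};
    static final double[] PROB   = {0.3, 0.5, 0.2};

    // Expected cost when 'reserved' VMs are bought up front and any excess
    // demand is served on demand. Reserved capacity is paid for even if idle.
    static double expectedCost(int reserved) {
        double cost = reserved * RESERVED_PRICE;
        for (int i = 0; i < DEMAND.length; i++) {
            int shortfall = Math.max(0, DEMAND[i] - reserved);
            cost += PROB[i] * shortfall * ON_DEMAND_PRICE;
        }
        return cost;
    }

    public static void main(String[] args) {
        int best = 0;
        double bestCost = Double.MAX_VALUE;
        for (int r = 0; r <= 40; r++) {       // exhaustive search over reservation levels
            double c = expectedCost(r);
            if (c < bestCost) { bestCost = c; best = r; }
        }
        System.out.printf("best reservation: %d VMs, expected cost %.1f%n", best, bestCost);
    }
}
```

The actual OCRP formulation handles multiple stages and price uncertainty and is solved with the techniques named in the abstract (deterministic equivalent, sample-average approximation, Benders decomposition); the sketch only shows why under-reserving and over-reserving both cost money.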
14. A Secure Erasure Code-Based Cloud Storage System with Secure Data Forwarding (2012): A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers.

15. HASBE: A Hierarchical Attribute-Based Solution for Flexible and Scalable Access Control in Cloud Computing (2012): Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.

16. A Distributed Access Control Architecture for Cloud Computing (2012): The large-scale, dynamic, and heterogeneous nature of cloud computing poses numerous security challenges. But the cloud's main challenge is to provide a robust authorization mechanism that incorporates multitenancy and virtualization aspects of resources. The authors present a distributed architecture that incorporates principles from security management and software engineering and propose key requirements and a design model for the architecture.

17. Cloud Computing Security: From Single to Multi-clouds (2012): The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring the security of cloud computing is a major factor in the cloud computing environment, as users often store sensitive information with cloud storage providers but these providers may be untrusted. Dealing with "single cloud" providers is predicted to become less popular with customers due to risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards "multi-clouds", or in other words "interclouds" or "cloud-of-clouds", has emerged recently. This paper surveys recent research related to single and multi-cloud security and addresses possible solutions. It is found that the research into the use of multi-cloud providers to maintain security has received less attention from the research community than has the use of single clouds. This work aims to promote the use of multi-clouds due to its ability to reduce security risks that affect the cloud computing user.

18. Scalable and Secure Sharing of Personal Health Records in Cloud Computing using Attribute-based Encryption (2012): Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semi-trusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patient's PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains, which greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multi-authority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability and efficiency of our proposed scheme.

19. Cloud Data Protection for Masses (2012): Offering strong data protection to cloud users while enabling rich applications is a challenging task. We explore a new cloud platform architecture called Data Protection as a Service, which dramatically reduces the per-application development effort required to offer data protection, while still allowing rapid development and maintenance.

20. Secure and Practical Outsourcing of Linear Programming in Cloud Computing (2012): Cloud Computing has great potential of providing robust computational power to the society at reduced cost. It enables customers with limited computational resources to outsource their large computation workloads to the cloud, and economically enjoy the massive computational power, bandwidth, storage, and even appropriate software that can be shared in a pay-per-use manner. Despite the tremendous benefits, security is the primary obstacle that prevents the wide adoption of this promising computing model, especially for customers when their confidential data are consumed and produced during the computation. Treating the cloud as an intrinsically insecure computing platform from the viewpoint of the cloud customers, we must design mechanisms that not only protect sensitive information by enabling computations with encrypted data, but also protect customers from malicious behaviors by enabling the validation of the computation result. Such a mechanism of general secure computation outsourcing was recently shown to be feasible in theory, but to design mechanisms that are practically efficient remains a very challenging problem. Focusing on engineering computing and optimization tasks, this paper investigates secure outsourcing of widely applicable linear programming (LP) computations. In order to achieve practical efficiency, our mechanism design explicitly decomposes the LP computation outsourcing into public LP solvers running on the cloud and private LP parameters owned by the customer. The resulting flexibility allows us to explore appropriate security/efficiency tradeoffs via higher-level abstraction of LP computations than the general circuit representation. In particular, by formulating private data owned by the customer for the LP problem as a set of matrices and vectors, we are able to develop a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform the original LP problem into some arbitrary one while protecting sensitive input/output information. To validate the computation result, we further explore the fundamental duality theorem of LP computation and derive the necessary and sufficient conditions that a correct result must satisfy. Such a result verification mechanism is extremely efficient and incurs close-to-zero additional cost on both cloud server and customers. Extensive security analysis and experiment results show the immediate practicability of our mechanism design.

21. Efficient audit service outsourcing for data integrity in clouds – projects 2012 (2012): Cloud-based outsourced storage relieves the client's burden for storage management and maintenance by providing a comparably low-cost, scalable, location-independent platform. However, the fact that clients no longer have physical possession of data indicates that they are facing a potentially formidable risk of missing or corrupted data. To avoid the security risks, audit services are critical to ensure the integrity and availability of outsourced data and to achieve digital forensics and credibility on cloud computing. Provable data possession (PDP), which is a cryptographic technique for verifying the integrity of data without retrieving it at an untrusted server, can be used to realize audit services. In this paper, profiting from the interactive zero-knowledge proof system, we address the construction of an interactive PDP protocol to prevent the fraudulence of the prover (soundness property) and the leakage of verified data (zero-knowledge property). We prove that our construction holds these properties based on the computational Diffie–Hellman assumption and the rewindable black-box knowledge extractor. We also propose an efficient mechanism with respect to probabilistic queries and periodic verification to reduce the audit costs per verification and implement abnormal detection in a timely manner. In addition, we present an efficient method for selecting an optimal parameter value to minimize computational overheads of cloud audit services. Our experimental results demonstrate the effectiveness of our approach.
22. Efficient audit service outsourcing for data integrity in clouds – projects 2012 (2012): Cloud-based outsourced storage relieves the client's burden for storage management and maintenance by providing a comparably low-cost, scalable, location-independent platform. However, the fact that clients no longer have physical possession of data indicates that they are facing a potentially formidable risk of missing or corrupted data. To avoid the security risks, audit services are critical to ensure the integrity and availability of outsourced data and to achieve digital forensics and credibility on cloud computing. Provable data possession (PDP), which is a cryptographic technique for verifying the integrity of data without retrieving it at an untrusted server, can be used to realize audit services. In this paper, profiting from the interactive zero-knowledge proof system, we address the construction of an interactive PDP protocol to prevent the fraudulence of the prover (soundness property) and the leakage of verified data (zero-knowledge property). We prove that our construction holds these properties based on the computational Diffie–Hellman assumption and the rewindable black-box knowledge extractor. We also propose an efficient mechanism with respect to probabilistic queries and periodic verification to reduce the audit costs per verification and implement abnormal detection in a timely manner. In addition, we present an efficient method for selecting an optimal parameter value to minimize computational overheads of cloud audit services. Our experimental results demonstrate the effectiveness of our approach.

23. Secure and privacy preserving keyword searching for cloud storage services – projects 2012 (2012): Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.

24. Secure and privacy preserving keyword searching for cloud storage services – projects 2012 (2012): Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.

25. Cooperative Provable Data Possession for Integrity Verification in Multi-Cloud Storage (2012): Provable data possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. In this paper, we address the construction of an efficient PDP scheme for distributed cloud storage to support the scalability of service and data migration, in which we consider the existence of multiple cloud service providers to cooperatively store and maintain the clients' data. We present a cooperative PDP (CPDP) scheme based on homomorphic verifiable response and hash index hierarchy. We prove the security of our scheme based on a multi-prover zero-knowledge proof system, which can satisfy completeness, knowledge soundness, and zero-knowledge properties. In addition, we articulate performance optimization mechanisms for our scheme, and in particular present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. Our experiments show that our solution introduces lower computation and communication overheads in comparison with non-cooperative approaches.

26. Cooperative Provable Data Possession for Integrity Verification in Multi-Cloud Storage (2012): Provable data possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. In this paper, we address the construction of an efficient PDP scheme for distributed cloud storage to support the scalability of service and data migration, in which we consider the existence of multiple cloud service providers to cooperatively store and maintain the clients' data. We present a cooperative PDP (CPDP) scheme based on homomorphic verifiable response and hash index hierarchy. We prove the security of our scheme based on a multi-prover zero-knowledge proof system, which can satisfy completeness, knowledge soundness, and zero-knowledge properties. In addition, we articulate performance optimization mechanisms for our scheme, and in particular present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. Our experiments show that our solution introduces lower computation and communication overheads in comparison with non-cooperative approaches.
27. Bootstrapping Ontologies for Web Services – projects 2012 (2012): Ontologies have become the de-facto modeling tool of choice, employed in many applications and prominently in the semantic web. Nevertheless, ontology construction remains a daunting task. Ontological bootstrapping, which aims at automatically generating concepts and their relations in a given domain, is a promising technique for ontology construction. Bootstrapping an ontology based on a set of predefined textual sources, such as web services, must address the problem of multiple, largely unrelated concepts. In this paper, we propose an ontology bootstrapping process for web services. We exploit the advantage that web services usually consist of both WSDL and free text descriptors. The WSDL descriptor is evaluated using two methods, namely Term Frequency/Inverse Document Frequency (TF/IDF) and web context generation. Our proposed ontology bootstrapping process integrates the results of both methods and applies a third method to validate the concepts using the service free text descriptor, thereby offering a more accurate definition of ontologies. We extensively validated our bootstrapping method using a large repository of real-world web services and verified the results against existing ontologies. The experimental results indicate high precision. Furthermore, the recall versus precision comparison of the results when each method is separately implemented presents the advantage of our integrated bootstrapping approach.

28. Data Security and Privacy Protection Issues in Cloud Computing (2012): It is well-known that cloud computing has many potential advantages, and many enterprise applications and data are migrating to public or hybrid clouds. But regarding some business-critical applications, the organizations, especially large enterprises, still wouldn't move them to the cloud. The market share that cloud computing has gained is still far behind the one expected. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor for adoption of cloud computing services. This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle. Then this paper discusses some current solutions. Finally, this paper describes future research work on data security and privacy protection issues in the cloud.

29. Stochastic models of load balancing and scheduling in cloud computing clusters (2012): Cloud computing services are becoming ubiquitous, and are starting to serve as the primary source of computing power for both enterprises and personal computing applications. We consider a stochastic model of a cloud computing cluster, where jobs arrive according to a stochastic process and request virtual machines (VMs), which are specified in terms of resources such as CPU, memory and storage space. While there are many design issues associated with such systems, here we focus only on resource allocation problems, such as the design of algorithms for load balancing among servers, and algorithms for scheduling VM configurations. Given our model of a cloud, we first define its capacity, i.e., the maximum rates at which jobs can be processed in such a system. Then, we show that the widely-used Best-Fit scheduling algorithm is not throughput-optimal, and present alternatives which achieve any arbitrary fraction of the capacity region of the cloud. We then study the delay performance of these alternative algorithms through simulations.
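To make the Best-Fit baseline in entry 29 concrete, the Java sketch below shows a generic Best-Fit VM placement rule: each request goes to the server that would have the least spare capacity after placement. The server sizes, the two-resource model and the slack score are illustrative assumptions; this is the baseline the paper argues is not throughput-optimal, not the paper's proposed alternatives.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of Best-Fit VM placement over CPU and memory.
public class BestFitPlacementSketch {

    static class Server {
        int freeCpu, freeMem;
        Server(int cpu, int mem) { freeCpu = cpu; freeMem = mem; }
        boolean fits(int cpu, int mem) { return cpu <= freeCpu && mem <= freeMem; }
        // Simple slack score: total leftover capacity if this VM were placed here.
        int slackAfter(int cpu, int mem) { return (freeCpu - cpu) + (freeMem - mem); }
    }

    // Places the request on the feasible server with the smallest slack.
    // Returns the chosen server index, or -1 if the request must wait.
    static int bestFit(List<Server> servers, int cpu, int mem) {
        int best = -1, bestSlack = Integer.MAX_VALUE;
        for (int i = 0; i < servers.size(); i++) {
            Server s = servers.get(i);
            if (s.fits(cpu, mem) && s.slackAfter(cpu, mem) < bestSlack) {
                bestSlack = s.slackAfter(cpu, mem);
                best = i;
            }
        }
        if (best >= 0) {
            servers.get(best).freeCpu -= cpu;
            servers.get(best).freeMem -= mem;
        }
        return best;
    }

    public static void main(String[] args) {
        List<Server> cluster = new ArrayList<>();
        cluster.add(new Server(16, 64));
        cluster.add(new Server(8, 32));
        System.out.println("placed on server " + bestFit(cluster, 4, 16));
        System.out.println("placed on server " + bestFit(cluster, 8, 32));
    }
}
```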
30. A comber approach to protect cloud computing against XML DDoS and HTTP DDoS attack (2012): Cloud computing is an internet-based pay-as-you-use service which provides three layered services (Software as a Service, Platform as a Service and Infrastructure as a Service) to its consumers on demand. These on-demand facilities are provided to consumers in a multitenant environment, but as the facilities grow, complexity and security problems also increase, since all the resources are held in one place in data centers. The cloud uses public and private APIs (Application Programming Interfaces) to provide services to its consumers in the multitenant environment. In this environment, Distributed Denial of Service (DDoS) attacks, especially HTTP, XML or REST based DDoS attacks, can be very dangerous and can severely harm the availability of services, with all consumers affected at the same time. Another reason is that cloud computing users make their requests in XML, send these requests over the HTTP protocol, and build their system interfaces with REST-style protocols such as Amazon EC2 or Microsoft Azure. The threat coming from distributed REST attacks is therefore greater and easy for an attacker to mount, but very difficult for a security expert to resolve. To resolve these attacks, this paper introduces a comber approach for security services called a filtering tree. This filtering tree has five filters to detect and resolve XML and HTTP DDoS attacks.
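The abstract for entry 30 names a filtering tree with five filters but does not spell them out, so the Java sketch below only illustrates the general filter-chain idea: a request is dropped at the first check it fails. The concrete checks (per-client rate limit, payload size, XML nesting depth) and all class names are invented placeholders, not the paper's filters.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic request filter chain in the spirit of a filtering-tree defence.
public class DdosFilterChainSketch {

    interface Filter { boolean allow(Request r); }

    static class Request {
        String clientId; int payloadBytes; int xmlDepth;
        Request(String c, int bytes, int depth) {
            clientId = c; payloadBytes = bytes; xmlDepth = depth;
        }
    }

    // Very rough per-client request counter (no time-window handling).
    static class RateLimitFilter implements Filter {
        private final Map<String, Integer> counts = new ConcurrentHashMap<>();
        private final int limit;
        RateLimitFilter(int limit) { this.limit = limit; }
        public boolean allow(Request r) {
            return counts.merge(r.clientId, 1, Integer::sum) <= limit;
        }
    }

    static class PayloadSizeFilter implements Filter {
        public boolean allow(Request r) { return r.payloadBytes <= 1_000_000; }
    }

    static class XmlDepthFilter implements Filter {
        public boolean allow(Request r) { return r.xmlDepth <= 32; }
    }

    // Drops the request at the first failing filter.
    static boolean accept(List<Filter> chain, Request r) {
        for (Filter f : chain) {
            if (!f.allow(r)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<Filter> chain = List.of(
            new RateLimitFilter(100), new PayloadSizeFilter(), new XmlDepthFilter());
        System.out.println(accept(chain, new Request("client-1", 2048, 5)));       // true
        System.out.println(accept(chain, new Request("client-2", 5_000_000, 5)));  // false
    }
}
```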
31. Resource allocation and scheduling in cloud computing (2012): Cloud computing is a platform that hosts applications and services for businesses and users to access computing as a service. In this paper, we identify two scheduling and resource allocation problems in cloud computing. We describe Hadoop MapReduce and its schedulers, and present recent research efforts in this area, including alternative schedulers and enhancements to existing schedulers. The second scheduling problem is the provisioning of virtual machines to resources in the cloud. We present a survey of the different approaches to solve this resource allocation problem. We also include recent research and standards for inter-connecting clouds and discuss the suitability of running scientific applications in the cloud.

32. Application study of online education platform based on cloud computing (2012): Aimed at some problems in Network Education Resources Construction at present, we analyse the characteristics and application range of cloud computing, and present an integrated solution scheme. On that basis, some critical technologies such as cloud storage, streaming media and cloud safety are analyzed in detail. Finally, the paper gives a summary and an outlook.

33. Towards temporal access control in cloud computing (2012): Access control is one of the most important security mechanisms in cloud computing. Attribute-based access control provides a flexible approach that allows data owners to integrate data access policies within the encrypted data. However, little work has been done to explore temporal attributes in specifying and enforcing the data owner's policy and the data user's privileges in cloud-based environments. In this paper, we present an efficient temporal access control encryption scheme for cloud services with the help of cryptographic integer comparisons and a proxy-based re-encryption mechanism on the current time. We also provide a dual comparative expression of integer ranges to extend the power of attribute expression for implementing various temporal constraints. We prove the security strength of the proposed scheme, and our experimental results not only validate the effectiveness of our scheme, but also show that the proposed integer comparison scheme performs significantly better than the previous bitwise comparison scheme.
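The cryptographic integer comparison and proxy re-encryption in entry 33 cannot be reproduced in a few lines, so the Java sketch below only shows the plaintext policy the scheme enforces: an attribute carries a validity window expressed as an integer range, and access is granted only while the current time lies inside it. Class and field names are illustrative assumptions.

```java
import java.time.Instant;

// Plaintext illustration of a temporal attribute constraint: the scheme in the
// paper performs the same range comparison cryptographically over encrypted
// attributes; here it is an ordinary integer comparison on the current time.
public class TemporalAccessSketch {

    static class TemporalAttribute {
        final String name;
        final long notBeforeEpochSec;
        final long notAfterEpochSec;
        TemporalAttribute(String name, long notBefore, long notAfter) {
            this.name = name;
            this.notBeforeEpochSec = notBefore;
            this.notAfterEpochSec = notAfter;
        }
        // Dual comparison over an integer range: notBefore <= now <= notAfter.
        boolean validAt(long nowEpochSec) {
            return notBeforeEpochSec <= nowEpochSec && nowEpochSec <= notAfterEpochSec;
        }
    }

    public static void main(String[] args) {
        long now = Instant.now().getEpochSecond();
        TemporalAttribute subscription =
            new TemporalAttribute("premium-reader", now - 3_600, now + 86_400);
        System.out.println("access now: " + subscription.validAt(now));            // true
        System.out.println("access later: " + subscription.validAt(now + 200_000)); // false
    }
}
```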
34. Privacy-Preserving DRM for Cloud Computing (2012): We come up with a digital rights management (DRM) concept for cloud computing and show how license management for software within the cloud can be achieved in a privacy-friendly manner. In our scenario, users who buy software from software providers stay anonymous. At the same time, our approach guarantees that software licenses are bound to users and their validity is checked before execution. We employ a software re-encryption scheme so that computing centers which execute users' software are not able to build user profiles of their users, not even under pseudonym. We combine secret sharing and homomorphic encryption. We make sure that malicious users are unable to relay software to others. DRM constitutes an incentive for software providers to take part in a future cloud computing scenario. We make this scenario more attractive for users by preserving their privacy.

35. Pricing and peak aware scheduling algorithm for cloud computing (2012): The proposed cloud computing scheduling algorithms demonstrated the feasibility of interactions between distributors and one of their heavy-use customers in a smart grid environment. Specifically, the proposed algorithms take cues from dynamic pricing and schedule the jobs/tasks so that the energy usage follows what the distributors have hinted. In addition, a peak threshold can be dynamically assigned such that the energy usage at any given time will not exceed the threshold. The proposed scheduling algorithm proved the feasibility of managing the energy usage of cloud computers in collaboration with the energy distributor.

36. Comparison of Network Intrusion Detection Systems in cloud computing environment (2012): Computer networks face a constant struggle against intruders and attackers. Attacks on distributed systems grow stronger and more prevalent each and every day. Intrusion detection methods are a key to control and potentially eradicate attacks on a system. An intrusion detection system pertains to the methods used to identify an attack on a computer or computer network. In a cloud computing environment the applications are user-centric and the customers should be confident about their applications stored in the cloud server. Network Intrusion Detection Systems (NIDS) play an important role in providing network security. They provide a defence layer which monitors the network traffic for pre-defined suspicious activity or patterns. In this paper Snort, Tcpdump and Network Flight Recorder, which are among the most famous NIDS in cloud systems, are examined and contrasted.

37. Intelligent and Active Defense Strategy of Cloud Computing (2012): The cloud's development has entered the practical stage, but safety issues must be resolved: how to avoid risk on the web page, how to prevent attacks from hackers, and how to protect user data in the cloud. This paper discusses some safety solutions: the credit level of web pages; tracing data and analyzing and filtering them by large-scale statistical methods; encryption protection of user data; and key management.

38. Distributed Shared Memory as an Approach for Integrating WSNs and Cloud Computing (2012): In this paper we discuss the idea of combining wireless sensor networks and cloud computing, starting with a state-of-the-art analysis of existing approaches in this field. As a result of the analysis, we propose to reflect a real wireless sensor network by virtual sensors in the cloud. The main idea is to replicate data stored on the real sensor nodes also in the virtual sensors, without explicitly triggering such updates from the application. We provide a short overview of the resulting architecture before explaining mechanisms to realize it. The means to ensure a certain level of consistency between the real WSN and the virtual sensors in the cloud is distributed shared memory. In order to realize DSM in WSNs we have developed a middleware named tinyDSM, which is shortly introduced here and which provides means for replicating sensor data and ensuring the consistency of the replicates. Even though tinyDSM is a pretty good vehicle to realize our idea, there are some open issues that need to be addressed when realizing such an architecture. We discuss these challenges in an abstract way to ensure a clear separation between the idea and its specific realization.

39. Improving resource allocation in multi-tier cloud systems (2012): Even though the adoption of cloud computing and virtualization have improved resource utilization to a great extent, the continued traditional approach of resource allocation in production environments has introduced the problem of over-provisioning of resources for enterprise-class applications hosted in cloud systems. In this paper, we address the problem and propose ways to minimize over-provisioning of IT resources in multi-tier cloud systems by adopting an innovative approach of application performance monitoring and resource allocation at individual tier levels, on the basis of the criticality of the business services and the availability of the resources at one's disposal.
40. Ensuring Distributed Accountability for Data Sharing in the Cloud (2012): Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of the cloud services is that users' data are usually processed remotely in unknown machines that users do not own or operate. While enjoying the convenience brought by this new emerging technology, users' fears of losing control of their own data (particularly, financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, in this paper, we propose a novel highly decentralized information accountability framework to keep track of the actual usage of the users' data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users' data and policies. We leverage the JAR programmable capabilities to both create a dynamic and traveling object, and to ensure that any access to users' data will trigger authentication and automated logging local to the JARs. To strengthen the user's control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.

41. Efficient information retrieval for ranked queries in cost-effective cloud environments (2012): Cloud computing as an emerging technology trend is expected to reshape the advances in information technology. In this paper, we address two fundamental issues in a cloud environment: privacy and efficiency. We first review a private keyword-based file retrieval scheme proposed by Ostrovsky et al. Then, based on an aggregation and distribution layer (ADL), we present a scheme, termed efficient information retrieval for ranked query (EIRQ), to further reduce querying costs incurred in the cloud. Queries are classified into multiple ranks, where a higher ranked query can retrieve a higher percentage of matched files. Extensive evaluations have been conducted on an analytical model to examine the effectiveness of our scheme.
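The rank idea in entry 41 is simple to show in isolation: a query's rank determines what fraction of its matched files is actually returned, trading completeness for lower retrieval cost. The Java sketch below models only that mapping; the rank-to-percentage table and the "take the first k matches" selection rule are invented assumptions, and the private retrieval and ADL machinery of EIRQ are not modeled.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of ranked queries: higher rank numbers retrieve a smaller fraction
// of the matched files (rank 0 retrieves everything).
public class RankedQuerySketch {

    // Assumed mapping: rank 0 -> 100%, rank 1 -> 75%, rank 2 -> 50%, rank 3+ -> 25%.
    static double fractionForRank(int rank) {
        return Math.max(0.25, 1.0 - 0.25 * rank);
    }

    static List<String> retrieve(List<String> matchedFiles, int rank) {
        int k = (int) Math.ceil(matchedFiles.size() * fractionForRank(rank));
        return new ArrayList<>(matchedFiles.subList(0, Math.min(k, matchedFiles.size())));
    }

    public static void main(String[] args) {
        List<String> matches = List.of("f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8");
        System.out.println("rank 0: " + retrieve(matches, 0)); // all 8 files
        System.out.println("rank 2: " + retrieve(matches, 2)); // 4 files
    }
}
```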
TECHNOLOGY : JAVA
DOMAIN : IEEE TRANSACTIONS ON MULTIMEDIA

S.NO TITLES ABSTRACT YEAR

1. Movie2Comics: Towards a Lively Video Content Presentation (2012): This paper proposes a scheme that is able to automatically turn a movie clip into comics. Two principles are followed in the scheme: 1) optimizing the information preservation of the movie; and 2) generating outputs following the rules and the styles of comics. The scheme mainly contains three components: script-face mapping, descriptive picture extraction, and cartoonization. The script-face mapping utilizes face tracking and recognition techniques to accomplish the mapping between characters' faces and their scripts.

2. Robust Watermarking of Compressed and Encrypted JPEG2000 Images (2012): In this paper, we propose a robust watermarking algorithm to watermark JPEG2000 compressed and encrypted images. The encryption algorithm we propose to use is a stream cipher. While the proposed technique embeds the watermark in the compressed-encrypted domain, the extraction of the watermark can be done in the decrypted domain.

3. Load-Balancing Multipath Switching System with Flow Slice: Multipath Switching systems (MPS) are intensely used in state-of-the-art core routers to provide terabit or even petabit switching capacity. One of the most intractable issues in designing MPS is how to load balance traffic across its multiple paths while not disturbing the intraflow packet orders. Previous packet-based solutions either suffer from delay penalties or lead to O(N^2) hardware complexity, hence do not scale. Flow-based hashing algorithms also perform badly due to the heavy-tailed flow-size distribution. In this paper, we develop a novel scheme, namely Flow Slice (FS), that cuts off each flow into flow slices at every intraflow interval larger than a slicing threshold and balances the load on a finer granularity. Based on the studies of tens of real Internet traces, we show that with a slicing threshold of 1-4 ms, the FS scheme achieves load-balancing performance comparable to the optimal one. It also limits the probability of out-of-order packets to a negligible level (10^-6) on three popular MPSes at the cost of little hardware complexity and an internal speedup up to two. These results are proven by theoretical analyses and also validated through trace-driven prototype simulations.
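The flow-slice rule described in entry 3 above is compact enough to sketch directly: packets of a flow stick to one path, but when the gap since the flow's previous packet exceeds the slicing threshold (the abstract reports 1-4 ms works well), the flow is cut into a new slice that may be re-assigned. The Java sketch below is a simplified illustration; the data structures and the "send the new slice to the least-loaded path" policy are assumptions, not the hardware design evaluated in the paper.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of Flow Slice load balancing across the paths of a multipath switch.
public class FlowSliceSketch {

    static final long SLICE_THRESHOLD_NANOS = 2_000_000; // 2 ms, within the 1-4 ms range

    static class FlowState { int path; long lastPacketNanos; }

    private final long[] pathLoad;                        // bytes sent per path
    private final Map<String, FlowState> flows = new HashMap<>();

    FlowSliceSketch(int numPaths) { pathLoad = new long[numPaths]; }

    /** Returns the path for this packet and updates per-path load. */
    int route(String flowId, long nowNanos, int packetBytes) {
        FlowState st = flows.get(flowId);
        if (st == null || nowNanos - st.lastPacketNanos > SLICE_THRESHOLD_NANOS) {
            // New flow or new slice: re-assign to the currently least-loaded path.
            if (st == null) { st = new FlowState(); flows.put(flowId, st); }
            st.path = leastLoadedPath();
        }
        st.lastPacketNanos = nowNanos;
        pathLoad[st.path] += packetBytes;
        return st.path;
    }

    private int leastLoadedPath() {
        int best = 0;
        for (int i = 1; i < pathLoad.length; i++) {
            if (pathLoad[i] < pathLoad[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        FlowSliceSketch mps = new FlowSliceSketch(4);
        System.out.println(mps.route("10.0.0.1->10.0.0.2:80", 0, 1500));
        System.out.println(mps.route("10.0.0.1->10.0.0.2:80", 500_000, 1500));    // same slice
        System.out.println(mps.route("10.0.0.1->10.0.0.2:80", 10_000_000, 1500)); // new slice
    }
}
```

Because a packet's gap from its predecessor already exceeds the queueing delay difference between paths when a new slice starts, re-assigning at slice boundaries keeps intraflow packets in order with high probability, which is the property the abstract quantifies.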
4. Robust Face-Name Graph Matching for Movie Character Identification (2012): Automatic face identification of characters in movies has drawn significant research interest and led to many interesting applications. It is a challenging problem due to the huge variation in the appearance of each character. Although existing methods demonstrate promising results in clean environments, their performance is limited in complex movie scenes due to the noise generated during the face tracking and face clustering process. In this paper we present two schemes of a global face-name matching based framework for robust character identification. The contributions of this work include: complex character changes are handled by simultaneous graph partition and graph matching; beyond existing character identification approaches, we further perform an in-depth sensitivity analysis by introducing two types of simulated noise. The proposed schemes demonstrate state-of-the-art performance on movie character identification in various genres of movies.

5. Learn to Personalized Image Search from the Photo Sharing Websites – projects 2012 (2012): Increasingly developed social sharing websites, like Flickr and Youtube, allow users to create, share, annotate and comment on media. The large-scale user-generated meta-data not only facilitate users in sharing and organizing multimedia content, but provide useful information to improve media retrieval and management. Personalized search serves as one such example, where the web search experience is improved by generating the returned list according to the modified user search intents. In this paper, we exploit the social annotations and propose a novel framework simultaneously considering the user and query relevance to learn to personalized image search. The basic premise is to embed the user preference and query-related search intent into user-specific topic spaces. Since the users' original annotation is too sparse for topic modeling, we need to enrich users' annotation pool before user-specific topic spaces construction. The proposed framework contains two components: