OIT552 Cloud Computing
Course Material
Prepared By
Kaviya.P
Assistant Professor / Information Technology
Kamaraj College of Engineering & Technology
(Autonomous)
15-11-2021
IBM Power Systems
Introduction to Cloud Computing
Cloud computing is an umbrella term used to refer to Internet-based development and services.
The Next Revolution in IT
The Big Switch in IT
• Classical Computing
– Buy & own hardware, system software, and applications, often sized to meet peak needs
– Install, configure, test, verify, evaluate
– Manage
– ..
– Finally, use it
– $$$$....$ (High CapEx)
• Cloud Computing
– Subscribe
– Use
– $ - pay for what you use, based on QoS
WHAT IS CLOUD COMPUTING?
What do they say?
What is Cloud Computing?
• Shared pool of configurable computing resources
• On-demand network access
• Provisioned by the Service Provider
Cloud Definitions
• A model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services)
• It can be rapidly provisioned and released with minimal management effort or service provider interaction.
• Promotes availability
• Provides a high-level abstraction of the computation and storage model.
• It has essential characteristics, service models, and deployment models.
Cloud Definitions
• Definition from Wikipedia
– Cloud computing is Internet-based computing, whereby
shared resources, software, and information are provided
to computers and other devices on demand like the
electricity grid.
– Cloud computing - A style of computing in which
dynamically scalable and often virtualized resources are
provided as a service over the Internet.
Cloud Definitions
• Definition from Whatis.com
– The name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams.
– Cloud computing is a general term for anything that
involves delivering hosted services over the Internet.
Cloud Definitions
• Definition from Berkeley
– Cloud Computing refers to both the applications
delivered as services over the Internet and the
hardware and systems software in the datacenters
that provide those services.
15-11-2021
3
Cloud Definitions
• Definition from Buyya
 A Cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers.
 They are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers.
Cloud Applications
•Scientific / Technical Applications
•Business Applications
•Consumer / Social Applications
Cloud Computing
• Provides the facility to provision virtual hardware, runtime
environment and services to people – on demand service
• These facilities are used by end users for as long as they are needed
• Long term Vision of cloud computing
o IT services are traded as utilities on an open market
without technological and legal barriers
Evolution of Cloud Computing
ROOTS OF CLOUD COMPUTING
• Hardware (virtualization, multi-core chips)
• Internet technologies (Web services, service-oriented
architectures, Web 2.0),
• Distributed computing (clusters, grids)
• Systems management (autonomic computing, data center automation)
From Mainframes to Clouds
• A switch in the IT world: from in-house generated computing power to utility-supplied computing resources delivered over the Internet as Web services
• Computing delivered as a utility can be defined as "on demand delivery of infrastructure, applications, and business processes in a security-rich, shared, scalable, and standards-based computer environment over the Internet for a fee"
In the 1970s,
• Common data processing tasks (e.g., payroll automation) were operated on time-shared mainframes as utilities
• Mainframes had to operate at very high utilization rates, since they were very expensive
Disadvantages
• With the advent of fast and inexpensive microprocessors
• isolation of workload into dedicated servers
• Incompatibilities between software stacks and operating
systems
• the unavailability of efficient computer networks
SOA, Web Services, Web 2.0 and Mashups
• Web services
• glue together applications running on different messaging
product platforms
• enabling information from one application to be made
available to others
• enabling internal applications to be made available over the
Internet.
SOA, Web Services, Web 2.0 and Mashups
• Describe, compose, and orchestrate services, package and
transport messages between services, publish and discover
services, represent quality of service (QoS) parameters, and
ensure security in service access.
• Created on HTTP and XML - providing a common mechanism
for delivering services, making them ideal for implementing a
service-oriented architecture (SOA).
• Purpose of a SOA
• to address requirements of loosely coupled, standards-based,
and protocol-independent distributed computing.
• Software resources are packaged as "services"
• They are well-defined, self-contained modules that provide standard business functionality
• They are independent of the state or context of other services.
• Service Mashups - information and services may be programmatically aggregated, acting as building blocks of complex compositions
Distributed Computing
• Use of distributed systems to solve computational problems
• The processors communicate with one another through communication lines such as high-speed buses or telephone lines
• Each processor has its own local memory
• Examples: ATM, Internet, Intranet / Workgroups
Properties of Distributed Computing
• Fault Tolerance
• When one or some nodes fail, the whole system still works, apart from some loss of performance
• Need to check the status of each node
• Resource Sharing
• Each user can share the computing power and storage resources in the system with other users
• Load Sharing
• Dispatching several tasks to each node can help share the load across the whole system
• Easy to expand
• Adding nodes should require little time and effort
• Performance
• Parallel computing can be considered a subset of distributed computing
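The load-sharing property above can be sketched as a round-robin dispatcher (a hypothetical minimal policy; real schedulers also weigh node capacity and current load):

```python
import itertools

def make_round_robin_dispatcher(nodes):
    """Return a dispatch function that assigns each incoming task
    to the next node in turn, spreading load evenly."""
    cycle = itertools.cycle(nodes)

    def dispatch(task):
        node = next(cycle)
        return (node, task)

    return dispatch

dispatch = make_round_robin_dispatcher(["node-1", "node-2", "node-3"])
assignments = [dispatch(f"task-{i}") for i in range(6)]
print(assignments[:3])  # each node receives every third task
```

With six tasks over three nodes, each node ends up with exactly two tasks, illustrating the even spread.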
Why Distributed Computing ?
• Nature of application
• Performance
• Computing intensive
• Task consume lot of time on computing
• Ex: computation of pi value using Monte Carlo simulation
• Data intensive
• Task deals with a large amount or large size of files
• Ex: Facebook, Experimental data processing
• Robustness
• No SPOF ( Single Point Of Failure)
• Other nodes can execute the same task executed on failed node
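The Monte Carlo estimate of pi mentioned above is a classic computing-intensive task: samples are independent, so the work splits cleanly across nodes. A minimal single-node sketch:

```python
import random

def estimate_pi(samples, seed=0):
    """Estimate pi by drawing random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

# Samples are independent, so N workers can each run estimate_pi
# with a different seed and average their results.
print(estimate_pi(100_000))
```

Doubling the number of workers roughly halves the wall-clock time, which is why this task suits distributed computing so well.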
Grid Computing
• Grid
• Users (client applications) gain access to computing resources (processors, storage, data, applications) as needed, with little knowledge of where those resources are located or what the underlying technologies, hardware, and operating systems are
• "The Grid" links computing resources (PCs, workstations, servers, storage elements) and provides the mechanism needed to access them
• Grid Computing
• is a computing infrastructure that provides dependable, consistent, pervasive and inexpensive access to computational capabilities
• Grid Computing
• Share more than information
• Data, computing power, applications in dynamic
environment, multi-institutional, virtual organizations
• Effective use of resources at many institutes. People from
many institutions working to solve a common problem (
virtual organization)
• Join local communities
• Interactions with the underlying layers must be transparent and seamless to the users
• Open Grid Services Architecture (OGSA)
• defining a set of core capabilities and behaviors that address
key concerns in grid systems
• Globus Toolkit is a middleware that implements several standard
Grid services
• Grid brokers, which facilitate user interaction with multiple
middleware and implement policies to meet QoS needs.
• Types of Grid
• Computational Grid
• provide secure access to large pool of shared processing
power suitable for high throughput applications
• Data Grid
• provide an infrastructure to support data storage, data discovery, data handling, data publication, and data manipulation of large volumes of data stored in heterogeneous databases and file systems
• Disadvantages
• Ensuring QoS in grids is difficult
• availability of resources with diverse software configurations
• Eg: Disparate operating systems, libraries, compilers,
runtime environments but user applications would often run
only on specially customized environments
Cluster Computing
• Cluster
• is a type of parallel or distributed computer system that consists of a collection of inter-connected stand-alone computers working together as a single integrated computing resource
• Key components
• Multiple standalone computers, operating systems, high-performance interconnects, middleware, parallel computing environments and applications
• Clusters are usually deployed to improve speed
• Types of Clusters
• High Availability or Failover clusters
• Load Balancing Clusters
• Parallel / Distributed Processing Clusters
• Benefits of clustering
• System availability
• Offer inherent high system availability due to redundancy
of hardware, OS and applications
• Hardware fault tolerance
• Redundancy for most system components (hardware and
software)
• OS and applications reliability
• Run multiple copies of OS, applications
• Scalability
• Adding servers to the cluster
• High Performance
• Running cluster-enabled programs
Utility Computing
• Utility
• Eg: electrical power – seeks to meet fluctuating needs and charges for the resources based on usage rather than on a flat basis
• Utility Computing
• Service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate
• Advantage
• Low or no initial cost to acquire compute resources – computational resources are essentially rented
• Utility Computing?
• Pay-for-use Pricing Model
• Data Center Virtualization and provisioning
• Solves Resource utilization problem
• Outsourcing
• Web Services Delivery
• Automation
Hardware Virtualization
• Hardware virtualization allows running multiple operating systems and
software stacks on a single physical platform
• Software layer - Virtual machine monitor (VMM) - Hypervisor
- mediates access to the physical hardware presenting to each
guest operating system a virtual machine (VM), which is a set of
virtual platform interfaces
Technologies that increased adoption of virtualization:
• Multi-core chips,
• Paravirtualization,
• Hardware-assisted virtualization, and
• Live migration of VMs
Benefits
• Improvements on sharing and utilization
• Better manageability
• Higher reliability.
Capabilities regarding management of workload in a virtualized
system
• Isolation
• Consolidation
• Migration
• Workload Isolation
• Execution of one VM should not affect the performance of
another VM
• Consolidation
• Consolidation of several individual and heterogeneous
workloads onto a single physical platform leads to better
system utilization.
• Workload Migration
• It is done by encapsulating a guest OS state within a VM
and allowing it to be suspended, fully serialized,
migrated to a different platform, and resumed
immediately or preserved to be restored at a later date
Virtual Appliances
• An application combined with the environment needed to run it
• Environment - operating system, libraries, compilers, databases, application containers, and so forth.
• It eases software customization, configuration, and patching and improves portability.
• Example - AMI (Amazon Machine Image) format for the Amazon EC2 public cloud
Open Virtualization Format
• Consists of a file or Set of files –
• Describing the VM hardware characteristics (e.g.,
memory, network cards, and disks)
• Operating system details, startup, and shutdown
actions
• Virtual disks themselves
• Other metadata containing product and licensing
information.
Autonomic Computing
• Systems should manage themselves, with high-level guidance
from humans
• Autonomic (self-managing) systems rely on
• Monitoring probes and gauges (sensors),
• On an adaptation engine (autonomic manager) for computing
optimizations based on monitoring data, and
• On effectors to carry out changes on the system.
• 4 properties of autonomic systems (by IBM):
• self-configuration,
• self-optimization,
• self-healing, and
• self-protection.
• IBM - Reference model for autonomic control loops of
autonomic managers
MAPE-K (Monitor Analyze Plan Execute—Knowledge)
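The MAPE-K loop above can be sketched as four pluggable stages sharing a knowledge base. The stage functions and the scaling policy below are invented purely for illustration:

```python
def mape_k_step(monitor, analyze, plan, execute, knowledge):
    """One pass of the MAPE-K loop: Monitor -> Analyze -> Plan -> Execute,
    with all four stages sharing a common Knowledge base."""
    symptoms = analyze(monitor(), knowledge)
    for action in plan(symptoms, knowledge):
        execute(action, knowledge)
    return knowledge

# Toy self-optimization policy: add capacity when monitored load is high.
knowledge = {"capacity": 2, "high_load_threshold": 0.8}
monitor = lambda: {"load": 0.9}
analyze = lambda metrics, k: ["overload"] if metrics["load"] > k["high_load_threshold"] else []
plan = lambda symptoms, k: [("scale_up", 1)] if "overload" in symptoms else []

def execute(action, k):
    verb, amount = action
    if verb == "scale_up":
        k["capacity"] += amount

mape_k_step(monitor, analyze, plan, execute, knowledge)
print(knowledge["capacity"])  # 3: one unit of capacity was added
```

Running the loop periodically gives the system its self-optimizing behavior: high-level policy lives in the knowledge base, while humans only tune thresholds.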
• Autonomic computing inspires software technologies for data center automation
• Its Tasks are
• Management of service levels of running applications
• Management of data centre capacity
• Proactive disaster recovery and
• Automation of VM provisioning
Desired Features of Cloud
To satisfy the expectations of consumers, the cloud must provide:
• Self-Service
• Per-Usage Metering and Billing
• Elasticity
• Customization
Desired Features of Cloud
Self-Service
• On-demand instant access to resources
• Must allow self-service access, so customers can request, customize, pay for and use services without human intervention
Desired Features of Cloud
Per-Usage Metering and Billing
• Services must be priced on a short-term basis
• Allow users to release resources as soon as they are not needed
• Must offer efficient trading services like pricing, accounting and billing
• Metering should be done accordingly for different services
• Usage promptly reported
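Per-usage metering and billing can be sketched as a meter that records usage and prices it at per-unit rates rather than a flat fee (the services and rates below are hypothetical):

```python
from collections import defaultdict

class UsageMeter:
    """Meters per-service usage and bills at short-term per-unit rates
    instead of a flat fee (rates here are made up for illustration)."""

    def __init__(self, rates):
        self.rates = rates                 # e.g. USD per unit of each service
        self.usage = defaultdict(float)

    def record(self, service, amount):
        self.usage[service] += amount      # usage is promptly reported

    def bill(self):
        return sum(self.usage[s] * self.rates[s] for s in self.usage)

meter = UsageMeter({"vm_hours": 0.10, "gb_stored": 0.02})
meter.record("vm_hours", 24)   # one VM running for a day
meter.record("gb_stored", 50)  # 50 GB kept in storage
print(meter.bill())            # about 3.40 (24*0.10 + 50*0.02)
```

Because the bill follows recorded usage, releasing resources immediately stops the charges, which is exactly the incentive the bullets above describe.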
Desired Features of Cloud
Elasticity
• Infinite computing resources available on demand.
• Rapidly provide resources in any quantity and at any time.
• Additional resources can be provided when application load
increases
• Release when load decreases.
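The elasticity rule above (grow when load increases, shrink when it decreases) can be sketched as a simple threshold policy. The 50% target utilization is an arbitrary illustrative choice:

```python
import math

def scale_decision(current_instances, load_per_instance,
                   target=0.5, min_instances=1):
    """Threshold-based elasticity rule: size the instance pool so that
    average utilization returns to the target (a simplistic policy)."""
    total_load = current_instances * load_per_instance
    return max(min_instances, math.ceil(total_load / target))

print(scale_decision(4, 0.9))  # load rose: 4*0.9/0.5 = 7.2, so grow to 8
print(scale_decision(4, 0.2))  # load fell: 4*0.2/0.5 = 1.6, so shrink to 2
```

Real autoscalers add cooldown periods and hysteresis so the pool does not oscillate, but the core decision is this proportional resize.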
Desired Features of Cloud
Customization
• Resources rented from cloud must be customizable.
• In IaaS – allow users to deploy specialised virtual appliances
and give privileged access to servers.
CHALLENGES AND RISKS OF CLOUD COMPUTING
Despite the initial success and popularity of the cloud computing paradigm and the extensive
availability of providers and tools, a significant number of challenges and risks are inherent to
this new model of computing.
Issues faced in cloud computing are
 Security, Privacy and Trust
 Data Lock-In and Standardization
 Availability, Fault-Tolerance, and Disaster Recovery
 Resource Management and Energy Efficiency
Security, Privacy and Trust
 Current cloud offerings are essentially public, exposing the system to more attacks. For
this reason there are potentially additional challenges to make cloud computing
environments as secure as in-house IT systems.
 Security and privacy affect the entire cloud computing stack, since there is a massive use
of third-party services and infrastructures that are used to host important data or to
perform critical operations.
 In this scenario, the trust toward providers is fundamental to ensure the desired level of
privacy for applications hosted in the cloud.
 Legal and regulatory issues also need attention. When data are moved into the Cloud,
providers may choose to locate them anywhere on the planet.
 The physical location of data centers determines the set of laws that can be applied to the
management of data.
 For example, specific cryptography techniques could not be used because they are not
allowed in some countries.
 Similarly, country laws can impose that sensitive data, such as patient health records, are to be stored within national borders.
Data Lock-In and Standardization
 A major concern of cloud computing users is about having their data locked-in by a
certain provider.
 Users may want to move data and applications out from a provider that does not meet
their requirements.
 However, in their current form, cloud computing infrastructures and platforms do not
employ standard methods of storing user data and applications.
 The answer to this concern is standardization. In this direction, there are efforts to create
open standards for cloud computing.
 The Cloud Computing Interoperability Forum (CCIF) was formed by organizations such as Intel, Sun, and Cisco in order to "enable a global cloud computing ecosystem whereby organizations are able to seamlessly work together for the purposes of wider industry adoption of cloud computing technology".
 The development of the Unified Cloud Interface (UCI) by CCIF aims at creating a
standard programmatic point of access to an entire cloud infrastructure.
 In the hardware virtualization sphere, the Open Virtualization Format (OVF) aims at facilitating packing and distribution of software to be run on VMs so that virtual appliances can be made portable—that is, seamlessly run on hypervisors of different vendors.
Availability, Fault-Tolerance and Disaster Recovery
 Availability of the service, its overall performance, and what measures are to be taken
when something goes wrong in the system or its components is very essential in cloud.
 Users seek for a warranty before they can comfortably move their business to the cloud.
 SLAs, which include QoS requirements, must ideally be set up between customers and cloud computing providers to act as a warranty.
 An SLA specifies the details of the service to be provided, including availability and
performance guarantees.
 Additionally, metrics must be agreed upon by all parties, and penalties for violating the
expectations must also be approved.
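How an availability guarantee with agreed penalties might be checked can be sketched as follows. The 99.9% guarantee and the penalty schedule are invented for illustration:

```python
def availability(uptime_minutes, total_minutes):
    """Fraction of the period during which the service was up."""
    return uptime_minutes / total_minutes

def sla_penalty(measured, guaranteed=0.999, penalty_per_tenth_pct=100.0):
    """Charge a penalty proportional to how far measured availability
    falls below the guarantee (all figures are illustrative)."""
    if measured >= guaranteed:
        return 0.0
    shortfall = guaranteed - measured
    return shortfall / 0.001 * penalty_per_tenth_pct

month_minutes = 30 * 24 * 60                                 # 43,200 minutes
measured = availability(month_minutes - 90, month_minutes)   # 90 min down
print(round(measured, 5))          # 0.99792, below the 99.9% guarantee
print(round(sla_penalty(measured), 2))
```

The point of the sketch is the bullet above: the metric (availability), the target, and the penalty formula must all be agreed by both parties before the numbers mean anything.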
Resource Management and Energy-Efficiency
 An important challenge faced by providers of cloud computing services is the efficient management of virtualized resource pools.
 Physical resources such as CPU cores, disk space, and network bandwidth must be sliced
and shared among virtual machines running potentially heterogeneous workloads.
 The multi-dimensional nature of virtual machines complicates the activity of finding a
good mapping of VMs onto available physical hosts while maximizing user utility.
 Dimensions to be considered include: number of CPUs, amount of memory, size of
virtual disks, and network bandwidth.
 Dynamic VM mapping policies may leverage the ability to suspend, migrate, and resume
VMs as an easy way of preempting low-priority allocations in favor of higher-priority
ones.
 Migration of VMs also brings additional challenges such as detecting when to initiate a
migration, which VM to migrate, and where to migrate.
 In addition, policies may take advantage of live migration of virtual machines to relocate
data center load without significantly disrupting running services.
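Finding a mapping of multi-dimensional VMs onto hosts can be illustrated with a first-fit heuristic over two of the dimensions listed above (CPU and memory; the capacities and demands are made up):

```python
def first_fit_placement(vms, hosts):
    """Greedily map multi-dimensional VMs onto the first host with enough
    remaining capacity (a simple baseline; real placement policies also
    weigh user utility, priorities, and migration cost)."""
    free = {host: dict(capacity) for host, capacity in hosts.items()}
    placement = {}
    for vm, demand in vms.items():
        for host, capacity in free.items():
            if all(capacity[dim] >= demand[dim] for dim in demand):
                for dim in demand:
                    capacity[dim] -= demand[dim]
                placement[vm] = host
                break
        else:
            raise RuntimeError(f"no host can fit {vm}")
    return placement

hosts = {"host-a": {"cpu": 8, "mem_gb": 32}, "host-b": {"cpu": 4, "mem_gb": 16}}
vms = {"vm-1": {"cpu": 6, "mem_gb": 24},
       "vm-2": {"cpu": 4, "mem_gb": 8},
       "vm-3": {"cpu": 2, "mem_gb": 8}}
print(first_fit_placement(vms, hosts))
# vm-1 fills most of host-a, vm-2 only fits on host-b, vm-3 returns to host-a
```

Even this toy version shows why the problem is hard: a VM can be rejected on one dimension (CPU) while the host has plenty of the other (memory), which is the multi-dimensional bin-packing difficulty the text describes.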
 Data centers consume large amounts of electricity. According to data published by HP, 100 server racks can consume 1.3 MW of power and another 1.3 MW are required by the cooling system, thus costing USD 2.6 million per year.
 Besides the monetary cost, data centers significantly impact the environment in terms of
CO2 emissions from the cooling systems.
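The quoted power figures can be sanity-checked with a little arithmetic. Note that the electricity tariff below is back-calculated so the numbers match the quoted USD 2.6 million; it is not part of the original source:

```python
# Sanity check of the HP figure quoted above: 1.3 MW for the servers plus
# 1.3 MW for cooling, priced at an assumed tariff of ~USD 0.114 per kWh.
server_mw, cooling_mw = 1.3, 1.3
hours_per_year = 365 * 24                       # 8,760 hours
kwh_per_year = (server_mw + cooling_mw) * 1000 * hours_per_year
cost_usd = kwh_per_year * 0.114                 # assumed tariff
print(round(kwh_per_year))       # about 22.8 million kWh per year
print(round(cost_usd / 1e6, 2))  # about 2.6 (million USD per year)
```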
BENEFITS OF CLOUD COMPUTING
No upfront commitment
 IT assets, namely software and infrastructure, are turned into utility costs, which are
paid for as long as they are used, not paid for upfront.
 Capital costs are costs associated with assets that need to be paid in advance to start a
business activity.
 Before cloud computing, IT infrastructure and software generated capital costs,
since they were paid upfront so that business start-ups could afford a computing
infrastructure, enabling the business activities of the organization.
Cost efficiency
 The most evident benefit from the use of cloud computing systems and
technologies is the increased economical return due to the reduced maintenance
costs and operational costs related to IT software and infrastructure.
 The biggest reason behind shifting to cloud computing is that it costs considerably less than an on-premise technology.
 Now the companies need not store the data in disks anymore as the Cloud offers
enormous storage space, saving money and resources of the companies.
 It helps you to save substantial capital cost as it does not need any physical hardware
investments.
 Also, you do not need trained personnel to maintain the hardware. The buying and
managing of equipment is done by the cloud service provider.
On Demand
 Services can be accessed on demand and only when required.
 Cloud users can access the required services only when they need them and pay only for the usage.
 Any subscriber of a cloud service can access the services from anywhere and at any time.
Disaster Recovery:
 It is highly recommended that businesses have an emergency backup plan ready in the
case of an emergency. Cloud storage can be used as a back‐up plan by businesses by
providing a second copy of important files.
 These files are stored at a remote location and can be accessed through an internet
connection.
Excellent accessibility
 Storing the information in cloud allows you to access it anywhere and anytime
regardless of the machine making it highly accessible and flexible technology of
present times.
 Information and services stored in the cloud are exposed to users by Web-based
interfaces that make them accessible from portable devices as well as desktops at
home.
Scalability
 If you are anticipating a huge upswing in computing need (or even if you are surprised by a sudden demand), cloud computing can help you manage. Rather than having to buy, install, and configure new equipment, you can buy additional CPU cycles or storage from a third party.
 For example, organizations can add more servers to process workload spikes and
dismiss them when they are no longer needed.
Flexibility
 Increased agility in defining and structuring software systems is another significant benefit of cloud computing.
 Since organizations rent IT services, they can more dynamically and flexibly
compose their software systems, without being constrained by capital costs for IT
assets.
 There is a reduced need for capacity planning, since cloud computing allows
organizations to react to unplanned surges in demand quite rapidly.
DISADVANTAGES OF CLOUD COMPUTING
Downtime
 With massive overload on the servers from various clients, the service provider might come up against technical outages. Due to this unavoidable situation your business could be temporarily disrupted.
 And in case your internet connection is down, you will not be able to access the data, software or applications on the cloud. So basically you are depending on the quality of the internet to access the tools and software, as they are not installed in-house.
Security
 There is room for imminent risk for your data even though cloud service providers
abide by strict confidentiality terms, are industry certified and implement the best
security standards.
 When you seek to use cloud-based technology you are extending your access controls to a third-party agent to import critical confidential data from your company onto the cloud.
 With high levels of security and confidentiality involved, the cloud service
providers are often faced with security challenges.
 The presence of data on the cloud opens up a greater risk of data theft as hackers
could find loopholes in the framework. Basically your data on the cloud is at a higher
risk, than if it was managed in-house.
 Hackers could find ways to gain access to data, scan, exploit a loophole and look
for vulnerabilities on the cloud server to gain access to the data.
 For instance, when you are dealing with a multi-tenant cloud server, the chances of a
hacker breaking into your data are quite high, as the server has data stored by multiple
users.
 But the cloud-based servers take enough precautions to prevent data thefts, and the likelihood of being hacked is quite low.
Vendor Lock-In
 Companies might find it a bit of a hassle to change the vendors.
 Although the cloud service providers assure that it is a breeze to use the cloud and
integrate your business needs with them, disengaging and moving to the next vendor
is not a forte that’s completely evolved.
 As the applications that work fine with one platform may not be compatible with
another.
 The transition might pose a risk and the change could be inflexible due to
synchronization and support issues.
Limited Control
 Organizations could have limited access control on the data, tools and apps as the cloud is controlled by the service provider.
 It hands over minimal control to the customer, as the access is only limited to the applications, tools and data that are loaded on the server, with no access to the infrastructure itself.
 The customer may not have access to the key administrative services.
Legal Issues
 Legal issues may also arise. These are specifically tied to the ubiquitous nature of
cloud computing, which spreads computing infrastructure across diverse geographical
locations.
 Different legislation about privacy in different countries may potentially create disputes as to the rights that third parties (including government agencies) have to your data.
 U.S. legislation is known to give extreme powers to government agencies to acquire
confidential data when there is the suspicion of operations leading to a threat to
national security.
 European countries are more restrictive and protect the right of privacy.
Basics of Virtualization, Types of
Virtualization, Implementation
Levels of Virtualization
BASICS OF VIRTUALIZATION
• Virtualization is a computer architecture technology by
which multiple virtual machines (VMs) are multiplexed in
the same hardware machine.
• The purpose of a VM is to enhance resource sharing by
many users and improve computer performance in terms
of resource utilization and application flexibility.
• Hardware resources such as CPU, memory, I/O devices or
software resources such as OS, software libraries can be
virtualized
Levels of Virtualization Implementation
• A traditional computer runs with a host OS tailored to its hardware architecture
• After virtualization, different user applications managed by their own OS (guest OS) can run on the same hardware, independent of the host OS
• An additional layer, the virtualization layer, called the hypervisor or virtual machine monitor (VMM), makes this possible
• The main function of this software layer is to virtualize the physical hardware of the host machine into virtual resources to be used by the VMs
Levels of Virtualization Implementation
• Virtualization software creates the abstraction of VMs by
interposing a virtualization layer at various levels of a
computer system.
• Common virtualization layers are:
1. Instruction Set Architecture (ISA) level
2. Hardware level
3. Operating System level
4. Library support level
5. Application level
Virtualization ranging from hardware to applications in five abstraction levels
Instruction Set Architecture Level
• Virtualization is performed by emulating a given ISA by
the ISA of the host machine.
• For example, MIPS binary code can run on an x86-based
host machine with the help of ISA emulation.
• It is possible to run a large amount of legacy binary code
written for various processors on any given new
hardware host machine.
• Instruction set emulation leads to virtual ISAs created on
any hardware machine.
Instruction Set Architecture Level
• Basic emulation method is through code interpretation.
• An interpreter program interprets the source instructions
to target instructions one by one.
• This process is relatively slow.
• For better performance, dynamic binary translation is
desired. This approach translates basic blocks of dynamic
source instructions to target instructions.
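The difference between one-by-one interpretation and dynamic binary translation of basic blocks can be sketched with a toy "ISA" (the three-instruction set here is invented for illustration):

```python
def interpret(program, env):
    """Interpret a tiny source 'ISA' one instruction at a time; every
    run pays the cost of decoding each (op, dst, src) tuple again."""
    for op, dst, src in program:
        if op == "LOAD":            # load an immediate value
            env[dst] = src
        elif op == "ADD":           # dst += src register
            env[dst] += env[src]
        elif op == "MUL":           # dst *= src register
            env[dst] *= env[src]
    return env

def translate(program):
    """'Dynamic binary translation' in miniature: compile the whole basic
    block once into native (here, Python) code, then reuse it."""
    lines = ["def block(env):"]
    for op, dst, src in program:
        if op == "LOAD":
            lines.append(f"    env[{dst!r}] = {src!r}")
        elif op == "ADD":
            lines.append(f"    env[{dst!r}] += env[{src!r}]")
        elif op == "MUL":
            lines.append(f"    env[{dst!r}] *= env[{src!r}]")
    lines.append("    return env")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["block"]

prog = [("LOAD", "r1", 6), ("LOAD", "r2", 7), ("MUL", "r1", "r2")]
print(interpret(prog, {})["r1"])   # 42
print(translate(prog)({})["r1"])   # 42, but the block was decoded only once
```

Both paths compute the same result; the translated block simply moves the decode cost out of the execution loop, which is why binary translation outperforms pure interpretation.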
Instruction Set Architecture Level
• Instruction set emulation requires binary translation and
optimization.
• A virtual instruction set architecture (V-ISA) thus requires
adding a processor-specific software translation layer to
the compiler.
Hardware Abstraction Level
• Hardware-level virtualization is performed right on top of the
bare hardware.
• This approach generates a virtual hardware environment for a
VM.
• The intention is to upgrade the hardware utilization rate by
multiple users concurrently.
• The idea was implemented in the IBM VM/370 in the 1960s.
• More recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest OS applications.
Operating System Level
• Refers to an abstraction layer between traditional OS and
user applications.
• OS-level virtualization creates isolated containers on a single
physical server and the OS instances to utilize the hardware
and software in data centers.
• The containers behave like real servers.
Operating System Level
• OS-level virtualization is commonly used in creating
virtual hosting environments to allocate hardware
resources among a large number of mutually distrusting
users.
Library Support Level
• Most applications use APIs exported by user-level libraries
rather than using lengthy system calls by the OS.
• Virtualization with library interfaces is possible by
controlling the communication link between applications
and the rest of a system through API hooks.
• The software tool WINE has implemented this approach to
support Windows applications on top of UNIX hosts.
• Another example is the vCUDA which allows applications
executing within VMs to leverage GPU hardware
acceleration.
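API hooking of the kind WINE and vCUDA rely on can be illustrated in miniature by interposing on a library function. This monkey-patching sketch is only an analogy for how a library-level virtualization layer intercepts calls, not how WINE is actually implemented:

```python
import functools
import math

def hook(module, name, wrapper):
    """Install an API hook: replace module.name with a function that can
    observe or redirect each call, the way a library-level virtualization
    layer interposes on the link between applications and the system."""
    original = getattr(module, name)

    @functools.wraps(original)
    def hooked(*args, **kwargs):
        return wrapper(original, *args, **kwargs)

    setattr(module, name, hooked)
    return original                  # keep a handle so the hook can be undone

calls = []

def logging_wrapper(original, *args, **kwargs):
    calls.append(args)               # observe the call...
    return original(*args, **kwargs) # ...then forward it to the real API

hook(math, "sqrt", logging_wrapper)
print(math.sqrt(16.0))  # 4.0, but the call passed through the hook
print(calls)            # [(16.0,)]
```

A real library-level layer would redirect the call to a different backend (e.g., vCUDA forwarding GPU calls out of the VM) rather than just logging it.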
User-Application Level
• Virtualizes an application as a VM.
• Application-level virtualization is also known as process-level
virtualization.
• The application seems to be running on a local machine; in fact, it is running on a virtual machine (such as a server) in another location
• The most popular approach is to deploy high-level language (HLL) VMs
• The virtualization layer sits as an application program on top of
the operating system.
• The Microsoft .NET CLR and Java Virtual Machine (JVM) are two
good examples of this class of VM.
Application-level virtualization
• Application-level virtualization is also known as
– application isolation,
– application sandboxing, or
– application streaming.
• The process involves wrapping the application in a layer that is isolated from the host OS and other applications.
• An example is the LANDesk application virtualization platform, which deploys applications as self-contained executable files in an isolated environment, without requiring installation, system modifications, or elevated security privileges.
Relative Merits of Different
Approaches
VMM Design Requirements and Providers
• Hardware-level virtualization inserts a layer between real
hardware and traditional operating systems.
• This layer is commonly called the Virtual Machine Monitor
(VMM) and it manages the hardware resources of a
computing system.
• Each time programs access the hardware, the VMM captures the process.
• One hardware component, such as the CPU, can be
virtualized as several virtual copies.
Three requirements for a VMM
1. VMM should provide an environment identical to the
original machine.
2. Programs run in this environment should show only minor decreases in speed.
3. VMM should be in complete control of the system
resources.
Virtual Machine Monitor
• A VMM should exhibit a function identical to that seen by programs running directly on the original machine.
• Two possible exceptions permitted:
– Differences caused by the availability of system resources:
arises when more than one VM runs on the same machine
– Differences caused by timing dependencies.
• These two differences pertain to performance, while
the function a VMM provides stays the same as that of
a real machine
Virtual Machine Monitor
• No one will prefer a VMM over a physical machine if its
efficiency is too low.
• Traditional emulators and complete software interpreters emulate
each instruction by means of functions or macros
• Provides the most flexible solutions for VMMs.
• However, emulators or simulators are too slow to be used as real
machines.
• To guarantee the efficiency of a VMM, a statistically dominant
subset of the virtual processor’s instructions needs to be
executed directly by the real processor, with no software
intervention by the VMM
• Complete control of these resources by a
VMM includes the following aspects:
(1) The VMM is responsible for allocating
hardware resources for programs;
(2) it is not possible for a program to access any
resource not explicitly allocated to it; and
(3) it is possible under certain circumstances for
a VMM to regain control of resources already
allocated.
Comparison of Four VMM and
Hypervisor Software Packages
LOAD BALANCING
 With the explosive growth of the Internet and its increasingly important role in our lives,
the traffic on the Internet is growing dramatically, at over 100% annually.
 The workload on servers is increasing rapidly, so servers may easily be overloaded,
especially servers for a popular web site.
There are two basic solutions to the problem of overloaded servers,
One is a single-server solution,
 i.e., upgrade the server to a higher performance server. However, the new server may also
soon be overloaded, requiring another upgrade.
 Further, the upgrading process is complex and the cost is high.
The second solution is a multiple-server solution,
 i.e., build a scalable network service system on a cluster of servers. When load increases,
you can simply add one or more new servers to the cluster, and commodity servers have
the highest performance/cost ratio.
 Therefore, it is more scalable and more cost-effective to build a server cluster system for
network services.
Cloud Load Balancing
 Cloud load balancing is the process of distributing workloads across multiple computing
resources.
 Cloud load balancing is defined as the method of splitting workloads and computing
resources in a cloud computing environment.
 It enables enterprises to manage workload or application demands by distributing
resources among numerous computers, networks or servers.
Load Balancer
 A load balancer is a device that distributes network or application traffic across a cluster
of servers.
 Load balancing improves responsiveness and increases availability of applications.
 A load balancer sits between the client and the server farm accepting incoming network
and application traffic and distributing the traffic across multiple backend servers using
various methods.
Load Balancer
 Cloud-based server farms can achieve high scalability and availability using server load
balancing. This technique makes the server farm appear to clients as a single server.
 Load balancing distributes service requests from clients across a bank of servers and
makes those servers appear as if they were a single powerful server responding to client
requests.
 Load balancing solutions can be divided into software-based load balancers and
hardware-based load balancers.
 Hardware-based load balancers are specialized boxes that include Application Specific
Integrated Circuits (ASICs) customized for a specific use.
 Software-based load balancers run on standard operating systems and standard
hardware components such as desktop PCs.
Load balancing Algorithms
 Round Robin
 Weighted Round Robin
 Least Connection
 Source IP Hash
 Global Server Load Balancing
Round Robin:
 This load balancing technique involves a pool of servers that have been identically
configured to deliver the same service as each other.
 Each will have a unique IP address but will be linked to the same domain name, and
incoming requests are distributed across the servers in rotation.
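A minimal Python sketch of round-robin selection (the server addresses are hypothetical):

```python
from itertools import cycle

# Identically configured servers behind one domain name (hypothetical IPs).
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rotation = cycle(servers)

def next_server():
    # Each incoming request goes to the next server in the rotation.
    return next(rotation)
```

After `10.0.0.3`, the rotation wraps back to `10.0.0.1`, so every server receives the same share of requests regardless of load.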
Weighted Round Robin
 Weighted Round Robin builds on the simple Round Robin load balancing method.
 In the weighted version, each server in the pool is given a static numerical weighting.
 Servers with higher ratings get more requests sent to them.
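A minimal sketch of weighted round robin: each server's static weight determines how many slots it occupies in the rotation (server names and weights are hypothetical):

```python
def weighted_rotation(weights):
    # Expand each server into the rotation in proportion to its weight,
    # e.g. {"big": 2, "small": 1} sends "big" two of every three requests.
    slots = [server for server, w in weights.items() for _ in range(w)]
    i = 0
    while True:
        yield slots[i % len(slots)]
        i += 1
```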
Least Connection
 Neither Round Robin nor Weighted Round Robin takes the current server load into
consideration when distributing requests.
 The Least Connection method does take the current server load into consideration.
 The current request goes to the server that is servicing the least number of active sessions
at the current time.
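A minimal sketch of the Least Connection choice (server names and session counts are hypothetical):

```python
def least_connection(active_sessions):
    # active_sessions maps each server to its current number of active
    # sessions; the next request goes to the least-loaded server.
    return min(active_sessions, key=active_sessions.get)
```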
Source IP Hash
 This algorithm combines source and destination IP addresses of the client and server to
generate a unique hash key.
 The key is used to allocate the client to a particular server. As the key can be regenerated
if the session is broken, the client request is directed to the same server it was using
previously.
 This is useful if it’s important that a client should connect to a session that is still active
after a disconnection.
 For example, to retain items in a shopping cart between sessions.
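A minimal sketch of source IP hashing (addresses and server names are hypothetical):

```python
import hashlib

def pick_server(src_ip, dst_ip, servers):
    # Hash the source/destination address pair into a stable key; the same
    # client pair therefore always maps to the same backend server, which
    # keeps session state (e.g. a shopping cart) on one machine.
    key = hashlib.sha256(f"{src_ip}:{dst_ip}".encode()).hexdigest()
    return servers[int(key, 16) % len(servers)]
```

Because the key is derived only from the addresses, it can be regenerated after a disconnection, sending the client back to its previous server.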
Global Server Load Balancing (GSLB)
 GSLB load balances DNS requests, not traffic.
 It uses algorithms such as round robin, weighted round robin, fixed weighting, real server
load, location-based, proximity and all available. It offers High Availability through
multiple data centers.
 If a primary site is down, traffic is diverted to a disaster recovery site. Clients connect to
their fastest performing, geographically closest data center.
 Application health checking ensures unavailable services or data centers are not visible to
clients.
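The failover behaviour can be sketched as a DNS-style resolver that skips unhealthy sites and answers with the fastest remaining one (site names, health flags and latencies are hypothetical):

```python
def resolve(datacenters):
    # datacenters: list of (name, healthy, latency_ms) tuples produced by
    # application health checks. Unhealthy sites are never returned; among
    # the healthy ones, the client is sent to the fastest data center.
    healthy = [dc for dc in datacenters if dc[1]]
    if not healthy:
        raise RuntimeError("no data center available")
    return min(healthy, key=lambda dc: dc[2])[0]
```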
Virtualization Structures, Tools
and Mechanisms
Virtualization Structures, Tools & Mechanisms
• Before virtualization, the operating system manages the
hardware.
• After virtualization, a virtualization layer is inserted
between the hardware and the OS.
• The virtualization layer is responsible for converting
portions of the real hardware into virtual hardware.
Virtualization structures Tools & Mechanisms
• Depending on the position of the virtualization layer,
there are several classes of VM architectures, namely
– Hypervisor architecture
– Paravirtualization
– Host-based virtualization
Hypervisor and Xen Architecture
• Hypervisor supports hardware-level virtualization on
bare metal devices such as CPU, memory, disk and
network interfaces
• Hypervisor sits directly between physical hardware and
its OS
• Depending on the functionality, a hypervisor can assume
a micro-kernel architecture or a monolithic hypervisor
architecture
Hypervisor and Xen Architecture
• A micro-kernel hypervisor includes only the basic and
unchanging functions (such as physical memory
management and processor scheduling)
• Device drivers and other changeable components are
outside the hypervisor
• A monolithic hypervisor implements all the
aforementioned functions, including those of the device
drivers
• The size of the hypervisor code of a micro-kernel
hypervisor is smaller than that of a monolithic hypervisor
Xen Architecture
• Xen is an open source hypervisor program developed at
Cambridge University
• Xen is a microkernel hypervisor, which separates the policy
from the mechanism
• It implements all the mechanisms, leaving the policy to be
handled by Domain 0
• Xen does not include any device drivers natively
• It just provides a mechanism by which a guest OS can have
direct access to the physical devices
Xen Architecture
• Like other virtualization systems, many guest OSes can run
on top of the hypervisor
• Not all guest OSes are created equal; one in particular
controls the others
• The guest OS with control ability (the privileged guest OS) is
called Domain 0, and the others are called Domain U
• It is first loaded when Xen boots
• Domain 0 is designed to access hardware directly and
manage devices.
Xen Architecture
• This privileged VM, Domain 0, manages the other VMs
implemented on the same host
• If Domain 0 is compromised, the hacker can control the
entire system
Binary Translation with Full Virtualization
• Depending on implementation technologies, hardware
virtualization can be classified into two categories:
full virtualization
host-based virtualization
Full virtualization:
– Does not need to modify the host OS
– Relies on binary translation to trap and to virtualize the
execution of certain sensitive, nonvirtualizable
instructions
Full Virtualization
• With full virtualization, noncritical instructions run on the
hardware directly
• Critical instructions are discovered and replaced with traps
into the VMM to be emulated by software.
• Noncritical instructions do not control hardware or threaten
the security of the system, but critical instructions do
• Running noncritical instructions on hardware not only can
promote efficiency, but also can ensure system security
Binary Translation of Guest OS Requests Using
a VMM
• VMware puts the VMM at Ring 0 and the guest OS at
Ring 1
• VMM scans the instruction stream and identifies the
privileged, control and behaviour-sensitive instructions
• Once identified, these instructions are trapped into
the VMM, which emulates their behaviour
Binary Translation of Guest OS Requests Using
a VMM
• The method used is binary translation
• Full virtualization combines binary translation and
direct execution
• The guest OS is completely decoupled from the
underlying hardware
Binary Translation of Guest OS Requests Using
a VMM
• Performance of full virtualization may not be ideal,
because it involves binary translation
• A code cache can store translated hot instructions to
improve performance, but it increases the cost of
memory usage
Host Based Virtualization
• An alternative is to install a virtualization layer on top of
the host OS
• Host OS is still responsible for managing the hardware
• Guest OSes are installed and run on top of the
virtualization layer
• Dedicated applications may run on the VMs
• Some other applications can also run with the host OS
directly
Host based virtualization
• First, the user can install this VM architecture without
modifying the host OS
• Second, the host-based approach appeals to many host
machine configurations
• However, performance is low when compared with the
hypervisor/VMM architecture
• An application requesting hardware access involves four
layers of mapping
Para-virtualization
• The guest OS recognizes the presence of the VMM
• The guest OS communicates directly with the hypervisor
• Para-virtualization needs to modify the guest operating
systems
• Para-virtualized VMs provide special APIs requiring OS
modifications
• The API exchanges hypercalls with the hypervisor
• A compiler assists by replacing nonvirtualizable OS
instructions with hypercalls
Para-virtualization
• x86 offers four instruction execution rings: Ring 0, 1, 2, 3
• The lower the ring number, the higher the privilege of the
instructions that run there
• The OS, responsible for managing the hardware and
executing privileged instructions, runs at Ring 0
• User-level applications run at Ring 3
Para-Virtualization Architecture
Problems with Para-virtualization
• First, compatibility and portability may suffer, because it
must support the unmodified OS as well.
• Second, the cost of maintaining paravirtualized OSes is
high, because they may require deep OS kernel
modifications
• Finally, the performance advantage of para-virtualization
varies greatly due to workload variations
• Main problem in full virtualization is its low performance
in binary translation
Problems with Para-virtualization
• To speed up binary translation is difficult
• Many virtualization products employ the para-
virtualization architecture
• e.g., Xen, KVM, VMware ESX
KVM (Kernel based VM)
• Linux para-virtualization system
• Memory management and scheduling activities are
carried out by the existing Linux kernel, KVM does the rest
• KVM is a hardware-assisted para-virtualization tool, which
improves performance and supports unmodified guest
OSes such as Windows, Linux, Solaris, and other UNIX
variants
Para-Virtualization with Compiler Support
• Full virtualization intercepts and emulates privileged and
sensitive instructions at runtime
• Para-virtualization handles these instructions at compile
time
• The guest OS kernel is modified to replace the privileged
and sensitive instructions with hypercalls to the hypervisor
or VMM
Para-Virtualization with Compiler Support
• The guest OS running in a guest domain may run at Ring 1
instead of at Ring 0
• This implies that the guest OS may not be able to execute
some privileged and sensitive instructions
• Privileged instructions are implemented by hypercalls to
the hypervisor
• After replacing the instructions with hypercalls, the
modified guest OS emulates the behavior of the original
guest OS
VMware ESX Server for Para-Virtualization
SERVER VIRTUALIZATION
 Server virtualization is the process of using software on a physical server to create
multiple partitions or "virtual instances" each capable of running independently.
 Whereas on a single dedicated server the entire machine has only one instance of an
operating system, on a virtual server the same machine can be used to run multiple
server instances each with independent operating system configurations.
 Server virtualization is a virtualization technique that involves partitioning a physical
server into a number of small, virtual servers with the help of virtualization software.
 In server virtualization, each virtual server runs its own operating system instance, so
one physical server hosts multiple OS instances at the same time.
The primary uses of server virtualization are:
 To centralize server administration
 To improve server availability
 To help in disaster recovery
 To ease development & testing
 To make efficient use of server resources.
Types of Server Virtualization and Approaches to Server Virtualization
There are 3 types of server virtualization in cloud computing:
Hypervisor
 A hypervisor is a layer between the operating system and the hardware; it is what
makes the successful running of multiple operating systems possible.
 It also performs tasks such as handling queues and dispatching and returning
hardware requests. The guest operating systems work on top of the hypervisor, and
the hypervisor is used to administer and manage the virtual machines.
Para-Virtualization
 In the para-virtualization model, the simulation and trapping overhead of software
virtualization is reduced.
 It is based on the hypervisor; the guest operating system is modified and recompiled
before being installed in a virtual machine.
 After the modification, overall performance is increased, as the guest operating
system communicates directly with the hypervisor.
Full Virtualization
 Full virtualization emulates the underlying hardware. It is quite similar to para-
virtualization. Here, the hypervisor traps the machine operations the operating system
uses to perform input-output or to modify the system status.
 The unmodified operating system can run on top of the hypervisor. This is possible
because the operations are emulated in software, and status codes are returned
consistent with what the real hardware would deliver.
Why Server Virtualization?
 Server virtualization allows us to use resources efficiently. With the help of server
virtualization, you can eliminate a major part of the hardware cost.
 This form of virtualization in cloud computing can divide the workload across multiple
virtual servers, and all these virtual servers are capable of performing a dedicated task.
 One of the reasons for choosing server virtualization is that workloads can be moved
between virtual machines according to the load.
Application server virtualization
 Application server virtualization abstracts a collection of application servers that
provide the same services as a single virtual application server by using load-balancing
strategies and providing a high-availability infrastructure for the services hosted in the
application server.
 This is a particular form of virtualization and serves the same purpose as storage
virtualization: providing a better quality of service rather than emulating a different
environment.
Advantages of Server Virtualization
 Cost Reduction: Server virtualization reduces cost because less hardware is required.
 Independent Restart: Each server can be rebooted independently and that reboot
won't affect the working of other virtual servers.
DESKTOP VIRTUALIZATION
 Desktop virtualization abstracts the desktop environment available on a personal
computer in order to provide access to it using a client / server approach.
 Desktop virtualization provides the same outcome as hardware virtualization but
serves a different purpose. Similarly to hardware virtualization, desktop virtualization
makes a different system accessible as though it were natively installed on the host,
but this system is remotely stored on a different host and accessed through a network
connection.
 Moreover, desktop virtualization addresses the problem of making the same desktop
environment accessible from everywhere.
 Although the term desktop virtualization strictly refers to the ability to remotely
access a desktop environment, generally the desktop environment is stored in a
remote server or a datacenter that provides a high-availability infrastructure and
ensures the accessibility and persistence of the data.
 In this scenario, an infrastructure supporting hardware virtualization is fundamental to
provide access to multiple desktop environments hosted on the same server; a specific
desktop environment is stored in a virtual machine image that is loaded and started on
demand when a client connects to the desktop environment.
 This is a typical cloud computing scenario in which the user leverages the virtual
infrastructure for performing the daily tasks on his computer. The advantages of
desktop virtualization are high availability, persistence, accessibility, and ease of
management
 The basic services for remotely accessing a desktop environment are implemented in
software components such as Windows Remote Services, VNC, and X Server.
 Infrastructures for desktop virtualization based on cloud computing solutions include
Sun Virtual Desktop Infrastructure (VDI), Parallels Virtual Desktop Infrastructure
(VDI), Citrix XenDesktop, and others
APPLICATION VIRTUALIZATION
 Application-level virtualization is a technique allowing applications to be run in
runtime environments that do not natively support all the features required by such
applications.
 In this scenario, applications are not installed in the expected runtime environment but
are run as though they were.
 In general, these techniques are mostly concerned with partial file systems, libraries,
and operating system component emulation. Such emulation is performed by a thin
layer (a program or an operating system component) that is in charge of executing
the application.
 Emulation can also be used to execute program binaries compiled for different
hardware architectures. In this case, one of the following strategies can be
implemented.
 Interpretation. In this technique every source instruction is interpreted by an
emulator that executes equivalent native ISA instructions, leading to poor
performance. Interpretation has a minimal startup cost but a huge overhead, since
each instruction is emulated.
 Binary translation. In this technique every source instruction is converted to native
instructions with equivalent functions. After a block of instructions is translated, it is
cached and reused.
 Binary translation has a large initial overhead cost, but over time it is subject to better
performance, since previously translated instruction blocks are directly executed.
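The interpretation-versus-translation trade-off can be illustrated with a toy translator; the two-opcode "ISA" and the cache structure are invented for illustration. The first execution of a block pays the translation cost, while later executions reuse the cached result:

```python
translation_cache = {}

def run_block(block):
    # block is a tuple of (opcode, operand) "source" instructions acting on
    # an accumulator. On the first run the block is "translated" into a
    # Python closure and cached; repeated runs skip translation entirely.
    if block not in translation_cache:
        ops = {"ADD": lambda acc, x: acc + x, "MUL": lambda acc, x: acc * x}
        steps = [(ops[opcode], operand) for opcode, operand in block]

        def translated():
            acc = 0
            for fn, operand in steps:
                acc = fn(acc, operand)
            return acc

        translation_cache[block] = translated
    return translation_cache[block]()
```

An interpreter, by contrast, would re-decode every instruction on every execution of the block.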
 Emulation, as described, is different from hardware-level virtualization. The former
simply allows the execution of a program compiled for a different hardware
architecture, whereas the latter emulates a complete hardware environment where an
entire operating system can be installed.
 Application virtualization is a good solution in the case of missing libraries in the host
operating system; in this case a replacement library can be linked with the application,
or library calls can be remapped to existing functions available in the host system.
 Another advantage is that in this case the virtual machine manager is much lighter,
since it provides only a partial emulation of the runtime environment compared to
hardware virtualization.
 Moreover, this technique allows incompatible applications to run together. Compared
to programming-level virtualization, which works across all the applications
developed for that virtual machine, application-level virtualization works for a
specific environment: It supports all the applications that run on top of a specific
environment.
 One of the most popular solutions implementing application virtualization is Wine,
which is a software application allowing Unix-like operating systems to execute
programs written for the Microsoft Windows platform.
 Wine takes its inspiration from a similar product from Sun, Windows Application
Binary Interface (WABI), which implements the Win16 API specifications on Solaris.
 VMware ThinApp, another product in this area, allows capturing the setup of an
installed application and packaging it into an executable image isolated from the
hosting operating system.
UNIT III CLOUD ARCHITECTURE, SERVICES AND STORAGE
NIST CLOUD REFERENCE ARCHITECTURE
The Conceptual Reference Model
The NIST cloud computing reference architecture identifies the major actors, their
activities and functions in cloud computing.
The diagram depicts a generic high-level architecture and is intended to facilitate the
understanding of the requirements, uses, characteristics and standards of cloud computing.
The NIST cloud computing reference architecture defines five major actors:
 cloud consumer,
 cloud provider,
 cloud carrier,
 cloud auditor and
 cloud broker.
Each actor is an entity (a person or an organization) that participates in a transaction or process
and/or performs tasks in cloud computing.
Cloud Consumer
 The cloud consumer is the principal stakeholder for the cloud computing service. A cloud
consumer represents a person or organization that maintains a business relationship with,
and uses the service from a cloud provider.
 A cloud consumer browses the service catalog from a cloud provider, requests the
appropriate service, sets up service contracts with the cloud provider, and uses the service.
 The cloud consumer may be billed for the service provisioned, and needs to arrange
payments accordingly.
 A cloud consumer can freely choose a cloud provider with better pricing and more
favorable terms.
 Typically a cloud provider's pricing policy and SLAs are non-negotiable, unless the
customer expects heavy usage and might be able to negotiate better contracts.
 SaaS applications are deployed in the cloud and made accessible via a network to SaaS
consumers. SaaS consumers can be billed based on the number of end users, the time of
use, the network bandwidth consumed, the amount of data stored or the duration of
stored data.
 Cloud consumers of PaaS can employ the tools and execution resources provided by cloud
providers to develop, test, deploy and manage the applications hosted in a cloud
environment
 Consumers of IaaS have access to virtual computers, network-accessible storage, network
infrastructure components, and other fundamental computing resources on which they can
deploy and run arbitrary software.
Cloud Provider
 A cloud provider is a person or an organization: the entity responsible for making a
service available to interested parties.
 A cloud provider acquires and manages the computing infrastructure required for
providing the services, runs the cloud software that provides the services, and makes
arrangements to deliver the cloud services to the Cloud Consumers through network access.
 A cloud provider conducts its activities in the areas of service deployment, service
orchestration, cloud service management, security, and privacy.
Service Orchestration
Service Orchestration refers to the composition of system components to support the Cloud
Provider's activities in the arrangement, coordination and management of computing resources
in order to provide cloud services to Cloud Consumers.
 A three-layered model is used in this representation, representing the grouping of
three types of system components Cloud Providers need to compose to deliver their
services.
 The top layer is the service layer; this is where Cloud Providers define interfaces for Cloud
Consumers to access the computing services. Access interfaces of each of the three service
models are provided in this layer.
 The optional dependency relationships among SaaS, PaaS, and IaaS components are
represented graphically as components stacking on each other;
 The middle layer in the model is the resource abstraction and control layer. This layer
contains the system components that Cloud Providers use to provide and manage access to
the physical computing resources through software abstraction.
 Examples of resource abstraction components include software elements such as
hypervisors, virtual machines, virtual data storage, and other computing resource
abstractions.
 The lowest layer in the stack is the physical resource layer, which includes all the
physical computing resources. This layer includes hardware resources, such as computers
(CPU and memory), networks (routers, firewalls, switches, network links and interfaces),
storage components (hard disks) and other physical computing infrastructure elements. It
also includes facility resources, such as heating, ventilation and air conditioning (HVAC),
power, communications, and other aspects of the physical plant.
Cloud Service Management
Cloud Service Management includes all of the service-related functions that are necessary for the
management and operation of those services required by or proposed to cloud consumers. Cloud
service management can be described from the perspective of business support, provisioning and
configuration, and from the perspective of portability and interoperability requirements.
Business Support
Business Support entails the set of business-related services dealing with clients and supporting
processes. It includes the components used to run business operations that are client-facing.
 Customer management: Manage customer accounts, open/close/terminate accounts,
manage user profiles, manage customer relationships by providing points-of-contact and
resolving customer issues and problems, etc.
 Contract management: Manage service contracts, set up/negotiate/close/terminate
contracts, etc.
 Inventory Management: Set up and manage service catalogs, etc.
 Accounting and Billing: Manage customer billing information, send billing statements,
process received payments, track invoices, etc.
 Reporting and Auditing: Monitor user operations, generate reports, etc.
 Pricing and Rating: Evaluate cloud services and determine prices, handle promotions and
pricing rules based on a user's profile, etc.
Provisioning and Configuration
 Rapid provisioning: Automatically deploying cloud systems based on the requested
service/resources/capabilities.
 Resource changing: Adjusting configuration/resource assignment for repairs, upgrades and
joining new nodes into the cloud.
 Monitoring and Reporting: Discovering and monitoring virtual resources, monitoring
cloud operations and events and generating performance reports.
 Metering: Providing a metering capability at some level of abstraction appropriate to the
type of service (e.g., storage, processing, bandwidth, and active user accounts).
 SLA management: Encompassing the SLA contract definition (basic schema with the QoS
parameters), SLA monitoring and SLA enforcement according to defined policies.
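A minimal sketch of how metered usage could feed billing (the resource names and per-unit rates are hypothetical):

```python
def monthly_charge(usage, rates):
    # usage and rates are keyed by metered resource, e.g. GB-months of
    # storage, compute hours, GB of bandwidth; unmetered resources cost
    # nothing. Returns the total charge for the billing period.
    return sum(quantity * rates.get(resource, 0.0)
               for resource, quantity in usage.items())
```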
Portability and Interoperability
 The proliferation of cloud computing promises cost savings in technology infrastructure
and faster software upgrades.
 Cloud providers should provide mechanisms to support data portability, service
interoperability, and system portability
 Data portability is the ability of cloud consumers to copy data objects into or out of a
cloud or to use a disk for bulk data transfer.
 Service interoperability is the ability of cloud consumers to use their data and services
across multiple cloud providers with a unified management interface.
Cloud Auditor
 A cloud auditor is a party that can perform an independent examination of cloud
service controls with the intent to express an opinion thereon.
 Audits are performed to verify conformance to standards through review of objective
evidence.
 A cloud auditor can evaluate the services provided by a cloud provider in terms of
security controls, privacy impact, performance, etc.
 A privacy impact audit can help Federal agencies comply with applicable privacy laws and
regulations governing an individual's privacy, and ensure the confidentiality, integrity, and
availability of an individual's personal information at every stage of development and
operation.
Security
 It is critical to recognize that security is a cross-cutting aspect of the architecture that
spans across all layers of the reference model, ranging from physical security to
application security.
 Therefore, security concerns in a cloud computing architecture are not solely under the
purview of the Cloud Providers, but also of Cloud Consumers and other relevant actors.
 Cloud-based systems still need to address security requirements such as authentication,
authorization, availability, confidentiality, identity management, integrity, audit, security
monitoring, incident response, and security policy management.
 While these security requirements are not new, cloud-specific perspectives help to
discuss, analyze and implement security in a cloud system.
Cloud Broker
 As cloud computing evolves, the integration of cloud services can be too complex for
cloud consumers to manage.
 A cloud consumer may request cloud services from a cloud broker, instead of contacting a
cloud provider directly.
 A cloud broker is an entity that manages the use, performance and delivery of cloud
services and negotiates relationships between cloud providers and cloud consumers.
 In general, a cloud broker can provide services in three categories [9]:
 Service Intermediation: A cloud broker enhances a given service by improving some
specific capability and providing value-added services to cloud consumers. The
improvement can be managing access to cloud services, identity management,
performance reporting, enhanced security, etc.
 Service Aggregation: A cloud broker combines and integrates multiple services into one
or more new services. The broker provides data integration and ensures the secure data
movement between the cloud consumer and multiple cloud providers.
 Service Arbitrage: Service arbitrage is similar to service aggregation except that the
services being aggregated are not fixed. Service arbitrage means a broker has the
flexibility to choose services from multiple agencies. The cloud broker, for example, can
use a credit-scoring service to measure and select an agency with the best score.
Cloud Carrier
 A cloud carrier acts as an intermediary that provides connectivity and transport of cloud
services between cloud consumers and cloud providers.
 Cloud carriers provide access to consumers through network, telecommunication and
other access devices.
 The distribution of cloud services is normally provided by network and
telecommunication carriers or a transport agent, where a transport agent refers to a
business organization that provides physical transport of storage media such as
high-capacity hard drives.
 Note that a cloud provider will set up SLAs with a cloud carrier to provide services
consistent with the level of SLAs offered to cloud consumers, and may require the cloud
carrier to provide dedicated and secure connections between cloud consumers and cloud
providers.
CLOUD DEPLOYMENT MODELS
A cloud infrastructure may be operated in one of the following deployment models:
 Public cloud,
 Private cloud,
 Community cloud, or
 Hybrid cloud.
The differences are based on how exclusive the computing resources are made to a
CloudConsumer.
Public Cloud
 A public cloud is one in which the cloud infrastructure and computing resources are
made available to the general public over a public network. A public cloud is
owned by an organization selling cloud services, and serves a diverse pool of clients.
 A public cloud is built over the Internet and can be accessed by any user who has
paid for the service.
 In Public cloud, the services offered are made available to anyone, from
anywhere, and at any time through the Internet.
 From a structural point of view, a public cloud is a distributed system, most likely
composed of one or more datacenters connected together, on top of which the
specific services offered by the cloud are implemented.
 Any customer can easily sign in with the cloud provider, enter their credentials and
billing details, and use the services offered.
 Public clouds offer solutions for minimizing IT infrastructure costs and serve as a
viable option for handling peak loads on the local infrastructure.
 They have become an interesting option for small enterprises, which are able to start
their businesses without large up-front investments by completely relying on public
infrastructure for their IT needs.
 A fundamental characteristic of public clouds is multi-tenancy. A public cloud is
meant to serve a multitude of users, not a single customer. Any customer requires a
virtual computing environment that is separated, and most likely isolated, from other
users.
 A public cloud can offer any kind of service: infrastructure, platform, or
applications.
 From an architectural point of view there is no restriction concerning the type of
distributed system implemented to support public clouds.
 Public clouds can be composed of geographically dispersed data centers to share the
load of users and better serve them according to their locations.
 A public cloud is better suited for business requirements that involve managing
variable load.
Benefit of Public Cloud
 Public clouds promote standardization, preserve capital investment, and offer
application flexibility.
Example of Public Cloud
 Amazon EC2 is a public cloud that provides infrastructure as a service;
 Google AppEngine is a public cloud that provides an application development
platform as a service;
 SalesForce.com is a public cloud that provides software as a service.
Drawbacks
 In the case of public clouds, the provider is in control of the infrastructure and,
eventually, of the customers’ core logic and sensitive data.
 The risk of a breach in the security infrastructure of the provider could expose
sensitive information to others.
 Customers of a public cloud service offering have a low degree of control over the
physical and security aspects of the cloud.
Private Cloud
 A private cloud gives a single Cloud Consumer organization exclusive access
to and usage of the infrastructure and computational resources.
 In private cloud, the cloud infrastructure is operated solely for an organization.
 It may be managed either by the Cloud Consumer organization or by a third party,
and may be hosted on the organization's premises.
 Private clouds give local users a flexible and agile private infrastructure to run
service workloads within their administrative domains.
 A private cloud is supposed to deliver more efficient and convenient cloud services.
It may impact the cloud standardization, while retaining greater customization and
organizational control.
 In a private cloud, security management and day-to-day operations are relegated to
internal IT staff or to a third-party vendor, under contractual SLAs.
 Hence a customer of a private cloud service offering has a high degree of control over
the physical and security aspects of the cloud.
 Security concerns are less critical, since sensitive information does not flow out of
the private infrastructure.
 Businesses with dynamic or unforeseen needs, mission-critical assignments,
security alarms, management demands, and uptime requirements are better
suited to a private cloud.
 Private clouds have the advantage of keeping core business operations in-house
by relying on the existing IT infrastructure and reducing the burden of
maintaining it once the cloud has been set up.
 Moreover, existing IT resources can be better utilized because the private cloud
can provide services to a different range of users.
 Contrary to popular belief, a private cloud may exist off premises and can be
managed by a third party. Thus two private cloud scenarios exist, as follows:
On premises or On site Private Cloud
 Applies to private clouds implemented at the customer's premises.
Outsourced Private Cloud
 Applies to private clouds where the server side is outsourced to a hosting company.
Key advantages of using a private cloud computing infrastructure
 Customer information protection – in-house security is easier to maintain and rely
on.
 Infrastructure ensuring SLAs.
 Compliance with standard procedures and operations.
 Private clouds attempt to achieve customization and offer higher efficiency,
resiliency, security, and privacy.
Drawback
 From an architectural point of view, private clouds can be implemented on more
heterogeneous hardware: They generally rely on the existing IT infrastructure
already deployed on the private premises.
 Private clouds can provide in-house solutions for cloud computing, but if compared
to public clouds they exhibit more limited capability to scale elastically on demand.
Example
 VMWare vSphere
 Openstack
 Amazon VPC (Virtual Private Cloud)
 Microsoft ECI data center
Hybrid and Community Cloud
 A hybrid cloud is a composition of two or more clouds (on-site private, on-site
community, off-site private, off-site community or public) that remain as distinct
entities but are bound together by standardized or proprietary technology that
enables data and application portability.
 Hybrid clouds allow enterprises to exploit existing IT infrastructures, maintain
sensitive information within the premises, and naturally grow and shrink by
provisioning external resources and releasing them when they’re no longer needed.
 Hybrid clouds address scalability issues by leveraging external resources for
exceeding capacity demand.
 These resources or services are temporarily leased for the time required and then
released. This practice is also known as cloud bursting.
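Cloud bursting amounts to a capacity check: run on-premises up to the private cloud's capacity and lease public resources only for the excess. A minimal sketch of that decision (the capacity figures and function name are hypothetical):

```python
def place_workload(demand, private_capacity):
    """Cloud bursting: run on-premises up to capacity, burst the excess
    to a public cloud, and lease nothing when demand fits in-house."""
    on_premises = min(demand, private_capacity)
    burst = max(0, demand - private_capacity)
    return {"private": on_premises, "public": burst}

# Normal load fits the private cloud; a peak bursts to the public cloud.
normal = place_workload(demand=80, private_capacity=100)
peak = place_workload(demand=150, private_capacity=100)
```

When the peak passes, the leased public share drops back to zero, matching the "temporarily leased, then released" behavior described above.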
 A hybrid cloud provides access to clients, the partner network, and third parties.
 In summary, public clouds promote standardization, preserve capital investment,
and offer application flexibility.
 Private clouds attempt to achieve customization and offer higher efficiency,
resiliency, security, and privacy.
 Hybrid clouds operate in the middle, with many compromises in terms of resource
sharing.
 In a hybrid cloud the resources are managed and provided either in-house or by
external providers.
 It is an arrangement between two platforms in which workloads are exchanged between
the private cloud and the public cloud as need and demand require.
 For example, organizations can use the hybrid cloud model for processing big data.
On a private cloud they can retain sales, business, and other data that need security
and privacy.
 Hybrid cloud hosting offers features such as scalability, flexibility, and
security.
Example
 Microsoft Azure
 VMWare – vSphere for private and vCloudAir for public
 Rackspace Rackconnect
Community Cloud
 A community cloud serves a group of Cloud Consumers which have shared concerns
such as mission objectives, security, privacy and compliance policy, rather than
serving a single organization as does a private cloud.
 A community cloud is “shared by several organizations and supports a specific
community that has shared concerns (e.g., mission, security requirements, policy,
and compliance considerations).”
 Similar to private clouds, a community cloud may be managed by the organizations
or by a third party, and may be implemented on the customer's premises (i.e., on-site
community cloud) or outsourced to a hosting company (i.e., outsourced community
cloud).
 From an architectural point of view, a community cloud is most likely implemented
over multiple administrative domains. This means that different organizations
such as government bodies, private enterprises, research organizations, and even
public virtual infrastructure providers contribute their resources to build the
cloud infrastructure.
Candidate sectors for community clouds are as follows:
 Media industry
 Healthcare industry
 Energy and other core industries
 Public sector
 Scientific research
The benefits of these community clouds are the following:
 Openness. - By removing the dependency on cloud vendors, community clouds are
open systems in which fair competition between different solutions can happen.
 Community - Being based on a collective that provides resources and services, the
infrastructure turns out to be more scalable because the system can grow simply by
expanding its user base.
 Graceful failures - Since there is no single provider or vendor in control of the
infrastructure, there is no single point of failure.
 Convenience and control - Within a community cloud there is no conflict between
convenience and control because the cloud is shared and owned by the community,
which makes all the decisions through a collective democratic process.
 Environmental sustainability - The community cloud is supposed to have a smaller
carbon footprint because it harnesses underutilized resources.
CLOUD SERVICE MODELS
Infrastructure as a Service (IaaS)
 In cloud computing, offering virtualized resources (computation, storage, and communication) on
demand is known as Infrastructure as a Service (IaaS).
 This model allows users to use virtualized IT resources for computing, storage, and networking.
 In short, the service is delivered as rented cloud infrastructure. The user can deploy and run his
applications over his chosen OS environment.
 They deliver customizable infrastructure on demand.
 IaaS (Infrastructure as a Service) provides you the computing infrastructure: physical or (quite often)
virtual machines and other resources such as a virtual-machine disk image library, block and file-based
storage, firewalls, load balancers, IP addresses, virtual local area networks, etc.
 A cloud infrastructure enables on-demand provisioning of servers running several choices of operating
systems and a customized software stack. Infrastructure services are considered to be the bottom layer
of cloud computing systems.
Examples: Amazon EC2, Windows Azure, Rackspace, Google Compute Engine.
 The main technology used to deliver and implement these solutions is hardware virtualization: one or
more virtual machines, opportunely configured and interconnected, define the distributed system on
top of which applications are installed and deployed.
 IaaS/HaaS solutions bring all the benefits of hardware virtualization: workload partitioning,
application isolation, sandboxing, and hardware tuning.
 From the perspective of the service provider, IaaS/HaaS allows better exploitation of the IT
infrastructure and provides a more secure environment for executing third-party applications.
 From the perspective of the customer it reduces the administration and maintenance cost as well as the
capital costs allocated to purchase hardware.
It is possible to distinguish three principal layers:
 the physical infrastructure,
 the software management infrastructure, and
 the user interface.
 At the top layer the user interface provides access to the services exposed by the software management
infrastructure. Such an interface is generally based on Web 2.0 technologies: Web services, RESTful
APIs, and mash-ups.
 The core features of an IaaS solution are implemented in the infrastructure management software
layer. In particular, management of the virtual machines is the most important function performed
by this layer. A central role is played by the scheduler, which is in charge of allocating the execution
of virtual machine instances.
 The bottom layer is composed of the physical infrastructure, on top of which the management layer
operates.
 In the case of complete IaaS solutions, all three levels are offered as service. This is generally the case
with public clouds vendors such as Amazon, GoGrid, Joyent, Rightscale, Terremark, Rackspace,
ElasticHosts, and Flexiscale.
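The scheduler mentioned above decides where each virtual machine instance runs. A toy first-fit placement policy illustrates the idea (real IaaS schedulers weigh memory, affinity, load, and more; the host names and core counts here are invented):

```python
def first_fit(hosts, vm_cores):
    """First-fit VM placement: assign the VM to the first host with
    enough free cores, deduct them, and return the chosen host name
    (or None when no host can take the VM)."""
    for name, free in hosts.items():
        if free >= vm_cores:
            hosts[name] = free - vm_cores
            return name
    return None

# Free CPU cores per physical host (hypothetical capacities).
hosts = {"host-a": 4, "host-b": 8}
placed = first_fit(hosts, 6)  # host-a lacks capacity, so host-b is chosen
```

First-fit is only one possible policy; a scheduler might instead pack VMs tightly (best-fit) or spread them for fault tolerance.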
Platform as a Service (PaaS)
 Platform-as-a-Service (PaaS) solutions provide a development and deployment platform for running
applications in the cloud. They constitute the middleware on top of which applications are built.
 PaaS enables developers to develop, deploy, and manage the execution of applications using
provisioned resources; this demands a cloud platform with the proper software environment.
 Such a platform includes operating system and runtime library support.
 PaaS (Platform as a Service) provides you computing platforms, which typically include an operating
system, programming language execution environment, database, web server, etc.
Examples:
 Google AppEngine
 AWS Elastic Beanstalk
 Windows Azure
 Heroku
 Force.com
 Apache Stratos
 Application management is the core functionality of the middleware. PaaS implementations provide
applications with a runtime environment and do not expose any service for managing the underlying
infrastructure.
 Developers design their systems in terms of applications and are not concerned with hardware (physical
or virtual), operating systems, and other low-level services.
 The core middleware is in charge of managing the resources and scaling applications on demand or
automatically, according to the commitments made with users.
 Developers generally have the full power of programming languages such as Java, .NET, Python,
or Ruby, with some restrictions to provide better scalability and security.
 In this case the traditional development environments can be used to design and develop applications,
which are then deployed on the cloud by using the APIs exposed by the PaaS provider.
 PaaS solutions can offer middleware for developing applications together with the infrastructure or
simply provide users with the software that is installed on the user premises.
 It is possible to organize the various solutions into three wide categories: PaaS-I, PaaS-II, and PaaS-III.
 The first category identifies PaaS implementations that completely follow the cloud computing style for
application development and deployment.
 Example - Force.com and Longjump. Both deliver as platforms the combination of middleware and
infrastructure.
 In the second class we can list all those solutions that are focused on providing a scalable infrastructure
for Web applications, mostly websites. In this case, developers generally use the providers’ APIs, which
are built on top of industrial runtimes, to develop applications.
 Example - Google AppEngine is the most popular product in this category.
 The third category consists of all those solutions that provide a cloud programming platform for any
kind of application, not only Web applications
 Example - Microsoft Windows Azure, which provides a comprehensive framework for building
service-oriented cloud applications on top of the .NET technology, hosted on Microsoft’s
datacenters.
 Manjrasoft Aneka, Apprenda SaaSGrid, Appistry Cloud IQ Platform, DataSynapse, and GigaSpaces
DataGrid provide only middleware, with different services.
Some essential characteristics that identify a PaaS solution:
 Runtime framework
 Abstraction
 Automation
 Cloud services
 Another essential component for a PaaS-based approach is the ability to integrate third-party cloud
services offered from other vendors by leveraging service-oriented architecture.
 One of the major concerns of leveraging PaaS solutions for implementing applications is vendor
lock-in.
 Differently from IaaS solutions, which deliver bare virtual servers that can be fully customized in
terms of the software stack installed, PaaS environments deliver a platform for developing applications,
which exposes a well-defined set of APIs and, in most cases, binds the application to the specific
runtime of the PaaS provider.
 Finally, from a financial standpoint, although IaaS solutions allow shifting the capital cost into
operational costs through outsourcing, PaaS solutions can cut the cost across development, deployment,
and management of applications.
 It helps management reduce the risk of ever-changing technologies by offloading the cost of upgrading
the technology to the PaaS provider.
Software as a Service (SaaS)
 Software-as-a-Service (SaaS) is a software delivery model that provides access to applications through
the Internet as a Web-based service.
 It provides a means to free users from complex hardware and software management by offloading
such tasks to third parties, which build applications accessible to multiple users through a Web
browser.
 In the SaaS (Software as a Service) model, you are provided with access to application software, often
referred to as "on-demand software".
 There is no need to worry about the installation, setup, and running of the application; the service
provider will do that for you. You just have to pay and use it through some client.
 On the provider side, the specific details and features of each customer’s application are maintained in
the infrastructure and made available on demand.
 The SaaS model is appealing for applications serving a wide range of users that can be adapted to
specific needs with little further customization. This requirement characterizes SaaS as a “one-to-many”
software delivery model, whereby an application is shared across multiple users.
 The SaaS model provides software applications as a service. As a result, on the customer side, there is
no upfront investment in servers or software licensing.
 On the provider side, costs are kept rather low, compared with conventional hosting of user
applications. Customer data is stored in the cloud that is either vendor proprietary or publicly hosted to
support PaaS and IaaS.
Examples: Google Apps, Microsoft Office 365.
 SaaS applications are naturally multitenant.
 Multitenancy which is a feature of SaaS compared to traditional packaged software, allows providers to
centralize and sustain the effort of managing large hardware infrastructures, maintaining and upgrading
applications transparently to the users, and optimizing resources by sharing the costs among the large
user base.
Benefits of SaaS
 Software cost reduction and total cost of ownership (TCO) were paramount
 Service-level improvements
 Rapid implementation
 Standalone and configurable applications
 Rudimentary application and data integration
 Subscription and pay-as-you-go (PAYG) pricing
 Software-as-a-Service applications can serve different needs. CRM, ERP, and social networking
applications are definitely the most popular ones.
 SalesForce.com is probably the most successful and popular example of a CRM service
 Another important class of popular SaaS applications comprises social networking applications such as
Facebook and professional networking sites such as LinkedIn.
 Office automation applications are also an important representative for SaaS applications: Google
Documents and Zoho Office are examples of Web-based applications that aim to address all user needs
for documents, spreadsheets, and presentation management
 It is important to note the role of SaaS solution enablers, which provide an environment in which to
integrate third-party services and share information with others.
CLOUD COMPUTING DESIGN CHALLENGES
Cloud computing presents many challenges for industry and academia. These include the interoperation
between different clouds, the creation of standards, security, scalability, fault tolerance, and
organizational aspects.
Cloud interoperability and standards
 Cloud computing is a service-based model for delivering IT infrastructure and applications like utilities such
as power, water, and electricity.
 To fully realize this goal, introducing standards and allowing interoperability between solutions offered by
different vendors are objectives of fundamental importance.
 Vendor lock-in constitutes one of the major strategic barriers against the seamless adoption of cloud
computing at all stages.
 Vendor lock-in can prevent a customer from switching to another competitor’s solution.
 The presence of standards that are actually implemented and adopted in the cloud computing community
could give room for interoperability and then lessen the risks resulting from vendor lock-in.
 The standardization efforts are mostly concerned with the lower level of the cloud computing architecture,
which is the most popular and developed.
 The Open Virtualization Format (OVF) [51] is an attempt to provide a common format for storing the
information and metadata describing a virtual machine image.
 Another direction in which standards try to move is devising general reference architecture for cloud
computing systems and providing a standard interface through which one can interact with them.
Scalability and fault tolerance
 The ability to scale on demand constitutes one of the most attractive features of cloud computing. Clouds
allow scaling beyond the limits of the existing in-house IT resources, whether they are infrastructure
(compute and storage) or applications services.
 To implement such a capability, the cloud middleware has to be designed with the principle of scalability
along different dimensions in mind—for example, performance, size, and load.
 The ability to tolerate failure becomes fundamental, sometimes even more important than providing an
extremely efficient and optimized system.
 Hence, the challenge in this case is designing highly scalable and fault-tolerant systems that are easy to
manage and at the same time provide competitive performance.
Security, trust, and privacy
 Security, trust, and privacy issues are major obstacles for massive adoption of cloud computing.
 The traditional cryptographic technologies are used to prevent data tampering and access to sensitive
information.
 The massive use of virtualization technologies exposes the existing system to new threats, which previously
were not considered applicable.
 It then happens that a new way of using existing technologies creates new opportunities for additional
threats to the security of applications.
 The lack of control over their own data and processes also poses severe problems for the trust we give to
the cloud service provider and the level of privacy we want to have for our data.
 On one side we need to decide whether to trust the provider itself; on the other side, specific regulations can
simply prevail over the agreement the provider is willing to establish with us concerning the privacy of the
information managed on our behalf.
 The challenges in this area are, then, mostly concerned with devising secure and trustable systems from
different perspectives: technical, social, and legal.
Organizational aspects
 Cloud computing introduces a significant change in the way IT services are consumed and managed. More
precisely, storage, compute power, network infrastructure, and applications are delivered as metered services
over the Internet.
 This introduces a billing model that is new within typical enterprise IT departments, which requires a certain
level of cultural and organizational process maturity. In particular, a wide acceptance of cloud computing will
require a significant change to business processes and organizational boundaries.
 From an organizational point of view, the lack of control over the management of data and processes poses
not only security threats but also new problems that previously did not exist.
 Traditionally, when there was a problem with computer systems, organizations developed strategies and
solutions to cope with them, often by relying on local expertise and knowledge.
Cloud Storage:
• Storage-as-a-Service
• Advantages of Cloud Storage
Cloud Storage Providers: S3
History
• J.C.R. Licklider – one of the fathers of the cloud-based computing idea: a global
network that allows access from anywhere at any time.
• The vision was ahead of the technological limits of the ’60s.
What is cloud storage?
 Cloud storage is a service model in which data is
maintained, managed and backed up remotely and
made available to users over a network (typically the
Internet).
How does cloud storage work?
Redundancy
 Core of cloud
computing
Equipment
 Data servers
 Power supplies
Data files
 Replication
Provider failures
• "Amazon S3 systems failure downs Web 2.0 sites – Twitterers lose their faces,
others just want their data back" – Computer World, July 21, 2008
• "Customers Shrug Off S3 Service Failure – At about 7:30 EST this morning, S3,
Amazon.com's online storage service, went down. The 2-hour service failure
affected customers worldwide." – Wired, Feb. 15, 2008
• "Loss of customer data spurs closure of online storage service 'The Linkup'"
– Network World, Nov 8, 2008
• "Spectacular Data Loss Drowns Sidekick Users" – October 10, 2009
• Such failures range from temporary unavailability to permanent data loss.
How do we increase users’ confidence in the cloud?
Cloud Storage
iCloud
•iCloud is a service provided by
Apple
•5GB storage space is free of cost
•Once the iCloud is used you can
share your stored data on any of
your different Apple devices
•Access to all files, music, calendar,
email
•Only iOS 5 has iCloud installed
Tiered storage pricing (per GB per month):
First 1 TB / month: $0.140 per GB
Next 49 TB / month: $0.125 per GB
Next 450 TB / month: $0.110 per GB
Next 500 TB / month: $0.095 per GB
Next 4000 TB / month: $0.080 per GB
Over 5000 TB / month: $0.055 per GB
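These tiers apply cumulatively: the first terabyte is billed at the first rate, the next 49 at the second, and so on. A small calculator makes this concrete (assuming 1 TB = 1,024 GB, which the table does not state):

```python
# Tiered per-GB monthly rates from the table above: (tier size in TB, $/GB).
TIERS = [(1, 0.140), (49, 0.125), (450, 0.110), (500, 0.095), (4000, 0.080)]
OVERFLOW_RATE = 0.055   # beyond 5,000 TB
GB_PER_TB = 1024        # assumption; the table does not define the unit

def monthly_cost(tb):
    """Cumulative monthly storage cost in dollars for `tb` terabytes."""
    cost, remaining = 0.0, tb
    for size, rate in TIERS:
        used = min(remaining, size)
        cost += used * GB_PER_TB * rate
        remaining -= used
        if remaining <= 0:
            return cost
    return cost + remaining * GB_PER_TB * OVERFLOW_RATE
```

For example, 10 TB is billed as 1 TB at $0.140/GB plus 9 TB at $0.125/GB.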
                 Home              Business
Packages:        3                 2
Price Range:     $7.95 - $24.95    $49.95 - $159.95
Storage Space:   2TB - 5TB         2TB - 10TB+
Users:           1                 3 - 10+
Free Options
Data Storage Saving:
• By storing your data online you reduce the burden on your hard disk, which
means you eventually save disk space
World Wide Accessibility
• You can access your data anywhere in the world; you don't have to carry your
hard disk, pen drive, or any other storage device
Data Safety
• You cannot fully trust your HDD and storage devices because they can crash
at any time
• To keep your data safe from such hazards, you can store it online
Advantages
Security
• Most of the online storage sites provide better security
• Only the user can access the account
Easy sharing
• You can share data in a faster, easier, and more secure manner
Data Recovery
• Online data storage sites provide quick recovery of your files and
folders
• This makes them more safe and secure
Automatic backup
• Users can schedule automatic backups of their personal computers in order
to avoid manually backing up files
Advantages
Improper handling can cause trouble
• You must keep your user-id and password safe to protect your data
• If someone knows or even guesses your credentials, it may result in loss
of data
• Use complex passwords and avoid storing them on your personal storage
devices such as pen drives and HDDs
Disadvantages
Choose a trustworthy source to avoid any hazard
• There are many online storage sites out there, but you have to choose one
that you can trust
An Internet connection is essential
• To access your files everywhere, the only thing you need is an Internet
connection
• If you don't have an Internet connection somewhere, you will end up with no
access to your data even though it is safely stored online
Disadvantages
Cloud Storage
• Several large Web companies are now exploiting the fact that they
have data storage capacity that can be hired out to others.
– allows data stored remotely to be temporarily cached on desktop
computers, mobile phones or other Internet-linked devices.
• Amazon’s Elastic Compute Cloud (EC2) and Simple Storage Solution
(S3) are well known examples
Amazon Simple Storage Service (S3)
• Amazon S3 provides a simple web services interface that can be used
to store and retrieve any amount of data, at any time, from anywhere
on the web.
• S3 provides the object-oriented storage service for users.
• Users can access their objects through Simple Object Access Protocol
(SOAP) with either browsers or other client programs which support
SOAP.
• SQS is responsible for ensuring a reliable message service between
two processes, even if the receiver processes are not running.
• Fundamental operation unit of S3 is called an object.
• Each object is stored in a bucket and retrieved via a unique, developer-assigned
key; the object has other attributes such as values, metadata, and access control
information.
• The storage provided by S3 can be viewed as a very coarse-grained key-value
pair.
• Through the key-value programming interface, users can write, read,
and delete objects containing from 1 byte to 5 gigabytes of data each.
• There are two types of web service interface for the user to access the
data stored in Amazon clouds.
• REST (web 2.0) interface,
• SOAP interface.
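The bucket/key/object semantics described above can be mimicked with an in-memory toy model (this illustrates the data model only; the real S3 service is reached over its REST or SOAP interfaces, and the class and sample data below are invented for illustration):

```python
FIVE_GB = 5 * 1024**3  # S3's per-object upper bound in this model

class Bucket:
    """Toy model of S3's coarse-grained key-value storage."""
    def __init__(self):
        self._objects = {}  # developer-assigned key -> (data, metadata)

    def put(self, key, data, metadata=None):
        # S3 objects range from 1 byte to 5 gigabytes each.
        if not 1 <= len(data) <= FIVE_GB:
            raise ValueError("object must be 1 byte to 5 GB")
        self._objects[key] = (data, metadata or {})

    def get(self, key):
        return self._objects[key][0]

    def delete(self, key):
        del self._objects[key]

b = Bucket()
b.put("photos/cat.jpg", b"\xff\xd8...", metadata={"acl": "private"})
data = b.get("photos/cat.jpg")
```

The write/read/delete operations here correspond to the three operations the key-value programming interface exposes.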
Amazon Simple Storage Service (S3)
• Redundant through geographic dispersion.
• Designed to provide 99.999999999 percent durability and 99.99
percent availability of objects over a given year with cheaper
reduced redundancy storage (RRS).
• Authentication mechanisms to ensure that data is kept secure
from unauthorized access.
• Objects can be made private or public, and rights can be granted
to specific users.
• Per-object URLs and ACLs (access control lists).
Key features of S3:
• Default download protocol of HTTP.
• A BitTorrent protocol interface is provided to lower costs for
high-scale distribution.
• $0.055 (more than 5,000 TB) to $0.15 per GB per month for storage
(depending on the total amount).
• First 1 GB per month of input or output is free; thereafter $0.08 to $0.15 per
GB for transfers outside an S3 region.
• There is no data transfer charge for data transferred between
Amazon EC2 and Amazon S3 within the same region, or for data
transferred between the Amazon EC2 Northern Virginia region and
the Amazon S3 U.S. Standard region (as of October 6, 2010).
Key features of S3:
Amazon Elastic Block Store (EBS) and SimpleDB
• The Elastic Block Store (EBS) provides the volume block interface for
saving and restoring the virtual images of EC2 instances.
• The status of an EC2 instance is saved in the EBS system after the machine is shut
down.
• Users can use EBS to save persistent data and mount it to running
instances of EC2.
• EBS is analogous to a distributed file system accessed by traditional OS
disk access mechanisms.
• EBS allows you to create storage volumes from 1 GB to 1 TB that can
be mounted by EC2 instances.
• Multiple volumes can be mounted to the same instance.
• These storage volumes behave like raw, unformatted block devices,
with user-supplied device names and a block device interface.
• You can create a file system on top of Amazon EBS volumes, or use
them in any other way you would use a block device (like a hard
drive).
• Snapshots are provided so that the data can be saved incrementally.
• EBS also charges $0.10 per 1 million I/O requests made to the storage
(as of October 6, 2010).
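The incremental snapshots mentioned above save only the blocks that changed since the previous snapshot. A toy model of the idea (not the actual EBS implementation; the block numbers and contents are invented):

```python
def snapshot(volume, previous=None):
    """Record only the blocks that changed since the previous snapshot.

    volume: dict mapping block number -> block contents
    previous: the full state captured by the last snapshot, or None
    Returns (delta, full_state): the blocks actually stored, and the
    volume state a restore of this snapshot would yield.
    """
    previous = previous or {}
    delta = {blk: data for blk, data in volume.items()
             if previous.get(blk) != data}
    return delta, dict(volume)

vol = {0: b"boot", 1: b"data-v1"}
delta1, state1 = snapshot(vol)           # first snapshot copies everything
vol[1] = b"data-v2"                      # one block changes
delta2, state2 = snapshot(vol, state1)   # second snapshot stores one block
```

Storing only the delta is what makes frequent snapshots cheap in both space and time.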
Amazon SimpleDB Service
• SimpleDB provides a simplified data model based on the relational
database data model.
• Structured data from users must be organized into domains.
• Each domain can be considered a table.
• The items are the rows in the table.
• A cell in the table is recognized as the value for a specific attribute
(column name) of the corresponding row.
• Unlike a cell in a relational database table, a cell in SimpleDB may hold
multiple values for a single attribute.
• This is not permitted in a traditional relational database, which must
maintain data consistency.
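The multi-valued cells described above can be illustrated with a toy, dictionary-backed domain. This is a sketch of the data model only, not the real SimpleDB API; all class and attribute names are invented.

```python
# Illustrative sketch of the SimpleDB data model (not Amazon's API):
# a domain acts as a table, items are rows, and each attribute (column)
# of an item may hold MULTIPLE values -- unlike a relational column.

class Domain:
    """A SimpleDB-style domain: item name -> {attribute: set of values}."""

    def __init__(self, name):
        self.name = name
        self.items = {}

    def put_attributes(self, item_name, attributes):
        """Add attribute values to an item; repeated puts accumulate values."""
        item = self.items.setdefault(item_name, {})
        for attr, value in attributes.items():
            item.setdefault(attr, set()).add(value)

    def get_attributes(self, item_name):
        return self.items.get(item_name, {})


users = Domain("users")
users.put_attributes("item1", {"name": "Alice", "hobby": "chess"})
users.put_attributes("item1", {"hobby": "cycling"})   # second value, same cell

print(sorted(users.get_attributes("item1")["hobby"]))  # ['chess', 'cycling']
```

A relational table would have rejected the second `hobby` value or required a separate join table; the schemaless (item, attribute, value-set) model sidesteps that at the cost of strong consistency.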
• Many developers simply want to quickly store, access, and query the
stored data.
• SimpleDB removes the requirement to maintain database schemas with
strong consistency.
• SimpleDB is priced at $0.140 per Amazon SimpleDB Machine Hour
consumed with the first 25 Amazon SimpleDB Machine Hours
consumed per month free (as of October 6, 2010).
SimpleDB is sometimes called a "LittleTable".
UNIT IV RESOURCE MANAGEMENT AND SECURITY IN CLOUD
Inter-cloud Resource Management
Extended Cloud Computing Services
The top three service layers include:
 SaaS,
 PaaS, and
 IaaS
 The cloud platform provides PaaS, which sits on top of the IaaS infrastructure.
 The top layer offers SaaS.
 These must be implemented on the cloud platforms provided. The implication is that one cannot
launch SaaS applications without a cloud platform, and the cloud platform cannot be built if
compute and storage infrastructures are not there.
 The cloud infrastructure layer can be further subdivided as Data as a Service (DaaS) and
Communication as a Service (CaaS) in addition to compute and storage in IaaS.
 Cloud players are divided into three classes:
(1) cloud service providers and IT administrators,
(2) software developers or vendors, and
(3) end users or business users.
 These cloud players vary in their roles under the IaaS, PaaS, and SaaS models.
 From the software vendors’ perspective, application performance on a given cloud platform
is most important.
 From the providers’ perspective, cloud infrastructure performance is the primary concern.
 From the end users’ perspective, the quality of services, including security, is the most
important.
Cloud Service Tasks and Trends
 Cloud services are introduced in five layers. The top layer is for SaaS applications.
 For example, CRM is heavily practiced in business promotion, direct sales, and marketing
services.
 CRM offered the first SaaS on the cloud successfully.
 SaaS tools also apply to distributed collaboration, and financial and human resources
management.
 PaaS is provided by Google, Salesforce.com, and Facebook, among others.
 IaaS is provided by Amazon, Windows Azure, and Rackspace, among others.
 Collocation services require multiple cloud providers to work together to support supply
chains in manufacturing.
 Network cloud services provide communications such as those by AT&T, Qwest, and
AboveNet.
Software Stack for Cloud Computing
 Developers have to consider how to design the system to meet critical requirements such as high
throughput, HA, and fault tolerance.
 The overall software stack structure of cloud computing software can be viewed as layers.
 Each layer has its own purpose and provides the interface for the upper layers just as the
traditional software stack does.
 By using VMs, the platform can be flexible, that is, the running services are not bound to
specific hardware platforms.
 The software layer on top of the platform is the layer for storing massive amounts of data. This
layer acts like the file system in a traditional single machine.
 Other layers running on top of the file system are the layers for executing cloud computing
applications. They include the database storage system, programming for large-scale
clusters, and data query language support.
 The next layers are the components in the software stack.
Runtime Support Services
 As in a cluster environment, there are also some runtime supporting services in the cloud
computing environment.
 Cluster monitoring is used to collect the runtime status of the entire cluster.
 The scheduler queues the tasks submitted to the whole cluster and assigns the tasks to the
processing nodes according to node availability.
 Runtime support is the software needed by browser-initiated applications used by thousands of
cloud customers.
RESOURCE PROVISIONING METHODS
In the cloud, the following resources are provisioned:
 Computer resources or VMs.
 Storage allocation schemes to interconnect distributed computing infrastructures
Provisioning of Compute Resources (VMs)
 Providers supply cloud services by signing SLAs with end users.
 The SLAs must commit sufficient resources such as CPU, memory, and bandwidth that the user
can use for a preset period.
 Deploying an autonomous system to efficiently provision resources to users is a challenging
problem.
 The difficulty comes from:
1. The unpredictability of consumer demand,
2. Software and hardware failures,
3. Heterogeneity of services,
4. Power management, and
5. Conflicts in signed SLAs between consumers and service providers.
 Efficient VM provisioning depends on the cloud architecture and management of cloud
infrastructures.
 In a virtualized cluster of servers, efficient installation of VMs, live VM migration, and fast
recovery from failures are needed.
Resource Provisioning Methods
 Three cases of static cloud resource provisioning policies are available:
1. Overprovisioning with the peak load causes heavy resource waste and leads to resource
underutilization.
2. Underprovisioning of resources results in losses by both user and provider, in that paid
demand by the users is not served (leading to broken SLAs and penalties).
3. Constant provisioning of resources with fixed capacity to a declining user demand could
result in even worse resource waste.
Three resource-provisioning methods:
1. Demand-driven method – based on the current utilization level of allocated resources.
2. Event-driven method – based on workload predicted by time or event.
3. Popularity-driven method – based on monitored Internet traffic.
Demand-Driven Resource Provisioning
 This method adds or removes computing instances based on the current utilization level of the
allocated resources.
 In general, when a resource has surpassed a threshold for a certain amount of time, the scheme
increases that resource based on demand.
 When a resource is below a threshold for a certain amount of time, that resource could be decreased
accordingly.
 The scheme does not work well if the workload changes abruptly.
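The threshold rule above can be sketched in a few lines. This is a minimal illustration; the 80%/30% thresholds, the three-sample sustain window, and the function name are all invented assumptions, not a cloud provider's autoscaling API.

```python
# Hedged sketch of demand-driven provisioning: thresholds and the
# sustain window are illustrative assumptions, not a real cloud API.

def demand_driven_scaler(utilization_history, instances,
                         high=0.80, low=0.30, sustain=3):
    """Add an instance when utilization stays above `high` for `sustain`
    consecutive samples; remove one when it stays below `low`."""
    recent = utilization_history[-sustain:]
    if len(recent) == sustain:
        if all(u > high for u in recent):
            instances += 1                       # sustained high load: scale out
        elif all(u < low for u in recent) and instances > 1:
            instances -= 1                       # sustained idle: scale in
    return instances

print(demand_driven_scaler([0.85, 0.90, 0.88], 2))  # 3 (scale out)
print(demand_driven_scaler([0.10, 0.15, 0.12], 2))  # 1 (scale in)
```

Requiring the threshold to hold for several consecutive samples is what gives the scheme its weakness: an abrupt workload spike is only answered after the sustain window has elapsed.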
Event-Driven Resource Provisioning
 This scheme adds or removes machine instances based on a specific time event.
 The scheme works better for seasonal or predicted events such as Christmastime in the West and
the Lunar New Year in the East.
 During these events, the number of users grows before the event period and then decreases during
the event period.
 This scheme anticipates peak traffic before it happens.
 The method results in a minimal loss of QoS, if the event is predicted correctly.
 Otherwise, wasted resources are even greater due to events that do not follow a fixed pattern.
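A calendar-based version of this scheme can be sketched as follows. The event date, peak capacity, and seven-day lead window are invented assumptions for illustration only.

```python
# Hedged sketch of event-driven provisioning: capacity is raised ahead
# of a known calendar event. Dates and instance counts are invented.
from datetime import date, timedelta

EVENTS = {date(2021, 12, 25): 10}   # e.g., a Christmas peak (assumed figure)

def event_driven_capacity(today, baseline=2, lead_days=7):
    """Provision peak capacity during the lead-up window before an event."""
    for event_day, peak in EVENTS.items():
        if timedelta(0) <= event_day - today <= timedelta(days=lead_days):
            return peak             # anticipate the peak before it happens
    return baseline

print(event_driven_capacity(date(2021, 12, 20)))  # 10 (inside the lead window)
print(event_driven_capacity(date(2021, 11, 1)))   # 2  (normal load)
```

If the event is predicted correctly, QoS loss is minimal; if traffic does not follow the calendar, the pre-provisioned peak capacity sits idle, which is exactly the waste the bullet above warns about.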
Popularity-Driven Resource Provisioning
 In this method, Internet searches for the popularity of certain applications are monitored, and
instances are created according to popularity demand.
 The scheme anticipates increased traffic with popularity.
 Again, the scheme has a minimal loss of QoS, if the predicted popularity is correct.
 Resources may be wasted if traffic does not occur as expected.
Dynamic Resource Deployment
 Dynamic resource deployment can be implemented to achieve scalability in performance.
 The InterGrid-managed infrastructure was developed by a Melbourne University group.
 The InterGrid is a Java-implemented software system that lets users create execution cloud
environments.
 An InterGrid gateway (IGG) allocates resources from a local cluster to deploy applications in three
steps:
steps:
(1) Requesting the VMs
(2) Enacting the leases, and
(3) Deploying the VMs as requested
 Under peak demand, this IGG interacts with another IGG that can allocate resources from a cloud
computing provider.
 A grid has predefined peering arrangements with other grids, which the IGG manages.
 Through multiple IGGs, the system coordinates the use of InterGrid resources.
 An IGG is aware of the peering terms with other grids, selects suitable grids that can provide the
required resources, and replies to requests from other IGGs.
 The InterGrid allocates and provides a distributed virtual environment (DVE).
 This is a virtual cluster of VMs that runs isolated from other virtual clusters.
 A component called the DVE manager performs resource allocation and management on behalf of
specific user applications.
 The core component of the IGG is a scheduler for implementing provisioning policies and peering
with other gateways.
Provisioning of Storage Resources
 The data storage layer is built on top of the physical or virtual servers.
 As cloud computing applications often provide service to users, it is unavoidable that the data
is stored in the clusters of the cloud provider.
 The service can be accessed anywhere in the world.
 A distributed file system is very important for storing large-scale data.
 In cloud computing, another form of data storage is (Key, Value) pairs.
 Amazon S3 service uses SOAP to access the objects stored in the cloud.
 Typical cloud databases include
 BigTable from Google,
 SimpleDB from Amazon
 SQL service from Microsoft Azure.
Global Exchange of Cloud Resources
• No single cloud infrastructure provider will be able to
establish its data centers at all possible locations throughout
the world.
• As a result, cloud application service (SaaS) providers will
have difficulty in meeting QoS expectations for all their
consumers.
• Hence, they would like to make use of services of multiple
cloud infrastructure service providers who can provide
better support for their specific consumer needs.
• This kind of requirement often arises in enterprises with global
operations and applications such as Internet services, media
hosting, and Web 2.0 applications.
• This necessitates federation of cloud infrastructure service
providers for seamless provisioning of services across different
cloud providers.
• To realize this, the Cloudbus Project at the University of
Melbourne has proposed InterCloud architecture supporting
brokering and exchange of cloud resources for scaling
applications across multiple clouds.
• They consist of client brokering and coordinator services that
support utility-driven federation of clouds: application
scheduling, resource allocation, and migration of workloads.
• The architecture cohesively couples the administratively and
topologically distributed storage and compute capabilities of
clouds as part of a single resource leasing abstraction.
• The system will ease the cross domain capability integration
for on-demand, flexible, energy-efficient, and reliable access
to the infrastructure based on virtualization technology .
• The Cloud Exchange (CEx) acts as a market maker for bringing
together service producers and consumers. It aggregates the
infrastructure demands from application brokers and evaluates
them against the available supply currently published by the
cloud coordinators.
• It supports trading of cloud services based on competitive
economic models such as commodity markets and auctions.
CEx allows participants to locate providers and consumers with
fitting offers.
• Such markets enable services to be commoditized, and thus
will pave the way for creation of dynamic market
infrastructure for trading based on SLAs.
• An SLA specifies the details of the service to be provided in
terms of metrics agreed upon by all parties, and incentives and
penalties for meeting and violating the expectations,
respectively.
• The availability of a banking system within the market ensures
that financial transactions pertaining to SLAs between
participants are carried out in a secure and dependable
environment.
Security in Cloud Computing
• Security Overview
• Cloud Security Challenges
• Software-as-a-Service Security
• Data Security
• Security Governance
• Virtual Machine Security
Cloud Security: A Major Concern
⚫ Security concerns arise because both customer data and
programs reside at the provider's premises.
⚫ Security is always a major concern in open system
architectures.
[Diagram: customer data and customer code reside at the provider premises]
Security Concerns
• Eight threats that users encounter while transferring data to, and
saving data in, the cloud:
• Handling of data by a third party
  • The supplier works on all aspects of handling the data, from
carrying out updates to uploading to safety controls; 100%
security cannot be guaranteed.
• Cyber attacks
  • Every time data is saved on the Internet, there is a risk of cyber attack.
• Insider threats
  • Workers get access to your cloud – everything from
consumer data to secret information and intellectual
property can be revealed.
• Government intrusion
  • Surveillance programs and competitors are not the only
ones who might want to look into your data.
• Legal liability
  • Liability is not restricted to safety; it also comprises the
consequences, such as court cases.
• Lack of standardization
  • Cloud consistency is not guaranteed.
• Lack of support
  • Support from cloud providers may be limited.
• Constant risk
  • Identity management and access control are fundamental
functions required for secure computing.
Threats to Infrastructure, Data and Access Control
• Denial of service
  • A distributed DoS attack is an attempt to make a network or
machine resource inaccessible to its intended consumers.
• Man-in-the-middle attack
  • Arises from improper configuration of Secure Sockets Layer (SSL).
  • Information shared between two parties could be hacked
by the middle party.
• Network sniffing
  • Another form of hacking.
  • Hackers capture passwords that are improperly encrypted during
communication.
  • Solution: encryption techniques to secure data.
• Port scanning
  • Hackers attack through open ports such as 80 (HTTP) and 21 (FTP).
  • Solution: a firewall to secure data from port attacks.
• SQL injection attack
  • Special characters are used by hackers to make the database
return unauthorized data.
• Cross-site scripting
  • The user enters the correct URL of a website, but the hacker
redirects the user to his/her own website and hijacks the user's
identification.
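The SQL injection attack above, and a common defence (parameterized queries), can be sketched with Python's built-in sqlite3 module. The table and values are illustrative.

```python
# Sketch of SQL injection and its mitigation: a parameterized query
# keeps hostile input as data. Table name and rows are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

hostile = "x' OR '1'='1"   # classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + hostile + "'").fetchall()

# Safe: the ? placeholder treats the payload as a literal value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (hostile,)).fetchall()

print(len(unsafe), len(safe))  # 1 0 -- injection leaks a row; placeholder does not
```

The concatenated query becomes `... WHERE name = 'x' OR '1'='1'`, which matches every row; the placeholder version searches for the literal string and matches nothing.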
Security Services
The core security services are confidentiality, integrity, and availability.
Data Confidentiality: Refers to limiting data access only to authorized
users and stopping access by unauthorized ones.
• Access Control: Utilized for controlling which assets a client can get
access to and the operations which can be performed on the accessed
resources.
• Biometric: Recognizes a person's identity based on individual
characteristics: retina scanning, facial recognition, voice
recognition and fingerprint recognition.
• Encryption: A method that converts readable data (plaintext) into
ciphertext.
• Privacy: Keeping confidential or individual data from being
viewed by unauthorized parties.
• Ethics: Employees should be granted clear direction by principles.
Data Integrity: Refers to the technique for
ensuring that data is genuine, correct and
protected from illegal user alteration.
Example: Digital signatures, hashing methods
and message verification codes are used for protecting
data integrity.
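A message verification code of the kind mentioned above can be sketched with Python's standard hmac and hashlib modules. The shared key and messages are illustrative.

```python
# Minimal sketch of a message verification code (MAC) for data
# integrity, using Python's standard hmac/hashlib. Key is illustrative.
import hmac
import hashlib

key = b"shared-secret-key"          # assumed out-of-band shared secret

def sign(data: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the data."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Check the tag in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(data), tag)

tag = sign(b"invoice: $100")
print(verify(b"invoice: $100", tag))    # True  -- genuine, unaltered data
print(verify(b"invoice: $900", tag))    # False -- altered by an illegal user
```

Any change to the message produces a completely different tag, so the receiver detects the alteration without needing the original copy.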
Data Availability: Availability of data resources
• Double-checking that authorized users have
access to data and affiliated assets when required
• This can be carried out by utilizing a data backup and
recovery plan
Data Security
Challenge
• Data-level security and sensitive data in the domain of the
enterprise
• Security needs to move to the data level, so enterprises can be
sure that their data is protected wherever it goes
Methods
• Enterprises specify that data is not allowed to go outside the U.S.
• Force encryption of certain types of data
• Permit only specified users to access the data
• Provide compliance with the Payment Card Industry
Data Security Standard (PCI DSS)
Application Security
• Application security is a key success factor for a SaaS company
• It is where security features and requirements are defined
and security test results are reviewed
• Application security processes, secure coding guidelines,
training, and testing scripts and tools are a collaborative effort
between the security and development teams
Example:
In product engineering,
• Focus on the application layer, security design and
infrastructure layers
• The security team provides security requirements to
implement
• Collaboration is needed between the security and product
development teams
• External penetration testers
  • Used for application source code reviews and attack and penetration tests
  • They provide a review of the security of the application as well as
assurance to customers that attack and penetration tests are
performed regularly
• Fragmented and undefined collaboration on application security
results in lower-quality design, coding efforts and testing results
• The Open Web Application Security Project (OWASP) provides guidelines for
secure application development on the web
Virtual Machine Security
In a cloud environment,
• Physical servers are consolidated to multiple virtual machine
instances on virtualized servers
• Security is the major concern – data center security teams
replicate security controls for large data centers
• Firewalls, intrusion detection and prevention, integrity
monitoring and log inspection are deployed as software on virtual
machines
• This helps to increase protection and maintain compliance integrity
of servers and applications as virtual resources move from
on-premises to public cloud environments
• It enables critical applications and data to be moved to the cloud
securely
To facilitate centralized management of a server firewall policy, the
security software is loaded onto the virtual machine:
• Bidirectional stateful firewall
• Security software installed on virtual machine
• Enables virtual machine isolation and location awareness, allowing a
tightened policy and the flexibility to move the virtual machine from
on-premises to cloud resources
• Integrity monitoring and log inspection software must be applied at virtual
machine level
The security software can be put into a single software agent
• provides consistent control and management throughout the cloud
• provides economies of scale, deployment and cost savings for both the service
provider and the enterprise
UNIT V – CASE STUDIES
GOOGLE APP ENGINE
 Google AppEngine is a PaaS implementation that provides services for developing and hosting
scalable Web applications.
 AppEngine is essentially a distributed and scalable runtime environment.
 It leverages Google’s distributed infrastructure to scale out applications facing a large number of
requests by allocating more computing resources to them and balancing the load among them.
 Developers can develop applications in Java, Python, and Go.
Architecture and core concepts
AppEngine is a platform for developing scalable applications accessible through the Web. The platform is
logically divided into four major components:
 Infrastructure
 Run- time environment
 Underlying storage
 Set of scalable services
Infrastructure
 AppEngine’s infrastructure takes advantage of many servers available within Google datacenters.
 For each HTTP request, AppEngine locates the servers hosting the application that processes the
request, evaluates their load and, if necessary, allocates additional resources (i.e., servers) or redirects
the request to an existing server.
 The infrastructure is also responsible for monitoring application performance and collecting statistics on
which the billing is calculated.
Runtime environment
 The runtime environment represents the execution context of applications hosted on AppEngine.
 Sandboxing
 The runtime environment ensures that each application can execute without causing a threat to the server
and without being influenced by other applications.
 Supported runtimes
 AppEngine applications can be developed using three different languages and related technologies: Java,
Python, and Go.
 AppEngine currently supports Java 6, and Java Server Pages (JSP) are used for web application
development.
 Support for Python is provided by an optimized Python 2.5.2 interpreter.
 The Go runtime environment allows applications developed with the Go programming
language to be hosted and executed in AppEngine.
Storage
 AppEngine provides various types of storage, which operate differently depending on the volatility of
the data.
 There are three different levels of storage:
o In memory-cache
o Storage for semi-structured data
o Long-term storage for static data.
 Data Store is a service that allows developers to store semi-structured data.
 The service is designed to scale and optimized to quickly access data.
 The underlying infrastructure of DataStore is based on Bigtable, a redundant, distributed, and
semistructured data store.
Application services
 Applications hosted on AppEngine make the most of the services made available through the
runtime environment.
 It simplifies access to data, account management, integration of external resources, messaging and
communication, image manipulation, and asynchronous computation.
 Application Services include UrlFetch, MemCache, Mail and Instant Messaging, Account management
and Image Manipulation.
UrlFetch: The sandbox provides developers with the capability of retrieving a remote resource through
HTTP/HTTPS by means of the UrlFetch service.
MemCache: AppEngine provides caching services by means of Memcache. This is a distributed in-memory
cache that is optimized for fast access and provides developers with a volatile store for objects that are
frequently accessed.
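The idea of a volatile store for frequently accessed objects can be sketched with a tiny in-memory cache with expiry. This is an illustration of the concept only, not Google's Memcache API; the class name, TTL, and keys are invented.

```python
# Illustrative in-memory cache with expiry, in the spirit of the
# Memcache service described above (not Google's actual API).
import time

class VolatileCache:
    """Volatile key-value store: entries silently expire after `ttl` seconds."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self.store = {}             # key -> (value, expiry timestamp)

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self.store.get(key)
        if entry is None or time.monotonic() > entry[1]:
            self.store.pop(key, None)   # drop the stale entry
            return default
        return entry[0]

cache = VolatileCache(ttl=0.05)
cache.set("profile:42", {"name": "Bob"})
print(cache.get("profile:42"))          # {'name': 'Bob'} while fresh
time.sleep(0.1)
print(cache.get("profile:42"))          # None after expiry
```

The volatility is the point: callers must always be prepared for a miss and fall back to the authoritative datastore, which is why a cache can be fast without needing durability guarantees.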
Mail and instant messaging
 AppEngine provides developers with the ability to send and receive mails through Mail.
 The service allows sending email on behalf of the application to specific user accounts.
 Mail operates asynchronously, and in case of failed delivery the sending address is notified through an
email detailing the error.
 AppEngine provides also another way to communicate with the external world: the Extensible
Messaging and Presence Protocol (XMPP).
 Any chat service that supports XMPP, such as Google Talk, can send and receive chat messages to and
from the Web application.
Account management
 AppEngine simplifies account management by allowing developers to leverage Google account
management by means of Google Accounts.
 Using Google Account, Web applications can store profile settings in the form of key-value pairs,
attach them to a given Google account, and quickly retrieve them once the user authenticates.
Image manipulation
 AppEngine allows applications to perform image resizing, rotation, mirroring, and enhancement
through Image Manipulation.
 Image Manipulation is mostly designed for lightweight image processing and is optimized for speed.
Compute services
 AppEngine offers additional services such as Task Queues and Cron Jobs that simplify the execution of
computations.
 Task queues: These allow applications to submit a task for later execution. This is useful for long
computations that cannot be completed within the maximum response time of a request handler.
 Cron Jobs: Cron Jobs are used to schedule a required operation at the desired time. A cron job invokes the
request handler specified in the task at a given time and does not re-execute the task in case of failure.
AMAZON WEB SERVICES
Amazon Web Services (AWS) is a platform that allows the development of flexible applications.
 It provides solutions for elastic infrastructure scalability, messaging, and data storage.
 The platform is accessible through SOAP or RESTful Web service interfaces.
 It offers a variety of cloud services, most notably:
o S3: Amazon Simple Storage Service (storage)
o EC2: Elastic Compute Cloud (virtual servers)
o CloudFront (content delivery)
o CloudFront Streaming (video streaming)
o SimpleDB (structured datastore)
o RDS (Relational Database)
o SQS (reliable messaging)
o Elastic MapReduce (data processing)
o Amazon Virtual Private Cloud (Communication Network)
Compute Services
Compute services constitute the fundamental element of cloud computing systems.
 The fundamental service in this space is Amazon EC2, which delivers an IaaS solution.
 Amazon EC2 allows deploying servers in the form of virtual machines created as instances of a specific
image.
Amazon Machine Images
 Amazon machine images are templates from which it is possible to create a virtual machine.
 An AMI contains a physical file system layout with a predefined operating system installed.
EC2 instances
 EC2 instances represent virtual machines.
 They are created using AMI as templates, which are specialized by selecting the number of cores, their
computing power, and the installed memory.
 Available configurations for EC2 instances include Standard instances, Micro instances,
High-Memory instances, High-CPU instances, Cluster Compute instances, and Cluster GPU
instances.
Advanced Compute services
 AWS CloudFormation: An extension of the cloud deployment model based on EC2; it uses templates
based on JSON.
 AWS Elastic Beanstalk: Easy way to package applications and deploy them on the AWS Cloud.
 Amazon Elastic MapReduce: It provides AWS users with a cloud computing platform for
MapReduce applications.
Storage Services
 AWS provides a collection of services for data storage and information management.
 The core service in this area is represented by Amazon Simple Storage Service (S3).
 This is a distributed object store that allows users to store information in different formats.
 The core components of S3 are
o Buckets: It represents virtual containers in which to store objects;
o Objects: It represents the content that is actually stored.
S3 Key concepts
S3 has been designed to provide a simple storage service that's accessible through a Representational State
Transfer (REST) interface.
 The storage is organized in a two-level hierarchy
 Stored objects cannot be manipulated like standard files.
 Content is not immediately available to users.
 Requests will occasionally fail.
Resource Naming
 Buckets, objects, and attached metadata are made accessible through a REST interface.
 They are represented by uniform resource identifiers (URI)
 Amazon offers three different ways of addressing a bucket:
o Canonical form
o Subdomain form
o Virtual hosting form.
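The three addressing forms can be sketched as URL patterns. These layouts are an illustration based on S3's historically documented addressing styles, not normative AWS syntax; the bucket and domain names are invented.

```python
# Illustrative sketch of the three S3 bucket-addressing styles.
# URL layouts follow S3's classic documented forms; names are invented.

def canonical(bucket, key):
    """Canonical form: the bucket appears in the URL path."""
    return f"http://s3.amazonaws.com/{bucket}/{key}"

def subdomain(bucket, key):
    """Subdomain form: the bucket name becomes a DNS subdomain."""
    return f"http://{bucket}.s3.amazonaws.com/{key}"

def virtual_hosting(custom_domain, key):
    """Virtual hosting form: a user-owned domain CNAME-mapped to the bucket."""
    return f"http://{custom_domain}/{key}"

print(canonical("my-bucket", "photos/cat.jpg"))
print(subdomain("my-bucket", "photos/cat.jpg"))
print(virtual_hosting("media.example.com", "photos/cat.jpg"))
```

The subdomain and virtual hosting forms exist because putting the bucket in DNS lets requests be routed to the right storage cluster before they reach a web server, and lets a bucket be served under a brand's own domain.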
Amazon Elastic Block Store
 Allows AWS users to provide EC2 instances with persistent storage in the form of volumes that can
be mounted at instance startup.
 They accommodate up to 1 TB of space and are accessed through a block device interface.
Amazon ElastiCache
 It is an implementation of an elastic in-memory cache based on a cluster of EC2 instances.
 It provides fast data access from other EC2 instances through a Memcached-compatible protocol.
Structured Storage Solutions
 Amazon provides applications with structured storage services in three different forms:
Preconfigured EC2 AMIs, Amazon Relational Data Storage (RDS), and Amazon SimpleDB.
 Preconfigured EC2 AMIs: Predefined templates featuring an installation of a given database
management system.
 RDS: Relational database service that relies on the EC2 infrastructure and is managed by Amazon.
 Amazon SimpleDB: A lightweight, highly scalable, and flexible data storage solution for
applications that do not require a fully relational model for their data.
Communication Services
Amazon provides facilities to structure and facilitate the communication among existing applications and
services residing within the AWS infrastructure.
These facilities can be organized into two major categories
 Virtual Networking
 Messaging
Virtual networking
 Virtual networking comprises a collection of services that allow AWS users to control the connectivity
to and between compute and storage services.
 Amazon Virtual Private Cloud (VPC) and Amazon Direct Connect provide connectivity solutions in
terms of infrastructure.
 Route 53 facilitates connectivity in terms of naming.
 Amazon VPC provides a great degree of flexibility in creating virtual private networks within the
Amazon infrastructure.
 Amazon Direct Connect allows AWS users to create dedicated networks between the user private
network and Amazon Direct Connect locations, called ports.
 Amazon Route 53 implements dynamic DNS services that allow AWS resources to be reached
through domain names different from the amazon.com domain.
Messaging
The three different types of messaging services offered are
 Amazon Simple Queue Service (SQS): Constitutes a disconnected model for exchanging messages
between applications by means of message queues.
 Amazon Simple Notification Service (SNS): Provides a publish-subscribe method for connecting
heterogeneous applications.
 Amazon Simple Email Service (SES): Provides AWS users with a scalable email service that
leverages the AWS infrastructure.
EUCALYPTUS
 The Eucalyptus framework was one of the first open-source projects to focus on building IaaS clouds.
 Eucalyptus is an open source software platform for implementing Infrastructure as a Service (IaaS) in a
private or hybrid cloud computing environment.
 It has been developed with the intent of providing an open source implementation nearly identical in
functionality to the Amazon Web Services APIs.
 As an Infrastructure as a Service (IaaS) product, Eucalyptus allows users to provision
compute and storage resources on demand.
 Eucalyptus was founded out of a research project in the Computer Science Department at the University of
California, Santa Barbara, and became a for-profit business called Eucalyptus Systems in 2009.
Architecture of Eucalyptus
Components of Eucalyptus are
 Cloud Controller
 Walrus
 Cluster Controller
 Storage Controller
 VMWare Broker
 Node Controller
1. Cluster Controller (CC)
 Cluster Controller manages one or more Node Controllers and is responsible for deploying and
managing instances on them.
 It communicates with Node Controller and Cloud Controller simultaneously.
 CC also manages the networking for the running instances under certain types of networking
modes available in Eucalyptus.
2. Cloud Controller (CLC)
 Cloud Controller is front end for the entire ecosystem.
 CLC provides an Amazon EC2/S3 compliant web services interface to the client tools on one side and
interacts with the rest of the components of the Eucalyptus infrastructure on the other side.
3. Node Controller (NC)
 It is the basic component for Nodes.
 Node controller maintains the life cycle of the instances running on each nodes.
 Node Controller interacts with the OS, hypervisor and the Cluster Controller simultaneously.
4. Walrus Storage Controller (WS3)
 Walrus Storage Controller is a simple file storage system. WS3 stores the machine images and
snapshots.
 It also stores and serves files using S3 APIs.
5. Storage Controller (SC)
 Allows the creation of snapshots of volumes.
 It provides persistent block storage over AoE or iSCSI to the instances.
 It communicates with the Cluster Controller and Node Controller and manages Eucalyptus block
volumes and snapshots to the instances within its specific cluster
Features of Eucalyptus
 SSH Key Management
 Image Management
 Linux-based VM Management
 IP Address Management
 Security Group Management
 Volume and Snapshot Management
Additional Features incorporated in Version 3.3
 Auto Scaling: Allows application developers to scale Eucalyptus resources up or down based on
policies defined using Amazon EC2-compatible APIs and tools
 Elastic Load Balancing: AWS-compatible service that provides greater fault tolerance for applications
 CloudWatch: An AWS-compatible service that allows users to collect metrics, set alarms, identify
trends, and take action to ensure applications run smoothly.
OpenNebula
 OpenNebula is a simple open-source solution to build Private Clouds and manage Data Center
virtualization.
 OpenNebula is an open source cloud middleware solution that manages heterogeneous distributed data
centre infrastructures and serves as an Infrastructure-as-a-Service layer.
 The two primary uses of the OpenNebula platform are data center virtualization solutions and cloud
infrastructure solutions.
 OpenNebula combines existing virtualisation technologies with advanced features for multi-tenancy,
automated provisioning and elasticity.
OpenNebula Architecture
The OpenNebula Project's deployment model resembles a classic cluster architecture, which utilizes:
 Front-End (Master Node): Executes the OpenNebula services.
 Hypervisor Enabled Hosts (Worker Nodes): Provides the resources needed by the VMs.
 Datastores: Hold the base images of the VMs.
 A Physical Network: Used to support basic services such as interconnection of the storage servers and
OpenNebula control operations, and VLANs for the VMs.
Master Node:
 A single gateway or front-end machine, sometimes also called the master node, is responsible for
executing all the OpenNebula services.
 Execution involves queuing, scheduling and submitting jobs to the machines in the cluster.
 The master node also provides the mechanisms to manage the entire system.
 This includes adding virtual machines, monitoring the status of virtual machines, hosting the
repository, and transferring virtual machines when necessary.
Worker node:
 The other machines in the cluster, known as ‘worker nodes’, provide raw computing power for
processing the jobs submitted to the cluster.
 The worker nodes in an OpenNebula cluster are machines that deploy a virtualisation hypervisor, such
as VMware, Xen or KVM.
DataStore
 The datastores simply hold the base images of the Virtual Machines.
 Three different datastore classes are included with OpenNebula: system datastores, image datastores,
and file datastores.
 System datastores hold the images used for running the virtual machines. The images can be complete
copies of an original image, deltas, or symbolic links.
 Image datastores are used to store the disk image repository. Images from the image datastores are
moved to or from the system datastore when virtual machines are deployed or manipulated.
 File datastore is used for regular files and is often used for kernels, ram disks, or context files.
Physical networks
 Physical networks are required to support the interconnection of storage servers and virtual
machines in remote locations.
 It is also essential that the front-end machine can connect to all the worker nodes or hosts.
 At least two physical networks are required, as OpenNebula needs a service network
and an instance network.
 Service Network: Used by the OpenNebula front-end daemons to access the hosts in order to manage
and monitor the hypervisors, and to move image files.
 Instance Network: Offers network connectivity to the VMs across the different hosts.
Features of OpenNebula
 Interoperability
 Security
 Integration with third party tools
 Scalability
 Flexibility
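On the front end, adding a virtual machine typically starts from a plain-text VM template that ties the pieces above together (an image from an image datastore, a network for the instance). A minimal illustrative template follows; the names `web-vm`, `ubuntu-base` and `instance-net` are placeholders, not values from these notes:

```
# Minimal OpenNebula VM template (illustrative names)
NAME   = "web-vm"
CPU    = 1
VCPU   = 2
MEMORY = 1024                           # MB
DISK   = [ IMAGE = "ubuntu-base" ]      # taken from an image datastore
NIC    = [ NETWORK = "instance-net" ]   # attached to the instance network
```

Such a file can be registered with `onetemplate create` and launched with `onetemplate instantiate`, after which the scheduler on the master node places the VM on a suitable worker node.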
OpenStack
 OpenStack is a free and open-source software platform for cloud computing, mostly deployed as
infrastructure-as-a-service (IaaS), whereby virtual servers and other resources are made available to
customers.
 OpenStack software controls large pools of compute, storage, and networking resources throughout a
datacenter, managed through a dashboard or via the OpenStack API.
 OpenStack works with popular enterprise and open source technologies making it ideal for
heterogeneous infrastructure.
 The software platform consists of interrelated components that control diverse, multi-vendor hardware
pools of processing, storage, and networking resources throughout a data center.
 Users manage it either through a web-based dashboard, through command-line tools, or through
RESTful web services.
Architecture and Components
OpenStack has a modular architecture, with various code names for its components:
Dashboard (Horizon)
 Horizon is a web-based interface for managing OpenStack services.
 It provides a GUI for operations such as launching instances, managing networks and setting access
controls.
Identity (Keystone)
 This is the component that provides identity services for OpenStack.
 Basically, this is a centralized list of all the users and their permissions for the services.
 It includes authentication and authorization services.
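Concretely, a client authenticates by POSTing a JSON body to Keystone's `/v3/auth/tokens` endpoint and receives a token that the other services accept. The sketch below builds that Identity v3 password-auth body locally; the user, project and domain names are illustrative assumptions:

```python
import json

def v3_auth_body(username, password, project, domain="Default"):
    """Build the JSON body POSTed to Keystone's /v3/auth/tokens.
    Keystone replies with an X-Subject-Token header whose value is
    passed to the other OpenStack services as proof of identity."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # The scope ties the issued token to one project (tenant),
            # which is what the per-project permission checks act on.
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

body = v3_auth_body("demo", "s3cret", "demo-project")
print(json.dumps(body, indent=2))
```
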
Compute (Nova)
 This is the primary computing engine behind OpenStack.
 This allows deploying and managing virtual machines and other instances to handle computing
tasks.
 Nova is a distributed component; it interacts with Keystone for authentication, Glance for images
and Horizon for its web interface.
Network (Neutron)
 Neutron is the networking component of OpenStack.
 It provides network connectivity as a service, letting instances and the other components communicate with each other smoothly, quickly and efficiently.
Object Storage (Swift)
 The storage system for objects and files is referred to as Swift.
 Files in Swift are referred to by a unique identifier, and Swift itself decides where to store them.
 This leaves the system in charge of the best way to back up data in case of network or hardware
problems.
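The "Swift decides where to store files" idea rests on a ring structure that hashes an object's path to a partition and pins each partition to storage devices. The toy sketch below keeps only the hash-to-partition-to-node idea; real Swift precomputes a partition-to-device table with replicas spread across zones, and the node names here are invented:

```python
import hashlib

# Simplified sketch of ring-style placement: hash the object path,
# take the top bits as a partition number, map the partition to a node.
PART_POWER = 8                      # 2**8 = 256 partitions
DEVICES = ["node1", "node2", "node3", "node4"]

def partition_for(path, part_power=PART_POWER):
    """Top `part_power` bits of the MD5 of the object path."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) >> (128 - part_power)

def device_for(path):
    # Each partition is pinned to one device (no replicas in this toy).
    return DEVICES[partition_for(path) % len(DEVICES)]

obj = "/v1/AUTH_demo/photos/cat.jpg"
print(partition_for(obj), device_for(obj))
```

Because placement is a pure function of the object's identifier, any proxy node can locate an object without a central lookup table, which is what lets Swift scale out and re-replicate data after failures.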
Block Storage (Cinder)
 It manages storage volumes for virtual machines
 It is a block storage component that enables the cloud system to access data at higher speed in
situations where performance is important.
Image (Glance)
 It is a component that provides image services, that is, it manages virtual copies of hard disks.
 Glance allows these images to be used as templates when deploying new virtual machine instances.
Telemetry (Ceilometer)
 Ceilometer provides data measurement services, thus enabling the cloud to offer billing services to
individual users of the cloud.
Orchestration (Heat)
 Heat allows developers to store the requirements of a cloud application in a file that defines what
resources are necessary for that application.
 It helps to manage the infrastructure needed for a cloud service to run.
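Such a file is written in the HOT (Heat Orchestration Template) format. A minimal illustrative example follows; the resource name, parameter and flavor are assumed placeholders, not values from these notes:

```yaml
heat_template_version: 2016-10-14

description: Minimal single-server stack (illustrative)

parameters:
  image_id:
    type: string
    description: Glance image to boot from

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_id }
      flavor: m1.small

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [my_server, first_address] }
```

Heat reads the declared resources, calls the other services (Nova, Neutron, Cinder) to create them in dependency order, and can later update or delete the whole stack as a unit.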
  • 1. OIT552 Cloud Computing Course Material Prepared By Kaviya.P Assistant Professor / Information Technology Kamaraj College of Engineering & Technology (Autonomous)
  • 2. 15-11-2021 1 IBM Power Systems Cloud computing is an umbrella term used to refer to Internet based development and services Introduction to Cloud Computing IBM Power Systems The Next Revolution in IT The Big Switch in IT • Classical Computing – Buy & Own • Hardware, System Software, Applications often to meet peak needs. – Install, Configure, Test, Verify, Evaluate – Manage – .. – Finally, use it – $$$$....$(High CapEx) • Cloud Computing – Subscribe – Use – $ - pay for what you use, based on QoS Every 18 months? IBM Power Systems WHAT IS CLOUD COMPUTING ? What do they say ? IBM Power Systems What is Cloud Computing? • Shared pool of configurable computing resources • On-demand network access • Provisioned by the Service Provider 4
  • 3. 15-11-2021 2 IBM Power Systems • A model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) • It can be rapidly provisioned and released with minimal management effort or service provider interaction. • Promotes availability • Provides high level abstraction of computation and storage model. • It has essential characteristics, service models, and deployment models. Cloud Definitions IBM Power Systems Cloud Definitions • Definition from Wikipedia – Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand like the electricity grid. – Cloud computing - A style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. IBM Power Systems Cloud Definitions • Definition from Whatis.com – Name cloud computing was inspired by the cloud symbol that's often used to represent the Internet in flowcharts and diagrams. – Cloud computing is a general term for anything that involves delivering hosted services over the Internet. IBM Power Systems Cloud Definitions • Definition from Berkeley – Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services.
  • 4. 15-11-2021 3 IBM Power Systems Cloud Definitions • Definition from Buyya  A Cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers .  They are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers. IBM Power Systems Cloud Applications •Scientific / Technical Applications •Business Applications •Consumer / Social Applications Science and Technical Applications Business Applications Consumer / Social Applications IBM Power Systems Cloud Computing • Provides the facility to provision virtual hardware, runtime environment and services to people – on demand service • These facilities are used by end user as long as they are needed by them • Long term Vision of cloud computing o IT services are traded as utilities on an open market without technological and legal barriers IBM Power Systems
  • 5. 15-11-2021 4 IBM Power Systems IBM Power Systems IBM Power Systems IBM Power Systems
  • 6. 15-11-2021 5 IBM Power Systems IBM Power Systems IBM Power Systems IBM Power Systems
  • 7. 15-11-2021 6 IBM Power Systems IBM Power Systems IBM Power Systems IBM Power Systems
  • 9. 15-11-2021 2 ROOTS OF CLOUD COMPUTING • Hardware (virtualization, multi-core chips) • Internet technologies (Web services, service-oriented architectures, Web 2.0), • Distributed computing (clusters, grids) • Systems management (autonomic center automation). computing, data From Mainframes to Clouds • Switch in the IT world • From in-house generated computing power • into • Utility-supplied computing resources delivered over the Internet as Web services Computing delivered as a utility can be defined as ―on demand delivery of infrastructure, applications, and business processes in a security-rich, shared, scalable, and based computer environment over the Internet for a fee”
  • 10. 15-11-2021 3 In 1970s, • Common data processing tasks ( payroll automation) operated time-shared mainframes as utilities • Mainframes had to operate at very high utilization rates • They are very expensive. Disadvantages • With the advent of fast and inexpensive microprocessors • isolation of workload into dedicated servers • Incompatibilities between software stacks and operating systems • the unavailability of efficient computer network SOA, Web Services, Web 2.0 and Mashups • Web services • glue together applications running on different messaging product platforms • enabling information from one application to be made available to others • enabling internal applications to be made available over the Internet. SOA, Web Services, Web 2.0 and Mashups • Describe, compose, and orchestrate services, package and transport messages between services, publish and discover services, represent quality of service (QoS) parameters, and ensure security in service access. • Created on HTTP and XML - providing a common mechanism for delivering services, making them ideal for implementing a service-oriented architecture (SOA). • Purpose of a SOA • to address requirements of loosely coupled, standards-based, and protocol-independent distributed computing. • Software resources are packaged as ―services, • They are well-defined, self contained modules that provide standard business functionality • They are independent of the state or context of other services. • Service Mashups - information and services may be building blocks of programmatically aggregated, acting as complex compositions
  • 11. 15-11-2021 4 • Use of distributed Systems to solve computational problems • The processors communicate with one another through communication lines such as high speed buses or telephone lines • Each processor has its own local memory • Examples : • ATM, Internet, Intranet / Workgroups Distributed Computing Properties of Distributed Computing • Fault Tolerance • When one or some node fails, the whole system still work fine except performance • Need to check the status of each node • Resource Sharing • Each user can share the computing power and storage resources in the system with other users • Load Sharing • Dispatching several tasks to each nodes can help share loading to the whole system • Easy to expand • Expect to use few time when adding nodes • Performance • Parallel computing can be considered a subset of distributed computing Why Distributed Computing ? • Nature of application • Performance • Computing intensive • Task consume lot of time on computing • Ex: computation of pi value using Monte carlo simulation • Data intensive • Task deals with a large amount or large size of filesng • Ex: Facebook, Experimental data processing • Robustness • No SPOF ( Single Point Of Failure) • Other nodes can execute the same task executed on failed node
  • 12. 15-11-2021 5 • Grid • Users (client applications) gain access to computing resources (processors, storage, data, applications) as needed with little knowledge of where those resources are located or what the underlying technologies, hardware and operating system • “The Gird” links computing workstations, servers, together storage elements) and provides resources (PC, the mechanism needed to access them • Grid Computing • is a computing infrastructure that provides dependable, consistent, pervasive and inexpensive access to computational capabilities Grid Computing • Grid Computing • Share more than information • Data, computing power, applications in dynamic environment, multi-institutional, virtual organizations • Effective use of resources at many institutes. People from many institutions working to solve a common problem ( virtual organization) • Join local communities • Interactions with the underneath layers must be transparent and seemless to the users • Open Grid ServicesArchitecture (OGSA) • defining a set of core capabilities and behaviors that address key concerns in grid systems • Globus Toolkit is a middleware that implements several standard Grid services • Grid brokers, which facilitate user interaction with multiple middleware and implement policies to meet QoS needs.
  • 13. 15-11-2021 6 • Types of Grid • Computational Grid • provide secure access to large pool of shared processing power suitable for high throughput applications • Data Grid • provide an infrastructure to support data storage, data manipulation of large volume of data stored discovery, data handling, data publication and data in heterogeneous databases and file system • Disadvantages • Ensuring QoS in grids is difficult • availability of resources with diverse software configurations • Eg: Disparate operating systems, libraries, compilers, runtime environments but user applications would often run only on specially customized environments • Cluster • is a type of parallel or distributed computer system consists of a collection of inter-connected stand-alone computers working together as a single integrated computing resource • Key components • Multiple standalone computers, operating systems, high performance interconnects, middleware, parallel computing environments and applications • Clusters are usually deployed to improve speed Cluster Computing
  • 14. 15-11-2021 7 • Types of Clusters • High Availability or Failover clusters • Load Balancing Clusters • Parallel / Distributed Processing Clusters • Benefits of clustering • System availability • Offer inherent high system availability due to redundancy of hardware, OS and applications • Hardware fault tolerance • Redundancy for most system components (hardware and software) • OS and applications reliability • Run multiple copies of OS, applications • Scalability • Adding servers to the cluster • High Performance • Running cluster enabled programme • Utility • Eg : electrical power – seek to meet fluctuating needs and charge for the resources based on usage rather than flat basis. • Utility Computing • Service provisioning models in which a service provider makes computing resources and infrastructure management available to the customer as needed and changes them for specific usage rather than a flat rate • Advantage • Low or no initial cost to acquire compute resource – Computational resource are essentially rented Utility Computing
  • 15. 15-11-2021 8 • U tility Computing ? • Pay-for-use Pricing Model • Data Center Virtualization and provisioning • Solves Resource utilization problem • Outsourcing • Web Services Delivery • Automation Hardware Virtualization • Hardware virtualization allows running multiple operating systems and software stacks on a single physical platform • Software layer - Virtual machine monitor (VMM) - Hypervisor - mediates access to the physical hardware presenting to each guest operating system a virtual machine (VM), which is a set of virtual platform interfaces
  • 16. 15-11-2021 9 Technologiesincreased adoption of virtualization • Multi-core chips, • Paravirtualization, • Hardware-assisted virtualization, and • Live migration of VMs Benefits • Improvements on sharing and utilization • Better manageability • Higher reliability. Capabilities regarding management of workload in a virtualized system • Isolation • Consolidation • Migration • Work load Isolation • Execution of one VM should not affect the performance of another VM • Consolidation • Consolidation of several individual and heterogeneous workloads onto a single physical platform leads to better system utilization. • Workload Migration • It is done by encapsulating a guest OS state within a VM and allowing it to be suspended, fully serialized, migrated to a different platform, and resumed immediately or preserved to be restored at a later date Virtual Appliances • An application combined with the environment needed to run it Environment - operating system, libraries, compilers, databases, application containers, and so forth. • It eases software customization, configuration, and patching and improves portability. Example –AMI(Amazon Machine Image) format forAmazon EC2 public cloud
  • 17. 15-11-2021 10 Open Virtualization Format • Consists of a file or Set of files – • Describing the VM hardware characteristics (e.g., memory, network cards, and disks) • Operating system details, startup, and shutdown actions • Virtual disks themselves • Other metadata containing product and licensing information. Autonomic Computing • Systems should manage themselves, with high-level guidance from humans • Autonomic (self-managing) systems rely on • Monitoring probes and gauges (sensors), • On an adaptation engine (autonomic manager) for computing optimizations based on monitoring data, and • On effectors to carry out changes on the system. • 4 properties of autonomic systems(by IBM): • self-configuration, • self-optimization, • self-healing, and • self-protection. • IBM - Reference model for autonomic control loops of autonomic managers MAPE-K (Monitor Analyze Plan Execute—Knowledge) • Autonomic computing inspire software technologies for data centre automation • Its Tasks are • Management of service levels of running applications • Management of data centre capacity • Proactive disaster recovery and • Automation of VM provisioning
  • 18. 15-11-2021 1 Desired Features of Cloud 1 Desired Features of Cloud To satisfy the expectations of consumers cloud must provide, • Self-Service • Per Usage Metering – Billing • Elastic • Customization 2
  • 19. 15-11-2021 2 Desired Features of Cloud Self Service • On-demand instant access to resources • Must allow self service access, So customers can request, customize, pay and use services without intervention. 3 Desired Features of Cloud Per Usage Metering and Billing • Services must be prized on short term basis • Allow users to release resources as soon as they are not needed. • Must offer efficient trading services like prizing, accounting and billing • Metering should be done accordingly for different services • Usage promptly reported 4
  • 20. 15-11-2021 3 Desired Features of Cloud Elasticity • Infinite computing resources available on demand. • Rapidly provide resources in any quantity and at any time. • Additional resources can be provided when application load increases • Release when load decreases. 5 Desired Features of Cloud Customization • Resources rented from cloud must be customizable. • In IaaS – allow users to deploy specialised virtual appliances and give privileged access to servers. 6
  • 21. CHALLENGES AND RISKS OF CLOUD COMPUTING Despite the initial success and popularity of the cloud computing paradigm and the extensive availability of providers and tools, a significant number of challenges and risks are inherent to this new model of computing. Issues faced in cloud computing are  Security, Privacy and Trust  Data Lock in Standardization  Availability, Fault-Tolerance, and Disaster Recovery  Resource Management and Energy Efficient Security, Privacy and Trust  Current cloud offerings are essentially public, exposing the system to more attacks. For this reason there are potentially additional challenges to make cloud computing environments as secure as in-house IT systems.  Security and privacy affect the entire cloud computing stack, since there is a massive use of third-party services and infrastructures that are used to host important data or to perform critical operations.  In this scenario, the trust toward providers is fundamental to ensure the desired level of privacy for applications hosted in the cloud.  Legal and regulatory issues also need attention. When data are moved into the Cloud, providers may choose to locate them anywhere on the planet.  The physical location of data centers determines the set of laws that can be applied to the management of data.  For example, specific cryptography techniques could not be used because they are not allowed in some countries.  Similarly, country laws can impose that sensitive data, such as patient health records, are to be stored within national border. Data Lock-In and Standardization  A major concern of cloud computing users is about having their data locked-in by a certain provider.  Users may want to move data and applications out from a provider that does not meet their requirements.  However, in their current form, cloud computing infrastructures and platforms do not employ standard methods of storing user data and applications. 
 The answer to this concern is standardization. In this direction, there are efforts to create open standards for cloud computing.  The Cloud Computing Interoperability Forum (CCIF) was formed by organizations such as Intel, Sun, and Cisco in order to “enable a global cloud computing ecosystem whereby organizations are able to seamlessly work together for the purposes for wider industry adoption of cloud computing technology”.  The development of the Unified Cloud Interface (UCI) by CCIF aims at creating a standard programmatic point of access to an entire cloud infrastructure.  In the hardware virtualization sphere, the Open Virtual Format (OVF) aims at facilitating
  • 22. packing and distribution of software to be run on VMs so that virtual appliances can be made portable—that is, seamlessly run on hypervisor of different vendor Availability, Fault-Tolerance and Disaster Recovery  Availability of the service, its overall performance, and what measures are to be taken when something goes wrong in the system or its components is very essential in cloud.  Users seek for a warranty before they can comfortably move their business to the cloud.  SLAs, which include QoS requirements, must be ideally set up between customers andcloud computing providers to act as warranty.  An SLA specifies the details of the service to be provided, including availability and performance guarantees.  Additionally, metrics must be agreed upon by all parties, and penalties for violating the expectations must also be approved. Resource Management and Energy-Efficiency  Resource Management and Energy-Efficiency is an important challenge faced by providers of cloud computing services is the efficient management of virtualized resource pools.  Physical resources such as CPU cores, disk space, and network bandwidth must be sliced and shared among virtual machines running potentially heterogeneous workloads.  The multi-dimensional nature of virtual machines complicates the activity of finding a good mapping of VMs onto available physical hosts while maximizing user utility.  Dimensions to be considered include: number of CPUs, amount of memory, size of virtual disks, and network bandwidth.  Dynamic VM mapping policies may leverage the ability to suspend, migrate, and resume VMs as an easy way of preempting low-priority allocations in favor of higher-priority ones.  Migration of VMs also brings additional challenges such as detecting when to initiate a migration, which VM to migrate, and where to migrate. 
In addition, policies may take advantage of live migration of virtual machines to relocate data center load without significantly disrupting running services.  Data centers consume large amounts of electricity. According to data published by HP, 100 server racks can consume 1.3 MW of power and another 1.3 MW are required by the cooling system, thus costing USD 2.6 million per year.  Besides the monetary cost, data centers significantly impact the environment in terms of CO2 emissions from the cooling systems.
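The VM-to-host mapping problem described above can be sketched as a small bin-packing heuristic. This is a toy illustration only, with invented host names and one resource dimension (CPU cores); real cloud schedulers weigh memory, disk, bandwidth, and migration cost as well.

```python
# First-fit-decreasing sketch of mapping VMs onto physical hosts.
# Host names, VM names, and capacities below are hypothetical.

def place_vms(vms, hosts):
    """Assign each VM (name, cpu_demand) to the first host with spare CPU.

    vms: list of (vm_name, cpu_cores_needed)
    hosts: dict of host_name -> free_cpu_cores (mutated as VMs are placed)
    Returns vm_name -> host_name; unplaceable VMs map to None.
    """
    placement = {}
    # Largest demands first: a common heuristic to reduce fragmentation.
    for name, demand in sorted(vms, key=lambda v: -v[1]):
        placement[name] = None
        for host, free in hosts.items():
            if free >= demand:
                hosts[host] = free - demand
                placement[name] = host
                break
    return placement

hosts = {"host-a": 8, "host-b": 4}
vms = [("web", 2), ("db", 6), ("cache", 3)]
print(place_vms(vms, hosts))
# {'db': 'host-a', 'cache': 'host-b', 'web': 'host-a'}
```

A dynamic mapping policy would additionally reconsider this placement over time, migrating VMs to consolidate load onto fewer hosts and power the rest down.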
 • 23. BENEFITS OF CLOUD COMPUTING No upfront commitment  IT assets, namely software and infrastructure, are turned into utility costs, which are paid for as long as they are used, not paid for upfront.  Capital costs are costs associated with assets that need to be paid in advance to start a business activity.  Before cloud computing, IT infrastructure and software generated capital costs, since they were paid upfront so that business start-ups could afford a computing infrastructure, enabling the business activities of the organization. Cost efficiency  The most evident benefit from the use of cloud computing systems and technologies is the increased economic return due to the reduced maintenance and operational costs related to IT software and infrastructure.  The biggest reason behind shifting to cloud computing is that it costs considerably less than on-premises technology.  Companies no longer need to store data on their own disks, as the cloud offers enormous storage space, saving companies money and resources.  It helps you save substantial capital cost as it does not need any physical hardware investments.  Also, you do not need trained personnel to maintain the hardware; the buying and managing of equipment is done by the cloud service provider. On Demand  Services can be accessed on demand and only when required.  Cloud users can access the required services only when they need them and pay only for the usage.  Any subscriber of a cloud service can access the services from anywhere and at any time. Disaster Recovery  It is highly recommended that businesses have an emergency backup plan ready in case of an emergency. Cloud storage can be used as a backup plan by businesses by providing a second copy of important files.  These files are stored at a remote location and can be accessed through an internet connection. 
Excellent accessibility  Storing information in the cloud allows you to access it anywhere and anytime regardless of the machine, making it a highly accessible and flexible technology of present times.  Information and services stored in the cloud are exposed to users by Web-based interfaces that make them accessible from portable devices as well as desktops at home. Scalability  If you are anticipating a huge upswing in computing need (or even if you are surprised by a sudden demand), cloud computing can help you manage. Rather than having to buy, install, and configure new equipment, you can buy additional CPU cycles or storage from a third party.  For example, organizations can add more servers to process workload spikes and dismiss them when they are no longer needed. Flexibility  Increased agility in defining and structuring software systems is another significant benefit of cloud computing.  Since organizations rent IT services, they can more dynamically and flexibly compose their software systems, without being constrained by capital costs for IT assets.  There is a reduced need for capacity planning, since cloud computing allows organizations to react to unplanned surges in demand quite rapidly.
 • 24. DISADVANTAGES OF CLOUD COMPUTING Downtime  With massive overload on the servers from various clients, the service provider might come up against technical outages. Due to this unavoidable situation, your business could be temporarily disrupted.  And in case your internet connection is down, you will not be able to access the data, software or applications on the cloud. So basically you are depending on the quality of the internet to access the tools and software, as they are not installed in-house. Security  There is inherent risk for your data even though cloud service providers abide by strict confidentiality terms, are industry certified and implement the best security standards.  When you use cloud-based technology, you are extending your access controls to a third-party agent to import critical confidential data from your company onto the cloud.  With high levels of security and confidentiality involved, the cloud service providers are often faced with security challenges.  The presence of data on the cloud opens up a greater risk of data theft, as hackers could find loopholes in the framework. Basically, your data on the cloud is at a higher risk than if it were managed in-house.  Hackers could find ways to gain access to data, scan and exploit a loophole, and look for vulnerabilities on the cloud server to gain access to the data.  For instance, when you are dealing with a multi-tenant cloud server, the chances of a hacker breaking into your data are higher, as the server has data stored by multiple users.  However, cloud-based servers take enough precautions to prevent data theft, and the likelihood of being hacked is quite low. Vendor Lock-In  Companies might find it a bit of a hassle to change vendors.  Although cloud service providers assure that it is a breeze to use the cloud and integrate your business needs with them, disengaging and moving to the next vendor is not yet a straightforward process. 
 Applications that work fine with one platform may not be compatible with another.  The transition might pose a risk and the change could be inflexible due to synchronization and support issues. Limited Control  Organizations could have limited access control over the data, tools and apps, as the cloud is controlled by the service provider.  It hands over minimal control to the customer, as access is limited to the applications, tools and data that are loaded on the server, with no access to the infrastructure itself.  The customer may not have access to the key administrative services. Legal Issues  Legal issues may also arise. These are specifically tied to the ubiquitous nature of cloud computing, which spreads computing infrastructure across diverse geographical locations.  Different legislation about privacy in different countries may potentially create disputes as to the rights that third parties (including government agencies) have to your data.  U.S. legislation is known to give extreme powers to government agencies to acquire confidential data when there is suspicion of operations leading to a threat to national security.  European countries are more restrictive and protect the right of privacy.
 • 25. Basics of Virtualization, Types of Virtualization, Implementation Levels of Virtualization BASICS OF VIRTUALIZATION • Virtualization is a computer architecture technology by which multiple virtual machines (VMs) are multiplexed in the same hardware machine. • The purpose of a VM is to enhance resource sharing by many users and improve computer performance in terms of resource utilization and application flexibility. • Hardware resources such as the CPU, memory, and I/O devices, or software resources such as the OS and software libraries, can be virtualized
 • 26. Levels of Virtualization Implementation • A traditional computer runs a host OS specially tailored for its hardware architecture • After virtualization, different user applications managed by their own OSes can run on the same hardware, independent of the host OS • An additional layer, called the virtualization layer, is inserted; it is known as the hypervisor or virtual machine monitor (VMM) • The main function of this software layer is to virtualize the physical hardware of the host machine into virtual resources to be used by the VMs.
 • 27. Levels of Virtualization Implementation • Virtualization software creates the abstraction of VMs by interposing a virtualization layer at various levels of a computer system. • Common virtualization layers are: 1. Instruction Set Architecture (ISA) level 2. Hardware level 3. Operating System level 4. Library support level 5. Application level • Figure: Virtualization ranging from hardware to applications in five abstraction levels
 • 28. Instruction Set Architecture Level • Virtualization is performed by emulating a given ISA with the ISA of the host machine. • For example, MIPS binary code can run on an x86-based host machine with the help of ISA emulation. • It is possible to run a large amount of legacy binary code written for various processors on any given new hardware host machine. • Instruction set emulation leads to virtual ISAs created on any hardware machine. Instruction Set Architecture Level • The basic emulation method is code interpretation. • An interpreter program interprets the source instructions to target instructions one by one. • This process is relatively slow. • For better performance, dynamic binary translation is desired. This approach translates basic blocks of dynamic source instructions to target instructions.
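The contrast between instruction-by-instruction interpretation and dynamic binary translation can be sketched with a toy, invented three-instruction "source ISA"; real emulators such as QEMU translate basic blocks into host machine code rather than Python closures, but the structure is the same.

```python
# Toy contrast of the two ISA-emulation strategies described above.
# The source "ISA" (ADD/SUB/MUL on an accumulator) is invented.

def interpret(program, acc=0):
    """Instruction-by-instruction interpretation: simple but slow."""
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "SUB":
            acc -= arg
        elif op == "MUL":
            acc *= arg
    return acc

_translation_cache = {}

def translate_block(program):
    """Dynamic binary translation: convert a whole basic block once,
    cache the result, and reuse it on later executions."""
    key = tuple(program)
    if key not in _translation_cache:
        ops = {"ADD": lambda a, x: a + x,
               "SUB": lambda a, x: a - x,
               "MUL": lambda a, x: a * x}
        steps = [(ops[op], arg) for op, arg in program]
        def compiled(acc=0):
            for fn, arg in steps:
                acc = fn(acc, arg)
            return acc
        _translation_cache[key] = compiled
    return _translation_cache[key]

block = [("ADD", 5), ("MUL", 3), ("SUB", 4)]
assert interpret(block) == translate_block(block)() == 11
```

The translation cache is why binary translation wins over time: the per-block translation cost is paid once, while the interpreter pays its dispatch overhead on every execution of every instruction.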
 • 29. Instruction Set Architecture Level • Instruction set emulation requires binary translation and optimization. • A virtual instruction set architecture (V-ISA) thus requires adding a processor-specific software translation layer to the compiler. Hardware Abstraction Level • Hardware-level virtualization is performed right on top of the bare hardware. • This approach generates a virtual hardware environment for a VM. • The intention is to upgrade the hardware utilization rate by multiple users concurrently. • The idea was implemented in the IBM VM/370 in the 1960s. • More recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest OS applications.
 • 30. Operating System Level • Refers to an abstraction layer between the traditional OS and user applications. • OS-level virtualization creates isolated containers on a single physical server and OS instances to utilize the hardware and software in data centers. • The containers behave like real servers. Operating System Level • OS-level virtualization is commonly used in creating virtual hosting environments to allocate hardware resources among a large number of mutually distrusting users.
 • 31. Library Support Level • Most applications use APIs exported by user-level libraries rather than using lengthy system calls to the OS. • Virtualization with library interfaces is possible by controlling the communication link between applications and the rest of a system through API hooks. • The software tool WINE has implemented this approach to support Windows applications on top of UNIX hosts. • Another example is vCUDA, which allows applications executing within VMs to leverage GPU hardware acceleration. User-Application Level • Virtualizes an application as a VM. • Application-level virtualization is also known as process-level virtualization. • The application seems to be running on a local machine, but in fact it is running on a virtual machine (such as a server) in another location • The most popular approach is to deploy high-level language (HLL) VMs • The virtualization layer sits as an application program on top of the operating system. • The Microsoft .NET CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM.
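The API-hook idea behind library-level virtualization can be sketched as a dispatch table that intercepts an application's library calls and redirects them to an emulated implementation. All names here (`create_file`, the hook table) are invented for illustration; WINE does this at the Windows-API/DLL boundary, not in Python.

```python
# Minimal sketch of library-interface virtualization via API hooks.
# The "native" and "emulated" APIs below are hypothetical.

def native_create_file(path):
    # Stand-in for an API the host does not actually provide.
    raise NotImplementedError("host has no such API")

def emulated_create_file(path):
    # The virtualization layer's replacement, built on host facilities.
    return f"emulated handle for {path}"

# The hook table maps intercepted API names to replacement functions.
api_hooks = {"create_file": emulated_create_file}

def call_api(name, *args):
    """Dispatch an application's API call through the hook table first."""
    if name in api_hooks:
        return api_hooks[name](*args)
    return globals()[f"native_{name}"](*args)

print(call_api("create_file", "C:/docs/report.txt"))
# The application believes it called the native API; the layer handled it.
```

The key point is that the application is unchanged: only the communication link between the application and the rest of the system is interposed.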
 • 32. Application-level virtualization • Application-level virtualization is also known as – application isolation, – application sandboxing, or – application streaming. • The process involves wrapping the application in a layer that is isolated from the host OS and other applications. • An example is the LANDesk application virtualization platform: it deploys applications as self-contained, executable files in an isolated environment without requiring installation, system modifications, or elevated security privileges. Relative Merits of Different Approaches
 • 33. VMM Design Requirements and Providers • Hardware-level virtualization inserts a layer between real hardware and traditional operating systems. • This layer is commonly called the Virtual Machine Monitor (VMM) and it manages the hardware resources of a computing system. • Each time a program accesses the hardware, the VMM captures the access • One hardware component, such as the CPU, can be virtualized as several virtual copies. Three requirements for a VMM 1. A VMM should provide an environment identical to the original machine. 2. Programs run in this environment should show, at worst, only minor decreases in speed. 3. A VMM should be in complete control of the system resources.
 • 34. Virtual Machine Monitor • A VMM should exhibit functionality identical to that of the original machine when programs run on it directly. • Two possible exceptions are permitted: – Differences caused by the availability of system resources: these arise when more than one VM runs on the same machine – Differences caused by timing dependencies. • These two differences pertain to performance, while the function a VMM provides stays the same as that of a real machine Virtual Machine Monitor • Compared with a physical machine, no one prefers a VMM if its efficiency is too low. • Traditional emulators and complete software interpreters emulate each instruction by means of functions or macros • This provides the most flexible solution for VMMs. • However, emulators and simulators are too slow to be used as real machines. • To guarantee the efficiency of a VMM, a statistically dominant subset of the virtual processor’s instructions needs to be executed directly by the real processor, with no software intervention by the VMM
 • 35. • Complete control of these resources by a VMM includes the following aspects: (1) The VMM is responsible for allocating hardware resources for programs; (2) it is not possible for a program to access any resource not explicitly allocated to it; and (3) it is possible under certain circumstances for a VMM to regain control of resources already allocated. Comparison of Four VMM and Hypervisor Software Packages
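The three control aspects above can be captured in a toy bookkeeping model. This is a didactic sketch with invented resource names, not how a real VMM enforces isolation (which relies on hardware protection, not lookup tables).

```python
# Toy model of the VMM's three resource-control aspects:
# (1) allocate, (2) deny unallocated access, (3) reclaim.

class ToyVMM:
    def __init__(self, resources):
        self.free = set(resources)   # resources not yet handed out
        self.owner = {}              # resource -> VM currently holding it

    def allocate(self, vm, resource):        # aspect (1)
        if resource not in self.free:
            raise RuntimeError(f"{resource} unavailable")
        self.free.discard(resource)
        self.owner[resource] = vm

    def access(self, vm, resource):          # aspect (2)
        if self.owner.get(resource) != vm:
            raise PermissionError(f"{vm} may not touch {resource}")
        return f"{vm} uses {resource}"

    def reclaim(self, resource):             # aspect (3)
        self.owner.pop(resource, None)
        self.free.add(resource)

vmm = ToyVMM({"cpu0", "disk0"})
vmm.allocate("vm1", "cpu0")
print(vmm.access("vm1", "cpu0"))   # allowed: vm1 owns cpu0
vmm.reclaim("cpu0")                # the VMM regains control
```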
 • 36. LOAD BALANCING  With the explosive growth of the Internet and its increasingly important role in our lives, traffic on the Internet is increasing dramatically, growing at over 100% annually.  The workload on servers is increasing rapidly, so servers may easily be overloaded, especially servers for a popular web site. There are two basic solutions to the problem of overloaded servers. One is the single-server solution,  i.e., upgrade the server to a higher-performance server. However, the new server may also soon be overloaded, requiring another upgrade.  Further, the upgrading process is complex and the cost is high. The second is the multiple-server solution,  i.e., build a scalable network service system on a cluster of servers. When load increases, you can simply add one or more new servers to the cluster, and commodity servers have the highest performance/cost ratio.  Therefore, it is more scalable and more cost-effective to build a server cluster system for network services. Cloud Load Balancing  Cloud load balancing is the process of distributing workloads across multiple computing resources.  Cloud load balancing is defined as the method of splitting workloads and computing resources in a cloud computing environment.  It enables enterprises to manage workload demands or application demands by distributing resources among numerous computers, networks or servers. Load Balancer  A load balancer is a device that distributes network or application traffic across a cluster of servers.  Load balancing improves responsiveness and increases availability of applications.  A load balancer sits between the client and the server farm, accepting incoming network and application traffic and distributing the traffic across multiple backend servers using various methods. Load Balancer  Cloud-based server farms can achieve high scalability and availability using server load balancing. 
This technique makes the server farm appear to clients as a single server.
 • 37.  Load balancing distributes service requests from clients across a bank of servers and makes those servers appear as if they were a single powerful server responding to client requests.  Load balancing solutions can be divided into software-based load balancers and hardware-based load balancers.  Hardware-based load balancers are specialized boxes that include Application Specific Integrated Circuits (ASICs) customized for a specific use.  Software-based load balancers run on standard operating systems and standard hardware components such as desktop PCs. Load balancing Algorithms  Round Robin  Weighted Round Robin  Least Connection  Source IP Hash  Global Server Load Balancing Round Robin:  This load balancing technique involves a pool of servers that have been identically configured to deliver the same service as each other.  Each will have a unique IP address but will be linked to the same domain name, and requests are distributed across the servers in rotation. Weighted Round Robin  Weighted Round Robin builds on the simple Round Robin load balancing method.  In the weighted version, each server in the pool is given a static numerical weighting.  Servers with higher weightings get more requests sent to them. Least Connection  Neither Round Robin nor Weighted Round Robin takes the current server load into consideration when distributing requests.  The Least Connection method does take the current server load into consideration.  The current request goes to the server that is servicing the least number of active sessions at the current time. Source IP Hash  This algorithm combines the source and destination IP addresses of the client and server to generate a unique hash key.  The key is used to allocate the client to a particular server. As the key can be regenerated if the session is broken, the client request is directed to the same server it was using previously. 
 This is useful if it’s important that a client should connect to a session that is still active after a disconnection.  For example, to retain items in a shopping cart between sessions. Global Server Load Balancing (GSLB)  GSLB load balances DNS requests, not traffic.  It uses algorithms such as round robin, weighted round robin, fixed weighting, real server load, location-based, proximity and all available. It offers High Availability through multiple data centers.  If a primary site is down, traffic is diverted to a disaster recovery site. Clients connect to their fastest performing, geographically closest data center.  Application health checking ensures unavailable services or data centers are not visible to clients.
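The four server-side algorithms above can be sketched in a few lines each. Server names, weights, and session counts below are hypothetical; production balancers (e.g., HAProxy, NGINX) add health checks and smoother weighting schemes.

```python
# Minimal sketches of Round Robin, Weighted Round Robin,
# Least Connection, and Source IP Hash selection.
import hashlib
import itertools

servers = ["s1", "s2", "s3"]

# Round Robin: cycle through the pool in order.
rr = itertools.cycle(servers)

def round_robin():
    return next(rr)

# Weighted Round Robin: higher-weight servers appear more often.
weights = {"s1": 3, "s2": 1, "s3": 1}
wrr = itertools.cycle([s for s in servers for _ in range(weights[s])])

def weighted_round_robin():
    return next(wrr)

# Least Connection: pick the server with the fewest active sessions.
active = {"s1": 12, "s2": 4, "s3": 9}

def least_connection():
    return min(active, key=active.get)

# Source IP Hash: hash client + balancer addresses so a client
# "sticks" to the same backend across reconnects.
def source_ip_hash(client_ip, vip="10.0.0.1"):
    key = hashlib.sha256(f"{client_ip}:{vip}".encode()).digest()
    return servers[key[0] % len(servers)]

print(round_robin(), weighted_round_robin(), least_connection(),
      source_ip_hash("203.0.113.7"))
```

Note the stickiness property of Source IP Hash: the same client IP always hashes to the same server, which is what preserves session state (such as a shopping cart) after a disconnection.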
 • 38. Virtualization Structures, Tools and Mechanisms • Before virtualization, the operating system manages the hardware. • After virtualization, a virtualization layer is inserted between the hardware and the OS. • The virtualization layer is responsible for converting portions of the real hardware into virtual hardware. Virtualization Structures, Tools & Mechanisms • Depending on the position of the virtualization layer, there are several classes of VM architectures, namely – Hypervisor architecture – Paravirtualization – Host-based virtualization Hypervisor and Xen Architecture • The hypervisor supports hardware-level virtualization on bare-metal devices such as the CPU, memory, disk and network interfaces • The hypervisor sits directly between the physical hardware and its OS • Depending on the functionality, a hypervisor can assume a micro-kernel architecture or a monolithic hypervisor architecture
 • 39. Hypervisor and Xen Architecture • A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory management and processor scheduling) • Device drivers and other changeable components are outside the hypervisor • A monolithic hypervisor implements all the aforementioned functions, including those of the device drivers • The size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor Xen Architecture • Xen is an open source hypervisor program developed at Cambridge University • Xen is a microkernel hypervisor, which separates the policy from the mechanism • It implements all the mechanisms, leaving the policy to be handled by Domain 0 • Xen does not include any device drivers natively • It just provides a mechanism by which a guest OS can have direct access to the physical devices Xen Architecture • Like other virtualization systems, many guest OSes can run on top of the hypervisor • Not all guest OSes are created equal, and one in particular controls the others • The guest OS with control ability (the privileged guest OS) is called Domain 0, and the others are called Domain U • Domain 0 is loaded first when Xen boots • Domain 0 is designed to access hardware directly and manage devices.
 • 40. Xen Architecture • This VM is named Domain 0, which has the privilege to manage other VMs implemented on the same host • If Domain 0 is compromised, the hacker can control the entire system Binary Translation with Full Virtualization • Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization Full virtualization: – Does not need to modify the guest OS – Relies on binary translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable instructions
 • 41. Full Virtualization • With full virtualization, noncritical instructions run on the hardware directly • Critical instructions are discovered and replaced with traps into the VMM to be emulated by software. • Noncritical instructions do not control hardware or threaten the security of the system, but critical instructions do • Running noncritical instructions on hardware not only promotes efficiency, but also ensures system security Binary Translation of Guest OS Requests Using a VMM • VMware puts the VMM at Ring 0 and the guest OS at Ring 1 • The VMM scans the instruction stream and identifies the privileged, control- and behavior-sensitive instructions • Once these instructions are identified, they are trapped into the VMM, which emulates their behavior Binary Translation of Guest OS Requests Using a VMM • The method used is binary translation • Full virtualization combines binary translation and direct execution • The guest OS is completely decoupled from the underlying hardware Binary Translation of Guest OS Requests Using a VMM • The performance of full virtualization may not be ideal, since it involves binary translation • A code cache can store translated hot instructions to improve performance, but it increases the cost of memory usage
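The runtime split between direct execution and trapping into the VMM can be modeled with a toy instruction stream. The instruction names are invented stand-ins; real VMMs such as classic VMware operate on x86 machine code, not strings.

```python
# Toy model of full virtualization's execution path: noncritical
# instructions "run directly", critical ones trap into the VMM.

CRITICAL = {"WRITE_CR3", "HLT"}   # stand-ins for sensitive instructions

def run_direct(instr):
    return f"hw:{instr}"

def vmm_emulate(instr):
    return f"vmm-emulated:{instr}"   # VMM safely mimics the instruction

def execute(stream):
    trace = []
    for instr in stream:
        if instr in CRITICAL:
            trace.append(vmm_emulate(instr))   # trap into the VMM
        else:
            trace.append(run_direct(instr))    # direct execution
    return trace

print(execute(["ADD", "WRITE_CR3", "MOV"]))
# ['hw:ADD', 'vmm-emulated:WRITE_CR3', 'hw:MOV']
```

Because the statistically dominant noncritical instructions take the direct path, the overall slowdown stays modest even though each trapped instruction is expensive.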
 • 42. Host-Based Virtualization • An alternative is to install a virtualization layer on top of the host OS • The host OS is still responsible for managing the hardware • Guest OSes are installed and run on top of the virtualization layer • Dedicated applications may run on the VMs • Some other applications can also run with the host OS directly Host-based virtualization • First, the user can install this VM architecture without modifying the host OS • Second, the host-based approach appeals to many host machine configurations • However, performance is low compared with the hypervisor/VMM architecture • An application requesting hardware access involves four layers of mapping
 • 43. Para-virtualization • The guest OS recognizes the presence of the VMM • The guest OS communicates directly with the hypervisor • Para-virtualization needs to modify the guest operating systems • Para-virtualized VMs provide special APIs requiring OS modifications • The API exchanges hypercalls with the hypervisor • A compiler assists by replacing nonvirtualizable OS instructions with hypercalls Para-virtualization • x86 offers four instruction execution rings: Rings 0, 1, 2 and 3 • The lower the ring number, the higher the privilege of the instructions run in it • The OS is responsible for managing the hardware, and its privileged instructions execute at Ring 0. • User-level applications run at Ring 3 Para-Virtualization Architecture
 • 44. Problems with Para-virtualization • It must support the unmodified OS as well. • Second, the cost of maintaining para-virtualized OSes is high, because they may require deep OS kernel modifications • Finally, the performance advantage of para-virtualization varies greatly due to workload variations • The main problem with full virtualization is its low performance in binary translation Problems with Para-virtualization • Speeding up binary translation is difficult • Many virtualization products therefore employ the para-virtualization architecture • E.g.: Xen, KVM, VMware ESX KVM (Kernel-Based VM) • A Linux para-virtualization system • Memory management and scheduling activities are carried out by the existing Linux kernel; KVM does the rest • KVM is a hardware-assisted para-virtualization tool, which improves performance and supports unmodified guest OSes such as Windows, Linux, Solaris, and other UNIX variants
 • 45. Para-Virtualization with Compiler Support • Unlike the full virtualization architecture, which intercepts and emulates privileged and sensitive instructions at runtime, para-virtualization handles these instructions at compile time • The guest OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM Para-Virtualization with Compiler Support • The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0 • This implies that the guest OS may not be able to execute some privileged and sensitive instructions • Privileged instructions are implemented by hypercalls to the hypervisor • After replacing the instructions with hypercalls, the modified guest OS emulates the behavior of the original guest OS VMware ESX Server for Para-Virtualization
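The compile-time rewrite that distinguishes para-virtualization from runtime trapping can be shown with the same kind of toy instruction stream: privileged instructions are replaced by explicit hypercalls before the guest ever runs. Instruction names are invented for illustration.

```python
# Toy model of para-virtualization: a compile-time pass swaps
# privileged instructions for hypercalls, so no runtime scanning
# or binary translation is needed.

PRIVILEGED = {"WRITE_CR3", "HLT"}   # invented stand-ins

def hypervisor_hypercall(instr):
    return f"hypercall:{instr}"     # hypervisor services the request

def paravirtualize(guest_code):
    """Compile-time rewrite of the guest kernel's instruction stream."""
    return [("HYPERCALL", i) if i in PRIVILEGED else ("DIRECT", i)
            for i in guest_code]

def run(modified_code):
    """Execute the already-rewritten code: no traps required."""
    return [hypervisor_hypercall(i) if kind == "HYPERCALL" else f"hw:{i}"
            for kind, i in modified_code]

code = paravirtualize(["ADD", "WRITE_CR3", "MOV"])
print(run(code))
# ['hw:ADD', 'hypercall:WRITE_CR3', 'hw:MOV']
```

Compare this with the runtime trap path of full virtualization: the work of identifying sensitive instructions has moved from execution time into the (modified) guest kernel's build, which is exactly why para-virtualization requires kernel changes.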
 • 46. SERVER VIRTUALIZATION  Server virtualization is the process of using software on a physical server to create multiple partitions or "virtual instances", each capable of running independently.  Whereas on a single dedicated server the entire machine has only one instance of an operating system, on a virtual server the same machine can be used to run multiple server instances, each with independent operating system configurations.  Server virtualization is a virtualization technique that involves partitioning a physical server into a number of small, virtual servers with the help of virtualization software.  In server virtualization, the physical server runs multiple operating system instances at the same time, one per virtual server. The primary uses of server virtualization are:  To centralize the server administration  Improve the availability of servers  Help in disaster recovery  Ease development & testing  Make efficient use of server resources. Types of Server Virtualization and Approaches to Server Virtualization There are 3 types of server virtualization in cloud computing: Hypervisor  A hypervisor is a layer between the operating system and hardware. The hypervisor is the reason behind the successful running of multiple operating systems.  It can also perform tasks such as handling queues, dispatching and returning hardware requests. A host operating system works on top of the hypervisor; we use it to administer and manage the virtual machines. Para-Virtualization  The para-virtualization model reduces the trapping overhead incurred in software-based full virtualization.  It is based on the hypervisor; the guest operating system is modified and recompiled before being installed in a virtual machine.  After the modification, the overall performance is increased, as the guest operating system communicates directly with the hypervisor.
 • 47. Full Virtualization  Full virtualization can emulate the underlying hardware. It is quite similar to para-virtualization. Here, the machine operations used by the operating system to perform input/output or modify the system status are intercepted.  The unmodified operating system can run on top of the hypervisor. This is possible because the operations are emulated in software and the status codes returned are consistent with what the real hardware would deliver. Why Server Virtualization?  Server virtualization allows us to use resources efficiently. With the help of server virtualization, you can eliminate the major cost of hardware.  This virtualization in cloud computing can divide the workload among multiple servers, and all these virtual servers are capable of performing a dedicated task.  One of the reasons for choosing server virtualization is that a person can move the workload between virtual machines according to the load. Application server virtualization  Application server virtualization abstracts a collection of application servers that provide the same services as a single virtual application server by using load-balancing strategies and providing a high-availability infrastructure for the services hosted in the application server.  This is a particular form of virtualization and serves the same purpose as storage virtualization: providing a better quality of service rather than emulating a different environment. Advantages of Server Virtualization  Cost Reduction: Server virtualization reduces cost because less hardware is required.  Independent Restart: Each server can be rebooted independently and that reboot won't affect the working of other virtual servers.
 • 48. DESKTOP VIRTUALIZATION  Desktop virtualization abstracts the desktop environment available on a personal computer in order to provide access to it using a client/server approach.  Desktop virtualization provides the same outcome as hardware virtualization but serves a different purpose. Similarly to hardware virtualization, desktop virtualization makes accessible a different system as though it were natively installed on the host, but this system is remotely stored on a different host and accessed through a network connection.  Moreover, desktop virtualization addresses the problem of making the same desktop environment accessible from everywhere.  Although the term desktop virtualization strictly refers to the ability to remotely access a desktop environment, generally the desktop environment is stored in a remote server or a data center that provides a high-availability infrastructure and ensures the accessibility and persistence of the data.  In this scenario, an infrastructure supporting hardware virtualization is fundamental to provide access to multiple desktop environments hosted on the same server; a specific desktop environment is stored in a virtual machine image that is loaded and started on demand when a client connects to the desktop environment.  This is a typical cloud computing scenario in which the user leverages the virtual infrastructure for performing daily tasks on his computer. The advantages of desktop virtualization are high availability, persistence, accessibility, and ease of management.  The basic services for remotely accessing a desktop environment are implemented in software components such as Windows Remote Services, VNC, and X Server.  Infrastructures for desktop virtualization based on cloud computing solutions include Sun Virtual Desktop Infrastructure (VDI), Parallels Virtual Desktop Infrastructure (VDI), Citrix XenDesktop, and others
 • 49. APPLICATION VIRTUALIZATION  Application-level virtualization is a technique allowing applications to be run in runtime environments that do not natively support all the features required by such applications.  In this scenario, applications are not installed in the expected runtime environment but are run as though they were.  In general, these techniques are mostly concerned with partial file systems, libraries, and operating system component emulation. Such emulation is performed by a thin layer (a program or an operating system component) that is in charge of executing the application.  Emulation can also be used to execute program binaries compiled for different hardware architectures. In this case, one of the following strategies can be implemented.  Interpretation. In this technique every source instruction is interpreted by an emulator for executing native ISA instructions, leading to poor performance. Interpretation has a minimal startup cost but a huge overhead, since each instruction is emulated.  Binary translation. In this technique every source instruction is converted to native instructions with equivalent functions. After a block of instructions is translated, it is cached and reused.  Binary translation has a large initial overhead cost, but over time it is subject to better performance, since previously translated instruction blocks are directly executed.
  • 50.  Emulation, as described, is different from hardware-level virtualization. The former simply allows the execution of a program compiled for a different hardware architecture, whereas the latter emulates a complete hardware environment in which an entire operating system can be installed.  Application virtualization is a good solution in the case of missing libraries in the host operating system; in this case a replacement library can be linked with the application, or library calls can be remapped to existing functions available in the host system.  Another advantage is that the virtual machine manager is much lighter, since it provides only a partial emulation of the runtime environment compared to hardware virtualization.  Moreover, this technique allows incompatible applications to run together. Compared to programming-level virtualization, which works across all the applications developed for that virtual machine, application-level virtualization works for a specific environment: it supports all the applications that run on top of that environment.  One of the most popular solutions implementing application virtualization is Wine, a software application that allows Unix-like operating systems to execute programs written for the Microsoft Windows platform.  Wine takes its inspiration from a similar product from Sun, Windows Application Binary Interface (WABI), which implements the Win16 API specifications on Solaris.  VMware ThinApp, another product in this area, allows capturing the setup of an installed application and packaging it into an executable image isolated from the hosting operating system.
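The library-call remapping mentioned above can be sketched as a thin shim layer. All function and call names here are hypothetical; they stand in for native library entry points.

```python
# Sketch of a thin application-virtualization layer that remaps calls
# intended for a missing library onto functions the host does provide.
HOST_FUNCTIONS = {
    "host_read_file": lambda path: f"contents of {path}",
}

# Mapping from the call the application expects to an available host call.
CALL_REMAP = {
    "legacy_read": "host_read_file",
}

def virtualized_call(name, *args):
    # If the expected library call is missing, remap it transparently
    # so the application runs as though it were natively supported.
    if name not in HOST_FUNCTIONS and name in CALL_REMAP:
        name = CALL_REMAP[name]
    return HOST_FUNCTIONS[name](*args)

result = virtualized_call("legacy_read", "config.ini")
```

The application keeps issuing the call it was built against; only the shim knows the host resolves it differently.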
  • 51. UNIT III CLOUD ARCHITECTURE, SERVICES AND STORAGE NIST CLOUD REFERENCE ARCHITECTURE The Conceptual Reference Model The NIST cloud computing reference architecture identifies the major actors, their activities and functions in cloud computing. The diagram depicts a generic high-level architecture and is intended to facilitate the understanding of the requirements, uses, characteristics and standards of cloud computing. The NIST cloud computing reference architecture defines five major actors:  cloud consumer,  cloud provider,  cloud carrier,  cloud auditor and  cloud broker. Each actor is an entity (a person or an organization) that participates in a transaction or process and/or performs tasks in cloud computing.
  • 52. Cloud Consumer  The cloud consumer is the principal stakeholder for the cloud computing service. A cloud consumer represents a person or organization that maintains a business relationship with, and uses the service from, a cloud provider.  A cloud consumer browses the service catalog from a cloud provider, requests the appropriate service, sets up service contracts with the cloud provider, and uses the service.  The cloud consumer may be billed for the service provisioned, and needs to arrange payments accordingly.  A cloud consumer can freely choose a cloud provider with better pricing and more favorable terms.  Typically a cloud provider's pricing policy and SLAs are non-negotiable, unless the customer expects heavy usage and might be able to negotiate for better contracts.  SaaS applications are deployed in the cloud and made accessible via a network to SaaS consumers. SaaS consumers can be billed based on the number of end users, the time of use, the network bandwidth consumed, the amount of data stored, or the duration of data storage.  Cloud consumers of PaaS can employ the tools and execution resources provided by cloud providers to develop, test, deploy and manage the applications hosted in a cloud environment.  Consumers of IaaS have access to virtual computers, network-accessible storage, network infrastructure components, and other fundamental computing resources on which they can deploy and run arbitrary software. Cloud Provider  A cloud provider is a person or an organization; it is the entity responsible for making a service available to interested parties.  A Cloud Provider acquires and manages the computing infrastructure required for providing the services, runs the cloud software that provides the services, and makes arrangements to deliver the cloud services to the Cloud Consumers through network access. 
 A cloud provider conducts its activities in the areas of service deployment, service orchestration, cloud service management, security, and privacy. Service Orchestration Service Orchestration refers to the composition of system components to support the Cloud Provider's activities in the arrangement, coordination and management of computing resources in order to provide cloud services to Cloud Consumers
  • 53.  A three-layered model is used in this representation, grouping the three types of system components that Cloud Providers need to compose to deliver their services.  The top layer is the service layer, where Cloud Providers define interfaces for Cloud Consumers to access the computing services. Access interfaces for each of the three service models are provided in this layer.  The optional dependency relationships among SaaS, PaaS, and IaaS components are represented graphically as components stacking on each other.  The middle layer in the model is the resource abstraction and control layer. This layer contains the system components that Cloud Providers use to provide and manage access to the physical computing resources through software abstraction.  Examples of resource abstraction components include software elements such as hypervisors, virtual machines, virtual data storage, and other computing resource abstractions.  The lowest layer in the stack is the physical resource layer, which includes all the physical computing resources. This layer includes hardware resources, such as computers (CPU and memory), networks (routers, firewalls, switches, network links and interfaces), storage components (hard disks) and other physical computing infrastructure elements. It also includes facility resources, such as heating, ventilation and air conditioning (HVAC), power, communications, and other aspects of the physical plant. Cloud Service Management Cloud Service Management includes all of the service-related functions that are necessary for the management and operation of those services required by or proposed to cloud consumers. Cloud service management can be described from the perspective of business support, provisioning and configuration, and from the perspective of portability and interoperability requirements. Business Support Business Support entails the set of business-related services dealing with clients and supporting processes. 
It includes the components used to run business operations that are client-facing.  Customer management: Manage customer accounts, open/close/terminate accounts, manage user profiles, manage customer relationships by providing points-of-contact and resolving customer issues and problems, etc.  Contract management: Manage service contracts, set up/negotiate/close/terminate contracts, etc.
  • 54.  Inventory Management: Set up and manage service catalogs, etc.  Accounting and Billing: Manage customer billing information, send billing statements, process received payments, track invoices, etc.  Reporting and Auditing: Monitor user operations, generate reports, etc.  Pricing and Rating: Evaluate cloud services and determine prices, handle promotions and pricing rules based on a user's profile, etc. Provisioning and Configuration  Rapid provisioning: Automatically deploying cloud systems based on the requested service/resources/capabilities.  Resource changing: Adjusting configuration/resource assignment for repairs, upgrades and joining new nodes into the cloud.  Monitoring and Reporting: Discovering and monitoring virtual resources, monitoring cloud operations and events and generating performance reports.  Metering: Providing a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts).  SLA management: Encompassing the SLA contract definition (basic schema with the QoS parameters), SLA monitoring and SLA enforcement according to defined policies. Portability and Interoperability  The proliferation of cloud computing promises cost savings in technology infrastructure and faster software upgrades.  Cloud providers should provide mechanisms to support data portability, service interoperability, and system portability.  Data portability is the ability of cloud consumers to copy data objects into or out of a cloud or to use a disk for bulk data transfer.  Service interoperability is the ability of cloud consumers to use their data and services across multiple cloud providers with a unified management interface. Cloud Auditor  A cloud auditor is a party that can perform an independent examination of cloud service controls with the intent to express an opinion thereon. 
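Metering at "some level of abstraction appropriate to the type of service" feeds directly into accounting and billing. A minimal sketch of that pipeline follows; the resource names and per-unit rates are purely illustrative, not real provider pricing.

```python
# Sketch of metering and billing for pay-per-use services.
# Rates per abstract unit are invented for illustration.
RATES = {"storage_gb_hours": 0.02, "bandwidth_gb": 0.05, "active_users": 1.00}

usage = {}

def meter(resource, amount):
    # Metering: record consumption per abstract resource type.
    usage[resource] = usage.get(resource, 0) + amount

def bill():
    # Billing: turn metered usage into a priced statement.
    return round(sum(RATES[r] * amt for r, amt in usage.items()), 2)

meter("storage_gb_hours", 100)   # 100 GB-hours of storage
meter("bandwidth_gb", 10)        # 10 GB transferred
meter("active_users", 3)         # 3 active user accounts
total = bill()                   # 100*0.02 + 10*0.05 + 3*1.00
```

Separating `meter` from `bill` mirrors the separation in the reference architecture: provisioning-side metering collects raw usage, while business support prices it.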
 Audits are performed to verify conformance to standards through review of objective evidence.  A cloud auditor can evaluate the services provided by a cloud provider in terms of security controls, privacy impact, performance, etc.  A privacy impact audit can help Federal agencies comply with applicable privacy laws and regulations governing an individual's privacy, and ensure the confidentiality, integrity, and availability of an individual's personal information at every stage of development and operation. Security  It is critical to recognize that security is a cross-cutting aspect of the architecture that spans all layers of the reference model, ranging from physical security to application security.  Therefore, security in a cloud computing architecture is not solely under the purview of the Cloud Providers, but also of Cloud Consumers and other relevant actors.  Cloud-based systems still need to address security requirements such as authentication, authorization, availability, confidentiality, identity management, integrity, audit, security monitoring, incident response, and security policy management.
  • 55.  While these security requirements are not new, cloud-specific perspectives help discuss, analyze and implement security in a cloud system. Cloud Broker  As cloud computing evolves, the integration of cloud services can be too complex for cloud consumers to manage.  A cloud consumer may request cloud services from a cloud broker, instead of contacting a cloud provider directly.  A cloud broker is an entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers.  In general, a cloud broker can provide services in three categories [9]:  Service Intermediation: A cloud broker enhances a given service by improving some specific capability and providing value-added services to cloud consumers. The improvement can be managing access to cloud services, identity management, performance reporting, enhanced security, etc.  Service Aggregation: A cloud broker combines and integrates multiple services into one or more new services. The broker provides data integration and ensures the secure movement of data between the cloud consumer and multiple cloud providers.  Service Arbitrage: Service arbitrage is similar to service aggregation except that the services being aggregated are not fixed. Service arbitrage means a broker has the flexibility to choose services from multiple agencies. The cloud broker, for example, can use a credit-scoring service to measure and select the agency with the best score. Cloud Carrier  A cloud carrier acts as an intermediary that provides connectivity and transport of cloud services between cloud consumers and cloud providers.  Cloud carriers provide access to consumers through networks, telecommunication and other access devices. 
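The service arbitrage example above, selecting among a non-fixed set of providers by score, can be sketched in a few lines. The provider names and scores are invented stand-ins for an external credit-scoring service.

```python
# Sketch of a broker performing service arbitrage: pick the candidate
# provider with the best score at request time. Scores are invented.
def credit_score(provider):
    # Stand-in for a call to an external credit-scoring service.
    scores = {"provider-a": 72, "provider-b": 91, "provider-c": 85}
    return scores[provider]

def arbitrage(providers):
    # The candidate set is not fixed; the broker selects the best now.
    return max(providers, key=credit_score)

chosen = arbitrage(["provider-a", "provider-b", "provider-c"])
```

Because the candidate list is an argument rather than a fixed configuration, the broker can re-run the selection whenever the pool of agencies changes, which is exactly what distinguishes arbitrage from plain aggregation.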
 The distribution of cloud services is normally provided by network and telecommunication carriers or a transport agent, where a transport agent refers to a business organization that provides physical transport of storage media such as high-capacity hard drives.  Note that a cloud provider will set up SLAs with a cloud carrier to provide services consistent with the level of SLAs offered to cloud consumers, and may require the cloud carrier to provide dedicated and secure connections between cloud consumers and cloud providers.
  • 56. CLOUD DEPLOYMENT MODELS A cloud infrastructure may be operated in one of the following deployment models:  Public cloud,  Private cloud,  Community cloud, or  Hybrid cloud. The differences are based on how exclusive the computing resources are made to a Cloud Consumer. Public Cloud  A public cloud is one in which the cloud infrastructure and computing resources are made available to the general public over a public network. A public cloud is owned by an organization selling cloud services, and serves a diverse pool of clients.  A public cloud is built over the Internet and can be accessed by any user who has paid for the service.  In a public cloud, the services offered are made available to anyone, from anywhere, and at any time through the Internet.  From a structural point of view, a public cloud is a distributed system, most likely composed of one or more datacenters connected together, on top of which the specific services offered by the cloud are implemented.  Any customer can easily sign up with the cloud provider, enter their credentials and billing details, and use the services offered.  Public clouds offer solutions for minimizing IT infrastructure costs and serve as a viable option for handling peak loads on the local infrastructure.  They have become an interesting option for small enterprises, which are able to start their businesses without large up-front investments by completely relying on public infrastructure for their IT needs.  A fundamental characteristic of public clouds is multi-tenancy. A public cloud is meant to serve a multitude of users, not a single customer. Each customer requires a virtual computing environment that is separated, and most likely isolated, from other users.  A public cloud can offer any kind of service: infrastructure, platform, or applications.  From an architectural point of view there is no restriction concerning the type of distributed system implemented to support public clouds. 
 Public clouds can be composed of geographically dispersed data centers to share the load of users and better serve them according to their locations.  A public cloud is better suited for business requirements that involve managing variable load. Benefits of Public Cloud  Public clouds promote standardization, preserve capital investment, and offer application flexibility. Example of Public Cloud  Amazon EC2 is a public cloud that provides infrastructure as a service;
  • 57.  Google AppEngine is a public cloud that provides an application development platform as a service;  SalesForce.com is a public cloud that provides software as a service. Drawbacks  In the case of public clouds, the provider is in control of the infrastructure and, eventually, of the customers' core logic and sensitive data.  The risk of a breach in the security infrastructure of the provider could expose sensitive information to others.  A customer of a public cloud service offering has a low degree of control over the physical and security aspects of the cloud. Private Cloud  A private cloud gives a single Cloud Consumer organization exclusive access to and usage of the infrastructure and computational resources.  In a private cloud, the cloud infrastructure is operated solely for one organization.  It may be managed either by the Cloud Consumer organization or by a third party, and may be hosted on the organization's premises.  Private clouds give local users a flexible and agile private infrastructure to run service workloads within their administrative domains.  A private cloud is supposed to deliver more efficient and convenient cloud services. It may impact cloud standardization, while retaining greater customization and organizational control.  In a private cloud, security management and day-to-day operations are relegated to internal IT or a third-party vendor, with contractual SLAs.  Hence a customer of a private cloud service offering has a high degree of control over the physical and security aspects of the cloud.  Security concerns are less critical, since sensitive information does not flow out of the private infrastructure.  Businesses that have dynamic or unforeseen needs, mission-critical assignments, security alarms, management demands and uptime requirements are better suited for a private cloud. 
 Private clouds have the advantage of keeping core business operations in-house by relying on the existing IT infrastructure and reducing the burden of maintaining it once the cloud has been set up.  Moreover, existing IT resources can be better utilized because the private cloud can provide services to a different range of users.  Contrary to popular belief, a private cloud may exist off premises and can be managed by a third party. Thus two private cloud scenarios exist, as follows: On-premises or On-site Private Cloud  Applies to a private cloud implemented at a customer's premises. Outsourced Private Cloud  Applies to private clouds where the server side is outsourced to a hosting company.
  • 58. Key advantages of using a private cloud computing infrastructure  Customer information protection: in-house security is easier to maintain and rely on.  Infrastructure ensuring SLAs.  Compliance with standard procedures and operations.  Private clouds attempt to achieve customization and offer higher efficiency, resiliency, security, and privacy. Drawbacks  From an architectural point of view, private clouds can be implemented on more heterogeneous hardware: they generally rely on the existing IT infrastructure already deployed on the private premises.  Private clouds can provide in-house solutions for cloud computing, but compared to public clouds they exhibit a more limited capability to scale elastically on demand. Example  VMWare vSphere  Openstack  Amazon VPC (Virtual Private Cloud)  Microsoft ECI data center Hybrid and Community Cloud  A hybrid cloud is a composition of two or more clouds (on-site private, on-site community, off-site private, off-site community or public) that remain distinct entities but are bound together by standardized or proprietary technology that enables data and application portability.  Hybrid clouds allow enterprises to exploit existing IT infrastructures, maintain sensitive information within the premises, and naturally grow and shrink by provisioning external resources and releasing them when they are no longer needed.  Hybrid clouds address scalability issues by leveraging external resources to absorb capacity demand that exceeds local resources.  These resources or services are temporarily leased for the time required and then released. This practice is also known as cloud bursting.  A hybrid cloud provides access to clients, the partner network, and third parties.  In summary, public clouds promote standardization, preserve capital investment, and offer application flexibility.  Private clouds attempt to achieve customization and offer higher efficiency, resiliency, security, and privacy. 
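Cloud bursting, leasing external capacity only while demand exceeds the private cloud, can be sketched with a toy placement policy. The capacity figure and workload names are illustrative only.

```python
# Sketch of cloud bursting: workloads run on the private cloud until its
# capacity is exhausted, then overflow to a public provider.
PRIVATE_CAPACITY = 3  # illustrative number of private slots

def place_workloads(workloads):
    placements = {}
    private_used = 0
    for w in workloads:
        if private_used < PRIVATE_CAPACITY:
            placements[w] = "private"   # sensitive data stays in-house
            private_used += 1
        else:
            placements[w] = "public"    # temporarily leased, then released
    return placements

placements = place_workloads(["job1", "job2", "job3", "job4", "job5"])
```

In a real deployment the overflow decision would also weigh data sensitivity and SLA terms, but the shape is the same: the public side absorbs only the excess.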
 Hybrid clouds operate in the middle, with many compromises in terms of resource sharing.  In a hybrid cloud the resources are managed and provided either in-house or by external providers.  It is an arrangement between two platforms in which workloads move between the private cloud and the public cloud according to need and demand.  For example, organizations can use the hybrid cloud model for processing big data.
  • 59. On a private cloud they can retain sales, business and other data that needs security and privacy.  Hybrid cloud hosting is enabled with features like scalability, flexibility and security. Example  Microsoft Azure  VMWare – vSphere for private and vCloud Air for public  Rackspace RackConnect Community Cloud  A community cloud serves a group of Cloud Consumers that have shared concerns such as mission objectives, security, privacy and compliance policy, rather than serving a single organization as a private cloud does.  A community cloud is "shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations)."  Similar to private clouds, a community cloud may be managed by the organizations or by a third party, and may be implemented on customer premises (i.e. on-site community cloud) or outsourced to a hosting company (i.e. outsourced community cloud).  From an architectural point of view, a community cloud is most likely implemented over multiple administrative domains. This means that different organizations such as government bodies, private enterprises, research organizations, and even public virtual infrastructure providers contribute their resources to build the cloud infrastructure. Candidate sectors for community clouds are as follows:  Media industry  Healthcare industry  Energy and other core industries  Public sector  Scientific research The benefits of these community clouds are the following:  Openness - By removing the dependency on cloud vendors, community clouds are open systems in which fair competition between different solutions can happen.  Community - Being based on a collective that provides resources and services, the infrastructure turns out to be more scalable because the system can grow simply by expanding its user base. 
 Graceful failures - Since there is no single provider or vendor in control of the infrastructure, there is no single point of failure.  Convenience and control - Within a community cloud there is no conflict between convenience and control because the cloud is shared and owned by the community, which makes all the decisions through a collective democratic process.  Environmental sustainability - The community cloud is supposed to have a smaller carbon footprint because it harnesses underutilized resources.
  • 60. CLOUD SERVICE MODELS Infrastructure as a Service (IaaS)  In cloud computing, offering virtualized resources (computation, storage, and communication) on demand is known as Infrastructure as a Service (IaaS).  This model allows users to use virtualized IT resources for computing, storage, and networking.  In short, the service is performed by rented cloud infrastructure. The user can deploy and run his applications over his chosen OS environment.  They deliver customizable infrastructure on demand.  IaaS (Infrastructure as a Service) provides you with the computing infrastructure, physical or (quite often) virtual machines, and other resources like a virtual-machine disk image library, block and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks, etc.  A cloud infrastructure enables on-demand provisioning of servers running several choices of operating systems and a customized software stack. Infrastructure services are considered to be the bottom layer of cloud computing systems. Examples: Amazon EC2, Windows Azure, Rackspace, Google Compute Engine.  The main technology used to deliver and implement these solutions is hardware virtualization: one or more virtual machines, opportunely configured and interconnected, define the distributed system on top of which applications are installed and deployed.  IaaS/HaaS solutions bring all the benefits of hardware virtualization: workload partitioning, application isolation, sandboxing, and hardware tuning.  From the perspective of the service provider, IaaS/HaaS allows better exploitation of the IT infrastructure and provides a more secure environment in which to execute third-party applications.  From the perspective of the customer, it reduces the administration and maintenance cost as well as the capital costs allocated to purchase hardware.
  • 61. It is possible to distinguish three principal layers:  the physical infrastructure,  the software management infrastructure, and  the user interface.  At the top layer the user interface provides access to the services exposed by the software management infrastructure. Such an interface is generally based on Web 2.0 technologies: Web services, RESTful APIs, and mash-ups.  The core features of an IaaS solution are implemented in the infrastructure management software layer. In particular, management of the virtual machines is the most important function performed by this layer. A central role is played by the scheduler, which is in charge of allocating the execution of virtual machine instances.  The bottom layer is composed of the physical infrastructure, on top of which the management layer operates.
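The scheduler's central role in the management layer, allocating VM instances onto physical hosts, can be sketched with a first-fit policy. The host names, capacities, and the single CPU-core dimension are invented for illustration; real schedulers consider memory, storage, and placement constraints as well.

```python
# Sketch of a first-fit IaaS scheduler: place each requested VM on the
# first physical host with enough free capacity.
hosts = {"host1": 4, "host2": 8}  # free CPU cores per host (illustrative)

def schedule(vm_name, cores):
    for host, free in hosts.items():
        if free >= cores:
            hosts[host] = free - cores  # reserve capacity for this VM
            return host
    return None  # no capacity left: request is rejected or queued

a = schedule("vm-a", 4)  # fits on host1
b = schedule("vm-b", 2)  # host1 is now full, goes to host2
c = schedule("vm-c", 8)  # nothing that large remains
```

Even this toy version shows why the scheduler sits at the core of the management software: every provisioning request passes through it, and its policy determines utilization.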
  • 62.  In the case of complete IaaS solutions, all three levels are offered as a service. This is generally the case with public cloud vendors such as Amazon, GoGrid, Joyent, Rightscale, Terremark, Rackspace, ElasticHosts, and Flexiscale. Platform as a Service (PaaS)  Platform-as-a-Service (PaaS) solutions provide a development and deployment platform for running applications in the cloud. They constitute the middleware on top of which applications are built.  With PaaS we can develop, deploy, and manage the execution of applications using provisioned resources; this demands a cloud platform with the proper software environment.  Such a platform includes operating system and runtime library support.  PaaS (Platform as a Service) provides you with computing platforms, which typically include an operating system, programming language execution environment, database, web server, etc. Examples:  Google AppEngine, an example of Platform as a Service  AWS Elastic Beanstalk,  Windows Azure, Heroku,  Force.com,  Apache Stratos  Application management is the core functionality of the middleware. PaaS implementations provide applications with a runtime environment and do not expose any service for managing the underlying infrastructure.  Developers design their systems in terms of applications and are not concerned with hardware (physical or virtual), operating systems, and other low-level services.  The core middleware is in charge of managing the resources and scaling applications on demand or automatically, according to the commitments made with users.  Developers generally have the full power of programming languages such as Java, .NET, Python, or Ruby, with some restrictions to provide better scalability and security.  In this case the traditional development environments can be used to design and develop applications, which are then deployed on the cloud by using the APIs exposed by the PaaS provider. 
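Scaling applications "on demand or automatically, according to the commitments made with users" can be sketched as a target-driven policy. The load target and the single requests-per-second dimension are invented for illustration; real PaaS autoscalers track several metrics.

```python
# Sketch of a PaaS auto-scaler: keep the load per instance within a
# committed target by adjusting the instance count. Numbers are invented.
TARGET_LOAD_PER_INSTANCE = 100  # e.g. requests/sec one instance handles

def desired_instances(current_load, minimum=1):
    # Ceiling division, never dropping below the committed minimum.
    needed = -(-current_load // TARGET_LOAD_PER_INSTANCE)
    return max(minimum, needed)

low = desired_instances(50)    # light load: minimum footprint
peak = desired_instances(950)  # traffic spike: scale out
```

The developer never sees this loop; the middleware runs it, which is precisely the "no concern with low-level services" property the text describes.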
 PaaS solutions can offer middleware for developing applications together with the infrastructure or simply provide users with the software that is installed on the user premises.  It is possible to organize the various solutions into three wide categories: PaaS-I, PaaS-II, and PaaS-III.  The first category identifies PaaS implementations that completely follow the cloud computing style for application development and deployment.  Example - Force.com and Longjump. Both deliver as platforms the combination of middleware and infrastructure.  In the second class we can list all those solutions that are focused on providing a scalable infrastructure
  • 63. for Web applications, mostly websites. In this case, developers generally use the providers' APIs, which are built on top of industrial runtimes, to develop applications.  Example - Google AppEngine is the most popular product in this category.  The third category consists of all those solutions that provide a cloud programming platform for any kind of application, not only Web applications.  Example - Microsoft Windows Azure, which provides a comprehensive framework for building service-oriented cloud applications on top of the .NET technology, hosted on Microsoft's datacenters.  Manjrasoft Aneka, Apprenda SaaSGrid, Appistry Cloud IQ Platform, DataSynapse, and GigaSpaces DataGrid provide only middleware, with different services. Some essential characteristics that identify a PaaS solution:  Runtime framework  Abstraction  Automation  Cloud services  Another essential component for a PaaS-based approach is the ability to integrate third-party cloud services offered by other vendors by leveraging service-oriented architecture.  One of the major concerns of leveraging PaaS solutions for implementing applications is vendor lock-in.  Unlike IaaS solutions, which deliver bare virtual servers that can be fully customized in terms of the software stack installed, PaaS environments deliver a platform for developing applications, which exposes a well-defined set of APIs and, in most cases, binds the application to the specific runtime of the PaaS provider.  Finally, from a financial standpoint, although IaaS solutions allow shifting the capital cost into operational costs through outsourcing, PaaS solutions can cut the cost across development, deployment, and management of applications.  It helps management reduce the risk of ever-changing technologies by offloading the cost of upgrading the technology to the PaaS provider. 
Software as a Service (SaaS)  Software-as-a-Service (SaaS) is a software delivery model that provides access to applications through the Internet as a Web-based service.  It provides a means to free users from complex hardware and software management by offloading such tasks to third parties, which build applications accessible to multiple users through a Web browser.  In the SaaS (Software as a Service) model you are provided with access to application software, often referred
  • 64. to as "on-demand software".  There is no need to worry about the installation, setup and running of the application; the service provider does that for you. You just have to pay and use it through some client.  On the provider side, the specific details and features of each customer's application are maintained in the infrastructure and made available on demand.  The SaaS model is appealing for applications serving a wide range of users and that can be adapted to specific needs with little further customization. This requirement characterizes SaaS as a "one-to-many" software delivery model, whereby an application is shared across multiple users.  The SaaS model provides software applications as a service. As a result, on the customer side, there is no upfront investment in servers or software licensing.  On the provider side, costs are kept rather low, compared with conventional hosting of user applications. Customer data is stored in the cloud, which is either vendor proprietary or publicly hosted to support PaaS and IaaS. Examples: Google Apps, Microsoft Office 365.  SaaS applications are naturally multitenant.  Multitenancy, a feature of SaaS compared to traditional packaged software, allows providers to centralize and sustain the effort of managing large hardware infrastructures, maintaining and upgrading applications transparently to the users, and optimizing resources by sharing the costs among the large user base. Benefits of SaaS  Software cost reduction and total cost of ownership (TCO) were paramount  Service-level improvements  Rapid implementation  Standalone and configurable applications  Rudimentary application and data integration  Subscription and pay-as-you-go (PAYG) pricing  Software-as-a-Service applications can serve different needs. CRM, ERP, and social networking applications are definitely the most popular ones. 
 SalesForce.com is probably the most successful and popular example of a CRM service.  Another important class of popular SaaS applications comprises social networking applications such as Facebook and professional networking sites such as LinkedIn.  Office automation applications are also an important representative of SaaS applications: Google Documents and Zoho Office are examples of Web-based applications that aim to address all user needs for document, spreadsheet, and presentation management.  It is important to note the role of SaaS solution enablers, which provide an environment in which to integrate third-party services and share information with others.
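Multitenancy, one shared application serving many isolated customers, can be sketched as tenant-scoped data access. The record schema and tenant names below are hypothetical; in practice the same rule is enforced in the database layer.

```python
# Sketch of multitenant data access in a SaaS application: one shared
# store, with every query scoped by tenant so customers stay isolated.
records = [
    {"tenant": "acme", "doc": "Q1 report"},
    {"tenant": "globex", "doc": "Payroll"},
    {"tenant": "acme", "doc": "Roadmap"},
]

def documents_for(tenant):
    # Isolation rule: a tenant only ever sees its own rows.
    return [r["doc"] for r in records if r["tenant"] == tenant]

acme_docs = documents_for("acme")
globex_docs = documents_for("globex")
```

Because all tenants share one application and one store, the provider maintains and upgrades a single codebase while the costs are spread across the whole user base, which is the economic point of the "one-to-many" model.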
  • 69. CLOUD COMPUTING DESIGN CHALLENGES
Cloud computing presents many challenges for industry and academia: the interoperation between different clouds, the creation of standards, security, scalability, fault tolerance, and organizational aspects.
Cloud interoperability and standards
 Cloud computing is a service-based model for delivering IT infrastructure and applications like utilities such as power, water, and electricity.
 To fully realize this goal, introducing standards and allowing interoperability between solutions offered by different vendors are objectives of fundamental importance.
 Vendor lock-in constitutes one of the major strategic barriers against the seamless adoption of cloud computing at all stages.
 Vendor lock-in can prevent a customer from switching to another competitor's solution.
 The presence of standards that are actually implemented and adopted in the cloud computing community could give room for interoperability and thus lessen the risks resulting from vendor lock-in.
 The standardization efforts are mostly concerned with the lower levels of the cloud computing architecture, which are the most popular and developed.
 The Open Virtualization Format (OVF) [51] is an attempt to provide a common format for storing the information and metadata describing a virtual machine image.
 Another direction in which standards try to move is devising a general reference architecture for cloud computing systems and providing a standard interface through which one can interact with them.
Scalability and fault tolerance
 The ability to scale on demand constitutes one of the most attractive features of cloud computing. Clouds allow scaling beyond the limits of the existing in-house IT resources, whether they are infrastructure (compute and storage) or application services.
 To implement such a capability, the cloud middleware has to be designed with the principle of scalability along different dimensions in mind—for example, performance, size, and load.
 The ability to tolerate failure becomes fundamental, sometimes even more important than providing an extremely efficient and optimized system.
 Hence, the challenge in this case is designing highly scalable and fault-tolerant systems that are easy to manage and at the same time provide competitive performance.
Security, trust, and privacy
 Security, trust, and privacy issues are major obstacles to massive adoption of cloud computing.
 Traditional cryptographic technologies are used to prevent data tampering and access to sensitive information.
 The massive use of virtualization technologies exposes the existing system to new threats, which previously were not considered applicable.
  • 70.  It then happens that a new way of using existing technologies creates new opportunities for additional threats to the security of applications.
 The lack of control over their own data and processes also poses severe problems for the trust we give to the cloud service provider and the level of privacy we want to have for our data.
 On one side we need to decide whether to trust the provider itself; on the other side, specific regulations can simply prevail over the agreement the provider is willing to establish with us concerning the privacy of the information managed on our behalf.
 The challenges in this area are, then, mostly concerned with devising secure and trustable systems from different perspectives: technical, social, and legal.
Organizational aspects
 Cloud computing introduces a significant change in the way IT services are consumed and managed. More precisely, storage, compute power, network infrastructure, and applications are delivered as metered services over the Internet.
 This introduces a billing model that is new within typical enterprise IT departments and requires a certain level of cultural and organizational process maturity. In particular, wide acceptance of cloud computing will require a significant change to business processes and organizational boundaries.
 From an organizational point of view, the lack of control over the management of data and processes poses not only security threats but also new problems that previously did not exist.
 Traditionally, when there was a problem with computer systems, organizations developed strategies and solutions to cope with it, often by relying on local expertise and knowledge.
  • 71. Cloud Storage:
• Storage-as-a-Service
• Advantages of Cloud Storage
• Cloud Storage Providers: S3
  • 72. What is cloud storage?
History
• J. C. R. Licklider – one of the fathers of the cloud-based computing idea.
• A global network that allows access from anywhere at any time.
• Technological limits of the 1960s.
What is cloud storage?
 Cloud storage is a service model in which data is maintained, managed and backed up remotely and made available to users over a network (typically the Internet).
How does cloud storage work?
Redundancy
 The core of cloud computing
Equipment
 Data servers
 Power supplies
Data files
 Replication
  • 73. Provider failures
• "Amazon S3 systems failure downs Web 2.0 sites; Twitterers lose their faces, others just want their data back" – Computer World, July 21, 2008
• "Customers Shrug Off S3 Service Failure: At about 7:30 EST this morning, S3, Amazon.com's online storage service, went down. The 2-hour service failure affected customers worldwide." – Wired, Feb. 15, 2008
• "Loss of customer data spurs closure of online storage service 'The Linkup'" – Network World, Nov 8, 2008
• "Spectacular Data Loss Drowns Sidekick Users" – October 10, 2009
• Failures range from temporary unavailability to permanent data loss. How do we increase users' confidence in the cloud?
Cloud Storage: iCloud
• iCloud is a service provided by Apple
• 5 GB of storage space is free of cost
• Once iCloud is used, you can share your stored data on any of your different Apple devices
• Access to all files, music, calendar, email
• Only iOS 5 has iCloud installed
Example tiered storage pricing (per month):
  First 1 TB      $0.140 per GB
  Next 49 TB      $0.125 per GB
  Next 450 TB     $0.110 per GB
  Next 500 TB     $0.095 per GB
  Next 4000 TB    $0.080 per GB
  Over 5000 TB    $0.055 per GB
Example storage plans:
                 Home              Business
  Packages:      3                 2
  Price range:   $7.95 - $24.95    $49.95 - $159.95
  Storage space: 2TB - 5TB         2TB - 10TB+
  Users:         1                 3 - 10+
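Tiered pricing like the table above is cumulative: each tier's rate applies only to the gigabytes that fall within it. A small sketch (rates taken from the table; the assumption that 1 TB = 1,000 GB and the function name are ours, not the provider's) computes a monthly storage bill:

```python
# Tiered storage pricing: each tuple is (tier size in GB, price per GB).
# Rates come from the slide's table; the last rate covers any remainder.
TIERS = [
    (1_000, 0.140),     # first 1 TB
    (49_000, 0.125),    # next 49 TB
    (450_000, 0.110),   # next 450 TB
    (500_000, 0.095),   # next 500 TB
    (4_000_000, 0.080), # next 4000 TB
]
OVERFLOW_RATE = 0.055   # over 5000 TB

def monthly_storage_cost(gb: float) -> float:
    """Sum the cost of the slice of storage that falls into each tier."""
    cost, remaining = 0.0, gb
    for size, rate in TIERS:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    cost += max(remaining, 0.0) * OVERFLOW_RATE
    return cost

print(monthly_storage_cost(1_000))   # 1 TB, entirely in the first tier
print(monthly_storage_cost(50_000))  # 50 TB spans the first two tiers
```

So 50 TB costs 1,000 GB at $0.140 plus 49,000 GB at $0.125, not 50,000 GB at a single rate.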
  • 74. Free Options
Advantages
Data Storage Saving
• By storing your data online you reduce the burden on your hard disk, which means you eventually save disk space
World Wide Accessibility
• You can access your data anywhere in the world. You don't have to carry your hard disk, pen drive or any other storage device
Data Safety
• You cannot trust your HDD and storage devices every time, because they can crash anytime
• In order to keep your data safe from such hazards, you can keep it online
  • 75. Advantages
Security
• Most of the online storage sites provide better security
• Only the user can access the account
Easy sharing
• You can share data in a faster, easier and more secure manner
Data Recovery
• Online data storage sites provide quick recovery of your files and folders
• This makes them safer and more secure
Automatic backup
• Users can even schedule automatic backups of their personal computers in order to avoid manual backup of files
Disadvantages
Improper handling can cause trouble
• You must keep your user ID and password safe to protect your data
• If someone knows or even guesses your credentials, it may result in loss of data
• Use complex passwords and avoid storing them on your personal storage devices such as pen drives and HDDs
Choose a trustworthy source to avoid any hazard
• There are many online storage sites out there, but you have to choose one that you can trust
Dependence on an Internet connection
• To access your files everywhere, the only thing you need is an Internet connection
• If you don't get an Internet connection somewhere, you will end up with no access to your data even though it is safely stored online
  • 76. Cloud Storage
• Several large Web companies are now exploiting the fact that they have data storage capacity that can be hired out to others.
– This allows data stored remotely to be temporarily cached on desktop computers, mobile phones or other Internet-linked devices.
• Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3) are well-known examples.
Amazon Simple Storage Service (S3)
• Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web.
• S3 provides an object-oriented storage service for users.
• Users can access their objects through the Simple Object Access Protocol (SOAP) with either browsers or other client programs which support SOAP.
• SQS is responsible for ensuring a reliable message service between two processes, even if the receiver process is not running.
• The fundamental operation unit of S3 is called an object.
• Each object is stored in a bucket and retrieved via a unique, developer-assigned key; the object has other attributes such as values, metadata, and access control information.
• The storage provided by S3 can be viewed as a very coarse-grained key-value pair.
• Through the key-value programming interface, users can write, read, and delete objects containing from 1 byte to 5 gigabytes of data each.
• There are two types of web service interface for the user to access the data stored in Amazon clouds:
– the REST (Web 2.0) interface, and
– the SOAP interface.
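The bucket/key object model described above can be pictured as a coarse-grained key-value store. The following in-memory sketch illustrates write/read/delete with the 1-byte-to-5-GB object limit; it is only an illustration of the semantics, not the real S3 API, and all class and method names here are invented:

```python
# Illustrative in-memory model of S3's coarse-grained key-value semantics.
# Not the real S3 API: all names here are invented for the sketch.
MAX_OBJECT_SIZE = 5 * 1024**3  # objects range from 1 byte to 5 GB

class SimpleObjectStore:
    def __init__(self):
        self.buckets = {}  # bucket name -> {key: (data, metadata)}

    def create_bucket(self, bucket: str):
        self.buckets.setdefault(bucket, {})

    def put_object(self, bucket: str, key: str, data: bytes, metadata=None):
        if not 1 <= len(data) <= MAX_OBJECT_SIZE:
            raise ValueError("object must be 1 byte to 5 GB")
        # Each object lives in a bucket under a developer-assigned key.
        self.buckets[bucket][key] = (data, metadata or {})

    def get_object(self, bucket: str, key: str) -> bytes:
        return self.buckets[bucket][key][0]

    def delete_object(self, bucket: str, key: str):
        del self.buckets[bucket][key]

store = SimpleObjectStore()
store.create_bucket("photos")
store.put_object("photos", "2021/cat.jpg", b"\xff\xd8...", {"type": "image/jpeg"})
print(store.get_object("photos", "2021/cat.jpg"))
```

In real S3 the same three operations are issued over the REST or SOAP interfaces mentioned above.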
  • 77. Key features of S3:
• Redundant through geographic dispersion.
• Designed to provide 99.999999999 percent durability and 99.99 percent availability of objects over a given year, with cheaper reduced redundancy storage (RRS).
• Authentication mechanisms to ensure that data is kept secure from unauthorized access.
• Objects can be made private or public, and rights can be granted to specific users.
• Per-object URLs and ACLs (access control lists).
• Default download protocol of HTTP.
• A BitTorrent protocol interface is provided to lower costs for high-scale distribution.
• $0.055 (more than 5,000 TB) to $0.15 per GB per month storage (depending on total amount).
• First 1 GB per month of input or output free, and then $0.08 to $0.15 per GB for transfers outside an S3 region.
• There is no data transfer charge for data transferred between Amazon EC2 and Amazon S3 within the same region, or for data transferred between the Amazon EC2 Northern Virginia region and the Amazon S3 U.S. Standard region (as of October 6, 2010).
Amazon Elastic Block Store (EBS) and SimpleDB
• The Elastic Block Store (EBS) provides the volume block interface for saving and restoring the virtual images of EC2 instances.
• The status of EC2 instances is saved in the EBS system after the machine is shut down.
• Users can use EBS to save persistent data and mount it to running instances of EC2.
• EBS is analogous to a distributed file system accessed by traditional OS disk access mechanisms.
• EBS allows you to create storage volumes from 1 GB to 1 TB that can be mounted by EC2 instances.
• Multiple volumes can be mounted to the same instance.
• These storage volumes behave like raw, unformatted block devices, with user-supplied device names and a block device interface.
• You can create a file system on top of Amazon EBS volumes, or use them in any other way you would use a block device (like a hard drive).
• Snapshots are provided so that the data can be saved incrementally.
• EBS also charges $0.10 per 1 million I/O requests made to the storage (as of October 6, 2010).
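Since snapshots save data incrementally, each snapshot needs to record only the blocks that changed since the previous one, and replaying the snapshots in order reconstructs the volume. A toy sketch of this idea (the block-level structures and names are invented for illustration, not how EBS is implemented internally):

```python
# Toy model of incremental snapshots: each snapshot stores only the blocks
# changed since the previous snapshot. All names invented for illustration.
class Volume:
    def __init__(self):
        self.blocks = {}      # block number -> data
        self.dirty = set()    # blocks changed since the last snapshot
        self.snapshots = []   # each snapshot holds only its delta

    def write(self, block_no: int, data: bytes):
        self.blocks[block_no] = data
        self.dirty.add(block_no)

    def snapshot(self) -> dict:
        delta = {b: self.blocks[b] for b in self.dirty}
        self.snapshots.append(delta)
        self.dirty.clear()
        return delta

    def restore(self) -> dict:
        # Replaying deltas in order reconstructs the full volume state.
        state = {}
        for delta in self.snapshots:
            state.update(delta)
        return state

vol = Volume()
vol.write(0, b"boot"); vol.write(1, b"data")
vol.snapshot()            # first snapshot: blocks 0 and 1
vol.write(1, b"data-v2")
second = vol.snapshot()   # incremental: only block 1 is stored again
print(len(second))        # 1
```

The first snapshot is necessarily "full"; every later one pays only for changed blocks, which is why incremental snapshots are cheap.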
  • 78. Amazon SimpleDB Service
• SimpleDB provides a simplified data model based on the relational database data model.
• Structured data from users must be organized into domains.
• Each domain can be considered a table.
• The items are the rows in the table.
• A cell in the table is recognized as the value for a specific attribute (column name) of the corresponding row.
• This is similar to a table in a relational database, but it is possible to assign multiple values to a single cell in the table.
• This is not permitted in a traditional relational database, which wants to maintain data consistency.
• Many developers simply want to quickly store, access, and query the stored data.
• SimpleDB removes the requirement to maintain database schemas with strong consistency.
• SimpleDB is priced at $0.140 per Amazon SimpleDB Machine Hour consumed, with the first 25 Amazon SimpleDB Machine Hours consumed per month free (as of October 6, 2010).
• SimpleDB is sometimes called a "LittleTable".
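The domain/item/attribute model, including multi-valued cells, can be illustrated with plain dictionaries. This is only a sketch of the data model (the function name and sample data are invented), not the SimpleDB API:

```python
# Sketch of SimpleDB's data model: a domain is like a table, items are the
# rows, and each attribute (cell) may hold multiple values - something a
# traditional relational table would not allow.
from collections import defaultdict

domains = defaultdict(dict)  # domain name -> {item name -> attributes}

def put_attributes(domain, item, attrs):
    row = domains[domain].setdefault(item, defaultdict(list))
    for name, value in attrs:
        row[name].append(value)  # multiple values per attribute are fine

put_attributes("products", "item001",
               [("color", "red"), ("color", "blue"), ("size", "M")])

row = domains["products"]["item001"]
print(row["color"])  # ['red', 'blue'] - a multi-valued cell
print(row["size"])   # ['M']
```

In a relational table, storing both "red" and "blue" in one cell would require a separate join table; SimpleDB simply allows the attribute to repeat.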
  • 80. UNIT IV RESOURCE MANAGEMENT AND SECURITY IN CLOUD
Inter-cloud Resource Management
Extended Cloud Computing Services
The top three service layers include
 SaaS,
 PaaS, and
 IaaS
 The cloud platform provides PaaS, which sits on top of the IaaS infrastructure.
 The top layer offers SaaS.
 These must be implemented on the cloud platforms provided. The implication is that one cannot launch SaaS applications without a cloud platform, and the cloud platform cannot be built if compute and storage infrastructures are not there.
 The cloud infrastructure layer can be further subdivided into Data as a Service (DaaS) and Communication as a Service (CaaS), in addition to compute and storage in IaaS.
 Cloud players are divided into three classes: (1) cloud service providers and IT administrators, (2) software developers or vendors, and (3) end users or business users.
 These cloud players vary in their roles under the IaaS, PaaS, and SaaS models.
  • 81.  From the software vendors' perspective, application performance on a given cloud platform is most important.
 From the providers' perspective, cloud infrastructure performance is the primary concern.
 From the end users' perspective, the quality of services, including security, is the most important.
Cloud Service Tasks and Trends
 Cloud services are introduced in five layers. The top layer is for SaaS applications.
 For example, CRM is heavily practiced in business promotion, direct sales, and marketing services.
 CRM offered the first SaaS on the cloud successfully.
 SaaS tools also apply to distributed collaboration, and financial and human resources management.
 PaaS is provided by Google, Salesforce.com, and Facebook, among others.
 IaaS is provided by Amazon, Windows Azure, and Rackspace, among others.
 Collocation services require multiple cloud providers to work together to support supply chains in manufacturing.
 Network cloud services provide communications, such as those by AT&T, Qwest, and AboveNet.
Software Stack for Cloud Computing
 Developers have to consider how to design the system to meet critical requirements such as high throughput, HA, and fault tolerance.
 The overall software stack structure of cloud computing software can be viewed as layers.
 Each layer has its own purpose and provides the interface for the upper layers, just as the traditional software stack does.
 By using VMs, the platform can be flexible; that is, the running services are not bound to specific hardware platforms.
 The software layer on top of the platform is the layer for storing massive amounts of data. This layer acts like the file system in a traditional single machine.
 Other layers running on top of the file system are the layers for executing cloud computing applications. They include the database storage system, programming for large-scale clusters, and data query language support.
 The next layers are the components in the software stack.
Runtime Support Services  As in a cluster environment, there are also some runtime supporting services in the cloud computing environment.  Cluster monitoring is used to collect the runtime status of the entire cluster.  The scheduler queues the tasks submitted to the whole cluster and assigns the tasks to the processing nodes according to node availability.  Runtime support is software needed in browser-initiated applications applied by thousands of cloud customers.
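The queue-and-assign behaviour of the cluster scheduler described above can be sketched as follows; the task and node structures are invented for illustration, not taken from any particular cloud middleware:

```python
# Minimal sketch of a cluster scheduler: submitted tasks wait in a queue
# and are assigned to processing nodes according to node availability.
from collections import deque

def schedule(tasks, free_nodes):
    """Assign queued tasks to available nodes; return assignments and leftovers."""
    queue, nodes = deque(tasks), deque(free_nodes)
    assignments = {}
    while queue and nodes:
        task = queue.popleft()
        node = nodes.popleft()       # pick any currently available node
        assignments[task] = node
    return assignments, list(queue)  # unassigned tasks keep waiting

assigned, waiting = schedule(["t1", "t2", "t3"], ["node-a", "node-b"])
print(assigned)  # {'t1': 'node-a', 't2': 'node-b'}
print(waiting)   # ['t3']
```

A real scheduler would also consult the cluster-monitoring data mentioned above (load, liveness) before choosing a node, rather than taking the first free one.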
  • 82. RESOURCE PROVISIONING METHODS
In the cloud, the following resources are provisioned:
 Compute resources or VMs.
 Storage allocation schemes to interconnect distributed computing infrastructures.
Provisioning of Compute Resources (VMs)
 Providers supply cloud services by signing SLAs with end users.
 The SLAs must commit sufficient resources such as CPU, memory, and bandwidth that the user can use for a preset period.
 Deploying an autonomous system to efficiently provision resources to users is a challenging problem.
 The difficulty comes from the unpredictability of consumer demand:
1. Software and hardware failures,
2. Heterogeneity of services,
3. Power management, and
4. Conflicts in signed SLAs between consumers and service providers.
 Efficient VM provisioning depends on the cloud architecture and management of cloud infrastructures.
 In a virtualized cluster of servers, efficient installation of VMs, live VM migration, and fast recovery from failures are needed.
Resource Provisioning Methods
 Three cases of static cloud resource provisioning policies are available:
1. Overprovisioning with the peak load causes heavy resource waste and resource underutilization.
2. Underprovisioning of resources results in losses by both user and provider, in that paid demand by the users is not served (leading to broken SLAs and penalties).
3. Constant provisioning of resources with fixed capacity to a declining user demand could result in even worse resource waste.
Three resource-provisioning methods:
1. Demand-driven method – based on the current utilization of allocated resources.
2. Event-driven method – based on workload predicted by time.
3. Popularity-driven method – based on monitored Internet traffic.
Demand-Driven Resource Provisioning
  • 83.  This method adds or removes computing instances based on the current utilization level of the allocated resources.
 In general, when a resource has surpassed a threshold for a certain amount of time, the scheme increases that resource based on demand.
 When a resource is below a threshold for a certain amount of time, that resource could be decreased accordingly.
 The scheme does not work out right if the workload changes abruptly.
Event-Driven Resource Provisioning
 This scheme adds or removes machine instances based on a specific time event.
 The scheme works better for seasonal or predicted events such as Christmastime in the West and the Lunar New Year in the East.
 During these events, the number of users grows before the event period and then decreases during the event period.
 This scheme anticipates peak traffic before it happens.
 The method results in a minimal loss of QoS, if the event is predicted correctly.
 Otherwise, wasted resources are even greater due to events that do not follow a fixed pattern.
Popularity-Driven Resource Provisioning
 In this method, the Internet is searched for the popularity of certain applications, and instances are created by popularity demand.
 The scheme anticipates increased traffic with popularity.
 Again, the scheme has a minimal loss of QoS, if the predicted popularity is correct.
 Resources may be wasted if traffic does not occur as expected.
Dynamic Resource Deployment
 Dynamic resource deployment can be implemented to achieve scalability in performance.
 The InterGrid-managed infrastructure was developed by a Melbourne University group.
 The InterGrid is a Java-implemented software system that lets users create execution cloud environments.
 An InterGrid gateway (IGG) allocates resources from a local cluster to deploy applications in three steps:
(1) Requesting the VMs,
(2) Enacting the leases, and
(3) Deploying the VMs as requested.
  • 84.  Under peak demand, this IGG interacts with another IGG that can allocate resources from a cloud computing provider.
 A grid has predefined peering arrangements with other grids, which the IGG manages.
 Through multiple IGGs, the system coordinates the use of InterGrid resources.
 An IGG is aware of the peering terms with other grids, selects suitable grids that can provide the required resources, and replies to requests from other IGGs.
 The InterGrid allocates and provides a distributed virtual environment (DVE).
 This is a virtual cluster of VMs that runs isolated from other virtual clusters.
 A component called the DVE manager performs resource allocation and management on behalf of specific user applications.
 The core component of the IGG is a scheduler for implementing provisioning policies and peering with other gateways.
Provisioning of Storage Resources
 The data storage layer is built on top of the physical or virtual servers.
 As cloud computing applications often provide service to users, it is unavoidable that the data is stored in the clusters of the cloud provider.
 The service can be accessed anywhere in the world.
 A distributed file system is very important for storing large-scale data.
 In cloud computing, another form of data storage is (Key, Value) pairs.
 The Amazon S3 service uses SOAP to access the objects stored in the cloud.
 Typical cloud databases include:
 BigTable from Google,
 SimpleDB from Amazon, and
 SQL service from Microsoft Azure.
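The demand-driven method described earlier - grow when utilization stays above a threshold for some time, shrink when it stays below one - can be sketched as a small control loop. The thresholds and window length here are invented example values, not figures from any provider:

```python
# Sketch of demand-driven provisioning: add an instance when utilization has
# exceeded the upper threshold for a sustained window, remove one when it has
# stayed below the lower threshold. Thresholds are invented example values.
UPPER, LOWER, WINDOW = 0.80, 0.30, 3  # utilization bounds, samples required

def scale_decision(utilization_history, instances):
    """Return the new instance count given recent utilization samples."""
    recent = utilization_history[-WINDOW:]
    if len(recent) < WINDOW:
        return instances                      # not enough evidence yet
    if all(u > UPPER for u in recent):
        return instances + 1                  # sustained overload: scale out
    if all(u < LOWER for u in recent) and instances > 1:
        return instances - 1                  # sustained idleness: scale in
    return instances                          # a single spike changes nothing

print(scale_decision([0.9, 0.85, 0.95], 2))  # 3
print(scale_decision([0.1, 0.2, 0.15], 2))   # 1
print(scale_decision([0.9, 0.2, 0.9], 2))    # 2 - abrupt changes: no action
```

The last call shows the weakness the text notes: when the workload changes abruptly, no threshold is sustained for the whole window, so the scheme reacts late or not at all.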
  • 86. Global Exchange of Cloud Resources
• No single cloud infrastructure provider will be able to establish its data centers at all possible locations throughout the world.
• As a result, cloud application service (SaaS) providers will have difficulty in meeting QoS expectations for all their consumers.
• Hence, they would like to make use of the services of multiple cloud infrastructure service providers who can provide better support for their specific consumer needs.
• This kind of requirement often arises in enterprises with global operations and applications such as Internet services, media hosting, and Web 2.0 applications.
• This necessitates federation of cloud infrastructure service providers for seamless provisioning of services across different cloud providers.
• To realize this, the Cloudbus Project at the University of Melbourne has proposed the InterCloud architecture, supporting brokering and exchange of cloud resources for scaling applications across multiple clouds.
  • 87. • The architecture consists of client brokering and coordinator services that support utility-driven federation of clouds: application scheduling, resource allocation, and migration of workloads.
• It cohesively couples the administratively and topologically distributed storage and compute capabilities of clouds as part of a single resource-leasing abstraction.
• The system will ease cross-domain capability integration for on-demand, flexible, energy-efficient, and reliable access to the infrastructure based on virtualization technology.
• The Cloud Exchange (CEx) acts as a market maker, bringing together service producers and consumers. It aggregates the infrastructure demands from application brokers and evaluates them against the available supply currently published by the cloud coordinators.
• It supports trading of cloud services based on competitive economic models such as commodity markets and auctions. CEx allows participants to locate providers and consumers with fitting offers.
  • 88. • Such markets enable services to be commoditized, and thus will pave the way for the creation of a dynamic market infrastructure for trading based on SLAs.
• An SLA specifies the details of the service to be provided in terms of metrics agreed upon by all parties, and incentives and penalties for meeting and violating the expectations, respectively.
• The availability of a banking system within the market ensures that financial transactions pertaining to SLAs between participants are carried out in a secure and dependable environment.
  • 89. Security in Cloud Computing
• Security Overview
• Cloud Security Challenges
• Software-as-a-Service Security
• Data Security
• Security Governance
• Virtual Machine Security
Cloud Security – A Major Concern
⚫ Security concerns arise because both customer data and customer code reside at the provider's premises.
⚫ Security is always a major concern in open system architectures.
  • 90. Security Concerns
• Eight threats that users encounter while transferring data to and saving data in the cloud:
Handling of data by a third party
• The supplier works on all aspects of handling data, from updates and uploading to safety controls
No 100% security
Cyber attacks
• Every time you save data on the Internet, there is a risk of cyber attack
Insider threats
• Workers get access to your cloud – everything from consumer data to secret information and intellectual property can be revealed
Government intrusion
• Surveillance programs and competitors are not the only ones who might want to look into your data
Legal liability
• Not restricted to safety; also comprises its consequences, like court cases
Lack of standardization
• Cloud consistency
Lack of support
• Support from cloud providers
Constant risk
• Identity management and access control are fundamental functions required for secure computing
Threats to Infrastructure, Data and Access Control
Denial of service
• A distributed DoS is an attempt to make a network or machine resource inaccessible to its intended consumers
Man-in-the-middle attack
• Caused by improper configuration of Secure Socket Layer (SSL)
• Information shared between two parties could be hacked by a middle party
Network sniffing
• Another form of hacking
• Hackers capture passwords that are improperly encrypted during communication
• Solution: encryption techniques to secure data
Port scanning
• Hackers use open ports such as 80 (HTTP) and 21 (FTP)
• Solution: a firewall to secure data from port attacks
SQL injection attack
• Special characters are used by hackers to return data they should not see
Cross-site scripting
• The user enters the correct URL of a website, whereas on another site the hacker redirects the user to his/her website and steals the user's identity
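The SQL injection threat above - special characters smuggled through user input - is conventionally defeated with parameterized queries. This self-contained sqlite3 sketch (the table and rows are invented for the illustration) shows both the vulnerable and the safe form:

```python
# Demonstrates why SQL injection works and how parameterized queries stop it.
# The table and its contents are invented for the illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the special characters rewrite the query.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'").fetchall()
print(leaked)   # [('s3cret',)] - the attacker sees every row

# Safe: the driver treats the whole input as a literal value, not as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)).fetchall()
print(safe)     # [] - no user is literally named "nobody' OR '1'='1"
```

The same placeholder idea (with driver-specific syntax) applies to every database, which is why it is the standard defense rather than input filtering alone.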
  • 91. Security Services: Availability, Confidentiality, Integrity
Data Confidentiality: refers to limiting data access only to authorized users and stopping access by unauthorized ones.
• Access Control: Used for controlling which assets a client can access and the jobs which can be performed with the accessed resources
• Biometrics: Recognize a person's identity based on individual characteristics. Retina scanning, facial recognition, voice recognition and fingerprint recognition
• Encryption: A method that converts readable data (plain text) into cipher text
• Privacy: Keeping confidential or individual data from being viewed by unauthorized parties
• Ethics: Employees should be given clear direction by policy
Data Integrity: Refers to techniques for ensuring that data is genuine, correct and protected from illegal user alteration
• Example: Digital signatures, hashing methods and message verification codes are used for protecting data integrity
Data Availability: Availability of data resources
• Double-checking that authorized users have access to data and affiliated assets when required
• This can be carried out by using a data backup and recovery plan
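The integrity mechanisms listed above (hashing and message verification codes) can be demonstrated with Python's standard library: a message authentication code lets the receiver detect illegal alteration of data in transit. The key and message here are invented for the example:

```python
# Data integrity with a message authentication code (MAC): any change to the
# data changes the MAC, so tampering is detected. Key and message invented.
import hmac
import hashlib

key = b"shared-secret-key"
message = b"transfer 100 to account 42"

# Sender computes the MAC over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

print(verify(key, message, tag))                        # True: genuine
print(verify(key, b"transfer 999 to account 13", tag))  # False: altered
```

A plain hash would detect accidental corruption, but only a keyed MAC (or a digital signature) detects deliberate alteration, since an attacker without the key cannot compute a matching tag.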
  • 92. Data Security
Challenge
• Data-level security and sensitive data in the domain of the enterprise
• Security needs to move to the data level, so enterprises can be sure that their data is protected wherever it goes
Methods
• Enterprises specify that data is not allowed to go outside of the US
• Force encryption of certain types of data
• Permit only specified users to access the data
• Provide compliance with the Payment Card Industry Data Security Standard (PCI DSS)
  • 93. Application Security
• Application security is a key success factor for a SaaS company
• It is where security features and requirements are defined
• It is where security test results are reviewed
• Application security processes, secure coding guidelines, training, and testing scripts and tools are a collaborative effort between the security and development teams
Example: In Product Engineering,
• Focus on the application layer, security design and infrastructure layers
• Security team – provides security requirements to implement
• Needs collaboration between the security and product development teams
External Penetration Testers
• Used for application source code reviews and attack and penetration tests
• This provides a review of the security of the application as well as assurance to customers that attack and penetration tests are performed regularly
• Fragmented and undefined collaboration on application security results in lower-quality design, coding efforts and testing results
• The Open Web Application Security Project (OWASP) provides guidelines for secure web application development
  • 94. Virtual Machine Security
In a cloud environment,
• Physical servers are consolidated to multiple virtual machine instances on virtualized servers
• Security is a major concern – data center security teams replicate security controls for large data centers
• Firewalls, intrusion detection and prevention, integrity monitoring and log inspection are deployed as software on virtual machines
• This helps to increase protection and maintain compliance integrity of servers and applications as virtual resources move from on-premises to public cloud environments
• It enables critical applications and data to be moved to the cloud securely
To facilitate centralized management of a server firewall policy, the security software loaded onto a virtual machine should include:
• A bidirectional stateful firewall
• Security software installed on the virtual machine
• Virtual machine isolation and location awareness – enabling a tightened policy and the flexibility to move the virtual machine from on-premises to cloud resources
• Integrity monitoring and log inspection software applied at the virtual machine level
The security software can be put into a single software agent that
• provides consistent control and management throughout the cloud
• provides economies of scale, deployment and cost savings for both the service provider and the enterprise
  • 95. UNIT V – CASE STUDIES
GOOGLE APP ENGINE
 Google AppEngine is a PaaS implementation that provides services for developing and hosting scalable Web applications.
 AppEngine is essentially a distributed and scalable runtime environment.
 It leverages Google's distributed infrastructure to scale out applications facing a large number of requests by allocating more computing resources to them and balancing the load among them.
 Developers can develop applications in Java, Python, and Go.
Architecture and core concepts
AppEngine is a platform for developing scalable applications accessible through the Web. The platform is logically divided into four major components:
 Infrastructure
 Runtime environment
 Underlying storage
 Set of scalable services
Infrastructure
 AppEngine's infrastructure takes advantage of the many servers available within Google datacenters.
 For each HTTP request, AppEngine locates the servers hosting the application that processes the request, evaluates their load and, if necessary, allocates additional resources (i.e., servers) or redirects the request to an existing server.
 The infrastructure is also responsible for monitoring application performance and collecting statistics on which the billing is calculated.
Runtime environment
 The runtime environment represents the execution context of applications hosted on AppEngine.
 Sandboxing
• 96.
 Sandboxing ensures that an application can execute without posing a threat to the server and without being influenced by other applications.
Supported runtimes
 AppEngine applications can be developed using three different languages and related technologies: Java, Python, and Go.
 AppEngine currently supports Java 6, and Java Server Pages (JSP) are used for web application development.
 Support for Python is provided by an optimized Python 2.5.2 interpreter.
 The Go runtime environment allows applications developed with the Go programming language to be hosted and executed in AppEngine.
Storage
 AppEngine provides various types of storage, which operate differently depending on the volatility of the data.
 There are three different levels of storage:
o In-memory cache
o Storage for semi-structured data
o Long-term storage for static data
 DataStore is a service that allows developers to store semi-structured data.
 The service is designed to scale and is optimized for quick data access.
 The underlying infrastructure of DataStore is based on Bigtable, a redundant, distributed, and semi-structured data store.
Application services
 Applications hosted on AppEngine get the most from the services made available through the runtime environment.
 These services simplify access to data, account management, integration of external resources, messaging and communication, image manipulation, and asynchronous computation.
 Application services include UrlFetch, MemCache, Mail and Instant Messaging, Account Management, and Image Manipulation.
UrlFetch: The sandbox provides developers with the capability of retrieving a remote resource through HTTP/HTTPS by means of the UrlFetch service.
MemCache: AppEngine provides caching services by means of MemCache. This is a distributed in-memory cache optimized for fast access that provides developers with a volatile store for frequently accessed objects.
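The cache-aside pattern behind MemCache can be sketched in plain Python. The dictionary-backed cache and the `load_from_datastore` helper below are stand-ins invented for illustration; the real AppEngine MemCache API differs.

```python
# Cache-aside pattern, as used with an in-memory cache such as MemCache
# (illustrative sketch). A plain dict stands in for the distributed
# cache; load_from_datastore is a hypothetical slow datastore read.

cache = {}

def load_from_datastore(key):
    # Placeholder for an expensive datastore query.
    return f"value-for-{key}"

def get(key):
    """Return the value for key, consulting the cache first."""
    if key in cache:
        return cache[key]          # cache hit: no datastore round trip
    value = load_from_datastore(key)
    cache[key] = value             # populate the cache for later requests
    return value

print(get("user:42"))  # miss: reads the datastore, fills the cache
print(get("user:42"))  # hit: served from the in-memory cache
```

The second lookup never touches the datastore, which is why frequently accessed objects belong in the volatile cache layer.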
Mail and instant messaging
 AppEngine provides developers with the ability to send and receive mail through the Mail service.
 The service allows sending email on behalf of the application to specific user accounts.
 Mail operates asynchronously, and in case of failed delivery the sending address is notified through an
• 97. email detailing the error.
 AppEngine also provides another way to communicate with the external world: the Extensible Messaging and Presence Protocol (XMPP).
 Any chat service that supports XMPP, such as Google Talk, can send and receive chat messages to and from the Web application.
Account management
 AppEngine simplifies account management by allowing developers to leverage Google account management by means of Google Accounts.
 Using Google Accounts, Web applications can store profile settings in the form of key-value pairs, attach them to a given Google account, and quickly retrieve them once the user authenticates.
Image manipulation
 AppEngine allows applications to perform image resizing, rotation, mirroring, and enhancement through the Image Manipulation service.
 Image Manipulation is mostly designed for lightweight image processing and is optimized for speed.
Compute services
 AppEngine offers additional services such as Task Queues and Cron Jobs that simplify the execution of computations.
 Task Queues: Allow applications to submit a task for later execution. This is useful for long computations that cannot be completed within the maximum response time of a request handler.
 Cron Jobs: Used to schedule a required operation at a desired time. A cron job invokes the request handler specified in the task at the given time and does not re-execute the task in case of failure.
AMAZON WEB SERVICES
 Amazon Web Services (AWS) is a platform that allows the development of flexible applications.
 It provides solutions for elastic infrastructure scalability, messaging, and data storage.
 The platform is accessible through SOAP or RESTful Web service interfaces.
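The task-queue pattern from AppEngine's Compute services (submit now, execute later, outside the request handler) can be sketched with Python's standard `queue` module. This illustrates the pattern only, not the AppEngine Task Queue API; the handler and worker names are invented.

```python
import queue

# Deferred-execution sketch of the task-queue pattern (illustrative).
# A request handler enqueues work; a separate worker loop drains the
# queue later, outside the request/response cycle.

tasks = queue.Queue()
results = []

def handle_request(n):
    """Request handler: defers the long computation instead of running it."""
    tasks.put(("square", n))
    return "accepted"          # respond immediately, within the deadline

def run_worker():
    """Worker: processes queued tasks until the queue is empty."""
    while not tasks.empty():
        op, n = tasks.get()
        if op == "square":
            results.append(n * n)

print(handle_request(7))  # accepted
run_worker()
print(results)            # [49]
```

The handler returns immediately, so the response-time limit is respected even though the actual computation runs later.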
 It offers a variety of cloud services, most notably:
o S3: Simple Storage Service (storage)
o EC2: Elastic Compute Cloud (virtual servers)
o CloudFront (content delivery)
o CloudFront Streaming (video streaming)
o SimpleDB (structured datastore)
o RDS (relational database)
o SQS (reliable messaging)
o Elastic MapReduce (data processing)
o Amazon Virtual Private Cloud (communication network)
• 98. Compute Services
 Compute services constitute the fundamental element of cloud computing systems.
 The fundamental service in this space is Amazon EC2, which delivers an IaaS solution.
 Amazon EC2 allows deploying servers in the form of virtual machines created as instances of a specific image.
Amazon Machine Images
 Amazon Machine Images (AMIs) are templates from which it is possible to create a virtual machine.
 An AMI contains a physical file system layout with a predefined operating system installed.
EC2 instances
 EC2 instances represent virtual machines.
 They are created using an AMI as a template and specialized by selecting the number of cores, their computing power, and the installed memory.
 Available configurations include Standard, Micro, High-Memory, High-CPU, Cluster Compute, and Cluster GPU instances.
Advanced compute services
 AWS CloudFormation: An extension of the cloud deployment model based on EC2; uses JSON-based templates.
 AWS Elastic Beanstalk: An easy way to package applications and deploy them on the AWS cloud.
 Amazon Elastic MapReduce: Provides AWS users with a cloud computing platform for MapReduce applications.
Storage Services
 AWS provides a collection of services for data storage and information management.
 The core service in this area is Amazon Simple Storage Service (S3).
 This is a distributed object store that allows users to store information in different formats.
 The core components of S3 are:
o Buckets: virtual containers in which to store objects
o Objects: the content that is actually stored
S3 key concepts
 S3 has been designed to provide a simple storage service that is accessible through a Representational State Transfer (REST) interface.
 The storage is organized in a two-level hierarchy.
 Stored objects cannot be manipulated like standard files.
 Content is not immediately available to users.
 Requests will occasionally fail.
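Because S3 requests may occasionally fail, clients are expected to retry. Below is a minimal retry-with-exponential-backoff sketch in Python; `flaky_request` is a hypothetical stand-in for an S3 call, not a real AWS API.

```python
import time

def retry(fn, attempts=4, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on failure."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise                          # out of retries: give up
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

# Hypothetical stand-in for an S3 request that fails twice, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient error")
    return "200 OK"

print(retry(flaky_request))  # 200 OK
```

Doubling the delay between attempts spreads retries out, which avoids hammering a service that is already struggling.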
• 99. Resource naming
 Buckets, objects, and attached metadata are made accessible through a REST interface.
 They are represented by uniform resource identifiers (URIs).
 Amazon offers three different ways of addressing a bucket:
o Canonical form
o Subdomain form
o Virtual hosting form
Amazon Elastic Block Store
 Allows AWS users to provide EC2 instances with persistent storage in the form of volumes that can be mounted at instance startup.
 Volumes accommodate up to 1 TB of space and are accessed through a block device interface.
Amazon ElastiCache
 An implementation of an elastic in-memory cache based on a cluster of EC2 instances.
 It provides fast data access from other EC2 instances through a Memcached-compatible protocol.
Structured storage solutions
 Amazon provides applications with structured storage services in three different forms: preconfigured EC2 AMIs, Amazon Relational Database Service (RDS), and Amazon SimpleDB.
 Preconfigured EC2 AMIs: predefined templates featuring an installation of a given database management system.
 RDS: a relational database service that relies on the EC2 infrastructure and is managed by Amazon.
 Amazon SimpleDB: a lightweight, highly scalable, and flexible data storage solution for applications that do not require a fully relational model for their data.
Communication Services
 Amazon provides facilities to structure and facilitate communication among existing applications and services residing within the AWS infrastructure.
 These facilities can be organized into two major categories:
o Virtual networking
o Messaging
Virtual networking
 Virtual networking comprises a collection of services that allow AWS users to control connectivity to and between compute and storage services.
 Amazon Virtual Private Cloud (VPC) and Amazon Direct Connect provide connectivity solutions in terms of infrastructure.
 Route 53 facilitates connectivity in terms of naming.
 Amazon VPC provides a great degree of flexibility in creating virtual private networks within the Amazon infrastructure.
• 100.
 Amazon Direct Connect allows AWS users to create dedicated networks between the user's private network and Amazon Direct Connect locations, called ports.
 Amazon Route 53 implements dynamic DNS services that allow AWS resources to be reached through domain names different from the amazon.com domain.
Messaging
The three different types of messaging services offered are:
 Amazon Simple Queue Service (SQS): Constitutes a disconnected model for exchanging messages between applications by means of message queues.
 Amazon Simple Notification Service (SNS): Provides a publish-subscribe method for connecting heterogeneous applications.
 Amazon Simple Email Service (SES): Provides AWS users with a scalable email service that leverages the AWS infrastructure.
EUCALYPTUS
 The Eucalyptus framework was one of the first open-source projects to focus on building IaaS clouds.
 Eucalyptus is an open-source software platform for implementing Infrastructure as a Service (IaaS) in a private or hybrid cloud computing environment.
 It has been developed with the intent of providing an open-source implementation nearly identical in functionality to the Amazon Web Services APIs.
 As an IaaS product, Eucalyptus allows users to provision compute and storage resources on demand.
 Eucalyptus grew out of a research project in the Computer Science Department at the University of California, Santa Barbara, and became a for-profit business called Eucalyptus Systems in 2009.
Architecture of Eucalyptus
The components of Eucalyptus are:
 Cloud Controller
 Walrus
 Cluster Controller
 Storage Controller
 VMware Broker
 Node Controller
1. Cluster Controller (CC)
 The Cluster Controller manages one or more Node Controllers and is responsible for deploying and managing instances on them.
 It communicates with the Node Controller and the Cloud Controller simultaneously.
 The CC also manages the networking for the running instances under the networking modes available in Eucalyptus.
• 101. 2. Cloud Controller (CLC)
 The Cloud Controller is the front end for the entire ecosystem.
 The CLC provides an Amazon EC2/S3-compliant web services interface to the client tools on one side and interacts with the rest of the components of the Eucalyptus infrastructure on the other side.
3. Node Controller (NC)
 The basic component for nodes.
 The Node Controller maintains the life cycle of the instances running on each node.
 It interacts with the OS, the hypervisor, and the Cluster Controller simultaneously.
4. Walrus Storage Controller (WS3)
 The Walrus Storage Controller is a simple file storage system.
 WS3 stores the machine images and snapshots.
 It also stores and serves files using S3 APIs.
5. Storage Controller (SC)
 Allows the creation of snapshots of volumes.
 It provides persistent block storage over AoE or iSCSI to the instances.
 It communicates with the Cluster Controller and Node Controller and manages Eucalyptus block volumes and snapshots for the instances within its specific cluster.
Features of Eucalyptus
 SSH Key Management
 Image Management
 Linux-based VM Management
 IP Address Management
 Security Group Management
 Volume and Snapshot Management
• 102. Additional features incorporated in version 3.3
 Auto Scaling: Allows application developers to scale Eucalyptus resources up or down based on policies defined using Amazon EC2-compatible APIs and tools.
 Elastic Load Balancing: An AWS-compatible service that provides greater fault tolerance for applications.
 CloudWatch: An AWS-compatible service that allows users to collect metrics, set alarms, identify trends, and take action to ensure applications run smoothly.
OpenNebula
 OpenNebula is a simple open-source solution to build private clouds and manage data center virtualization.
 OpenNebula is an open-source cloud middleware solution that manages heterogeneous distributed data centre infrastructures and serves as Infrastructure as a Service.
 The two primary uses of the OpenNebula platform are data center virtualization solutions and cloud infrastructure solutions.
 OpenNebula combines existing virtualisation technologies with advanced features for multi-tenancy, automated provisioning, and elasticity.
OpenNebula Architecture
The OpenNebula Project's deployment model resembles classic cluster architecture, which utilizes:
 Front-end (master node): Executes the OpenNebula services.
 Hypervisor-enabled hosts (worker nodes): Provide the resources needed by the VMs.
 Datastores: Hold the base images of the VMs.
 A physical network: Used to support basic services such as interconnection of the storage servers and OpenNebula control operations, and VLANs for the VMs.
Master node:
 A single gateway or front-end machine, sometimes also called the master node, is responsible for executing all the OpenNebula services.
 Execution involves queuing, scheduling, and submitting jobs to the machines in the cluster.
 The master node also provides the mechanisms to manage the entire system.
 This includes adding virtual machines, monitoring the status of virtual machines, hosting the repository, and transferring virtual machines when necessary.
• 103. Worker node:
 The other machines in the cluster, known as worker nodes, provide raw computing power for processing the jobs submitted to the cluster.
 The worker nodes in an OpenNebula cluster are machines that deploy a virtualisation hypervisor, such as VMware, Xen, or KVM.
Datastore
 The datastores simply hold the base images of the virtual machines.
 Three different datastore classes are included with OpenNebula: system datastores, image datastores, and file datastores.
 System datastores hold the images used for running the virtual machines. These images can be complete copies of an original image, deltas, or symbolic links.
 Image datastores are used to store the disk image repository. Images from the image datastores are moved to or from the system datastore when virtual machines are deployed or manipulated.
 File datastores are used for regular files, often kernels, RAM disks, or context files.
Physical networks
 Physical networks are required to support the interconnection of storage servers and virtual machines in remote locations.
 It is also essential that the front-end machine can connect to all the worker nodes or hosts.
 At the very least two physical networks are required, as OpenNebula requires a service network and an instance network.
 Service network: Used by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors and move image files.
 Instance network: Offers network connectivity to the VMs across the different hosts.
Features of OpenNebula
 Interoperability
 Security
 Integration with third-party tools
 Scalability
 Flexibility
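A worker node deploys VMs from definitions such as the following OpenNebula VM template (a minimal sketch: the image and network names are hypothetical, and the exact attributes available depend on the OpenNebula version):

```
# Minimal OpenNebula VM template (illustrative sketch)
NAME   = "web-server"
CPU    = 1
MEMORY = 1024                        # in MB
DISK   = [ IMAGE = "ubuntu-base" ]   # base image from an image datastore
NIC    = [ NETWORK = "private-net" ] # virtual network for the instance
```

The DISK attribute references an image held in an image datastore, which OpenNebula copies or links into the system datastore when the VM is deployed, matching the datastore roles described above.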
• 104. OpenStack
 OpenStack is a free and open-source software platform for cloud computing, mostly deployed as Infrastructure as a Service (IaaS), whereby virtual servers and other resources are made available to customers.
 OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API.
 OpenStack works with popular enterprise and open-source technologies, making it ideal for heterogeneous infrastructure.
 The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center.
 Users manage it through a web-based dashboard, command-line tools, or RESTful web services.
Architecture and Components
OpenStack has a modular architecture with various code names for its components.
Dashboard (Horizon)
 Horizon is a web-based interface for managing OpenStack services.
 It provides a GUI for operations such as launching instances, managing networks, and setting access controls.
• 105. Identity (Keystone)
 The component that provides identity services for OpenStack.
 Essentially a centralized list of all the users and their permissions for the services.
 It includes authentication and authorization services.
Compute (Nova)
 The primary computing engine behind OpenStack.
 It allows deploying and managing virtual machines and other instances to handle computing tasks.
 Nova is a distributed component; it interacts with Keystone for authentication, Glance for images, and Horizon for the web interface.
Network (Neutron)
 Neutron is the networking component of OpenStack.
 It makes all the components communicate with each other smoothly, quickly, and efficiently.
Object Storage (Swift)
 The storage system for objects and files.
 Swift files are referred to by a unique identifier, and Swift decides where to store the files.
 This leaves the system in charge of the best way to back up data in case of network or hardware problems.
Block Storage (Cinder)
 Manages storage volumes for virtual machines.
 A block storage component that enables the cloud system to access data at higher speed in situations where that matters.
Image (Glance)
 A component that provides image services, i.e., virtual copies of hard disks.
 Glance allows these images to be used as templates when deploying new virtual machine instances.
Ceilometer
 Ceilometer provides data measurement services, enabling the cloud to offer billing to individual users of the cloud.
Orchestration (Heat)
 Heat allows developers to store the requirements of a cloud application in a file that defines what resources are necessary for that application.
 It helps to manage the infrastructure needed for a cloud service to run.
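As an illustration of the resource files Heat consumes, here is a minimal HOT (Heat Orchestration Template) declaring a single Nova server. The image, flavor, and network names are hypothetical and depend on what a given cloud provides.

```yaml
# Minimal Heat Orchestration Template (illustrative sketch)
heat_template_version: 2016-10-14

description: Launch a single web server instance

resources:
  web_server:
    type: OS::Nova::Server         # a Nova-managed virtual machine
    properties:
      image: ubuntu-20.04          # hypothetical Glance image name
      flavor: m1.small             # hypothetical Nova flavor
      networks:
        - network: private-net     # hypothetical Neutron network

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [web_server, first_address] }
```

Note how the template ties the components together: the server type is handled by Nova, its image comes from Glance, and its network from Neutron, with Heat orchestrating the whole stack.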