CLOUD COMPUTING - 1
by Dr. S K Nayak
Syllabus: Cloud Computing (OECS63), Credit: 3
Course Objective: This course gives students an insight into the basics of cloud computing along with virtualization. Cloud computing has been one of the fastest-growing domains for some time now. The course provides students with a basic understanding of the cloud and virtualization, along with how one can migrate to them.
Module-I (10 Hrs)
Evolution of Computing Paradigms - Overview of Existing Hosting Platforms, Grid Computing, Utility Computing, Autonomic Computing, Dynamic Datacenter Alliance, Hosting/Outsourcing, Introduction to Cloud Computing, Workload Patterns for the Cloud, "Big Data", IT as a Service, Technology Behind Cloud Computing.
Module-II (10 Hrs)
A Classification of Cloud Implementations - Amazon Web Services - IaaS, The Elastic Compute Cloud (EC2), The Simple Storage Service (S3), The Simple Queue Service (SQS), VMware vCloud - IaaS, vCloud Express, Google AppEngine - PaaS, The Java Runtime Environment.
Module-III (10 Hrs)
The Python Runtime Environment - The Datastore, Development Workflow, Windows Azure Platform - PaaS, Windows Azure, SQL Azure, Windows Azure AppFabric, Salesforce.com - SaaS/PaaS, Force.com, Force Database - the persistency layer, Data Security, Microsoft Office Live - SaaS, LiveMesh.com, Google Apps - SaaS, A Comparison of Cloud Computing Platforms, Common Building Blocks.
Module-IV (8 Hrs)
Cloud Security - Infrastructure security, Data security, Identity and access management, Privacy, Audit and Compliance.
Text Book:
Kai Hwang, Geoffrey C. Fox and Jack J. Dongarra, "Distributed and Cloud Computing: From Parallel Processing to the Internet of Things", Morgan Kaufmann, Elsevier, 2012.
Reference Books:
Barrie Sosinsky, "Cloud Computing Bible", John Wiley & Sons, 2010.
Tim Mather, Subra Kumaraswamy, and Shahed Latif, "Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance", O'Reilly, 2009.
Rajkumar Buyya, "Mastering Cloud Computing".
Objective
Basic cloud computing services
Virtualization
Cloud security
Migrating to the cloud
Introduction: Computing technology
Over the last 60 years, computing technology has changed significantly. The evolutionary changes span machine architecture (stored programs, processor parallelism), operating systems (serial processing, batch processing, multiprogrammed batch systems, distributed systems, time-sharing, real-time OS), network connectivity, and application workloads.
Modern world of computing
Over the last 30 years, computing systems have evolved from centralized computing to parallel, distributed, and cloud computing, which use multiple computers to solve large-scale problems. Distributed computing has therefore become data-intensive and network-centric, and large-scale Internet applications have changed the quality of life and services.
World of the Internet
Billions of people use text, hypertext, multimedia, and blogs. The Internet has influenced every aspect of life: science, engineering, communication, medicine, marketing, and manufacturing. It has changed the way we market products, read books, give opinions, carry out financial transactions, and trade.
What the Internet has changed
Simultaneously, the Internet has changed the way we engineer software. It is characterized by data intensiveness, network intensiveness, unpredictable load, concurrency, and availability requirements. For this reason, supercomputing sites and large data centers (holding large data sets) must provide high-performance computing (HPC).
HPC
HPC emphasizes raw speed performance. Speed has been the driving force for the scientific, engineering, and manufacturing communities. The Top 500 high-performance computers are ranked by their floating-point speed, as illustrated by the sketch below.
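As a rough illustration of how floating-point speed is measured, the following Python sketch times a NumPy matrix multiply and estimates GFLOP/s. This is only a toy measurement, not the LINPACK benchmark actually used for the Top 500 ranking, and the matrix size is an arbitrary choice.

import time
import numpy as np

# Time one n x n matrix multiply and estimate floating-point throughput.
n = 1024
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n ** 3  # a matrix product costs about 2*n^3 floating-point ops
print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s")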
Due to network intensiveness, however, computing is also moving toward high-throughput computing (HTC).
HTC
High-throughput computing is required in high-end, market-oriented computing systems. It pays more attention to Internet searches and Web services, so the performance goal shifts from raw speed to throughput, i.e., the number of tasks completed per unit of time. Finally, HTC aims not only at performance but also at cost, energy saving, security, and reliability.
Computing Paradigms
The high-technology community has debated precise definitions of centralized computing, parallel computing, distributed computing, and cloud computing as follows:
Centralized computing
All resources are centralized in one physical system. All memory, processors, and storage are fully shared and tightly coupled within one integrated OS.
Parallel computing
All processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory. Interprocessor communication is accomplished through shared memory or via message passing, as sketched below.
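A minimal Python sketch of the message-passing style, with two processes on one machine standing in for coupled processors (the multiprocessing module and the summation task are illustrative choices, not part of the course text):

from multiprocessing import Process, Queue

def worker(q: Queue) -> None:
    # Send the partial result back to the parent as a message.
    q.put(sum(range(1_000_000)))

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print("received partial result:", q.get())  # receive the message
    p.join()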
Distributed computing
Multiple autonomous computers, each having its own private memory, communicate through a computer network. Information exchange in a distributed system is accomplished through message passing, as in the sketch below.
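A minimal sketch of message passing over a network using Python's standard socket module; both endpoints run on localhost purely for illustration, standing in for two autonomous computers:

import socket

# One "computer" listens for messages over TCP.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 5000))
server.listen(1)

# Another "computer" connects and sends a message.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("localhost", 5000))
client.sendall(b"result: 42")

conn, _ = server.accept()
print("received:", conn.recv(1024).decode())
conn.close()
client.close()
server.close()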
Cloud computing
Resources can be organized as either a centralized or a distributed computing system. The cloud applies parallel or distributed computing, or both. Clouds can be built with physical or virtualized resources.
Summary
Distributed computing is the opposite of centralized computing. The field of parallel computing overlaps with distributed computing. Cloud computing overlaps with distributed, parallel, and centralized computing.
Different computing models

Grid Computing
A group of networked computers, all working under the same protocol, acts as a virtual supercomputer to perform large tasks such as analyzing huge data sets or weather modeling, as in the toy sketch below.
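A toy Python sketch of the grid idea: split one large analysis into chunks, farm them out to workers, and combine the partial results. Local processes stand in here for machines on a grid, and the chunking and summing are invented for illustration:

from concurrent.futures import ProcessPoolExecutor

def analyze(chunk: range) -> int:
    return sum(chunk)  # placeholder for a heavy analysis step

if __name__ == "__main__":
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(analyze, chunks))
    print("combined result:", sum(partials))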
Autonomic Computing
Autonomic computing is a visionary computing initiative started by IBM. It is designed to make adaptive decisions driven by high-level policies, and it constantly refines those decisions through optimization and adaptation.
Utility Computing
Utility computing is a service-provisioning model in which a provider makes computing resources, infrastructure management, and technical services available to customers as they need them. It is sometimes known as pay-per-use or metered service; examples include Internet service, website access, and file sharing. A similar model has been adopted in cloud computing: clouds are characterized by virtually infinite capacity and tolerance to failures.
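A toy sketch of the metered, pay-per-use idea; the hourly rate below is a made-up figure, not any real provider's price:

RATE_PER_HOUR = 0.02  # hypothetical price of a small virtual machine

def metered_cost(hours_used: float, rate: float = RATE_PER_HOUR) -> float:
    # Charge only for actual usage, like an electricity meter.
    return hours_used * rate

print(f"72 hours of usage costs ${metered_cost(72):.2f}")  # $1.44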
What Is a Data Center?
At its simplest, a data center is a physical facility that organizations use to house their critical applications and data.
Why important
In the world of enterprise IT, data centers are designed to support business applications and activities that include:
Email and file sharing
Productivity applications
Customer relationship management (CRM)
Enterprise resource planning (ERP) and databases
Big data, artificial intelligence, and machine learning
Virtual desktops, communications, and collaboration services
Dynamic Data Center
The basic premise of the Dynamic Data Center is that leveraging pooled IT resources can provide flexible IT capacity, enabling the seamless, real-time allocation of IT resources in line with demand from business processes.
Big data
Big data is a combination of structured, semi-structured, and unstructured data collected by organizations. It is mined for information and used in machine learning projects, predictive modeling, and other advanced analytics applications. Big data is characterized by:
the large volume of data in many environments;
the wide variety of data types frequently stored in big data systems;
the velocity at which much of the data is generated, collected, and processed.
Recent computing model
Computing is being transformed into a model consisting of services that are commoditized and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. In such a model, users access services based on their requirements, regardless of where the services are hosted. Cloud computing is the most recent emerging paradigm promising to turn the vision of "computing utilities" into a reality.
Vision
Leonard Kleinrock, one of the chief scientists of the ARPANET project, which seeded the Internet, said: "As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country."
Cloud computing vision
Cloud computing provisions virtual hardware, runtime environments, and services. These are used for as long as needed, with no up-front commitments required. The entire stack of a computing system is transformed into a collection of utilities which can be provisioned and composed together to deploy systems in hours rather than days, with virtually no maintenance costs.
Idea of cloud computing
The modern world is moving toward computational science that is becoming data-intensive. In the future, working with large data sets will typically mean sending the computations (programs) to the data, rather than copying the data to the workstations. There is a trend in IT of moving computing and data from desktops to large data centers, where software, hardware, and data are provisioned on demand as a service.
Defining a cloud
The term cloud is used to refer to different technologies, services, and concepts. It is often associated with virtualized infrastructure or hardware on demand, utility computing, IT outsourcing, platform as a service, software as a service, and other things that are now the focus of the IT industry.
Definition
Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the data centers that provide those services.
U.S. National Institute of Standards and Technology (NIST): Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
What it means
Cloud computing is a technological advancement that focuses on the way we design computing systems, develop applications, and leverage existing services for building software.
It is based on the concept of dynamic provisioning, which is applied not only to services but also to compute capability, storage, networking, and information technology (IT) infrastructure in general. Resources are made available through the Internet and offered on a pay-per-use basis by cloud computing vendors.
Today, anyone can subscribe to cloud services, deploy and configure servers for an application in hours, scale the application according to demand, and pay only for the time these resources have been used. The sketch below illustrates this.
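As a concrete sketch of this on-demand model, the snippet below launches and later terminates a virtual server with the AWS boto3 SDK. The AMI ID and region are hypothetical placeholders, and running it requires configured AWS credentials:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Provision a server on demand; billing starts when it runs.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched:", instance_id)

# ... deploy the application, scale as demand requires ...

ec2.terminate_instances(InstanceIds=[instance_id])  # stop paying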
Who benefits
Cloud computing allows renting infrastructure, runtime environments, and services on a pay-per-use basis. This finds several practical applications and thus presents different images of cloud computing to different people. IT officers of large enterprises see opportunities for scaling their infrastructure on demand and sizing it according to their business needs. End users leveraging cloud computing services can access their documents and data anytime, anywhere, and from any device connected to the Internet.
What it describes
Cloud computing is a technological advancement described as a phenomenon touching the entire computing stack: from the underlying hardware to the high-level software services and applications. It introduces the concept of everything as a service, mostly referred to as XaaS, where the different components of a system (IT infrastructure, development platforms, databases, and so on) can be delivered, measured, and consequently priced as a service. This new approach significantly influences not only the way we build software but also the way we deploy it, the way we make it accessible, the way we design our IT infrastructure, and even the way companies allocate the costs for IT needs.
Major milestones of cloud computing technology: mainframe computing, cluster computing, and grid computing.
Mainframes
These were the first examples of large computational facilities leveraging multiple processing units. Mainframes were powerful, highly reliable computers specialized for large data movement and massive input/output (I/O) operations. They were mostly used by large organizations for bulk data-processing tasks such as online transactions and enterprise resource planning. Even though mainframes cannot be considered distributed systems, they offered large computational power by using multiple processors, which were presented as a single entity to users. One of the most attractive features of mainframes was their high reliability: they were "always on" and capable of tolerating failures transparently. No system shutdown was required to replace failed components, and the system could work without interruption.
Clusters
Cluster computing started as a low-cost alternative to the use of mainframes and supercomputers. The technology advancement that created faster and more powerful mainframes and supercomputers also eventually generated an increased availability of cheap commodity machines. Cluster technology contributed considerably to the evolution of tools and frameworks for distributed computing.
One of the attractive features of clusters was the availability of standard tools for parallel programming, such as Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI).
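A minimal sketch of MPI-style message passing in Python using the mpi4py package (an assumption; the course text names only the MPI standard itself). It would be run with something like: mpiexec -n 2 python hello_mpi.py

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send("hello from rank 0", dest=1, tag=0)  # message to rank 1
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print("rank 1 received:", msg)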
Grids
Grids were initially developed as aggregations of geographically dispersed clusters connected by means of Internet connections. These clusters belonged to different organizations, and arrangements were made among them to share their computational power.
Technology behind cloud computing
Five core technologies played an important role in the realization of cloud computing: distributed systems, virtualization, Web 2.0, service orientation, and utility computing.
Distributed systems
As defined by Tanenbaum: a distributed system is a collection of independent computers that appears to its users as a single coherent system. Distributed systems often exhibit other properties such as heterogeneity, openness, scalability, transparency, concurrency, continuous availability, and independent failures. The primary purpose of distributed systems is to share resources and utilize them better. This is true in the case of cloud computing, where the concept is taken to the extreme and resources (infrastructure, runtime environments, and services) are rented to users.
Clouds are essentially large distributed computing facilities that make their services available to third parties on demand. In fact, one of the driving factors of cloud computing has been the availability of the large computing facilities of IT giants (Amazon, Google), which found that offering their computing capabilities as a service provided opportunities to better utilize their infrastructure.
Web 2.0
The Web is the primary interface through which cloud computing delivers its services. At present, the Web encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition. This evolution has transformed the Web into a rich platform for application development and is known as Web 2.0.
Web 2.0 brings interactivity and flexibility into Web pages, providing an enhanced user experience by giving Web-based access to all the functions that are normally found in desktop applications.
These capabilities are obtained by integrating a collection of standards and technologies such as XML, Asynchronous JavaScript and XML (AJAX), Web Services, and others, as in the sketch below. These technologies allow us to build applications that leverage the contribution of users, who now become providers of content.
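AJAX itself runs inside the browser, but the underlying exchange (a client fetching structured data from a Web service) can be sketched in Python with the requests package; the endpoint URL and the JSON shape below are hypothetical:

import requests

# Fetch JSON from a (hypothetical) Web service, the same exchange an
# AJAX call performs from a Web 2.0 page.
resp = requests.get("https://api.example.com/v1/photos",
                    params={"user": "alice"})
resp.raise_for_status()
for photo in resp.json():  # assume the service returns a JSON list
    print(photo.get("title"))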
Furthermore, the capillary diffusion of the Internet opens new opportunities and markets.
Web 2.0 applications are extremely dynamic: they improve continuously, and new updates and features are integrated at a constant rate, following the usage trends of the community.
There is no need to deploy new software releases on the installed base at the client side. Users can take advantage of new software features simply by interacting with cloud applications.
New applications can be "synthesized" simply by composing existing services and integrating them, thus providing added value. Finally, Web 2.0 applications aim to leverage the "long tail" of Internet users by making themselves available to everyone in terms of either media accessibility or affordability.
Examples of Web 2.0 applications are Google Documents, Google Maps, Flickr, Facebook, Twitter, YouTube, Delicious, Blogger, and Wikipedia.
Today the Web is a mature platform that strongly supports the needs of cloud computing, which in turn strongly leverages Web 2.0 applications.
Service orientation
A service is an abstraction representing a self-describing and platform-agnostic component.
For example, multiple business processes in an organization may require user authentication functionality. Instead of rewriting the authentication code for each business process, you can create a single authentication service and reuse it for all applications. Similarly, almost all systems across a healthcare organization, such as patient management systems and electronic health record (EHR) systems, need to register patients. These systems can call a single, common service to perform the patient registration task.
A service can perform any function, from a simple task to a complex business process. Any piece of code that performs a task can be turned into a service and expose its functionality through a network-accessible protocol, as in the sketch below.
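A minimal sketch of the reusable authentication service described earlier, exposed over a network-accessible protocol with Python's standard-library XML-RPC support; the credential store is a hypothetical placeholder:

from xmlrpc.server import SimpleXMLRPCServer

USERS = {"alice": "secret123"}  # hypothetical credential store

def authenticate(username: str, password: str) -> bool:
    # One shared implementation that every business process can reuse.
    return USERS.get(username) == password

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(authenticate, "authenticate")
print("authentication service listening on port 8000 ...")
server.serve_forever()

Any client process can then call xmlrpc.client.ServerProxy("http://localhost:8000").authenticate("alice", "secret123") instead of reimplementing the check.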
A service is supposed to be a loosely coupled, reusable, programming-language-independent, and location-transparent function. Loose coupling allows services to serve different scenarios more easily and makes them reusable. Independence from a specific platform increases their accessibility.
Services are composed and aggregated into a service-oriented architecture (SOA), which is a logical way of organizing software systems to provide services to end users or other entities distributed over the network.
Service-oriented computing introduces and diffuses two important concepts, which are also fundamental to cloud computing: quality of service (QoS) and Software-as-a-Service (SaaS).
Quality of service (QoS)
Quality of service (QoS) refers to any technology that manages data traffic to reduce packet loss, latency, and jitter on a network. QoS controls and manages network resources by setting priorities for specific types of data on the network.
QoS identifies a set of functional and nonfunctional attributes that can be used to evaluate the behavior of a service from different perspectives.
Software-as-a-Service
The concept of Software-as-a-Service introduces a new delivery model for applications. The term has been inherited from the world of application service providers (ASPs), which deliver software-based solutions as services across the wide area network from a central datacenter.