Wednesday, 7 September 2022
Cloud Computing Note
CLOUD COMPUTING - 1
by
DR. S K Nayak
Table of Contents
Syllabus Cloud computing
Objective
Introduction
Computing technology
Modern world of computing
World of internet
What Internet has changed
HPC
HTC
Computing Paradigms
Different computing models
Utility Computing
What Is a Data Center
Big data
Recent Computing model
Vision
Cloud computing vision
Idea of cloud computing
Defining a cloud
Definition
What it means
Who are benefited
Major milestones in cloud computing technology
Technology behind cloud computing
Distributed systems
Web 2.0
Service orientation
Virtualization
Concept of Virtualization
Hardware virtualization
Virtual storage
Virtual networking
Characteristics of virtualization
Increased security
Managed execution
Sharing
Aggregation
Emulation
Isolation
Performance tuning
Portability
Taxonomy of virtualization techniques
Machine reference model
Questions
Amazon Web Services
Understanding Amazon Web Services
Amazon Web Service Components and Services
Amazon machine images
EC2 instances
EC2 environment
Advanced compute services
Storage services
S3 key concepts
Requests will occasionally fail
Communication services
Virtual networking
Amazon Virtual Private Cloud (VPC)
Amazon Direct Connect
Messaging
Google AppEngine
Architecture and core concepts
Syllabus Cloud computing
OECS63 : Credit - 3
Course Objective: This course gives students an insight into the basics of cloud computing and virtualization; cloud computing has been one of the fastest-growing domains for some time now. It provides students with a basic understanding of the cloud and virtualization, and of how one can migrate to them.
Module-I 10 Hrs
Evolution of Computing Paradigms - Overview of Existing Hosting Platforms, Grid Computing, Utility Computing, Autonomic Computing, Dynamic Datacenter Alliance, Hosting/Outsourcing, Introduction to Cloud Computing, Workload Patterns for the Cloud, “Big Data”, IT as a Service, Technology Behind Cloud Computing.
Module-II 10 Hrs
A Classification of Cloud Implementations - Amazon Web Services - IaaS, The Elastic Compute Cloud (EC2), The Simple Storage Service (S3), The Simple Queuing Services (SQS), VMware vCloud - IaaS, vCloud Express, Google AppEngine - PaaS, The Java Runtime Environment.
Module-III 10 Hrs
The Python Runtime Environment- The Datastore, Development Workflow, Windows Azure Platform - PaaS, Windows Azure, SQL Azure, Windows Azure AppFabric, Salesforce.com - SaaS / PaaS, Force.com, Force Database - the persistency layer, Data Security, Microsoft Office Live - SaaS, LiveMesh.com, Google Apps - SaaS, A Comparison of Cloud Computing Platforms, Common Building Blocks.
Module-IV 8 Hrs
Cloud Security – Infrastructure security – Data security – Identity and access management Privacy- Audit and Compliance.
Text Book: Kai Hwang, Geoffrey C. Fox and Jack J. Dongarra, “Distributed and Cloud Computing: From Parallel Processing to the Internet of Things”, Morgan Kaufmann, Elsevier, 2012
Reference Books
1. Barrie Sosinsky, “Cloud Computing Bible”, John Wiley & Sons, 2010
2. Tim Mather, Subra Kumaraswamy, and Shahed Latif, “Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance”, O'Reilly, 2009
3. Rajkumar Buyya, “Mastering Cloud Computing”
Objective
• Basic cloud computing services
• Virtualization
• Cloud security
• Migrating to the cloud
Introduction
Computing technology
Over the last 60 years, computing technology has changed significantly.
Evolutionary changes have occurred in machine architecture, operating systems, network connectivity, and application workloads.
Modern world of computing
Over the last 30 years, computing systems have evolved from centralized computing to parallel, distributed, and cloud computing, which use multiple computers to solve large-scale problems. Distributed computing has therefore become data-intensive and network-centric, and large-scale Internet applications have changed the quality of life and services.
World of internet
Billions of people use text, hypertext, multimedia, and blogs.
The Internet has influenced every aspect of life: science, engineering, communication, medicine, marketing, and manufacturing.
It has changed the way we market products, read books, share opinions, make financial transactions, and trade.
What Internet has changed
Simultaneously, it has changed the way we do software engineering.
The Internet is characterized by data intensiveness, network intensiveness, unpredictable load, concurrency, and availability requirements.
For this reason, supercomputing sites and large data centres (holding large data sets) must provide high-performance computing (HPC).
HPC
HPC emphasizes raw speed of execution.
Speed has been the driving requirement from the scientific, engineering, and manufacturing communities.
The Top 500 high-performance computers are ranked by floating-point speed.
Due to network intensiveness, however, computing is also moving towards high-throughput computing (HTC).
HTC
High-throughput computing is required in high-end, market-oriented computing systems.
It pays more attention to Internet searches and Web services.
The performance goal in HTC is therefore throughput, the number of tasks completed per unit of time, rather than raw speed.
Beyond performance, HTC also addresses cost, energy saving, security, and reliability.
Computing Paradigms
The high-technology community has debated precise definitions of centralized, parallel, distributed, and cloud computing:
• Distributed computing is the opposite of centralized computing.
• Parallel computing overlaps with distributed computing.
• Cloud computing overlaps with distributed, parallel, and centralized computing.
Centralized computing
• All resources are centralized in one physical system
• All memory, processors and storage are fully shared
• tightly coupled within one integrated OS
Parallel computing
• all processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory.
• Inter processor communication is accomplished through shared memory or via message passing.
Distributed computing
• multiple autonomous computers, each having its own private memory, communicating through a computer network.
• Information exchange in a distributed system is accomplished through message passing.
Cloud computing
• resources can be either a centralized or a distributed computing system.
• The cloud applies parallel or distributed computing, or both.
• Clouds can be built with physical or virtualized resources
Different computing models
Grid Computing
All machines on the grid work under the same protocol so that together they act as a virtual supercomputer.
A grid is a group of networked computers that work together as a virtual supercomputer to perform large tasks, such as analysing huge data sets or weather modeling.
Autonomic Computing
• Autonomic computing is a visionary computing initiative started by IBM.
• Systems are designed to make adaptive decisions using high-level policies.
• They constantly upgrade themselves through optimization and adaptation.
Utility Computing
• Utility computing is a service provisioning model where a provider makes computing resources, infrastructure management and technical services available to customers as they need them.
• Sometimes known as pay-per-use or metered service; examples include Internet service, website access, and file sharing.
• Similar model has been adopted in cloud computing also.
• Cloud computing similarly allows renting infrastructure, runtime environments, and services on a pay per-use basis.
Clouds are characterized by virtually infinite capacity and tolerance to failures.
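The pay-per-use idea can be illustrated with a small, purely hypothetical metering sketch in Python; the resource names and hourly rates below are made up for illustration and are not real provider prices.

# Minimal sketch of pay-per-use (metered) billing behind utility computing.
# The rates below are hypothetical, not actual provider prices.
HOURLY_RATES = {
    "small_vm": 0.05,          # cost per hour of a small virtual machine
    "large_vm": 0.40,          # cost per hour of a large virtual machine
    "storage_gb_month": 0.02,  # cost per GB-month of storage
}

def metered_cost(resource: str, units: float) -> float:
    """Charge only for what was actually consumed (hours, GB-months, ...)."""
    return HOURLY_RATES[resource] * units

# A user who ran a small VM for 36 hours pays only for those 36 hours.
print(metered_cost("small_vm", 36))   # 1.8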
What Is a Data Center
At its simplest, a data center is a physical facility that organizations use to house their critical applications and data.
Why important
In the world of enterprise IT, data centers are designed to support business applications and activities that include:
• Email and file sharing
• Productivity applications
• Customer relationship management (CRM)
• Enterprise resource planning (ERP) and databases
• Big data, artificial intelligence, and machine learning
• Virtual desktops, communications and collaboration services
Dynamic Datacenter
• The basic premise of the Dynamic Data Center is that leveraging pooled IT resources can provide flexible IT capacity, enabling the seamless, real-time allocation of IT resources in line with demand from business processes.
Big data
• Big data is a combination of structured, semistructured and unstructured data collected by organizations.
• Mined for information
• Used in machine learning projects, predictive modeling and other advanced analytics applications.
• It is often described by the three V's:
• the large volume of data in many environments;
• the wide variety of data types frequently stored in big data systems;
• the velocity at which much of the data is generated, collected and processed.
Recent Computing model
• Now Computing is being transformed into a model consisting of services that are commoditized and delivered in a manner similar to utilities such as water, electricity, gas, and telephony.
• In such a model, users access services based on their requirements, regardless of where the services are hosted.
• Cloud computing is the most recent emerging paradigm promising to turn the vision of “computing utilities” into a reality.
Vision
• Leonard Kleinrock, one of the chief scientists of the ARPANET project, which seeded the Internet, said:
• As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country.
Cloud computing vision
• Cloud computing provisions virtual hardware, runtime environments, and services.
• These are used for as long as needed, with no up-front commitments required.
• The entire stack of a computing system is transformed into a collection of utilities
• which can be provisioned and composed together to deploy systems in hours rather than days
• with virtually no maintenance costs.
Idea of cloud computing
• The modern world is moving towards computational science, which is becoming data-intensive.
• In the future, working with large data sets will typically mean sending the computations (programs) to the data, rather than copying the data to the workstations.
• There is a trend in IT of moving computing and data from desktops to large data centers.
• There, software, hardware, and data can be provisioned on demand as a service.
Defining a cloud
• The term cloud is used to refer to different technologies, services, and concepts.
• It is often associated with:
virtualized infrastructure or hardware on demand, utility computing, IT outsourcing, platform as a service and software as a service, and other things that are now the focus of the IT industry.
Definition
1. Cloud computing refers to both the applications delivered as services over the Internet and the hardware, system software in the data centers that provide those services.
2. U.S. National Institute of Standards and Technology (NIST): Cloud computing is a model for enabling ubiquitous, convenient, on- demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
What it means
Cloud computing is the most recent emerging paradigm promising to turn the vision of “computing utilities” into a reality.
◦ Cloud computing is a technological advancement that focuses on the way we design computing systems, develop applications, and leverage existing services for building software.
◦ Based on the concept of dynamic provisioning, which is applied not only to services but also to compute capability, storage, networking, and information technology(IT) infrastructure in general.
◦ Resources are made available through the Internet and offered on a pay-per-use basis from cloud computing vendors.
◦ Today, anyone can subscribe to cloud services, deploy and configure servers for an application in hours, grow or shrink the application's capacity according to demand, and pay only for the time these resources have been used.
Who are benefited
◦ Cloud computing allows renting infrastructure, runtime environments, and services on a pay-per-use basis.
◦ This model finds several practical applications and therefore presents a different image of cloud computing to different people.
◦ IT officers of large enterprises see opportunities for scaling their infrastructure on demand and sizing it according to their business needs.
◦ End users leveraging cloud computing services can access their documents and data anytime, anywhere, and from any device connected to the Internet.
What it Describes
◦ Cloud computing is a technological advancement and described as a phenomenon touching on the entire computing stack: from the underlying hardware to the high-level software services and applications.
◦ It introduces the concept of everything as a service, mostly referred to as XaaS, where the different components of a system—IT infrastructure, development platforms, databases, and so on—can be delivered, measured, and consequently priced as a service.
◦ This new approach significantly influences not only the way that we build software but also
• the way we deploy it
• the way we make it accessible
• the way we design our IT infrastructure
• even the way companies allocate the costs for IT needs.
Major milestones in cloud computing technology
Three major milestones have led to cloud computing: mainframe computing, cluster computing, and grid computing.
Mainframes
◦ These were the first examples of large computational facilities leveraging multiple processing units. Mainframes were powerful, highly reliable computers specialized for large data movement and massive input/output (I/O) operations.
◦ They were mostly used by large organizations for bulk data processing tasks such as online transactions, enterprise resource planning
◦ Even though mainframes cannot be considered distributed systems, they offered large computational power by using multiple processors, which were presented as a single entity to users.
◦ One of the most attractive features of mainframes was the ability to be highly reliable computers that were “always on” and capable of tolerating failures transparently.
◦ No system shutdown was required to replace failed components, and the system could work without interruption.
Clusters
◦ Cluster computing started as a low-cost alternative to the use of mainframes and supercomputers.
◦ The technology advancement that created faster and more powerful mainframes and supercomputers eventually generated an increased availability of cheap commodity machines
◦ Cluster technology contributed considerably to the evolution of tools and frameworks for distributed computing
◦ One of the attractive features of clusters was the availability of standard tools for parallel and distributed programming, such as the Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI).
Grids
◦ Initially developed as aggregations of geographically dispersed clusters by means of Internet connections.
◦ These clusters belonged to different organizations, and arrangements were made among them to share the computational power.
Cloud Computing Sept 2022
Quiz 1
12/09/2022 Answer All Questions
1. (1 point) In centralized computing
(a) All resources are centralized in one physical system
(b) All memory, processors and storage are fully shared
(c) tightly coupled within one integrated OS
(d) all
2. (1 point) Due to network intensiveness, computing is moving towards High Throughput Computing (HTC)
A. true
B. false
3. (1 point) Computing performance intends towards
A. throughput
B. low cost
C. energy saving
D. security
E. reliability
F. all
4. (2 points) Idea of cloud computing emerged due to
☐ data-intensive
☐ sending the computations (programs) to the data
☐ moving computing and data from desktops to large data centers.
☐ on-demand provision of software, hardware, and data as a service.
☐ all
5. (2 points) Mark box if true. In cloud computing
• Resources can be centralized
• distributed computing system
• can be built with physical resources
• applied to parallel systems
• can be built with virtual resources
6. (4 points) What activities are data centers designed to support for business applications?
7. (4 points) Who benefits from cloud computing?
Technology behind cloud computing
Five core technologies played an important role in the realization of cloud computing: distributed systems, virtualization, Web 2.0, service orientation, and utility computing.
Distributed systems
Proposed by Tanenbaum:
A distributed system is a collection of independent computers that appears to its users as a single coherent system.
◦ Distributed systems often exhibit other properties such as heterogeneity, openness, scalability, transparency, concurrency, continuous availability, and independent failures.
◦ The primary purpose of distributed systems is to share resources and utilize them better.
◦ This is true in the case of cloud computing, where this concept is taken to the extreme and resources (infrastructure, runtime environments, and services) are rented to users.
◦ Clouds are essentially large distributed computing facilities that make available their services to third parties on demand.
◦ In fact, one of the driving factors of cloud computing has been the availability of the large computing facilities of IT giants (Amazon, Google) that found that offering their computing capabilities as a service provided opportunities to better utilize their infrastructure.
Web 2.0
The Web is the primary interface through which cloud computing delivers its services.
At present, the Web encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition.
This evolution has transformed the Web into a rich platform for application development and is known as Web 2.0.
◦ Web 2.0 brings interactivity and flexibility into Web pages, providing enhanced user experience by gaining Web-based access to all the functions that are normally found in desktop applications.
◦ These capabilities are obtained by integrating a collection of standards and technologies such as XML, Asynchronous JavaScript and XML (AJAX), Web Services, and others. These technologies allow us to build applications leveraging the contribution of users, who now become providers of content. Furthermore, the capillary diffusion of the Internet opens new opportunities and markets.
◦ Web 2.0 applications are extremely dynamic.
◦ they improve continuously, and new updates and features are integrated at a constant rate by following the usage trend of the community.
◦ There is no need to deploy new software releases on the installed base at the client side. Users can take advantage of the new software features simply by interacting with cloud applications.
◦ New applications can be “synthesized” simply by composing existing services and integrating them, thus providing added value. Finally, Web 2.0 applications aim to leverage the “long tail” of Internet users by making themselves available to everyone in terms of either media accessibility or affordability.
◦ Examples of Web 2.0 applications are Google Documents, Google Maps, Flickr, Facebook, Twitter, YouTube, delicious, Blogger, and Wikipedia.
◦ Today the Web is a mature platform that strongly supports the needs of cloud computing, which in turn strongly leverages Web 2.0 applications.
Service orientation
A service is an abstraction representing a self-describing and platform-agnostic component.
For example, multiple business processes in an organization require the user authentication functionality. Instead of rewriting the authentication code for all business processes, you can create a single authentication service and reuse it for all applications.
Similarly, almost all systems across a healthcare organization, such as patient management systems and electronic health record (EHR) systems, need to register patients. These systems can call a single, common service to perform the patient registration task.
A service can perform any function, from a simple task to a complex business process.
Any piece of code that performs a task can be turned into a service and expose its functionalities through a network-accessible protocol.
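As a sketch of this idea (illustrative only; the endpoint name, port, and user table are invented for the example), the snippet below exposes a single reusable authentication function over HTTP using Python's standard library, so several business processes could call it instead of re-implementing the logic.

# Minimal sketch: one reusable authentication service exposed over HTTP,
# so any business process (ordering, billing, patient registration, ...)
# can call it. The credential store and endpoint name are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {"alice": "secret"}          # hypothetical credential store

def authenticate(user: str, password: str) -> bool:
    return USERS.get(user) == password

class AuthHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/authenticate":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        ok = authenticate(body.get("user", ""), body.get("password", ""))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"authenticated": ok}).encode())

if __name__ == "__main__":
    # Any client application can now POST credentials to /authenticate.
    HTTPServer(("localhost", 8080), AuthHandler).serve_forever()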
◦ A service is supposed to be a loosely coupled, reusable, programming-language-independent, and location-transparent unit of functionality.
◦ Loose coupling allows services to serve different scenarios more easily and makes them reusable.
◦ Independence from a specific platform increases a service's accessibility.
Services are composed and aggregated into a service-oriented architecture (SOA), which is a logical way of organizing software systems so that services can be provided to end users or other entities distributed over the network.
Service-oriented computing introduces and diffuses two important concepts, which are also fundamental to cloud computing:
◦ quality of service (QoS)
◦ Software-as-a-Service (SaaS).
Quality of service (QoS)
Quality of service (QoS) refers to any technology that manages data traffic to reduce packet loss, latency and jitter on a network.
QoS controls and manages network resources by setting priorities for specific types of data on the network.
Quality of service (QoS) identifies a set of functional and nonfunctional attributes that can be used to evaluate the behavior of a service from different perspectives.
Software-as-a-Service
The concept of Software-as-a-Service introduces a new delivery model for applications.
• The term has been inherited from the world of application service providers (ASPs), which deliver software-based solutions across the wide area network from a central datacenter.
• Available on a subscription or rental basis.
• The ASP is responsible for maintaining the infrastructure and making the application available.
• The client is freed from maintenance costs and difficult upgrades.
Virtualization
Virtualization technology is one of the fundamental components of cloud computing, especially in regard to infrastructure-based services.
◦ Virtualization allows the creation of a secure, customizable, and isolated execution environment for running applications, even if they are untrusted, without affecting other users’ applications.
◦ The basis of this technology is the ability of a computer program—or a combination of software and hardware—to emulate an executing environment separate from the one that hosts such programs.
For example, we can run Windows OS on top of a virtual machine, which itself is running on Linux OS.
◦ Virtualization provides a great opportunity to build elastically scalable systems that can provision additional capability with minimum costs.
◦ Therefore, virtualization is widely used to deliver customizable computing environments on demand.
◦ Virtualization is a large umbrella of technologies and concepts that are meant to provide an abstract environment
◦ Virtualization plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing.
Virtualization technologies provide a virtual environment not only for executing applications but also for storage, memory, and networking.
Popularity
Virtualization technologies have gained renewed interest recently due to the confluence of several phenomena.
◦ Increased performance and computing capacity.
Nowadays, the average end-user desktop PC is powerful enough to meet almost all the needs of everyday computing, with extra capacity that is rarely used.
Almost all these PCs have enough resources to host a virtual machine manager and execute a virtual machine with acceptable performance.
◦ Underutilized hardware and software resources. Hardware and software underutilization is occurring due to
(1) increased performance and computing capacity, and
(2) limited use of resources.
Computers today are so powerful that in most cases only a fraction of their capacity is used by an application or the system.
Moreover, if we consider the IT infrastructure of an enterprise, many computers are only partially utilized whereas they could be used without interruption on a 24/7/365 basis.
For example, desktop PCs mostly devoted to office automation tasks and used by administrative staff are only used during work hours, remaining completely unused overnight.
Using these resources for other purposes after hours could improve the efficiency of the IT infrastructure.
For this capacity to be provided transparently as a service, it would be necessary to deploy a completely separate environment, which can be achieved through virtualization.
• Lack of space
The continuous need for additional capacity, whether storage or compute power, makes data centers grow quickly.
Companies such as Google and Microsoft expand their infrastructures by building data centers as large as football fields that are able to host thousands of nodes. Although this is viable for IT giants, in most cases enterprises cannot afford to build another data center to accommodate additional resource capacity.
This condition, along with hardware underutilization, has led to the diffusion of a technique called server consolidation, for which virtualization technologies are fundamental.
◦ Greening initiatives. Recently, companies are increasingly looking for ways to reduce the amount of energy they consume and to reduce their carbon footprint.
• Data centers are one of the major power consumers;
Maintaining a data center operation not only involves keeping servers on, but a great deal of energy is also consumed in keeping them cool.
Infrastructures for cooling have a significant impact on the carbon footprint of a data center.
Hence, reducing the number of servers through server consolidation will definitely reduce the impact of cooling and power consumption of a data center.
So Virtualization technologies can provide an efficient way of consolidating servers.
◦ Rise of administrative costs.
Power consumption and cooling costs have now become higher than the cost of IT equipment. Moreover, the increased demand for additional capacity, which translates into more servers in a data center, is also responsible for a significant increment in administrative costs.
Computers, in particular servers, do not operate all on their own; they require care and feeding from system administrators.
Common system administration tasks include hardware monitoring, defective hardware replacement, server setup and updates, server resources monitoring, and backups.
These are labor-intensive operations, and the higher the number of servers that have to be managed, the higher the administrative costs.
Virtualization can help reduce the number of required servers for a given workload, thus reducing the cost of the administrative personnel.
The first step toward consistent adoption of virtualization technologies was made with the wide diffusion of virtual machine-based programming languages: in 1995 Sun released Java.
Java played a significant role in the application server market segment
In 2002 Microsoft released the first version of .NET Framework, which was Microsoft’s alternative to the Java technology. Based on the same principles as Java, able to support multiple programming languages, and featuring complete integration with other Microsoft technologies.
Languages such as Java and Python are based on the virtual machine model.
Concept of Virtualization
Virtualization is a broad concept that refers to the creation of a virtual version of something: hardware, a software environment, storage, or a network. In a virtualized environment there are three major components: guest, host, and virtualization layer.
◦ Guest represents the system component that interacts with the virtualization layer
◦ The host represents the original environment where the guest is supposed to be managed.
◦ The virtualization layer is responsible for recreating the same or a different environment where the guest will operate.
The most intuitive and popular virtualized environment is represented by hardware virtualization, which also constitutes the original realization of the virtualization concept.
Hardware virtualization
Guest
◦ Represented by a system image comprising the operating system and installed applications.
◦ These are installed on top of virtual hardware
Virtualization Layer
Controls and manages the guest; also called the virtual machine manager (VMM).
Host
• Represented by the physical hardware, and in some cases the operating system, that defines the environment where the virtual machine manager is running.
Virtual storage
Guest
client applications or users that interact with the virtual storage management software
Virtualization layer
Provides virtualization management software.
Host
Real storage system
Virtual networking
◦ The guest applications and users—interact with a virtual network, such as a virtual private network (VPN).
◦ Managed by specific software (VPN client) using the physical network available on the node.
◦ A VPN gives the illusion of being within a different physical network and thus of accessing the resources in it.
The technologies of today allow profitable use of virtualization and make it possible to fully exploit the advantages that come with it.
Such advantages have always been characteristics of virtualized solutions.
Characteristics of virtualization
Increased security
◦ The ability to control the execution of a guest in a completely transparent manner opens new possibilities for delivering a secure, controlled execution environment.
◦ The virtual machine represents an emulated environment in which the guest is executed.
◦ All the operations of the guest are generally performed against the virtual machine, which then translates and applies them to the host.
◦ This level of indirection allows the virtual machine manager to control and filter the activity of the guest, thus preventing some harmful operations from being performed.
◦ Resources exposed by the host can then be hidden or simply protected from the guest.
◦ Sensitive information contained in the host can be hidden without the need to install complex security policies.
◦ Increased security is a requirement when dealing with untrusted code. For example, applets downloaded from the Internet run in a sandboxed version of the Java Virtual Machine (JVM), which provides them with limited access to the hosting operating system resources.
Both the JVM and the .NET runtime provide extensive security policies for customizing the execution environment of applications.
VMware Desktop and VirtualBox provide the ability to create a virtual computer with customized virtual hardware, on top of which a new operating system can be installed.
By default, the file system exposed by the virtual computer is completely separated from the one of the host machine.
This becomes the perfect environment for running applications without affecting other users in the environment.
Managed execution
Wider range of features implemented.
Sharing, aggregation, emulation, and isolation are the most relevant features.
Sharing
Sharing means the creation of separate computing environments within the same host.
It provides different system images.
In this way it is possible to fully exploit the capabilities of a powerful host, which would otherwise be underutilized.
Aggregation
virtualization also allows aggregation, which is the opposite process.
◦ A group of separate hosts can be tied together and represented to guests as a single virtual host.
◦ This function is naturally implemented in middleware for distributed computing.
Emulation
Guest programs are executed within an environment that is controlled by the virtualization layer.
◦ Allows for controlling and tuning the environment
◦ For instance, a completely different environment with respect to the host can be emulated.
◦ Allows the execution of guest programs requiring specific characteristics that are not present in the physical host.
This feature becomes very useful for testing purposes, where a specific guest has to be validated against different platforms or architectures and the wide range of options is not easily accessible during development.
◦ Old and legacy software that does not meet the requirements of current systems can be run on emulated hardware without any need to change the code.
◦ This is possible either by emulating the required hardware architecture or within a specific operating system sandbox, such as the MS-DOS mode in Windows 95/98.
Isolation
Virtualization allows providing guests (whether operating systems, applications, or other entities) with a completely separate environment in which they are executed.
◦ The guest program performs its activity by interacting with an abstraction layer, which provides access to the underlying resources.
◦ Isolation brings several benefits;
▪ for example, it allows multiple guests to run on the same host without interfering with each other.
▪ Second, it provides a separation between the host and the guest.
▪ The virtual machine can filter the activity of the guest and prevent harmful operations against the host.
Performance tuning
◦ With virtualization, guests are represented by processes on the host machine; the processing power, memory, and other resources of the host are used to emulate the functions and capabilities of the guest's virtual hardware.
◦ However, guest hardware can be less effective at using the resources than the host.
◦ Therefore, adjusting the amount of allocated host resources may be needed for the guest to perform its tasks at the expected speed.
◦ In addition, various types of virtual hardware may have different levels of overhead
◦ so an appropriate virtual hardware configuration can have significant impact on guest performance.
◦ Finally, depending on the circumstances, specific configurations enable virtual machines to use host resources more efficiently.
◦ It becomes easier to control the performance of the guest by finely tuning the properties of the resources exposed through the virtual environment.
◦ This capability provides a means to effectively implement a quality- of-service (QoS) infrastructure that more easily fulfills the service- level agreement (SLA) established for the guest.
◦ For instance, software implementing hardware virtualization can expose to a guest operating system only a fraction of the host machine's memory or CPU.
◦ Another advantage of managed execution is that sometimes it allows easy capturing of the state of the guest program, persisting it, and resuming its execution.
◦ This, for example, allows virtual machine managers such as Xen Hypervisor to stop the execution of a guest operating system, move its virtual image into another machine, and resume its execution in a completely transparent manner. This technique is called virtual machine migration and constitutes an important feature in virtualized data centers for optimizing their efficiency in serving application demands.
Portability
The concept of portability applies in different ways according to the specific type of virtualization considered.
◦ In the case of a hardware virtualization solution, the guest is packaged into a virtual image that, in most cases, can be safely moved and executed on top of different virtual machines.
◦ In the case of programming-level virtualization, as implemented by the JVM or the .NET runtime, the binary code representing application components (jars or assemblies) can be run without any recompilation on any implementation of the corresponding virtual machine.
◦ This makes the application development cycle more flexible and application deployment very straightforward.
◦ One version of the application, in most cases, is able to run on different platforms with no changes. Finally, portability allows having your own system always with you and ready to use, as long as the required virtual machine manager is available.
Taxonomy of virtualization techniques
Virtualization covers a wide range of emulation techniques that are applied to different areas of computing.
A classification of these techniques helps us better understand their characteristics and use (see Figure).
◦ The first classification is based on the service or entity that is being emulated.
◦ Virtualization is mainly used to emulate execution environments, storage, and networks.
◦ Among these categories, execution virtualization constitutes the oldest, most popular, and most developed area.
◦ In particular we can divide these execution virtualization techniques into two major categories by considering the type of host they require.
• Process-level techniques are implemented on top of an existing operating system, which has full control of the hardware.
• System-level techniques are implemented directly on hardware and do not require—or require a minimum of support from—an existing operating system.
◦ Within these two categories we can list various techniques that offer the guest a different type of virtual computation environment: bare hardware, operating system resources, low-level programming language, and application libraries.
Execution virtualization
Execution virtualization includes all techniques that aim to emulate an execution environment.
All these techniques provide support for the execution of programs, whether these are an operating system, a binary specification of a program compiled against an abstract machine model, or an application.
• Therefore, execution virtualization can be implemented directly on top of the hardware by the operating system, an application, or libraries dynamically or statically linked to an application image.
Machine reference model
Virtualizing an execution environment at different levels of the computing stack requires a reference model that defines the interfaces between the levels of abstraction, which hide implementation details.
Modern computing systems can be expressed in terms of the reference model described in Figure.
• At the bottom layer, the model for the hardware is expressed in terms of the Instruction Set Architecture (ISA), which defines the instruction set for the processor, registers, memory, and interrupt management.
◦ ISA is the interface between hardware and software, and it is important to the operating system (OS) developer (System ISA) and developers of applications that directly manage the underlying hardware (User ISA).
◦ The application binary interface (ABI) separates the operating system layer from the applications and libraries, which are managed by the OS.
◦ ABI covers details such as low-level data types, alignment, and call conventions and defines a format for executable programs.
◦ System calls are defined at this level. This interface allows portability of applications and libraries across operating systems that implement the same ABI.
◦ The highest level of abstraction is represented by the application programming interface (API), which interfaces applications to libraries and/or the underlying operating system.
Operations:
For any operation to be performed at the application level, the API, ABI, and ISA are responsible for making it happen.
◦ The high-level abstraction is converted into machine-level instructions to perform the actual operations supported by the processor.
◦ The machine-level resources, such as processor registers and main memory capacities, are used to perform the operation at the hardware level.
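As a rough illustration of these levels (assuming a Linux host, where the C standard library is available as libc.so.6), the same "write some text" operation can be requested through a high-level API or through the C library routine that sits just above the ABI/system-call boundary; the kernel then drives the ISA-level instructions.

# Illustrative sketch of the layered reference model (Linux assumed).
import ctypes
import sys

# API level: the application talks to a high-level library (Python's I/O API).
print("hello via the high-level API")

# ABI/system-call level: call libc's write(), a thin wrapper over the
# write system call; the kernel executes the ISA-level instructions.
libc = ctypes.CDLL("libc.so.6")              # Linux-specific assumption
msg = b"hello via libc write()\n"
libc.write(sys.stdout.fileno(), msg, len(msg))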
◦ This layered approach simplifies the development and implementation of computing systems and of multiple execution environments.
◦ In fact, such a model not only requires limited knowledge of the entire computing stack, but it also provides ways to implement a minimal security model for managing and accessing shared resources
For this purpose, the instruction set exposed by the hardware has been divided into different security classes that define who can operate with them.
◦ The first distinction can be made between privileged and nonprivileged instructions.
◦ Nonprivileged instructions are those instructions that can be used without interfering with other tasks because they do not access shared resources. This category contains, for example, all the floating, fixed-point, and arithmetic instructions.
◦ Privileged instructions are those that are executed under specific restrictions and are mostly used for sensitive operations, which expose or modify the privileged state.
Questions
For answers refer to class note and “Mastering cloud computing by Rajkumar Buyya”
1. What is the innovative characteristic of cloud computing? Hints: No up-front commitments, On-demand access, Nice pricing, Simplified application acceleration and scalability, Efficient resource allocation, Energy efficiency, Seamless creation and use of third-party services
2. What are the major distributed computing technologies that led to cloud computing?
Hints: heterogeneity, openness, scalability, transparency, concurrency, continuous availability, and independent failures.
3. Describe the main characteristics of service-oriented computing and the concepts of SOA that are fundamental to cloud computing.
Hints: QoS, SaaS
4. Briefly summarize the Cloud Computing Reference Model. Hints: IaaS, PaaS, SaaS
5. What are the major benefits of cloud computing? Refer to class note.
6. Briefly summarize the challenges still open in cloud computing. Hints: Dynamic provisioning, security, confidentiality, protection of data, legal issues.
7. Describe NIST classification models for deploying and accessing cloud computing environments.
Hints: service models, deployment models (public cloud, private cloud, hybrid cloud, community cloud)
Terminal – I Cloud Computing
1. Describe the main characteristics of service-oriented computing and the concepts of SOA that are fundamental to cloud computing.
2. Describe NIST classification models for deploying and accessing cloud computing environments.
3. Briefly summarize the Cloud Computing Reference Model.
Amazon Web Services
◦ Amazon.com is one of the most important and heavily trafficked Web sites in the world. It provides a vast selection of products using an infrastructure based on Web services.
◦ As Amazon.com has grown, it has dramatically grown its infrastructure to accommodate peak traffic times.
◦ Over time the company has made its network resources available to partners and affiliates, which also has improved its range of products.
◦ Starting in 2006, Amazon.com made its Web service platform available to developers on a usage-basis model.
◦ Through hardware virtualization on Xen hypervisors, Amazon.com has made it possible to create private virtual servers that you can run worldwide.
▪ These servers can be provisioned with almost any kind of application software you might envisage, and they tap into a range of support services that not only make distributed cloud computing applications possible but also make them robust.
Amazon Web Services is based on SOA standards, including HTTP, REST, and SOAP transfer protocols, open source and commercial operating systems, application servers, and browser-based access.
Virtual private servers can provision virtual private clouds connected through virtual private networks providing for reasonable security and administrative control.
AWS has a great value proposition: You pay for what you use. While you may not save a great deal of money over time using AWS for enterprise-class Web applications, you avoid large up-front investments and pay only as your needs grow.
Understanding Amazon Web Services
◦ Amazon.com is the world’s largest online retailer. The company is a long way past selling books and records.
◦ Amazon.com offers the largest number of retail product SKUs through a large ecosystem of partnerships.
By any measure, Amazon.com is a huge business. To support this business, Amazon.com has built an enormous network of IT systems to support not only average but peak customer demands. Amazon Web Services (AWS) takes what is essentially unused infrastructure capacity on Amazon.com's network and turns it into a very profitable business.
( http://aws.amazon.com/ ).
◦ AWS is having an enormous impact on cloud computing.
◦ Indeed, Amazon.com’s services represent the largest pure Infrastructure as a Service (IAAS) play in the marketplace today.
◦ The structure of Amazon.com's Amazon Web Services (AWS) is therefore highly educational in understanding just how disruptive cloud computing can be to traditional fixed-asset IT deployments, how virtualization enables a flexible approach to system rightsizing, and how dispersed systems can impart reliability to mission-critical systems.
Amazon Web Service Components and Services
At the base of the solution stack are services that provide raw compute and raw storage: Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). These are the two most popular services, and EC2 is the largest component of Amazon's offerings.
◦ Amazon Elastic Compute Cloud (EC2; http://aws.amazon.com/ec2/ ), is the central application in the AWS portfolio.
◦ It enables the creation, use, and management of virtual private servers running the Linux or Windows operating system over a Xen hypervisor.
◦ Amazon Machine Instances are sized at various levels and rented on a computing hour basis.
◦ Spread over data centers worldwide, EC2 applications may be created that are highly scalable, redundant, and fault tolerant.
Amazon machine images
Amazon Machine Images (AMIs) are templates from which it is possible to create a virtual machine.
◦ They are stored in Amazon S3 and identified by a unique identifier in the form ami-xxxxxx and a manifest XML file.
◦ An AMI contains a physical file system layout with a predefined operating system installed. These are specified by the Amazon Ramdisk Image (ARI, id: ari-yyyyyy) and the Amazon Kernel Image (AKI, id: aki-zzzzzz), which are part of the configuration of the template.
◦ AMIs are either created from scratch or “bundled” from existing EC2 instances.
◦ A common practice for preparing new AMIs is to create an instance from a preexisting AMI, log into it once it is booted and running, and install all the software needed. Using the tools provided by Amazon, the instance can then be converted into a new image.
◦ Once an AMI is created, it is stored in an S3 bucket and the user can decide whether to make it available to other users or keep it for personal use.
◦ Finally, it is also possible to associate a product code with a given AMI, thus allowing the owner of the AMI to get revenue every time this AMI is used to create EC2 instances.
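A minimal sketch of this bundling step, using the boto3 SDK (the instance ID, AMI name, and region below are placeholders, not real identifiers), might look like this:

# Sketch: "bundling" a new AMI from an existing, already-configured EC2 instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # the customized running instance
    Name="my-web-server-template",           # name of the new AMI
    Description="Base image with our application stack pre-installed",
)
print("New AMI id:", response["ImageId"])    # e.g. ami-xxxxxx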
EC2 instances
◦ EC2 instances represent virtual machines. They are created using AMIs as templates, which are specialized by selecting the number of cores, their computing power, and the installed memory.
Computing Power:
The processing power is expressed in terms of virtual cores and EC2 Compute Units (ECUs).
The ECU is a measure of the computing power of a virtual core; it is used to express a predictable quantity of real CPU power that is allocated to an instance.
By using compute units instead of real frequency values, Amazon can change over time the mapping of such units to the underlying real amount of computing power allocated, thus keeping the performance of EC2 instances consistent with the standards set by the times.
◦ Over time, the hardware supporting the underlying infrastructure will be replaced by more powerful hardware, and the use of ECU helps give users a consistent view of the performance offered by EC2 instances.
◦ Since users rent computing capacity rather than buying hardware, this approach is reasonable.
The table shows all the currently available configurations. We can identify six major categories:
1. Standard instances. This class offers a set of configurations that are suitable for most applications. EC2 provides three different categories of increasing computing power, storage, and memory.
2. Micro instances. This class is suitable for those applications that consume a limited amount of computing power and memory. Micro instances can be used for small Web applications with limited traffic.
3. High-memory instances. This class targets applications that need to process huge workloads and require large amounts of memory. Three-tier Web applications characterized by high traffic are the target profile.
4. High-CPU instances. This class targets compute-intensive applications. Two configurations are available where computing power proportionally increases more than memory.
5. Cluster Compute instances. This class is used to provide virtual cluster services. Instances in this category are characterized by high CPU compute power and large memory and an extremely high I/O and network performance, which makes it suitable for HPC applications.
6. Cluster GPU instances. This class provides instances featuring graphic processing units (GPUs) and high compute power, large memory, and extremely high I/O and network performance. This class is particularly suited for cluster applications that perform heavy graphic computations, such as rendering clusters. Since GPU can be used for general-purpose computing, users of such instances can benefit from additional computing power, which makes this class suitable for HPC applications.
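A minimal sketch of creating an instance from an AMI with the boto3 SDK, choosing one of the categories above through the instance type (the AMI ID, instance type, and key-pair name are placeholders), could look like this:

# Sketch: launching an EC2 instance from an AMI, selecting its size via InstanceType.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # template the instance is created from
    InstanceType="t2.micro",           # e.g. a micro instance, per the categories above
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair for remote access
)
print("Instance id:", result["Instances"][0]["InstanceId"])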
EC2 environment
EC2 instances are executed within a virtual environment, which provides them with the services they require to host applications.
◦ The EC2 environment is in charge of allocating addresses, attaching storage volumes, and configuring security in terms of access control and network connectivity.
◦ By default, instances are created with an internal IP address, which makes them capable of communicating within the EC2 network and accessing the Internet as clients.
◦ It is possible to associate an Elastic IP to each instance, which can then be remapped to a different instance over time.
◦ Elastic IPs allow instances running in EC2 to act as servers
◦ Together with an external IP, EC2 instances are also given a domain name that generally is in the form ec2-xxx-xxx-xxx.compute-x.amazonaws.com.
◦ Currently, there are five availability zones that are priced differently: two in the United States (Virginia and Northern California), one in Europe (Ireland), and two in Asia Pacific (Singapore and Tokyo).
◦ Instance owners can only partially control where to deploy instances. In contrast, they have finer control over the security of the instances as well as their network accessibility.
◦ Instance owners can associate a key pair to one or more instances when these instances are created. A key pair allows the owner to remotely connect to the instance once this is running and gain root access to it.
◦ Amazon EC2 controls the accessibility of a virtual instance with basic firewall configuration, allowing the specification of source address, port, and protocols (TCP, UDP, ICMP).
◦ Rules can also be attached to security groups, and instances can be made part of one or more groups before their deployment.
◦ Security groups and firewall rules constitute a flexible way of providing basic security for EC2 instances, which has to be complemented by appropriate security configuration within the instance itself.
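The key-pair, firewall-rule, and Elastic IP setup described above can be sketched with boto3 as follows (the key name, group name, instance ID, and region are placeholders):

# Sketch of the basic security and networking setup for an EC2 instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Key pair: lets the owner connect remotely to the instance with root access.
key = ec2.create_key_pair(KeyName="my-key-pair")

# Security group with a firewall rule: allow inbound SSH (TCP port 22).
sg = ec2.create_security_group(GroupName="web-sg", Description="basic firewall")
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Elastic IP: a stable public address that can later be remapped to another instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId="i-0123456789abcdef0",
                      AllocationId=eip["AllocationId"])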
Advanced compute services
EC2 instances and AMIs constitute the basic building blocks of an IaaS computing cloud.
On top of these, Amazon Web Services provide more sophisticated services that allow the easy packaging and deploying of applications and a computing platform that supports the execution of MapReduce- based applications.
AWS CloudFormation
AWS CloudFormation constitutes an extension of the simple deployment model that characterizes EC2 instances.
◦ CloudFormation introduces the concept of templates, which are JSON-formatted text files that describe the resources needed to run an application or a service in EC2, together with the relations between those resources.
◦ CloudFormation allows easily and explicitly linking EC2 instances together and introducing dependencies among them.
◦ Templates provide a simple and declarative way to build complex systems and integrate EC2 instances with other AWS services such as S3, SimpleDB, SQS, SNS, Route 53, Elastic Beanstalk, and others.
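A minimal template, expressed here as a Python dictionary and submitted with boto3 (the stack name, AMI ID, and instance type are placeholders), might look like this:

# Sketch: a tiny CloudFormation template describing one EC2 instance declaratively.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "One EC2 instance described declaratively",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
                "InstanceType": "t2.micro",
            },
        }
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))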
AWS elastic beanstalk
▪ AWS Elastic Beanstalk constitutes a simple and easy way to package applications and deploy them on the AWS Cloud.
▪ This service simplifies the process of provisioning instances and deploying application code and provides appropriate access to them.
◦ Currently, this service is available only for Web applications developed with the Java/Tomcat technology stack.
◦ Developers can conveniently package their Web application into a WAR file and use Beanstalk to automate its deployment on the AWS Cloud.
◦ With respect to other solutions that automate cloud deployment, Beanstalk simplifies tedious tasks without removing the user’s capability of accessing—and taking over control of—the underlying EC2 instances that make up the virtual infrastructure on top of which the application is running.
◦ With respect to AWS CloudFormation, AWS Elastic Beanstalk provides a higher-level approach for application deployment on the cloud, which does not require the user to specify the infrastructure in terms of EC2 instances and their dependencies.
Amazon elastic MapReduce
◦ MapReduce is a processing technique and a programming model for distributed computing; the Hadoop implementation is based on Java.
◦ The MapReduce algorithm contains two important tasks, namely Map and Reduce.
◦ Map takes a set of data and converts it into another set of data, where individual elements are broken down into tuples (key/value pairs).
◦ Secondly, reduce task, which takes the output from a map as an input and combines those data tuples into a smaller set of tuples.
◦ As the sequence of the name MapReduce implies, the reduce task is always performed after the map job.
◦ The major advantage of MapReduce is that it is easy to scale data processing over multiple computing nodes.
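The two phases can be sketched locally in Python with the canonical word-count example; a real Hadoop/Elastic MapReduce job would distribute these same functions across many nodes, but the logic is the same.

# Sketch of the map and reduce phases (word count), run locally for illustration.
from collections import defaultdict

def map_phase(document: str):
    """Map: break the input into (key, value) tuples, here (word, 1)."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: combine the tuples that share a key into a smaller set."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

documents = ["the cloud delivers computing as a utility",
             "the cloud scales on demand"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(intermediate))   # e.g. {'the': 2, 'cloud': 2, ...}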
▪ Amazon Elastic MapReduce provides AWS users with a cloud computing platform for MapReduce applications.
▪ It utilizes Hadoop as the MapReduce engine, deployed on a virtual infrastructure composed of EC2 instances, and uses Amazon S3 for storage needs.
▪ The Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.
▪ It is designed to scale up from single servers to thousands of machines.
▪ Elastic MapReduce introduces elasticity and allows users to dynamically size the Hadoop cluster according to their needs.
▪ Selects the appropriate configuration of EC2 instances to compose the cluster (Small, High-Memory, High-CPU, Cluster Compute, and Cluster GPU).
▪ On top of these services, basic Web applications allowing users to quickly run data-intensive applications without writing code are offered.
Storage services
▪ AWS provides a collection of services for data storage and information management.
▪ The core service in this area is represented by Amazon Simple Storage Service (S3).
▪ This is a distributed object store that allows users to store information in different formats.
▪ The core components of S3 are two: buckets and objects.
▪ Buckets represent virtual containers in which to store objects; objects represent the content that is actually stored.
▪ Objects can also be enriched with metadata that can be used to tag the stored content with additional information.
S3 key concepts
As the name suggests, S3 has been designed to provide a simple storage service that is accessible through a Representational State Transfer (REST) interface.
It is quite similar to a distributed file system but presents some important differences that allow the infrastructure to be highly efficient:
The storage is organized in a two-level hierarchy
• S3 organizes its storage space into buckets that cannot be further partitioned.
• So it is not possible to create directories or other kinds of physical groupings for objects stored in a bucket.
• Stored objects cannot be manipulated like standard files.
• S3 has been designed to essentially provide storage for objects that will not change over time.
• Therefore, it does not allow renaming, modifying, or relocating an object.
• Once an object has been added to a bucket, its content and position are immutable.
• The only way to change it is to remove the object from the store and add it again.
• Content is not immediately available to users.
• The main design goal of S3 is to provide an eventually consistent data store.
• As a result, because it is a large distributed storage facility, changes are not immediately reflected.
• For instance, S3 uses replication to provide redundancy and efficiently serve objects across the globe.
• This practice introduces latencies when adding objects to the store—especially large ones— which are not available instantly across the entire globe.
Requests will occasionally fail.
• Due to the large distributed infrastructure being managed, requests for an object may occasionally fail.
• Under certain conditions, S3 can decide to drop a request by returning an internal server error.
• Therefore, a small failure rate is to be expected during day-to-day operations; such failures are generally transient and not indicative of a persistent problem.
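Client code should therefore be prepared to retry failed requests. Below is a minimal sketch using boto3's built-in retry configuration; the parameter values and object names are illustrative assumptions.

    # retry_sketch.py -- illustrative retry configuration for transient
    # S3 failures using boto3's standard retry mode; the values shown
    # are assumptions, not recommendations.
    import boto3
    from botocore.config import Config

    retry_config = Config(retries={"max_attempts": 5, "mode": "standard"})
    s3 = boto3.client("s3", config=retry_config)

    # Transient errors (e.g., internal server errors or throttling) are
    # now retried transparently before the call is reported as failed.
    obj = s3.get_object(Bucket="my-example-notes-bucket", Key="lectures/cloud-notes.txt")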
Communication services
Amazon provides facilities to structure and facilitate the communication among existing applications and services residing within the AWS infrastructure.
These facilities can be organized into two major categories: virtual networking and messaging.
Virtual networking
Virtual networking comprises a collection of services that allow AWS users to control the connectivity to and between compute and storage services. Amazon Virtual Private Cloud (VPC) and Amazon Direct Connect provide connectivity solutions in terms of infrastructure.
Amazon Virtual Private Cloud (VPC)
◦ Amazon VPC provides a great degree of flexibility in creating virtual private networks within the Amazon infrastructure
◦ Amazon provides templates covering the most common scenarios as well as a fully customizable network service for advanced configurations (a minimal sketch of creating a custom VPC follows below).
Prepared templates include public subnets, isolated networks, private networks accessing Internet through network address translation (NAT), and hybrid networks including AWS resources and private resources.
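A minimal sketch of creating a custom VPC with one public subnet through the API, assuming the boto3 SDK; the CIDR ranges are hypothetical examples.

    # vpc_sketch.py -- illustrative creation of a custom virtual network
    # with one public subnet; CIDR ranges are hypothetical examples.
    import boto3

    ec2 = boto3.client("ec2")

    # Create the virtual private network.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Carve a subnet out of the VPC address range.
    subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

    # Attach an Internet gateway so the subnet can reach the Internet
    # (the "public subnet" template scenario mentioned above).
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)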
Amazon Direct Connect
◦ Direct Connect allows AWS users to create dedicated network connections between their private network and Amazon Direct Connect locations, called ports.
◦ This connection can be further partitioned into multiple logical connections, giving access to the public resources hosted on the Amazon infrastructure.
◦ The advantage of using Direct Connect versus other solutions is the consistent performance of the connection between the users’ premises and the Direct Connect locations.
◦ This service is compatible with other services such as EC2, S3, and Amazon VPC and can be used in scenarios requiring high bandwidth between the Amazon network and the outside world.
Messaging
Messaging services constitute the next step in connecting applications by leveraging AWS capabilities. The three different types of messaging services offered are Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and Amazon Simple Email Service (SES).
Simple Queue Service (SQS)
◦ Amazon SQS constitutes the model for exchanging messages between applications by means of message queues, hosted within the AWS infrastructure.
◦ Using the AWS console or the underlying Web service APIs directly, users can create an unlimited number of message queues and configure them to control access.
◦ Applications can send messages to any queue to which they have access.
◦ These messages are securely and redundantly stored within the AWS infrastructure for a limited period of time, and they can be accessed by other (authorized) applications.
◦ While a message is being read, it is kept locked (hidden from other consumers) to avoid duplicate processing by other applications.
◦ Such a lock, the visibility timeout, expires after a given period (see the sketch after this list).
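A minimal send/receive sketch with the boto3 SDK, illustrating the visibility-timeout lock described above; the queue name and message body are hypothetical examples.

    # sqs_sketch.py -- illustrative send/receive cycle on a message queue.
    import boto3

    sqs = boto3.client("sqs")

    queue_url = sqs.create_queue(QueueName="notes-demo-queue")["QueueUrl"]

    # Producer side: send a message to the queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody="process-lecture-upload")

    # Consumer side: receive it. VisibilityTimeout keeps the message locked
    # (hidden from other consumers) for 30 seconds while it is processed.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, VisibilityTimeout=30)
    for msg in resp.get("Messages", []):
        print(msg["Body"])
        # Delete the message once processed, before the lock expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])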
Amazon Simple Notification Service (SNS)
◦ Amazon SNS provides a publish-subscribe method for connecting heterogeneous applications.
◦ Unlike Amazon SQS, where it is necessary to continuously poll a given queue for new messages to process, Amazon SNS allows applications to be notified when new content of interest is available.
◦ This feature is accessible through a Web service whereby AWS users can create a topic, which other applications can subscribe to.
◦ At any time, applications can publish content on a given topic and subscribers can be automatically notified.
◦ The service provides subscribers with different notification models (HTTP/HTTPS, email/email JSON, and SQS).
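A minimal publish/subscribe sketch with the boto3 SDK; the topic name and subscriber address are hypothetical examples.

    # sns_sketch.py -- illustrative publish/subscribe flow.
    import boto3

    sns = boto3.client("sns")

    # Create a topic that other applications can subscribe to.
    topic_arn = sns.create_topic(Name="notes-demo-topic")["TopicArn"]

    # Subscribe an endpoint (email here; HTTP/HTTPS and SQS also work).
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="student@example.com")

    # Publish content on the topic; all subscribers are notified automatically.
    sns.publish(TopicArn=topic_arn, Subject="New content", Message="A new lecture note is available.")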
Amazon Simple Email Service (SES)
◦ Amazon SES provides AWS users with a scalable email service that leverages the AWS infrastructure.
◦ Once users have signed up for the service, they have to provide an email address that SES will use to send emails on their behalf.
◦ To activate the service, SES will send an email to verify the given address and provide the users with the necessary information for the activation.
◦ Upon verification, the user is given access to an SES sandbox for testing the service and can then request access to the production version.
◦ Using SES, it is possible to send either SMTP-compliant emails or raw emails by specifying email headers and Multipurpose Internet Mail Extension (MIME) types.
◦ Emails are queued for delivery, and users are notified of any failed delivery. SES also provides a wide range of statistics that help users improve their email campaigns for effective communication with customers.
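A minimal sketch of sending an email through SES with the boto3 SDK, assuming the sender address has already been verified; all addresses are hypothetical.

    # ses_sketch.py -- illustrative email send through SES; the Source
    # address must already have been verified with the service.
    import boto3

    ses = boto3.client("ses")

    ses.send_email(
        Source="instructor@example.com",
        Destination={"ToAddresses": ["student@example.com"]},
        Message={
            "Subject": {"Data": "Assignment reminder"},
            "Body": {"Text": {"Data": "Please submit before 9 AM."}},
        },
    )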
◦ Google AppEngine
◦ Architecture and core concepts
▪ Infrastructure
▪ Runtime environments
▪ Storage
Google AppEngine
Google AppEngine is a PaaS implementation that provides services for developing and hosting scalable Web applications.
◦ AppEngine is essentially a distributed and scalable runtime environment that leverages Google’s distributed infrastructure to scale out applications facing a large number of requests by allocating more computing resources to them and balancing the load among them.
◦ The runtime is completed by a collection of services that allow developers to design and implement applications that naturally scale on AppEngine.
◦ Developers can develop applications in Java, Python, and Go, a new programming language developed by Google to simplify the development of Web applications.
◦ Application usage of Google resources and services is metered by AppEngine, which bills users once their applications exceed the free quotas.
Architecture and core concepts
AppEngine is a platform for developing scalable applications accessible through the Web.
The platform is logically divided into four major components: infrastructure, the runtime environment, the underlying storage, and the set of scalable services that can be used to develop applications.
Infrastructure
◦ AppEngine hosts Web applications, and its primary function is to serve user requests efficiently.
◦ AppEngine’s infrastructure takes advantage of many servers available within Google datacenters.
◦ For each HTTP request, AppEngine locates the servers hosting the application that processes the request, evaluates their load, and, if necessary, allocates additional resources or redirects the request to an existing server.
Runtime environment
◦ The runtime environment represents the execution context of applications hosted on AppEngine.
◦ Unlike the AppEngine infrastructure code, which is always active and running, the runtime comes into existence when the request handler starts executing and terminates once the handler has completed.
Sandboxing
◦ One of the major responsibilities of the runtime environment is to provide the application environment with an isolated and protected context in which it can execute without causing a threat to the server and without being influenced by other applications.
◦ In other words, it provides applications with a sandbox.
◦ Currently, AppEngine supports applications that are developed only with managed or interpreted languages, which by design require a runtime for translating their code into executable instructions.
◦ Therefore, sandboxing is achieved by means of modified runtimes for applications that disable some of the common features normally available with their default implementations.
◦ If an application tries to perform any operation that is considered potentially harmful, an exception is thrown and the execution is interrupted.
◦ Operations that are not allowed in the sandbox include writing to the server’s file system and opening arbitrary network connections; applications can access the network only through services such as Mail, URL Fetch, and XMPP (a short URL Fetch sketch follows).
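A minimal sketch of outbound network access through the URL Fetch service in the Python runtime; the URL is an illustrative example, not from the notes.

    # urlfetch_sketch.py -- the sandbox blocks raw sockets, so outbound
    # HTTP requests go through the URL Fetch service instead.
    from google.appengine.api import urlfetch

    result = urlfetch.fetch("http://www.example.com/")
    if result.status_code == 200:
        # result.content holds the response body as a byte string.
        print(result.content)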
Supported runtimes
Currently, it is possible to develop AppEngine applications using three different languages and related technologies: Java, Python, and Go.
AppEngine currently supports Java 6, and developers can use common tools for Web application development in Java, such as Java Server Pages (JSP); applications interact with the environment by using the Java Servlet standard.
Furthermore, access to AppEngine services is provided by means of Java libraries.
Developers can create applications with the AppEngine Java SDK, which allows developing applications using any Java library that does not exceed the restrictions imposed by the sandbox.
Support for Python is provided by an optimized Python 2.5.2 interpreter.
As with Java, the runtime environment supports the Python standard library, but some modules that implement potentially harmful operations have been removed, and attempts to import such modules or to call specific methods generate exceptions.
To support application development, AppEngine offers a rich set of libraries connecting applications to AppEngine services. In addition, developers can use a specific Python Web application framework, called webapp, simplifying the development of Web applications.
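For illustration, here is a minimal "hello world" handler in the style of the webapp framework just mentioned; the handler and file names are illustrative assumptions.

    # helloworld.py -- a minimal request handler for the legacy Python
    # runtime, mapped to the root URL of the application.
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            self.response.headers["Content-Type"] = "text/plain"
            self.response.out.write("Hello, AppEngine!")

    application = webapp.WSGIApplication([("/", MainPage)], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == "__main__":
        main()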
The Go runtime environment allows applications developed with the Go programming language to be hosted and executed in AppEngine.
The SDK includes the compiler and the standard libraries for developing applications in Go and interfacing it with AppEngine services.
As with the Python environment, some of the functionalities have been removed or generate a runtime exception.
Developers can include third-party libraries in their applications as long as they are implemented in pure Go.
Storage
AppEngine provides various types of storage, which operate differently depending on the volatility of the data.
There are three different levels of storage:
an in-memory cache, storage for semistructured data, and long-term storage for static data.
Static file servers
Web applications are composed of dynamic and static data.
Dynamic data are a result of the logic of the application and the interaction with the user.
Static data mostly consist of the components that define the graphical layout of the application (CSS files, plain HTML files, JavaScript files, images, icons, and sound files) or data files.
These files can be hosted on static file servers, since they are not frequently modified.
Such servers are optimized for serving static content, and users can specify how dynamic content should be served when uploading their applications to AppEngine.
DataStore
DataStore is a service that allows developers to store semistructured data. The service is designed to scale and is optimized for fast data access.
DataStore can be considered as a large object database in which to store objects that can be retrieved by a specified key.
Compared with traditional Web applications backed by a relational database, DataStore imposes fewer constraints on the regularity of the data but, at the same time, does not implement some features of the relational model (such as reference constraints and join operations).
These design decisions originated from a careful analysis of data usage patterns for Web applications and were taken in order to obtain a more scalable and efficient data store.
DataStore provides high-level abstractions that simplify interaction with Bigtable.
◦ Developers define their data in terms of entity and properties, and these are persisted and maintained by the service into tables in Bigtable.
◦ Each entity is associated with a key, which is either provided by the user or created automatically by AppEngine.
◦ An entity is associated with a name that AppEngine uses to optimize its retrieval from Bigtable.
◦ Although entities and properties seem to be similar to rows and tables in SQL, there are a few differences that have to be taken into account.
◦ Entities of the same kind might not have the same properties, and properties of the same name might contain values of different types.
◦ Moreover, properties can store different versions of the same values.
◦ Finally, keys are immutable elements and, once created, they cannot be changed.
◦ DataStore also provides facilities for creating indexes on data and to update data within the context of a transaction.
◦ Indexes are used to support and speed up queries.
◦ A query can return zero or more objects of the same kind or simply the corresponding keys.
◦ Returned result sets can be sorted by key value or by property values.
◦ Even though the queries are quite similar to SQL queries, their implementation is substantially different.
◦ DataStore has been designed to be extremely fast in returning result sets;
◦ To do so it needs to know in advance all the possible queries that can be done for a given kind, because it stores for each of them a separate index.
◦ The indexes are provided by the user when uploading the application to AppEngine and can be automatically generated by the development server (a minimal DataStore sketch follows this list).
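A minimal sketch of defining an entity kind, persisting an entity, and querying it with the legacy Python db API; the model and property names are hypothetical examples.

    # datastore_sketch.py -- entities and properties persisted by DataStore
    # on top of Bigtable, using the legacy Python db API.
    from google.appengine.ext import db

    class Greeting(db.Model):
        author = db.StringProperty()
        content = db.StringProperty()
        date = db.DateTimeProperty(auto_now_add=True)

    # Create and persist an entity; the key is generated automatically.
    greeting = Greeting(author="student", content="Hello, DataStore")
    greeting.put()

    # Query entities of the same kind, sorted by a property value
    # (served by one of the pre-built indexes discussed above).
    latest = Greeting.all().order("-date").fetch(10)
    for g in latest:
        print(g.author, g.content)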
In today's class, go through this material.
Submit to cseaigit.1923@blogger.com today before 9 AM.
Discuss the architecture and core concepts of Google AppEngine (in fewer than 10 lines).
Microsoft has a very extensive cloud computing portfolio under active development. Efforts to extend Microsoft products and third-party applications into the cloud are centered around adding more capabilities to existing Microsoft tools. Microsoft’s approach is to view cloud applications as software plus service. In this model, the cloud is another platform and applications can run locally and access cloud services or run entirely in the cloud and be accessed by browsers using standard Service Oriented Architecture (SOA) protocols.
Microsoft calls their cloud operating system the Windows Azure Platform. You can think of Azure as a combination of virtualized infrastructure to which the .NET Framework has been added as a set of .NET Services. The Windows Azure service itself is a hosted environment of virtual machines enabled by a fabric called Windows Azure AppFabric. You can host your application on Azure and provision it with storage, growing it as you need it. Windows Azure service is an Infrastructure as a Service offering.
A number of services interoperate with Windows Azure, including SQL
Azure (a version of SQL Server), SharePoint Services, Azure Dynamic CRM, and many of Windows Live Services comprising what is the Windows Azure Platform, which is a Platform as a Service cloud computing model.
Eventually, many more services will be added, encompassing the whole range of Microsoft’s offerings. This architecture positions Microsoft to either extend its products into the Web or to license its products, whichever way the cloud computing marketplace develops. From Microsoft’s position and that of its developers, Windows Azure has lots of advantages.
Windows Live Services is a collection of applications and services that run on the Web. Some of these applications, called Windows Live Essentials, are add-ons to Windows and downloadable as applications. Other Windows Live Services are standalone Web applications viewable in a browser. An important subset of these Windows Live Services is available to Windows Azure applications through the Windows Live Messenger Connect API. A set of Windows Live for Mobile applications also exists. These applications and services are more fully described in this chapter.
Exploring Microsoft Cloud Services
Microsoft CEO Steve Ballmer recently said at a University of Washington speech that Microsoft was “betting our company” on the cloud. Ballmer also claimed that about 70 percent of Microsoft employees were currently working on cloud-related projects and that the number was expected to rise to about 90 percent within a year. Plans to integrate cloud-based applications and services into the Microsoft product portfolio dominate the thinking at Microsoft and play a central role in the company’s ongoing product development. The starting place for Microsoft’s cloud computing efforts may be found at Microsoft.com/cloud, shown in Figure 10.1.
Microsoft has a vast array of cloud computing products and initiatives, and a number of industry-leading Web applications. Although services like America Online Instant Messenger (AIM) may garner mindshare in the United States, surprisingly Microsoft Messenger is the market leader in many other countries. Product by product in any category you can name—calendars, event managers, photo galleries, image editors, movie making, and so on—Microsoft has a Web application for it. Some of these products are also-rans, some are good, some are category leaders, and a few of them are really unique. What is also true is that Web apps are under very active development.
Microsoft sees its on-line application portfolio as a way of extending its desktop applications to
make the company pervasive and to extend its products’ lives well into the future. Going forward,
Microsoft sees its future as providing the best Web experience for any type of
device, which means that it structures its development environment so the application alters its behavior depending upon the device. For a mobile device, that would mean adjusting the user interface to accommodate the small screen, while for a PC the Web application would take advantage of the PC hardware to accelerate the application and add richer graphics and other features. That means Microsoft is pushing cloud development in terms of applications serving as both a service and an application. This duality—like light, both a particle and a wave—manifests itself in the way Microsoft is currently structuring its Windows Live Web products. Eventually, the company intends to create a Microsoft app store to sell cloud applications to users.
Microsoft Live is only one part of the Microsoft cloud strategy. The second part of the strategy is the extension of the .NET Framework and related development tools to the cloud. To enable .NET developers to extend their applications into the cloud, or to build .NET style applications that run completely in the cloud, Microsoft has created a set of .NET services, which it now refers to as the Windows Azure Platform. .NET Services itself had as its origin the work Microsoft did to create its
BizTalk products. Azure and its related services were built to allow developers to extend their applications into the
cloud. Azure is a virtualized infrastructure on which a set of additional enterprise services has been layered, including:
• A virtualization service called Azure AppFabric that creates an application hosting environment. AppFabric (formerly .NET Services) is a cloud-enabled version of the .NET Framework.
• A high-capacity non-relational storage facility called Storage.
• A set of virtual machine instances called Compute.
• A cloud-enabled version of SQL Server called SQL Azure Database.
• A database marketplace based on SQL Azure Database, code-named “Dallas.”
• An xRM (Anything Relations Management) service called Dynamics CRM, based on Microsoft Dynamics.
• A document and collaboration service based on SharePoint called SharePoint Services.
• Windows Live Services, a collection of services that runs on Windows Live, which can be used in applications that run in the Azure cloud.
Go through chapter 9 of this material and send your answers to cseaigit@gmail.com: https://drive.google.com/file/d/1yNTd3SzZ8x8UiMv2DpO6-9ssvOEECtxi/view?usp=sharing
1. What is Windows Azure?
2. Describe the architecture of Windows Azure.
3. Discuss the storage services provided by Windows Azure.
4. What is SQL Azure?
5. Illustrate the architecture of SQL Azure.
Next: go through this material on cloud computing security: https://drive.google.com/file/d/1lgDaFHR9eTzarjVaLT53sd2WjEQL-1gg/view?usp=sharing. This will be covered in the online class.