
1.1 EXIN Cloud Computing Foundation

Introduction: Hello and welcome to the Cloud Computing Foundation course by SimpliLearn. This course is designed for participants who want to successfully complete the EXIN Cloud Computing Foundation certification. These slides contain all the basic material required to prepare students for the EXIN Cloud Computing Foundation exam. Let us get started with this course in the next slide.

1.2 Welcome to the Basic Training Material

Welcome to the Basic Training Material: These slides contain the basic presentation material to prepare students for the EXIN Cloud Computing Foundation examination. The slides may be used in a Foundation training and as a basis for an accredited training. A good training will always require extra examples, elaboration of subjects of special interest to the audience, and a good time schedule, including break-out sessions. The order in which the Foundation subjects are presented follows the order of the exam requirements, which is not necessarily the order of a good training course. Let us proceed to the next slide and look at the agenda of this course.

1.3 Agenda

Agenda: We will start off with the Principles of Cloud Computing, where we will look at the concepts, definitions, characteristics, and the deployment and service models of cloud computing. We will also cover the evolution of the cloud, what virtualization is, cloud architectures, and the benefits of the cloud. This will be followed by Implementing and Managing Cloud Computing, where we will look at how to build a local cloud environment, the principles of managing the cloud, and governance. After that we will look at Using the Cloud: the various ways to access a cloud, thin clients, how the cloud can support business processes, service providers using the cloud, and so on. Then we will go through Security and Compliance. This is a major topic that concerns everyone; here we shall look into the various security measures and risks along with the legal and regulatory aspects, and also deal with identity management in the cloud. Lastly, we will learn about the Evaluation of Cloud Computing, where we shall look at the business case, evaluating implementations, and what to check before deciding to move to a cloud. Let us get introduced to the course now.

1.4 Introduction

Introduction: So let's proceed and look at the objectives of this course.

1.5 Course objectives

Course Objectives:
• Principles of cloud computing
• Implementing and managing the cloud
• Using the cloud
• Security and compliance
• Evaluation of cloud computing: the business case
We start by looking at an overview of cloud computing in the next slide.

1.6 Overview Cloud Computing

Overview of Cloud Computing: This diagram shows us the methodology of an effective working cloud environment. The most important thing to note is that all these factors are entirely co-dependent and have to work in tandem for us to have a functioning, reliable cloud platform. As the diagram illustrates, we have the cloud computing infrastructure, which we shall first evaluate. Evaluation is a very important aspect of moving to a cloud: first, we need to find out whether everything we need is available and functioning properly. We need to use it to really know it, and to be sure that everything is compatible with what we already have in the organization. Managing the cloud is another important aspect, as it should be easy and the learning curve should not be steep. The employees within the organization should be able to adjust to the environment and feel comfortable with it; training the employees on the new platform should not become an uphill task. We should also verify the support offered by the service provider and make sure that they are able to support the needs of the organization promptly, since delays could cost the organization valuable time and money. Once we are sure of all these aspects, it is time to implement the cloud within the organization and reap the benefits that this technology has to offer. Next we will discuss the principles of cloud computing.

1.7 The Principles of Cloud Computing

The Principles of Cloud Computing: Here, let's look at the concepts of what makes a cloud, and at the definitions of the cloud and its related aspects. We will then look at the various deployment and service models and what they mean. This will give you a clear picture of what exactly you need from a cloud. Contents: This slide gives you an overview of what we will be looking at in this module.

1.8 The Concept of Cloud Computing

The Concept of Cloud Computing: Let us now see the concepts of cloud computing. Here we will study the definitions and main characteristics of the cloud. Let us look at the diagram in the next slide, which gives an overview of the concept of cloud computing.

1.9 Overview of the Concept of Cloud Computing

Overview of the concept of cloud computing: In this figure we see that cloud computing offers a service that can be accessed through the internet. The service itself is both shared and scalable, and the most important factor is that all of this is based on virtualization. In the next slide we get to learn the definitions of the cloud.

1.10 Definitions

Definitions: There is no single definition of the cloud. Also, all the definitions you will come across are inclusive, not definitive, i.e. they try to include the scope of the cloud and the components involved. It would actually be very difficult to come up with an exact definition of a cloud, as it has to be inclusive. Here we look at two definitions of the cloud, one by Encyclopaedia Britannica and the other by NIST (the National Institute of Standards and Technology, USA). "Cloud computing, method of running application software and storing related data in central computer systems and providing customers or other users access to them through the Internet." - Encyclopaedia Britannica (eb.com, 2012). If we look at this definition, it elaborates more on the functioning of the cloud than on its components. "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." - NIST Special Publication 800-145 (September 2011). (Ubiquitous: existing or being everywhere.) This definition elaborates not only on the functioning of the cloud but also includes a few of the components that comprise a cloud. Let us now look at the five characteristics of a cloud.

1.11 Five Characteristics

Five Characteristics: This slide lists the five very important characteristics of a cloud. These five characteristics need to exist for the model to be called a cloud:
• On-demand self-service
• Resource pooling (multi-tenancy)
• Rapid elasticity (flexibility, scalability)
• Measured service (pay-per-use)
• Broad network access ("any time, any place, any device")
Let us get into the details of these five characteristics now.
On-Demand Self-Service: This characteristic means that users can, by themselves, whenever they want, avail of a service, or upgrade or downgrade the service at their convenience. The users are in total control: they can get anything and everything possible, whenever and wherever they want it.
Resource Pooling: Here we can look at two different aspects. The cloud is nothing but a pooling of resources with virtualized platforms created on top of it. Thus resource pooling is a very important part of a cloud, as without it the cloud would not exist. Multi-tenancy also plays a huge role in the cloud. Multi-tenancy is where there is a single instance of an application and multiple people can log in and use the application as per their requirements. The best example of multi-tenancy is Gmail, where there is a single application running worldwide on Google's servers and people log in through its interface to connect to their individual accounts.
Rapid Elasticity: This is a very important feature of the cloud: it offers scalability. In a physical infrastructure environment, if there were a sudden unexpected spike in the workload, how much time would it take to prepare servers and introduce them into the network to handle the spike? It would be practically impossible to manage this situation. In a cloud environment, however, you can keep copies (snapshots) of your servers, and when an unexpected spike happens you can simply add virtual servers to the virtual environment and delete them as the workload lowers. This can be done in minutes.
Measured Service: A cloud always offers a service, and this service is always pay-as-you-use. This is one of the concepts on which the cloud is designed. For example, even if you have created just one virtual machine on AWS (Amazon Web Services), you will be charged only for the time the machine is running and not when it is switched off. If you use it 4 hours in a day, you will be charged for 4 hours and not 24 hours; a small sketch of this pay-per-use arithmetic follows at the end of this slide.
Broad Network Access: An internet connection is always required where a cloud is concerned. With an internet connection you can access your cloud or any of its services from anywhere in the world at any time. In the next slide, we will discuss when and how IT becomes a utility.
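To make the measured-service idea concrete, here is a minimal Python sketch of the pay-per-use arithmetic described above. The hourly rate is a made-up figure for illustration, not an actual AWS price.

```python
# A minimal sketch of the "measured service" idea: you pay only for the
# hours a virtual machine actually runs. The hourly rate is hypothetical.

HOURLY_RATE = 0.10  # hypothetical price in USD per running hour

def monthly_bill(hours_running_per_day: float, days: int = 30) -> float:
    """Cost of a VM that is billed only while it is switched on."""
    return hours_running_per_day * days * HOURLY_RATE

# Running a VM 4 hours a day instead of 24 cuts the bill proportionally.
print(monthly_bill(4))   # 12.0 -> pay for 4 hours/day
print(monthly_bill(24))  # 72.0 -> pay for 24 hours/day
```

The point is simply that the bill tracks actual usage: switch the machine off and the meter stops.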

1.12 IT becomes a utility

IT becomes a utility: "There was a time when every household, town, farm or village had its own water well. Today, shared public utilities give us access to clean water by simply turning on the tap; cloud computing works in a similar fashion. Just like water from the tap in your kitchen, cloud computing services can be turned on or off quickly as needed. Like at the water company, there is a team of dedicated professionals making sure the service provided is safe, secure and available on a 24/7 basis. When the tap isn't on, not only are you saving water, but you aren't paying for resources you don't currently need." - Vivek Kundra, Federal CIO, United States Government. Vivek Kundra is the Federal Chief Information Officer in the United States Government and is an advocate of the cloud. He himself has transformed quite a few government offices and introduced them to the cloud. Let us now look into some examples of cloud computing.

1.13 Cloud Computing: Some Examples

Cloud Computing: Some Examples: Everyone has used a cloud, whether knowingly or unknowingly. If you have used Gmail, Yahoo or Hotmail, then you have been using a part of the cloud since you started using these services. Let's look at a few examples of the cloud that we encounter on a daily basis:
• Facebook, Twitter, Orkut, or any other web-based social networking site
• All wikis (e.g. Wikipedia) that we use and refer to
• Any online game
• Any web-based email (Hotmail, Yahoo, Gmail, Rediffmail, etc.)
• Dropbox or any other online storage offering
For business purposes: CRM, backup services, ERP, financial services, etc. Most of these services you will find on Salesforce.com. In the next slide we will talk about the four deployment models.

1.14 Four Deployment models

Four Deployment Models: Let us now look at the various deployment models in the cloud. There are four models: private cloud, public cloud, community cloud, and hybrid cloud.
1) Private Cloud: This is where you develop your own in-house cloud using virtualization, used only by your organization. You have to buy a basic set of infrastructure (servers, routers, firewalls, IDS, etc.) on which you can create a virtual environment. You can use Microsoft Hyper-V, VMware ESXi, or Citrix Xen, based on your requirements; these three are the most popular vendors that offer a platform for virtualization and the cloud.
2) Public Cloud: This is where you build your cloud on a cloud service provider's infrastructure. You do not buy any physical infrastructure; you rent the cloud service provider's infrastructure and use it. You are essentially outsourcing your IT infrastructure needs.
3) Community Cloud: An offshoot of the private cloud. Here, however, a number of organizations come together to pool their monetary and IT resources and create a cloud that is accessible only to those organizations.
4) Hybrid Cloud: This is where you have your own private cloud and also a public cloud, and both are integrated with each other.
Let us get into the details of the private cloud and know more about it, in the next slide.

1.15 Private Cloud: just another name for a datacenter?

Private Cloud: just another name for a datacenter? A private cloud is one in which the services and infrastructure are maintained on a private network. These clouds offer the greatest level of security and control, but they require the company to still purchase and maintain all the software and infrastructure, which reduces the cost savings. A private cloud is the obvious choice when:
• Your business is your data and your applications, so control and security are paramount.
• Your business is part of an industry that must conform to strict security and data privacy requirements.
• Your company is large enough to run a next-generation cloud data center efficiently and effectively on its own.
To complicate things, the lines between private and public clouds are blurring. For example, some public cloud companies are now offering private versions of their public clouds, and some companies that only offered private cloud technologies are now offering public versions of those same capabilities. A private cloud essentially resides on a private network that runs on a part of a data center exclusively used by one organization, which is very similar to a physical infrastructure. However, since you will be using virtualization technology, you reduce the size of the infrastructure and save yourself a bit of money. This cloud will be maintained either by the organization itself, by a third-party contractor, or by a combination of both. We will discuss the public cloud in the next slide.

1.16 Public Cloud

Public Cloud: A public cloud is one in which the services and infrastructure are provided off-site over the Internet. These clouds offer the greatest level of efficiency in shared resources; however, they are also more vulnerable than private clouds. A public cloud is the obvious choice when:
• Your standardized workload for applications is used by lots of people, such as e-mail.
• You need to test and develop application code.
• You have SaaS (Software as a Service) applications from a vendor who has a well-implemented security strategy.
• You need incremental capacity (the ability to add compute capacity for peak times).
• You're doing collaboration projects.
• You're doing an ad-hoc software development project using a Platform as a Service (PaaS) cloud offering.
Many IT department executives are concerned about public cloud security and reliability. A public cloud is aimed at a wide audience and offers multiple services like email, social media, etc., thus allowing for social networking and collaboration. Security is a concern because the resources are shared by multiple users. Let us now understand the community cloud in the next slide.

1.17 Community Cloud

Community Cloud: The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises. So, to look at it, it is a type of shared private cloud, as it is not open to the public at large but is limited to the organizations that have invested in it. Hence, it will deliver services only to those specific organizations and offer easy sharing of data, platforms and applications. It also keeps the capital investment to a minimum, as a number of organizations are pooling their monetary resources. Since it is a cloud, it will offer 24/7 access and support to all organizations using it, along with shared services and support contracts. The best examples of this kind of cloud would be educational or research institutions that want to collaborate and share their data amongst each other.

1.18 Hybrid Cloud

Hybrid Cloud: A hybrid cloud includes a variety of public and private options with multiple providers. By spreading things out over a hybrid cloud, you keep each aspect of your business in the most efficient environment possible. The downside is that you have to keep track of multiple different security platforms and ensure that all aspects of your business can communicate with each other. Here are a couple of situations where a hybrid environment is best:
• Your company wants to use a SaaS application but is concerned about security. Your SaaS vendor can create a private cloud just for your company inside their firewall, and provide you with a virtual private network (VPN) for additional security.
• Your company offers services that are tailored for different vertical markets. You can use a public cloud to interact with the clients but keep their data secured within a private cloud.
The management requirements of cloud computing become more complex when you need to manage private, public, and traditional data centers all together; you will need to add capabilities for federating these environments. This model allows you to choose specific services for either private or public cloud suitability and to balance security, privacy and compliance. You can decide which data is sensitive and needs to be kept private; that data can be stored in your private cloud, whereas services and data not deemed private can be moved onto the public cloud. It also helps with compliance, as important services and data remain in-house and secured. In the next slide, we will look into the cloud service models.

1.19 Cloud Service Models

Cloud Service Models: So far we have seen the deployment models of clouds, which serve a variety of purposes. Let us now look at the service models, which provide a variety of functions on top of the deployment models. There are three important service models:
1) Software as a Service (SaaS): The key benefit is that the customer does not need to worry about the development and management of applications.
2) Platform as a Service (PaaS): Not owning a computing platform, but being able to use it on demand, can save costs in ownership, management and maintenance.
3) Infrastructure as a Service (IaaS): Rental of physical or virtual hardware like storage, servers or internet connectivity.
Let us now learn about SaaS, in the next slide.

1.20 SaaS

SaaS: SaaS is where web applications are hosted on the internet to provide a service for both consumers and enterprises. We have been using this kind of service in our daily lives for quite a long time now; simple examples are web-based email, Facebook and the like. Jon Williams, CTO of Kaplan, has this to say about SaaS: "I love the fact that I don't have to deal with servers, staging, version maintenance, security and performance." Eric Knorr with Computerworld says that "[there is an] increasing desperation on the part of IT to minimize application deployment and maintenance hassles". Software as a service (SaaS), sometimes referred to as "on-demand software", is a software delivery model in which software and associated data are centrally hosted on the cloud. SaaS is typically accessed by users with a thin client via a web browser; a minimal sketch of such access follows below. SaaS has become a common delivery model for most business applications, including accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, human resource management (HRM), content management (CM) and service desk management. SaaS has been incorporated into the strategy of all leading enterprise software companies. The key characteristics of SaaS are that the software is hosted off-site and that it is an on-demand service, which users can access whenever and wherever they want. It is a packaged piece of software, and users are not allowed to modify it. It can be used as plug-in software, where the off-site software is merged with in-house software for better functionality, which suits a hybrid cloud best. In this model it is the vendor who has full control of the software and its maintenance, and once with a vendor, the user is locked in, as migrating to another vendor is difficult. We will discuss PaaS in the next slide.
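As an illustration of the thin-client idea, here is a minimal Python sketch of how a client might talk to a SaaS application over HTTP. The endpoint URL and token are placeholders, not a real service.

```python
# A minimal sketch of thin-client access to a SaaS application: all the
# logic and data live on the provider's servers, and the client only
# issues HTTP requests. The URL and token below are placeholders.
import requests

response = requests.get(
    "https://mail.example.com/api/v1/inbox",      # hypothetical SaaS endpoint
    headers={"Authorization": "Bearer <token>"},  # the vendor runs the software
    timeout=10,
)
print(response.status_code)
print(response.json())  # the provider returns our data; nothing is stored locally
```

The design point is that the client needs nothing beyond a browser or a small HTTP client; versioning, storage and maintenance all stay with the vendor.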

1.21 PaaS

PaaS: Here a development platform like .NET is rented for development purposes. Let's look at an example. You want to develop an application based on the .NET framework. Under normal circumstances you buy a computer, buy the .NET framework for development, and also create an environment for testing the application. This costs a lot of money. Instead, using your existing computer or a thin client, you approach Microsoft Azure and rent a virtual PC for the development, rent the .NET framework, and also rent the testing environment. Once you are finished with the application, you simply pay the rent for utilizing those services and discard them when done. This saves you from the initial infrastructure investment; you only pay rent for usage, which also keeps the cost of your application down. The key characteristics of PaaS are that it is used mainly for remote application development and support, and that the platforms used may have special features. This model allows for low development costs. The variations of PaaS are the different environments for software development, the hosting environments for applications, and the online storage used for hosting. Next we will talk about IaaS.

1.22 IaaS

IaaS: This is where you rent computation and/or storage space as a metered service. The concept is based on utility computing, which has been around for years; however, utility computing never took off due to the lack of virtualization, which made the model economically unviable. The cost in utility computing was approximately $1 for one hour of usage. This service is similar to a piped gas connection or an electricity connection. IaaS can be traced back to the merger between IT and telecom infrastructure and services in the past decade. The key characteristics are that it offers dynamic scaling, i.e. upgrading and downgrading of services and platforms, and that it allows for desktop virtualization, thus enabling better resource utilization. It also serves the purpose of policy-based services. Examples of IaaS are hosting services supporting e-commerce, and web hosting services that include broadband connections and storage. Let's now look into a few questions in the next slide.

1.23 Questions

Questions: These questions are based on what we have learned so far. I hope you are able to answer them:
1) What are the deployment models in a cloud? ANS) Private, Public, Hybrid and Community Cloud.
2) Rapid Elasticity is an essential characteristic of the cloud. True/False? ANS) True.
3) Would Gmail be SaaS, PaaS or IaaS? ANS) SaaS.
4) Multiple organizations pooling resources together to create a cloud is what model? ANS) Community Cloud.

1.24 The Evolution toward Cloud Computing

The Evolution towards Cloud Computing: Now we will look at how we evolved towards Cloud Computing. Let us start off by going through the overview of the evolution of cloud computing in the next slide.

1.25 Overview of the Evolution of Cloud Computing

Overview of the evolution of Cloud Computing: This diagram shows us the evolution of computers and related technologies. As we all know, the first computers ever built were mainframes. These computers were huge and occupied entire rooms. They ran on vacuum tubes, and the people who operated these machines used assembly language to interact with them. At that point in time no communication channels had been developed for computers; the LAN as we know it hadn't even been invented. In the later stages a communication channel was developed and these machines could then interact with each other. However, this communication worked only when the computers were in the same location, unlike today, where geographical location is no longer a hindrance. After that came the minicomputers. This is a term for a smaller class of computers that evolved in the mid-60s and cost less than a mainframe. These were a new breed of computers that were more affordable and much smaller than the mainframe, which led to a gain in the popularity of computers and generated interest in research and further development. During this era the Local Area Network was created. Again, this was inter-communication between computers located in the same area, but faster and more robust than its predecessor. The microcomputer was the logical development of the minicomputer. This is where computers actually reached a size and price that could interest the general public. The first microcomputer built had 256 bytes of RAM and no input/output devices other than indicator lights and switches. This is also the time when the internet was created, and as both these technologies developed they created a revolution of sorts. This brings us to today, where we have laptops that could be used as servers; such is the humongous amount of processing power we have at our hands. The internet has given us the opportunity to have no boundaries, and we use many services running on the internet in our daily lives. The latest technological advancement, which has been in actual use for a few years now, is virtualization. It allows us to create a virtual computer within our physical computer that can be used productively. This is the core basis of cloud computing, and it allows us to reduce investment and overhead costs drastically. Our attempt is to understand this technology and its capabilities. Virtualization, again, is nothing new, as it has been around since the 1960s. It was not used much then, as there was a huge dearth of bandwidth; however, with the advent of the internet and ever-increasing bandwidth, virtualization has now become a reality that can be used in real time. Let's now look into the historic timeline of cloud computing in the next slide.

1.26 Historic timeline

Historic Timeline: Computers were invented way back, and technologies like multi-tasking, virtualization, etc. have been in existence since the 1960's. However, these technologies could not be exploited due to a number of factors, especially the lack of bandwidth and processing power. The development of the internet over time has definitely addressed the bandwidth issue. The internet entered India in the late 1990's. At that point in time we started with dial-up connections that had a speed of 9.6 kbps, and physical landlines were needed to connect to the internet. Cut to 2012 and we have 1 Mbps lines even at home. The increase in bandwidth has helped us develop and adopt technologies which we couldn't use earlier. The same has been the case with computers: the processing power, RAM and storage capacity of a computer have been increasing steadily, giving us the opportunity to move forward with technology. If we look at the range of processors today, especially the Xeon processors, they have now been optimized for virtualization. Even if we look at the processors of everyday gadgets, we have multi-core processors even in phones, phablets and tablets. Due to these continuously evolving technologies we are now shifting to cloud computing, which relies heavily on processor, RAM, storage and bandwidth. As it is, all the new devices flooding the market are designed to be connected to the internet 24/7 and also promote cloud services. In the next slide we will discuss minicomputers.

1.27 Minicomputers

Minicomputers: We have also seen what minicomputers were. These were smaller systems than the mainframe, were less expensive, easier to purchase, and could multi-task. Multi-tasking is where the computer is able to perform two or more tasks at the same time, thus becoming time-effective. A proprietary system could be installed on this computer that would allow multi-user functionality; that means that a single operating system could have more than one user utilizing the system resources when needed. These systems worked over a LAN, thus affording expandability and robust functionality. In the next slide we will talk about the evolution from the microcomputer to the PC.

1.28 From Microcomputer to PC

From Microcomputer to PC: As we progressed, our computers became smaller and smaller. Along with this, the costs greatly reduced too, and even the common man was able to afford the new PC. With evolution we moved from a single-user to a multi-user environment where multiple logins were possible. In those days, storage and RAM were very limited and costly, whereas now we have nearly limitless resources. This has allowed for the innovation of new operating systems and applications, giving us multi-user, multi-tenant and user-friendly environments compared to the earlier elementary operating systems like UNIX. This evolution has helped us drastically in making our move towards cloud computing successful. Let us now talk about Local Area Networking, or LAN.

1.29 Local Area Networking

Local Area Networking: A Local Area Network (LAN) is a computer network that interconnects computers in a limited area such as a home, school or office building. The defining characteristics of LANs are a smaller geographic area and usually high data-transfer rates. The initial driving force for networking was generally to share storage and printers, which were both expensive at that time. It was the DoD (Department of Defense, USA) that developed internetworking in their ARPANET project. However, it was Xerox (the photocopier company) that developed Ethernet, in 1973. From that point in time Ethernet has evolved drastically, and now we have Gigabit Ethernet cards. Now, we will learn about networks and servers.

1.30 Network and Servers

Network and Servers: There were two forms of communication systems back then: 1) dedicated leased lines and 2) dial-up.
1) Dedicated Leased Lines: These were expensive and used by corporations. They had to lease lines from ISPs, who would give them a direct connection from the ISP's infrastructure to the company concerned, with a specified bandwidth. These lines were expensive to lease and required quite a bit of hardware to be installed at the customer's end, due to which maintenance was also an issue.
2) Dial-Up: Dial-up was the first type of internet connection created. In India, VSNL distributed such connections. You were required to have a landline telephone connection. A modem was attached between the phone and your computer, and dialler software installed on the computer dialled through the modem and connected to another system at the ISP's end to establish an internet connection. These connections were very slow, and it used to take ages to open a simple HTML page. The speed at the creation of this technology was 14.4 kbps (kilobits per second); the arithmetic sketched below shows why the cloud couldn't have been implemented then. The devices were attached to dedicated terminals and allowed for time-sharing services. If attached to an intelligent device, special services like terminal server (remote access) and batch processing (job entry) could be offered. In the next slide we will discuss the role of the Internet.
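To see why dial-up made cloud-style services impractical, here is a back-of-the-envelope calculation in Python comparing ideal transfer times at the speeds mentioned above; the page size is an illustrative assumption.

```python
# Back-of-the-envelope comparison of dial-up and broadband link speeds.
# The page size is an illustrative assumption.

PAGE_SIZE_BITS = 100 * 1024 * 8  # a 100 KB web page, expressed in bits

def seconds_to_load(link_speed_bps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and latency."""
    return PAGE_SIZE_BITS / link_speed_bps

print(seconds_to_load(14_400))     # ~57 s on a 14.4 kbps dial-up modem
print(seconds_to_load(1_000_000))  # ~0.8 s on a 1 Mbps broadband line
```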

1.31 The role of the Internet

The Role of the Internet: J.C.R. Licklider, who is known as the father of the internet, had a vision of an intergalactic computer network way back in 1963. It was due to his vision and concept that we now have the internet. The predecessor of the internet is the ARPANET (Advanced Research Projects Agency Network), where the concept began. The internet was the logical development of LANs, WANs and MANs. It is a global system of interconnected computer networks that use the standard Internet Protocol Suite (also called TCP/IP). It is a network of networks that consists of millions and millions of private, public, academic, business, and government networks of local to global scope. The internet carries an extensive range of information resources and services, and it is a core part of the cloud computing model as well. Its initial goal was to connect the globe in a reliable manner, even in the case of partial equipment or network failure. It was also meant to connect different types of computers and operating systems; hence a standard model for communications had to be developed, based on which any and every operating system could communicate with the others. This created a co-operative effort of many organizations to realize this goal. Examples of services offered over the internet are WWW, FTP, SMTP, HTTP, etc. Let us now talk about virtualization in the next slide.

1.32 Virtualization

Virtualization: Virtualization is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, storage device, or network resource. It is an integral part of cloud computing; if we take virtualization away from the cloud, we will be looking at nothing but grid computing. It is not a new technology; it has been around since the 60s in mainframe environments, but wasn't utilized due to the lack of processing/computing power, storage and bandwidth. It allows us to create virtual resources, thus saving on new hardware and office space. Now, we will discuss virtualization thoroughly in the next slide.

1.33 Virtualization

Virtualization: Virtualization is an integral part of the cloud model. If we take virtualization out of the cloud, then we are looking at the utility computing model. The essence of virtualization is to provide a virtualized operating system (computer) that can be accessed by thin clients via the internet. With virtualization we can integrate the internet, storage, and processing power, which allows for more flexibility and resilience. Using it we can multiply key features like the usage of high-performance computers. Normally, when we run a single operating system, our resources are not fully utilized at all times. If we run multiple instances of computers on a single computer, then not only do we save on investment, we also end up fully utilizing the resources of our computer; the consolidation estimate sketched below illustrates this. Virtualization also offers multi-tenancy, which means that it allows multiple logins on a single instance of an application. In the next slide we will go through five types of virtualization.
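Here is a rough, illustrative consolidation estimate in Python for the utilization argument above; the utilization figures are assumptions for the sake of the example, not measurements.

```python
# A rough consolidation estimate: lightly used physical servers can be
# packed as virtual machines onto far fewer hosts. The utilization
# figures below are illustrative assumptions.
import math

physical_servers = 20
avg_utilization = 0.15     # each box is busy only 15% of the time (assumption)
target_utilization = 0.75  # safe ceiling for a virtualization host (assumption)

hosts_needed = math.ceil(physical_servers * avg_utilization / target_utilization)
print(hosts_needed)  # 4 -> four hosts could, in principle, carry all 20 workloads
```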

1.34 Five types of virtualization

Five types of Virtualization: Let us look at the various ways we can use virtualization. We can use it in five different ways:
1) Access Virtualization: We can access the cloud from any device.
2) Application Virtualization: Any application running on any platform and operating system can be virtualized.
3) Processing Virtualization: We can virtualize and create multiple computers using the hardware of one single computer.
4) Network Virtualization: This way we can have artificial views of a network. This is the process of combining hardware and software network resources and network functionality into a single, software-based entity, i.e. a virtual network.
5) Storage Virtualization: This enables us to virtualize a storage area and utilize it with advanced features.
Let us now talk about the types of virtualization in a private cloud.

1.35 Types of Virtualization in Private Cloud

Types of Virtualization in Private Cloud: Virtualization can be of two types in private clouds: 1) Full Virtualization and 2) Paravirtualization. Next we will learn about full virtualization.

1.36 Full Virtualization

Full Virtualization: In this type there is a full simulation of the underlying hardware, and the virtual machines are wholly isolated. The guest OS installed on the VM is unaware of its virtualized status and thinks it is installed on a physical machine. Memory and processor handling is done by the guest OS in conjunction with the hypervisor layer; thus all the system calls made by the VM pass through, and are handled by, the hypervisor. This causes delays, which are not suitable for real-time requirements. Also, there are some processes that are difficult to run in virtualized situations and have to be run in real-time scenarios. This type of virtualization is suitable for most of our needs and for almost every industry, except where real-time processing is required, for example high-end medical devices and facilities, stock brokers, and research and development (NASA, CERN's Large Hadron Collider). As discussed earlier, there is a full simulation of the underlying hardware: full virtualization requires that all the features offered by the physical machine can also be utilized in the virtual machines, e.g. the instruction set, I/O operations, interrupts, etc. In the next slide, we will go through paravirtualization.

1.37 Paravirtualization

Paravirtualization: Paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware. The intent of the modified interface is to reduce the portion of the guest's execution time spent performing operations that are substantially more difficult to run in a virtual environment than in a non-virtualized environment. Paravirtualization provides specially defined 'hooks' that allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). These hooks allow system-critical processes direct access to the hardware, i.e. they allow such processes to bypass the hypervisor layer, running them in real time and reducing the time delay. The processes, however, have to be defined and configured beforehand. Let us now discuss the difference between full and paravirtualization.

1.38 Difference between Full and Paravirtualization

Difference between Full and Paravirtualization: Full Virtualization: The system is unaware of its virtualized status. All system calls are executed through the hypervisor in the virtual machine, thus delaying the execution of real-time processes. Real-time processes require direct execution on the hardware; the hypervisor layer denies them that, thus delaying execution. Let us continue this topic in the next slide.

1.39 Difference between Full and Paravirtualization

Difference between Full and Paravirtualization: Paravirtualization: The operating system is aware of its virtualized status. System-critical system calls are identified beforehand and configured to bypass the hypervisor and gain real-time access to the hardware. Real-time processes are allowed to execute directly on the hardware, thus causing no delay; a conceptual sketch of the two call paths follows below. This virtualization technique is most suitable for environments where real-time processing is of paramount importance, such as the medical field, the stock market, research and development, etc. Next we will look at the operating systems running on full virtualization.
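The following is a conceptual Python sketch, not real hypervisor code, of the two call paths just described: under full virtualization every privileged call traps into the hypervisor, while a paravirtualized guest uses pre-configured hooks (hypercalls) that let selected critical operations reach the hardware directly.

```python
# Conceptual illustration only: the difference between trap-and-emulate
# (full virtualization) and pre-configured hypercall hooks (paravirtualization).

PARAVIRT_HOOKS = {"disk_io"}  # operations configured beforehand to bypass emulation

def hardware(op: str) -> str:
    return f"{op}: executed on hardware"

def hypervisor_trap(op: str) -> str:
    # Full virtualization: the guest is unaware; the hypervisor intercepts
    # and emulates the operation, adding latency on every call.
    return "trapped -> emulated -> " + hardware(op)

def guest_syscall(op: str, paravirtualized: bool) -> str:
    if paravirtualized and op in PARAVIRT_HOOKS:
        return "hypercall -> " + hardware(op)  # near-native, pre-configured path
    return hypervisor_trap(op)

print(guest_syscall("disk_io", paravirtualized=False))
print(guest_syscall("disk_io", paravirtualized=True))
```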

1.40 Operating systems running on Full Virtualization

Operating Systems running on Full Virtualization: To make matters clearer, let us look at the various operating systems that are used as a platform to create a cloud.
Microsoft Windows Server 2008 R2 with Hyper-V (GUI): This is the regular server operating system from Microsoft, which installs with a GUI interface and allows multiple roles to be installed through the Server Manager, e.g. AD DS, DNS, DHCP. It also allows you to install Hyper-V as a role, and has a Hyper-V Manager that offers a GUI to handle the VMs and the hypervisor. The hypervisor layer is the abstraction layer that enables virtualization and is found in every operating system that supports virtualization; Hyper-V is Microsoft's trademarked and copyrighted name for the application that installs the hypervisor.
Microsoft Hyper-V Server 2008 R2 (non-GUI): This operating system does not have any other roles on it and is non-GUI. It installs the hypervisor layer directly on the hardware and needs manager software to administer the Hyper-V server. Hyper-V Manager is also available as a download and can be installed on the Windows XP and Windows 7 operating systems.
VMware ESX and ESXi (non-GUI): These also install the hypervisor layer directly on the hardware. There are no other features in these operating systems, and they too need a manager installed on another computer to manage the server.
These operating systems work on full virtualization, but their latest releases now support paravirtualization for certain devices. Now, we are going to discuss the operating systems working on paravirtualization.

1.41 Operating Systems running on Paravirtualization

Operating Systems working on Paravirtualization: Citrix XenServer is an operating system that works on paravirtualization. Citrix XenServer is a complete, managed server virtualization platform built on the powerful Xen hypervisor. Xen technology is widely acknowledged as among the fastest and most secure virtualization software in the industry. XenServer is designed for efficient management of Windows® and Linux® virtual servers and delivers cost-effective server consolidation and business continuity. Let us now learn about managed services in the cloud.

1.42 Managed Services in the Cloud

Managed Services in the Cloud: The figure shows us the new technologies for the delivery of IT services. We can now use not only our PCs but also laptops, mobile phones, smartphones, tablets, etc. to access the internet, through which we can reach our cloud infrastructure or any service that can be provided over the cloud. Let us proceed to the next slide and understand managed services.

1.43 Managed Services

Managed Services: The new technology of managed services offers us a variety of advantages and disadvantages. The main advantages are that it offers us accessibility from anywhere, provided that we have internet access. Since the services are outsourced to the cloud provider, the management and maintenance of the services lies in the hands of the provider. This allows the business to shift its focus from managing these services to its core business. There is also no need for highly trained IT staff, since the infrastructure is no longer in house. However, there are also a few disadvantages. The main problems lie with performance, compliance and contingency. Before migrating, it is necessary to check that everything is compatible and free of issues; if there are issues, performance will most likely be affected or, worse still, things might not function correctly. Legal and regulatory compliance also have to be taken care of, and great care must be taken not to break any laws of the country. Plans should be made with the future in mind, and all steps must be planned accordingly.

1.44 Questions

Questions: Let's answer a few questions to see whether we have understood what we have studied so far:
1) What is virtualization? ANS) Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.
2) What are the types of virtualization? ANS) Full virtualization and paravirtualization.
3) Some system calls bypass the hypervisor and gain direct access to hardware; what type of virtualization is this? ANS) Paravirtualization.
4) Virtualization is an integral part of the cloud. True or False? ANS) True.
Let us move on to the next slide now.

1.45 History of the Cloud

History of the Cloud: Let us now look at the history and technologies involved in Cloud Computing.

1.46 History and The making of the Cloud

History and the making of the Cloud: If we look at it, the cloud is nothing but a repackaging of various existing concepts and technologies. In fact, some aspects of the cloud are not new at all, and we have been using them for quite a long time now. Virtualization, which is an integral part of the cloud, has been in existence since the late 1960's. SaaS has also been in existence since the advent of the internet: any web-based email service qualifies as SaaS, and web-based email is something that we have been using for many years now. Now let us look at the various technologies involved.

1.47 Technologies

Technologies:
Grid Computing: Grid computing is a federation of computer resources from multiple administrative domains working to reach a common goal. It is a distributed system with non-interactive workloads involving a large number of files. What distinguishes it from cluster computing is that grids are more loosely coupled, have widely dissimilar elements or constituents, and are geographically dispersed. A single grid is normally used for a variety of different purposes. Grids are often constructed with the aid of general-purpose grid software libraries known as middleware. The best example of grid computing is that of SETI (Search for Extra-Terrestrial Intelligence). This organization uses telescopes and other tools to scan the sky for radio-frequency signals over a wide range. These scans are converted into files for analysis, and the files are distributed to individuals who voluntarily sign up with the organization. This is done through software installed on the volunteer's computer, which analyzes the files and reports the data back to SETI. Once a file is complete, the software downloads another file and continues its analysis. Normally the software uses the hardware resources of the computer when they are idle or not in use by the user. So if we look at this situation, we have a lot of computers from different administrative domains, with different components, geographically dispersed, whose main tasks also vary as per their users. Together, however, they create a virtual supercomputer for SETI's file analysis, amassing huge computational strength.
Utility Computing: Utility computing is the packaging of computer resources, such as computation, storage and services, as a metered service. This model gives you the advantage of low or no initial cost for computer infrastructure; instead, you rent all your requirements from a provider. IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications. Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers. Utility computing can best be compared to metered services like electricity and gas, which we use on a daily basis; the model is based on the same principle of getting computing resources as a service. Thus we do not have to invest in buying computers; instead we can rent the resources required and upgrade and downgrade as per our requirements. This same model has been implemented in cloud computing as Infrastructure as a Service (IaaS).
Cluster Computing: A cluster is a set of loosely connected computers that work together so that they look like a single system. The computers in a cluster are connected by a LAN, with each computer running its own instance of an operating system. Clusters evolved due to the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. The purpose of a cluster is to make a task or process highly available. Take for instance a task whose required computational power depends on the workload. This workload can spike at any moment and drop at any moment. It would be a futile exercise to manually introduce systems when required and take them out when not. So we form a cluster to service this task with x number of computers, and depending upon the workload, that many computers will operate to service the task. As the workload increases, more computers in the cluster are made operational, and when they are not needed they are shut down to save resources. This is called load balancing; a toy model of it is sketched after this section. Load balancing ensures that the task is being serviced properly, with no undue stress on a particular server or set of servers. Whenever a computer or a set of computers gets utilized beyond a set percentage, more computers are immediately brought into use and the extra load is transferred to these newly introduced computers; thus the workload is maintained across the cluster. As soon as the workload decreases, the computers no longer required are automatically shut down or put into a hibernate state.
Virtualization: In computing, virtualization is the creation of a virtual, rather than actual, version of something, such as a hardware platform, operating system, storage device, or network resources. We create a total abstraction of the underlying physical system and create a complete virtual system in which guest operating systems can execute. We can hence create multiple instances of a virtual system on a single computer and deploy them over the network. This also frees us from buying multiple systems, saving us multiple hardware and maintenance costs. Virtualization is an integral part of cloud computing; if we take virtualization out of the cloud, we are looking at nothing but grid computing. We shall look into this concept later in our course. Let us now proceed to the next slide and learn about cloud evolution.
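As promised, here is a toy Python model of the load-balancing behaviour described above. The thresholds and workload figures are illustrative assumptions, not values from any real scheduler.

```python
# A toy model of cluster load balancing: servers are powered on when the
# average utilization crosses a threshold and powered off again as the
# workload drops. All thresholds are illustrative assumptions.

MAX_SERVERS = 8
SCALE_UP_AT = 0.75    # add a server above 75% average load (assumption)
SCALE_DOWN_AT = 0.30  # remove one below 30% average load (assumption)

def rebalance(active_servers: int, total_load: float) -> int:
    """Return the new number of active servers for the given total load."""
    avg = total_load / active_servers
    if avg > SCALE_UP_AT and active_servers < MAX_SERVERS:
        return active_servers + 1
    if avg < SCALE_DOWN_AT and active_servers > 1:
        return active_servers - 1
    return active_servers

servers = 2
for load in [1.0, 2.0, 3.5, 3.5, 1.0, 0.4]:  # total load in "server units"
    servers = rebalance(servers, load)
    print(f"load={load:4.1f} -> {servers} active server(s)")
```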

1.48 Cloud Evolution

Cloud Evolution: We will now study how cloud computing evolved over various technologies, and the path it took. As we have seen earlier, there have been many platforms on which the cloud was developed: grid computing, utility computing, Software as a Service (SaaS), and then cloud computing. Let's go through a small overview of grid computing, just to jog our memory. Grid computing is a federation of computer resources from multiple administrative domains working to reach a common goal. It is a distributed system with non-interactive workloads involving a large number of files. We have also seen an example of grid computing, viz. SETI. We use grid computing for tasks that require huge computational power, e.g. research. This was made mainstream by the Globus Alliance. Utility computing is where computer resources such as computation, storage and services are offered as a metered service. This saves us huge amounts of money on infrastructure investment, as we only pay rent for the services used. This service had never picked up until now and is still in its infancy; it has been in existence since the 60s but was not used due to insufficient bandwidth, and it was rediscovered in the 90s but again never really took off. Software as a Service (SaaS): This is where a provider licenses an application to customers for use as a service on demand. This service has gained momentum since 2001, but again has been in existence since the advent of the internet. The most common example is your web-based email. You use your web browser to navigate to the website, which gives you an interface to interact with the provider's server and database. After logging in, you see your email and attachments, which are in reality stored on their server. So effectively we are using web-based software given to us by the service provider to interact with their servers and view our data stored with them. Other examples of SaaS would be Google Maps, Picasa, Hotmail, Yahoo services, Facebook, etc. From this you will understand that we have been using these services for ages. Cloud Computing: This is next-generation internet computing, where anything and everything will be accessible to anyone from anywhere. The internet is a must for cloud computing; without the internet the cloud will not work. Let me say that the internet is the core essential for a cloud infrastructure to even function. Let us look at how the cloud has evolved from these various technologies. The underlying concept of a cloud is that of utility computing. If we remember, utility computing is providing computing resources as a metered service; this is known as IaaS (Infrastructure as a Service) in cloud computing. This infrastructure can also give you many virtual machines, with varied or similar hardware, for use in many tasks or a single task, thus also utilizing the concept of grid computing. We have already seen SaaS (Software as a Service), which is also a concept in cloud technology. We will be going through these concepts in detail later in the course. Let us now look into a few examples of cloud providers, in the next slide.

1.49 Examples of Cloud Providers

Examples of Cloud Providers: Listed here are a few of the major cloud service providers in the world, along with some of their services: Amazon Web Services (AWS), which offers services like Elastic Compute Cloud (EC2) and Simple Storage Service (S3); a minimal sketch of using S3 from Python follows below. Google App Engine, which qualifies as PaaS, as multiple pieces of software for daily and work routines are offered: Google Docs, Google Drive, Calendar, etc.
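As a concrete illustration, here is a minimal sketch of using S3 from Python with boto3, AWS's Python SDK (a more recent SDK than the era this material covers). It assumes boto3 is installed and AWS credentials are already configured; the bucket and file names are placeholders.

```python
# A minimal sketch of using Amazon S3 (Simple Storage Service) via boto3.
# Assumes boto3 is installed and AWS credentials are configured; the
# bucket and file names below are placeholders.
import boto3

s3 = boto3.client("s3")

# Store a local file as an object in a bucket (you pay only for what you store).
s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")

# List the buckets owned by this account.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```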

1.50 Examples Contd

Examples Contd.: Microsoft Windows Azure, with services such as:
• Microsoft SQL Services
• Microsoft .NET Services
• Live Services
• Microsoft SharePoint
• Microsoft Office 365
• SkyDrive Pro
Windows Azure is an open and flexible cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool or framework, and you can integrate your public cloud applications with your existing IT environment. Windows Azure enables you to use any language, framework, or tool to build applications. Features and services are exposed using open REST protocols. The Windows Azure client libraries are available for multiple programming languages, and are released under an open source license and hosted on GitHub. Windows Azure enables you to easily scale your applications to any size. It is a fully automated self-service platform that allows you to provision resources within minutes, and to elastically grow or shrink your resource usage based on your needs. You only pay for the resources your application uses. Windows Azure is available in multiple datacenters around the world, enabling you to deploy your applications close to your customers. Windows Azure delivers a flexible cloud platform that can satisfy any application need. It enables you to reliably host and scale out your application code within compute roles. You can store data using relational SQL databases, NoSQL table stores, and unstructured blob stores, and optionally use Hadoop and business intelligence services to data-mine it. You can take advantage of Windows Azure's robust messaging capabilities to enable scalable distributed applications, as well as deliver hybrid solutions that run across a cloud and an on-premise enterprise environment. Windows Azure's distributed caching and CDN services allow you to reduce latency and deliver great application performance anywhere in the world. Now, we will look into a few services by Windows Azure in the next slide.

1.51 Services by Microsoft Azure

A few services by Windows Azure:
• Web Sites
• Virtual Machines
• Mobile Services
• Cloud Services
• Big Data
• Media
Most of these services are used by us on a daily basis. Let's have a look at some of them.

1.52 Big Data

Big Data: Microsoft has been doing Big Data since long before it was a mega-trend in the market: at Bing, over 100 petabytes of data are analyzed to deliver high-quality search results. More broadly, Microsoft provides a range of solutions to help customers address big data challenges. Its family of data warehouse solutions, from Microsoft® SQL Server® 2008 R2, SQL Server® Fast Track Data Warehouse and Business Data Warehouse to SQL Server® 2008 R2 Parallel Data Warehouse, offers a robust and scalable platform for storing and analyzing data in a traditional data warehouse. Parallel Data Warehouse (PDW) offers customers enterprise-class performance that handles massive volumes of over 600 TB. Microsoft also provides LINQ to HPC (High Performance Computing), a distributed runtime and programming model for technical computing.

1.53 Big Data

Big Data: In addition to the traditional capabilities mentioned above, Microsoft is embracing Apache Hadoop™ as part of an end-to-end roadmap to deliver on its vision of providing business insights to all users by activating new types of data of any size. In the next slide we will look into a few more examples.

1.54 Examples Contd.

Examples Contd.: Salesforce: Salesforce.com is an enterprise cloud computing company that is leading the shift to the Social Enterprise. Its cloud platform and apps, especially its CRM (Customer Relationship Management) solutions, are widely popular across the world, especially in America. Now we will learn about the products offered by Salesforce, in the next slide.

1.55 Products offered by Salesforce

Products offered by Salesforce: • Sales Cloud • Service Cloud • Desk.com • Chatter • Radian6 • Force.com Platform • Heroku • Database.com • Pricing and Editions • AppExchange • Remedyforce Let us look at a few of them.


1.56 Sales Cloud

Sales Cloud: Your sales success toolkit. The Sales Cloud puts everything you need at your fingertips, available anywhere. From Social Accounts and Contacts to Mobile, Chatter, and Analytics, collaboration across your global organization and getting deals done faster is not only possible, it's easy. Social accounts and contacts: Bring social intelligence to your sales process. Gain insights from popular social media sites, like Facebook, Twitter, LinkedIn, YouTube, and Klout, right within Salesforce, to help you sell more effectively. Data.com: Get contacts and company profiles from the leading data sources, right inside the Sales Cloud. Connect with key decision makers faster, easily plan territories, and increase sales and marketing productivity with up-to-date, accurate data. Analytics and Forecasting: Forecasting accuracy has never been this easy. Get in-line editing, override visibility, multi-currency support, custom forecast categories, and a complete, real-time view into your team's forecasts.

1.57 Service Cloud

Service Cloud: With the Service Cloud you can meet customers wherever they are, including on social networks such as Facebook and Twitter. Your agents also benefit from employee social networks that help them work together like never before. And because you get all the features a social contact center needs, your customers experience amazing service on any channel. Let us now learn about Grid Computing in the next slide.

1.58 Grid Computing

Grid Computing: Grid computing is a federation of computer resources from multiple administrative domains working to reach a common goal. It is a distributed system with non-interactive workloads involving a large number of files. What distinguishes it from cluster computing is that grids are more loosely coupled, have widely dissimilar elements or constituents, and are geographically dispersed. A single grid is normally used for a variety of different purposes. Grids are often constructed with the aid of general purpose grid software libraries known as middleware. The best example of grid computing is that of SETI (Search for Extra-Terrestrial Intelligence). This organization uses telescopes and other tools to scan the sky for radio-frequency signals over a wide range. These scans are converted into files for analysis, and the files are distributed to individuals who voluntarily sign up with the organization. The analysis is done through software installed on each volunteer's computer: the software analyzes a file and reports the data back to SETI, and once one file is complete it downloads another file and continues its analysis. Normally the software uses the hardware resources of the computer only when they are idle, that is, not in use by the user. If we look at this situation, we have a lot of computers from different administrative domains, with different components, that are geographically dispersed, and whose main tasks vary as per their users. Together, however, they create a virtual supercomputer for SETI's file analysis, amassing huge computational strength. Now, we will be looking at utility computing, in the next slide.
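
The volunteer-computing loop described above is easy to sketch in Python. The work-unit URLs and the "analysis" step below are invented for illustration; SETI's real client protocol is more involved.

    import time
    import urllib.request

    # Hypothetical work-unit server; the real SETI@home protocol differs.
    WORK_URL = "https://work.example-grid.org/next-unit"
    REPORT_URL = "https://work.example-grid.org/report"

    def machine_is_idle() -> bool:
        # Placeholder: a real client checks CPU load and user activity.
        return True

    while True:
        if not machine_is_idle():
            time.sleep(60)                   # only donate spare cycles
            continue
        with urllib.request.urlopen(WORK_URL) as resp:
            unit = resp.read()               # download one work unit
        result = str(sum(unit)).encode()     # stand-in for real signal analysis
        urllib.request.urlopen(REPORT_URL, data=result)  # POST results, then repeat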

1.59 Utility Computing

Utility Computing: Utility computing is the packaging of computer resources, such as computation, storage and services, as a metered service. This model gives you the advantage of low or no initial cost to create computer infrastructure; instead, you rent all your requirements from a provider. IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications. Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers. Utility computing can best be compared to metered services that we use on a daily basis, like electricity and gas. This model is based on the same principle: we get computing resources as a service as well. Thus we do not have to invest in buying computers; instead we can rent the resources required and upgrade or downgrade as per our requirements. This same model has been implemented in Cloud Computing as Infrastructure as a Service (IaaS).

1.60 Utility Computing

Utility Computing: John McCarthy certainly had a broad vision when he said at MIT's centennial celebration in 1961 that "computing may someday be organized as a public utility". This is coming true now, almost 50 years later. Utility computing offered a lot of advantages. Foremost was that the availability of computational and storage capabilities was huge. This allowed for scalability as well as reliability. The model was simple; you only had to pay for what you had used. For example, if you are renting a single computer and you use it for only 3 hours in a day, then you will be charged only for those 3 hours and not for the entire day. Accessibility was simple and there was no steep learning curve to gain access. Let us move on to the next slide to know more about virtualization.
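
The pay-per-use arithmetic in the example above is simple enough to write down. The hourly rate here is an invented figure:

    # Metered billing: charge only for the hours actually used.
    HOURLY_RATE = 0.50  # hypothetical rate, dollars per machine-hour

    def metered_charge(hours_used: float, rate: float = HOURLY_RATE) -> float:
        return hours_used * rate

    print(metered_charge(3))   # 3 hours used -> 1.5
    print(metered_charge(24))  # a full day -> 12.0; metering avoids this charge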

1.61 Virtualization

Virtualization: Virtualization has existed since the 1960s and is not a new phenomenon. As we have already seen, it was underutilized due to lack of bandwidth and computational power. It is only now that virtualization has taken off again, due to the availability of resources. Virtualization technology is revolutionizing the computer industry by lowering capital and operational costs, providing higher service availability, and providing new data protection mechanisms. In the next slide, we will understand what virtualization is all about.

1.62 What is Virtualization

What is Virtualization: Virtualization can be defined as the creation of a virtual (rather than real) version of something, such as an operating system, a server, a storage device or network resources. It is a technique of hiding the physical computing resources to simplify the way in which other systems, applications or end users interact with those resources. Virtualization allows a single physical resource to appear as multiple logical resources, or allows multiple physical resources to appear as a single logical resource. Let us continue this topic in the next slide.

1.63 What is Virtualization

What is Virtualization: We create a total abstraction of the underlying physical system and create a complete virtual system in which the guest operating systems can execute. We can hence create multiple instances of a virtual system on a single computer and deploy them over the network. This also frees us from buying multiple systems, since creating them on a single system saves us multiple hardware and maintenance costs. Virtualization is an integral part of Cloud Computing; if we take virtualization out of the Cloud, we are looking at nothing but Grid Computing. The abstraction layer allows the hardware of the physical machine to be presented as a software version in the virtual machines. These virtual machines are capable of hosting any operating system. Please understand that virtualization is not equivalent to simulation and/or emulation. Next, we will discuss today's IT challenges.

1.64 Today's IT Challenges

Today's IT Challenges: In today's world the IT industry is faced with a lot of challenges, especially when it comes to power, space and cooling costs. As inflation rises, all of these costs rise as well; they are more than double what they were 5 years ago. Also, in the physical world a server typically hosts only one application. This drives up the number of servers one has to use, which in turn affects the power, space and cooling costs. We all know the rising rates of land. Imagine building a 2000-server datacenter: imagine the space it will occupy and the HVAC costs to cool it. Now, due to virtualization, these 2000 servers can be accommodated in, let's say, 200 high-end servers. Notice the difference in maintenance, power, space and cooling between 2000 and 200 servers. It is an industry standard not to utilize a server beyond 25 percent of its capacity. Virtualization allows us to break this barrier and optimize the use of these unused resources, thus reducing excessive acquisition and maintenance costs. In the next slide, we will learn why a typical Dev/Test infrastructure is an IT headache.
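
A quick back-of-the-envelope sketch of the consolidation example above (2000 physical servers onto 200 high-end virtualization hosts); the per-server running cost is an invented figure:

    physical_servers = 2000
    virtualization_hosts = 200
    consolidation_ratio = physical_servers / virtualization_hosts  # 10 VMs per host

    COST_PER_SERVER = 1200.0  # hypothetical annual power/space/cooling cost per box
    saving = (physical_servers - virtualization_hosts) * COST_PER_SERVER
    print(consolidation_ratio, saving)  # 10.0 and 2,160,000 saved per year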

1.65 Typical Dev/Test Infrastructure is an IT Headache

Typical Dev/Test Infrastructure is an IT headache: Physical infrastructure is quite a hassle in today's world. Managing desktops and servers is quite a headache. There is the part where you have to maintain them, which means keeping a lot of stock for hardware replacement and maintenance. Also, over a period of time these systems gather a lot of dust and, if not cleaned, can shut down or, even worse, the hardware could burn out. Managing all this and having a team ready for it, along with provisioning, is very cumbersome, labour intensive and monetarily draining. Requests sent to the service desk take a long time to complete as there is a procedure to follow. Once the request is made, release management comes into the picture, which is a resource intensive and error prone process. Let us move on to the next slide.

1.66 Virtualization is the Key

Virtualization is the key: Virtualization allows us to create virtual machines (VMs) on a single computer that can be accessed by different devices over the web. We can use Intel or AMD servers, partition them, and create VMs which can host different operating systems and work the same way a physical infrastructure works. As we see in the corresponding figure, we can have different VMs assigned different roles in the network. In the next slide, we will look into a pictorial representation of virtual hardware.

1.67 Virtual Hardware

Virtual Hardware: As you can see in the figure, the virtual hardware is very similar to your existing physical infrastructure. When we talk about creating a VM, we actually create a machine with the same characteristics and traits as a physical machine. It will have all the components that a physical machine has, except that the components will be virtual; that means the components will be nothing but code that works and functions the same way the hardware does. Now we are going to look into a diagram that describes virtualization basics.

1.68 Virtualization Basics

Virtualization Basics: This slide shows two different systems, one without virtualization software and the other with virtualization software. Both systems have everything else in common, except that the one with virtualization has the VMware ESXi server operating system installed on it. You can see that VMware ESXi is the host operating system which allows for virtualization and thus hosts two guest VMs with their own operating systems. Let us now move on to the next slide and learn about Cloud Enabling Technology: Virtualization.

1.69 Cloud Enabling Technology Virtualization

Cloud Enabling Technology: Virtualization: The two stacks shown in this diagram are very important. Let us look at the traditional stack first. This stack refers to our regular computers: desktops and notebooks. Here we have our hardware, i.e., processor, motherboard, RAM, hard disk etc., which is the bottom layer in the stack. The second layer is that of the operating system, once we install it on our computer, and this layer then hosts the rest of the applications that we install on the operating system, viz. MS Office, Internet Explorer and other applications. In the virtualized stack, the hypervisor layer is installed directly on the hardware, and it hosts the virtual machines (VMs) and their operating systems. It is on these operating systems that the applications will then be installed. The hypervisor layer is the layer that controls the communications between the VMs and the hardware. When any VM is switched on, calls are made to the hypervisor layer for hardware requirements, like processing power, RAM etc. The hypervisor checks if resources are available and, if they are, allocates these resources to the VM. The allocated resources are then reserved for that specific VM and not allocated to any other, to prevent inter-communication that would corrupt the data. Thus the hypervisor layer regulates the distribution of hardware resources between the VMs it hosts; VMs cannot directly interact with the hardware. Now, we will go through the virtualization basics in the next slide.
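
The allocation logic just described can be illustrated with a toy model: the hypervisor checks whether enough free resources exist and, if so, reserves them exclusively for the requesting VM. All names here are invented; real hypervisors are far more sophisticated.

    class ToyHypervisor:
        """Tracks host resources and reserves them per VM (illustrative only)."""

        def __init__(self, total_cpus: int, total_ram_gb: int):
            self.free_cpus = total_cpus
            self.free_ram_gb = total_ram_gb
            self.reservations = {}  # vm_name -> (cpus, ram_gb)

        def power_on(self, vm_name: str, cpus: int, ram_gb: int) -> bool:
            # Grant the request only if enough free resources remain.
            if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
                return False  # the VM fails to start
            self.free_cpus -= cpus  # reserved exclusively for this VM
            self.free_ram_gb -= ram_gb
            self.reservations[vm_name] = (cpus, ram_gb)
            return True

    host = ToyHypervisor(total_cpus=16, total_ram_gb=64)
    print(host.power_on("web-vm", cpus=4, ram_gb=8))    # True: resources reserved
    print(host.power_on("huge-vm", cpus=32, ram_gb=8))  # False: not enough CPUs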

1.70 Virtualization Basics

Virtualization Basics: Before virtualization we only had the choice of using physical machines. That limited us to a single operating system per machine; we could multi-boot, but we could use only one operating system at a time. Software and hardware were tightly coupled, which means the hardware and software were dependent on each other and shared the entire resources. Due to this, running multiple applications on the same computer could cause problems and conflicts. The server's resources were also under-utilized, as the industry rule is not to use a server beyond 15% of its capacity. The infrastructure was also inflexible and costly. In the next slide, we will look at server virtualization architectures.

1.72 Server Virtualization Architectures

Server Virtualization Architectures: There are basically three different architectures we can employ to make use of virtualization: 1) The hypervisor 2) Virtualization as the operating system 3) Virtualization with a host operating system. Next we will learn about the hypervisor.

1.73 The Hypervisor

The Hypervisor: The hypervisor is also known as the Virtual Machine Monitor (VMM). This layer is the foundation of virtualization; without the hypervisor layer installed, virtualization just isn't possible. This layer is installed directly on the hardware, effectively replacing the operating system. Thus an abstraction layer is formed that manages hardware resources and their allocation to the virtual machines. The hypervisor layer then intercepts system calls made by the VMs, enabling itself to manage memory and processor addressing. This it does in tandem with the guest operating system. It is the hypervisor layer that isolates the hardware and the VMs from one another, preventing memory and buffer overruns. This also allows for multi-environment protection. Let us move on to the next slide now.

1.74 Questions

Questions: Let us go through some questions based on what we have studied so far: Resource pooling of multiple computers to process one task is an example of _________ computing. ANS) Grid Computing. Metered service is a concept of ________ computing. ANS) Utility Computing. Creation of a virtual rather than real version of something is known as __________? ANS) Virtualization. What is the Hypervisor? ANS) An abstraction layer that allows virtualization; also known as the VMM (Virtual Machine Monitor).

1.76 Overview of Cloud Computing Architectures

Overview of Cloud Computing Architectures: The architecture built on top of the hypervisor determines which service is provided. The types of architectures are Service Oriented, Tiered, Multipurpose and Datacenter. The services are Communication (CaaS), Software (SaaS), Platform (PaaS), Infrastructure (IaaS) and Monitoring (MaaS). In the next slide, we will look into a diagram describing cloud computing architecture.

1.77 Cloud Computing Architectures

Cloud Computing Architectures: The various services offered by the cloud depend on different architectures, which in turn depend on the hypervisor (the layer that controls granting hardware to the virtual machines and host OS) and the type of service. The various architectures are: 1) Service Oriented Architecture (SOA) 2) Tiered Architecture 3) Multipurpose Architecture 4) Datacenter Architecture. 1) SOA: In software engineering, a Service-Oriented Architecture (SOA) is a set of principles and methodologies for designing and developing software in the form of interoperable services. These services are well-defined business functionalities that are built as software components that can be reused for different purposes. SOA design principles are used during the phases of systems development and integration. SOA generally provides a way for consumers of services, such as web-based applications, to be aware of available SOA-based services. For example, several disparate departments within a company may develop and deploy SOA services in different implementation languages; their respective clients will benefit from a well-defined interface to access them. XML is often used for interfacing with SOA services, though this is not required; JSON is also becoming increasingly common. 2) Tiered Architecture: Multi-tier architecture is a client-server architecture in which the presentation, the application processing, and the data management are logically separate processes. An application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture. 3) Multipurpose Architecture: This architecture is meant especially for language runtime systems, and is thus used in the development and deployment of web applications (SaaS). 4) Datacenter Architecture: The data center is an essential corporate asset that connects all servers, applications and storage services. Businesses rely on their data centers to support critical business operations and drive greater efficiency and value. As such, the data center is a key component that needs to be planned and managed carefully to meet the growing performance demands of users and applications. Now, we will understand Multipurpose Architecture, in the next slide.

1.78 Multipurpose Architecture

Multipurpose Architecture: This architecture is meant especially for language runtime systems, and is thus used in the development and deployment of web applications (SaaS). The key characteristics of multipurpose architecture are that it supports virtualization and is multi-tiered, which allows for scalability, availability, reliability etc. This gives it interoperable layers, making it more robust and maintainable. It is built on open standards as well. Let us move on to the next slide, explaining virtualization with a host operating system.

1.81 Virtualization with a Host Operating System

Virtualization with a Host Operating System: In a type 2 hypervisor, a host operating system is used as the first tier of access control. For example, a Linux or Windows-based operating system may first be installed on the hardware. This operating system can be fully functional as a stand-alone computer system, and also host the second tier. In the second tier of access control, some type of hypervisor is installed on top of the Linux or Windows operating system. The hypervisor is essentially a guest application to the host operating system. It has been a common practice to install the hypervisor alongside other application programs that are managed by the base operating system. The weakness in this design is that the hypervisor will compete for access and resources with the application programs running on the base operating system. The third tier of access control is the guest operating systems installed "on top" of the hypervisor function. The applications running on those guest operating systems negotiate with their own guest operating system, which in turn negotiates with the hypervisor, which negotiates with the host operating system for access to system resources. This additional level of complexity may be confusing, not only to the programs that are executing on the hardware, but also to the administrator. E.g., Windows Server 2008 also comes in a regular GUI version, which is a full server operating system and can install Hyper-V (the trademarked name given to Microsoft's version of the hypervisor software) as a role. This manager is a full GUI version and comparatively very easy to manage. SCVMM will work on this version too. Microsoft, however, advises not to install any other server role on the computer that has Hyper-V installed on it, especially the Active Directory role. Let us now look into a pictorial representation in the next slide, describing Tiered Architecture.

1.82 Tiered Architecture

Tiered Architecture: Tiered architecture is typically found in a network-based application environment in the following locations: • Load-balancing tier: Supporting the client requests • Web front end tier: Responding to client requests • Business logic tier: Multiple backend application servers • Database tier: Load balanced databases. Now let us look at each tier in detail. The Load Balancer: The first tier is the load balancer. When the user accesses Web-based applications, access is often controlled by a load balancer. The purpose of a load balancer is to make sure that the workload requests are properly distributed among the backend servers. There are many different techniques for load balancing (a round-robin sketch follows this slide). Load balancing in a network-based environment is similar to that of a fast food environment. In a fast food environment, load balancing is achieved by having multiple order takers at the counter. The counter staff submit an order electronically; back in the kitchen a group of people quickly get to work on it; once the order is ready, it is passed back to the front, and to the customer. The length of the queue of people waiting to order during rush hour is controlled by the number of available order takers and the number of kitchen workers. Relating this example back to the computer user, a request to a Web-based application (the counter staff) will typically be supported by multiple backend servers (the kitchen workers). Web Tier: The second tier is the Web tier, which contains the Web servers. The function of the Web server is to receive information or application requests from the user, interpret and process these requests using software such as IIS or Apache, and provide the user with a formatted response. Business Logic: The third tier is the business logic tier, which contains the application decision-making support for the user request. In smaller environments, the business logic tier may actually be implemented in the single Web server facing users; there would not be the need for a separate server to decode a client's request due to the smaller client environment. In a large-scale environment the business logic tier is implemented in servers other than the Web servers. This allows the development of scaled capabilities. In a larger enterprise environment, 200 Web servers may be backed by 25 business logic servers providing application support for the user requests. The failure of a single Web server or a single business logic server will not impact the overall success or failure of the cloud site. This gives the developer of the site a supportable environment. Database Tier: The final tier is the database tier. In the database tier, database engines provide data access support for the applications in the business logic process. By separating the database from the business logic, the designers can scale database access capabilities based on client and business needs. In each of the tiers, additional resources can be added or taken away as needed, providing the elasticity or flexibility that the cloud environment relies upon. Next, we will learn about Service Oriented Architectures.
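
The fast-food analogy maps naturally onto round-robin dispatch, one of the simplest load-balancing techniques: each incoming request is handed to the next backend in turn. The backend names below are invented.

    import itertools

    backends = ["web-01", "web-02", "web-03"]  # hypothetical backend pool
    next_backend = itertools.cycle(backends)   # endless round-robin iterator

    def route(request_id: int) -> str:
        server = next(next_backend)
        return f"request {request_id} -> {server}"

    for i in range(5):
        print(route(i))  # spreads evenly: web-01, web-02, web-03, web-01, web-02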

1.84 Service Oriented Architectures

Service Oriented Architectures: It would be easy to conclude that the move to service orientation really commenced with Web services. However, Web services were merely a step along a much longer road. The notion of a service is an integral part of component thinking, and it is clear that distributed architectures were early attempts to implement service-oriented architecture. What's important to recognize is that Web services are part of the wider picture that is SOA. The Web service is the programmatic interface to a capability that is in conformance with the WS-* protocols. So Web services provide us with certain architectural characteristics and benefits, specifically platform independence, loose coupling, self-description, and discovery, and they can enable a formal separation between the provider and consumer because of the formality of the interface. Service is the important concept. Web services are the set of protocols by which services can be published, discovered and used in a technology-neutral, standard form. In fact Web services are not a mandatory component of a SOA, although increasingly they will become so. SOA is potentially much wider in its scope than simply defining service implementation, as it addresses the quality of the service from the perspective of both the provider and the consumer. You can draw a parallel with CBD and component technologies. COM and UML component packaging address components from the technology perspective, but CBD, or indeed Component-Based Software Engineering (CBSE), is the discipline by which you ensure you are building components that are aligned with the business. In the same way, Web services are purely the implementation; SOA is the approach, not just the service equivalent of a UML component packaging diagram. So, to sum it all up, SOA is nothing but an architectural style that supports service orientation. What is service orientation? It is a way of thinking in terms of services, service-based development and the outcomes of services. Now, what is a service? It is a logical representation of a repeatable business activity that has a specified outcome (e.g., check customer credit; provide weather data; consolidate drilling reports), is self-contained, may be composed of other services, and is a "black box" to consumers of the service. This segregation will help us to understand SOA more clearly. In the next slide we will talk about Cloud and SOA.
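
To make "a self-contained, repeatable business activity behind a black-box interface" concrete, here is a hedged sketch of a credit-check service exposed through a well-defined JSON message contract. The field names are invented for illustration.

    import json

    def check_customer_credit(message: str) -> str:
        """A 'service': consumers see only the message contract, never the internals."""
        request = json.loads(message)  # e.g. {"customer_id": "C42"}
        # The decision logic is a black box to the consumer; trivial stand-in here.
        approved = request["customer_id"].startswith("C")
        return json.dumps({"customer_id": request["customer_id"],
                           "credit_approved": approved})

    # A consumer depends only on the message format, not on the implementation.
    print(check_customer_credit(json.dumps({"customer_id": "C42"})))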

1.85 Cloud and SOA

Cloud and SOA: When Paul Krill asked whether we can build a datacenter infrastructure on SOA principles, Gerry Cuomo answered: "Yes, and that's the cloud, so it's a service-oriented infrastructure... It's taking that architectural principle of SOA and applying it to an infrastructure." Cloud is heavily dependent on SOA, as cloud is basically a service delivery model. SOA is nothing but a collection of services that communicate with each other, and connecting these services in most cases involves Web services using XML. So now we can easily say: "No Cloud without SOA!" Let us proceed to the next slide and learn about Service Oriented Architecture criteria.

1.86 Service Oriented Architecture Criteria

Service Oriented Architecture Criteria: In order to implement SOA, the architecture must meet a few requirements. Firstly, the services involved must be able to communicate with each other. Secondly, the interface to these services must be well understood and easy to navigate. Finally, it should be a message-oriented communication process. Now, we will go through a few questions in the next slide.

1.88 1.5 Benefits and Limitations of Cloud Computing

1.5 Benefits and Limitations of Cloud Computing: So far we have studied what cloud computing is. Now let us look at the benefits and limitations that the cloud has to offer. First, we will go through an overview of the drivers and limitations.

1.89 Overview of the Drivers and Limitations

Overview of Drivers and Limitations: As with every technology, there are a lot of benefits and limitations attached. Let us look at the benefits first. First and foremost, it reduces our costs considerably. It saves a lot of investment in infrastructure and maintenance overheads; all you need to pay is the rent for usage of these services. This technology offers flexible storage that can be accessed from any part of the world, provided you have internet access. This gives you high availability and enables you to travel freely without carrying excess gadgets. It is very flexible: the model allows you to upgrade or downgrade infrastructure amenities as per your requirements, and also offers you flexible payment options where you only pay for usage. You can also keep multiple instances in a hibernated state which are pressed into service when the load requirement increases, and pay for these instances only when they are used. This is an eco-friendly model, as it reduces your carbon footprint since you will be taking the service from the provider; it will also reduce your HVAC bills prominently. Being on a cloud offers you mobility since, as discussed earlier, you can access it from anywhere. Let us now look at some of the limitations. Security becomes a huge concern, especially if the data you handle is sensitive. Even in normal circumstances, it depends on how comfortable you are with your data being in somebody else's hands where you have no control over it. Here you need to verify the security measures maintained and offered by the provider and make a detailed report of requirements and provisions in the SLA. Data Location: Since being on a cloud centralizes your data, it also makes you unaware of its exact location. E.g., Amazon has 8 availability zones, and data can be stored in any one of these zones. Obviously the provider will keep the data where it is most convenient for him to handle, but this might cause quite a few issues; e.g., the data that you handle is sensitive and thus illegal in the country where the provider is hosting it. Compliance: Since the data is located elsewhere, it could be located in another country as well. Your data is then as safe as the implementation of the laws of that particular land. It also creates issues of immediate ownership of that data. Are the government laws sufficient? What protection do they offer in case of a crisis or a natural disaster? Does the government have the right to analyse the data without your knowledge? Internet Dependency: Cloud makes you totally dependent on the internet. Without the internet, the cloud model will fail; it will not function. So if by chance there is an internet outage, due to some issue with the internet lines joining the gateways of various countries, you will find that you cannot access the cloud and hence cannot work. Service Levels: In the service level agreements with the cloud provider, keep everything detailed and try to mention as many points as possible, even a few scenarios, to keep yourself duly protected; and make a specific mention, with details, of penalties in case of non-performance. Migration: Keep an open eye when migrating to the cloud and make sure that every application and piece of software runs on the virtualized environment. Also verify, before migrating, that any future software you are planning to buy is compatible as well, as incompatibility could cause serious issues in the functioning of your company. In the next slide we will learn about the main benefits of Cloud Computing.

1.90 Main benefits of Cloud computing

Main Benefits of Cloud Computing: Let's look at some of the main benefits of cloud computing. Cloud is basically a cost-cutting model. The user moving onto a cloud, whether public or private, first of all saves a huge sum of money in initial infrastructure investment. The organization moving onto a public cloud will have to make a bare minimum investment in hardware; the minimal investment will be in things like routers, switches, thin clients etc. Also, the Cloud is a pay-per-use model, so the user will only be paying rent for the amount of time s/he is using the services on the cloud. The maintenance of hardware and software is the responsibility of the service provider; hence any updates, upgrades, security patches and backups will be taken care of by the service provider, allowing the user to concentrate on his/her core business. The Cloud model is hugely flexible and scalable, thus offering elasticity. Since the Cloud depends on the internet, the user can access his services from anywhere and at any time. Services can be availed of and gotten rid of as per requirements, making the model highly scalable. In short, the user can log in from anywhere, anytime, and scale his services up or down any time s/he requires. Cloud works on resource pooling and multi-tenancy, thus allowing multiple users to log in to the same instance at the same time. This makes the cloud an ideal model, as the benefits from it are pretty much phenomenal. Let us now go through the Cloud Computing limitations.

1.91 Cloud Computing Limitations

Cloud Computing Limitations: After such great benefits, let us look at the limitations the Cloud poses to its consumers. It is solely dependent on the internet to function and to be accessible. If there is no internet, then the Cloud fails. Hence if your internet connection goes down, your entire infrastructure and services on the cloud remain inaccessible and your business comes to a standstill. Another huge worry amongst users is that of security. When going onto a cloud, you trust the service vendor entirely with what he says he offers in security. However, you have no control over the security measures and their implementation at the service provider's end. Privacy is also a huge worry. When you talk about a public cloud, you are essentially going international. It may so happen that your data, which is stored on the Cloud, is kept in a datacenter located in a foreign country. In this scenario your data is governed by the laws of that nation, and those laws could be detrimental depending on the data you store. Another issue could be that of inter-cloud migration. Once you go with a particular vendor, it might not be possible to migrate to another vendor due to incompatibility. This could be a problem, as you are stuck with the same vendor whether you like it or not. Service Level Agreements could be a dicey issue. Not only could they be extensive, but they could also be incompatible with your business requirements. If they support your business, then that is a good thing; otherwise you will have to reconsider your options. Next, we will discuss Implementing and Managing Cloud Computing.

1.93 Overview of Implementing and Managing Cloud Computing

Overview of Implementing and Managing Cloud Computing: This figure gives you a brief insight into implementing a cloud and how it functions on different levels. We will see this in the coming slides.

1.94 2.1 Building a Local Cloud Environment

2.1 Building a Local Cloud Environment: Let's have a look at building a Local Cloud Environment.

1.95 Overview of Local Cloud Environment

Overview of Local Cloud Environment: This shows us how a cloud can be accessed and has both internet and LAN activity. Both carry certain risks, and measures are taken to tide over those risks. Through the local network we can access messaging, software and hardware within the local network, which gives us access to various things like storage, processing and communications. Now we will discuss why one would own a local cloud environment.

1.97 Main Components and their interconnection

Main components and their interconnection: The cloud uses many different independent components in its implementation. The standards applied in a cloud computing environment require the different components to interact successfully at all times, or the cloud will stop functioning. Hardware Components: Cloud involves virtualization. This imposes specific hardware requirements as far as processors are concerned: all processors that host virtual environments need to be VT (virtualization technology) enabled. Intel brands this as Intel VT and AMD as AMD-V, and all new processors manufactured by these companies are now VT enabled (a quick check for VT support is sketched after this slide). The rest of the components can be the regular components used in ordinary servers. There are quite a few vendors supplying hardware such as routers, switches, rack-mounted servers and other peripherals. It is important that the hardware we buy adheres to industry standards, so as to prevent any future incompatibilities; all vendors these days adhere to industry standards. Software Components: Software is also a key to successful cloud computing. Whether the software comes in the form of operating systems, database engines, or server software such as that used for Web services, the software used comes from many different vendor sources. The application software used can be purchased from a commercial source, acquired from an open source organization, or developed by internal application developers. Some utility software is also necessary for managing local storage and disk resources; this software is used for backup and restoration capabilities. With such a large number of sources for software and hardware, the use of standards-based messaging allows the interaction between the components. This integration of component capabilities is the key to having cloud success. In the next slide we are going to learn about the main hardware components.
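
On a Linux host, one quick way to confirm that a processor is VT-enabled is to look for the Intel (vmx) or AMD (svm) CPU flags in /proc/cpuinfo; a minimal sketch:

    # Check /proc/cpuinfo for hardware virtualization flags (Linux only).
    def host_supports_virtualization() -> bool:
        with open("/proc/cpuinfo") as f:
            flags = f.read().split()
        return "vmx" in flags or "svm" in flags  # Intel VT-x / AMD-V

    print(host_supports_virtualization())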

1.98 Main hardware components

Main Hardware Components: Let's look at a few examples of what might constitute your main hardware components. Essentially you will have a Local Area Network (LAN), for which you will need switches, routers etc. for proper functionality. Blade server arrays for running database servers, application servers, web servers, etc. User workstations depending on the requirement: thin clients, desktops, mobile devices etc. Storage Area Network (SAN) and Network Attached Storage (NAS) for storage solutions. Load balancers for balancing the workload. Let us now talk about the main software components, in the next slide.

1.99 Main software components

Main Software Components: You will need virtualization software like MS Hyper-V, VMware ESXi etc. for creating the private cloud. Cloud based application software for the services to be run, e.g., CRM, accounting software, ERP. Database software to handle the databases and access. Middleware to handle the communication between different applications and hardware. And finally, operating systems to run the PCs. Now, we will understand Architectural Considerations (General) in the next slide.

1.101 Architectural considerations: Connection requirements

Architectural Considerations: Connection Requirements: The Cloud's main dependency is on the Internet/Intranet. Since virtual machines will be accessed over the wire, transfer speed gains importance. It is imperative to have a high-speed, high-capacity line so there is no latency or lag. The infrastructure should be available and accessible from anywhere, so VPN connections must be set up, along with the appropriate hardware and software. This will help in security and availability. Let us now learn about Virtual Private Network access, in the next slide.

1.102 Virtual Private Network access

Virtual Private Network Access: A Virtual Private Network (VPN) is a private network that interconnects remote networks, often geographically separate, through the internet. VPNs use tunnelling protocols and encryption that provide security for VPN users. E.g., a VPN can be used to connect a branch office of an organization to its head office; an employee can use a VPN to access his head office from any remote location, thus enabling him to access his work and important data. Tunnel Mode: In tunnel mode, the entire IP packet (data and IP header) is encrypted and/or authenticated. It is then encapsulated into a new IP packet with a new IP header. Tunnel mode is used to create Virtual Private Networks for network-to-network communications (e.g. between routers to link sites), host-to-network communications (e.g. remote user access), and host-to-host communications (e.g. private chat). This means that the clients are not aware of the tunnel and all traffic transferred is encrypted. Transport Mode: In transport mode, only the payload (the data you transfer) of the IP packet is encrypted and/or authenticated. The routing is intact, since the IP header is neither modified nor encrypted; however, when the authentication header is used, the IP addresses cannot be translated, as this will invalidate the hash value. The transport and application layers are always secured by hash, so they cannot be modified in any way (for example by translating the port numbers). Transport mode is used for host-to-host communications. In this mode, VPN client software is required at both ends to establish the tunnel. Here the IP addresses are not encrypted, which poses a security threat: if anyone gets hold of these IP addresses, he can attack the computers directly to gain control of the VPN. Both these modes pose security risks and should be used carefully. The key benefits of using VPNs are that they offer secure remote connectivity; they are cheaper than leased, private or rented lines; and they offer mobility to employees, as they can connect to the infrastructure securely from anywhere in the world. While considering VPN connections, some architectural considerations should also be planned, such as IP-tunnelling techniques, the IP protocol, security measures to be used, encryption channels and authentication methods (the AAA concept). In the next slide we will go through the risks of connecting a local Cloud network to the Public Internet.
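
A toy model of the two modes may help fix the idea. The "encryption" below is a stand-in placeholder, not real IPsec cryptography, and the packet structure is simplified for illustration.

    from dataclasses import dataclass

    def toy_encrypt(data: bytes) -> bytes:
        return bytes(b ^ 0x55 for b in data)  # XOR stand-in, NOT real crypto

    @dataclass
    class Packet:
        ip_header: bytes
        payload: bytes

    def tunnel_mode(pkt: Packet, gateway_header: bytes) -> Packet:
        # The entire original packet (header + payload) is encrypted, then
        # wrapped in a brand-new IP header belonging to the VPN gateway.
        inner = toy_encrypt(pkt.ip_header + pkt.payload)
        return Packet(ip_header=gateway_header, payload=inner)

    def transport_mode(pkt: Packet) -> Packet:
        # The original IP header stays readable; only the payload is encrypted.
        return Packet(ip_header=pkt.ip_header, payload=toy_encrypt(pkt.payload))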

1.103 Risks of connecting a local Cloud Network to the Public Internet

Risks of connecting a local Cloud network to the Public Internet: Keeping all your data on the public cloud is very risky, especially now that the Cloud is a high value target for hackers. The key issue here is to identify the responsibilities of both the service provider and the customer. Before migrating to a cloud it is imperative to check with the provider the security measures he provides and how privacy is maintained. Only if one is happy with the solutions provided should a move to the cloud be made. However, the duties are twofold; a chunk of the responsibility lies with the customer as well. Both the service provider and the customer have to work in tandem to keep everything secure. While the provider's responsibilities lie with the security and privacy of the data, the customer will also have to keep in mind compliance with the legislation and regulations of various countries, since the Cloud is international in nature. Next, we will learn about data protection and partitioning.

1.105 2.2 The principles of managing Cloud services

The Principles of Managing Cloud Services: Let us now look at the principles involved in managing cloud services.

1.106 IT Service Management Principles in a Cloud Environment

IT Service Management Principles in a Cloud Environment: Outsourcing to the Cloud means that the provider needs to be in control of the complete supply chain. Key areas of control are: IT governance, where the customer needs to remain in control of his/her business processes; and business-IT alignment, where the customer needs to make sure that the Cloud IT processes support his/her business in the short and long term. In the next slide we will understand IT governance.

1.107 IT Governance

IT Governance: Proper governance is important for operational reasons. It ensures the proper and uninterrupted functioning of the infrastructure. For this to happen, the following elements need to be in place: Good Service Level Management: Maintaining pre-defined service levels ensures proper day-to-day functioning. Analysis should be carried out on a regular basis and fine tuning done whenever required, and the correct measures and policy controls should be mentioned in the SLA. Proper audit standards and internal audit mechanisms: These should be in place for compliance and management issues. Some of the standard audit and compliance models are, for the provider: ISO/IEC 20000:2011 (Service Management) and ISO/IEC 27001/27002 (Information Security); and for the customer: COBIT® 4.1 or ISO/IEC 38500:2008 (corporate governance of IT). Let us now talk about managing service levels in a Cloud environment.

1.109 ISO/IEC 20000:2011 Processes

ISO/IEC 20000:2011 Processes: This slide gives us the details of the processes defined in ISO/IEC 20000:2011, classified by process group and process. All the employees of the organization should be aware of these policies and should strictly adhere to them. Let us now look into a few questions to ask the Cloud provider.

1.110 Questions to ask the Cloud provider

Questions to ask the Cloud Provider: We have already studied the benefits and limitations offered by the cloud. The onus is on us to stay protected from the limitations of the cloud. Before moving onto a cloud, and before signing the SLA and other formalities, there are a few things that should be looked at, investigated, and checked with the service provider. Some of the basic questions to ask the cloud service provider are: How are audits performed? Where are the servers located, and which legislation applies to the data? What are the provisions when a service changes or ends (service life cycle and end of life)? What are the provisions if we want to migrate to another provider (contract life cycle and end of life)? Next, we will be looking at a few questions.

1.111 Questions

Questions: What are the main components of a local cloud environment? ANS) Hardware components and software components. What are the elements that need to be in place in IT Governance? ANS) Good service level management and proper audit standards and internal audit mechanisms. What are the architectural considerations? ANS) Standard building blocks and security and service continuity.

1.114 Accessing Web applications through a Web Browser

Accessing Web Applications through a Web Browser: The basic requirement for accessing a web application through a web browser is any web-enabled device that supports HTML. In today's world almost every device supports HTML; earlier we had WAP-enabled devices, but these are now obsolete. So, in today's world a PC, tablet, smartphone, thin client etc. would be required. These devices should have an internet browser installed. An internet connection is also required, so an internet service provider and an IP address are important. The web application itself needs to be cloud based. If all the above requirements are met, then you will be successful in accessing the web application through a web browser. Please understand that the cloud will not work without the internet. In the next slide, we will cover Cloud Web Access Architecture.

1.115 Cloud Web Access Architecture

Cloud Web Access Architecture: When we talk about architecture, we need to keep in mind the very basic requirements. We need standard protocols for each ISO-OSI layer; we need web-enabled devices like PCs, smartphones, tablets, thin clients etc.; and, more importantly, we need an internet connection to access the web. Now, we will learn about the Internet in the next slide.

1.118 Examples of standard protocols

Examples of Standard Protocols: Let's look at a few standard protocols that we use regularly. HTTP: Hyper Text Transfer Protocol. This is used by us daily to access and surf the internet. VT: In open systems, a virtual terminal (VT) is an application service that allows host terminals on a multi-user network to interact with other hosts regardless of terminal type and characteristics, allows remote log-on by local area network managers for the purpose of management, allows users to access information from another host processor for transaction processing, and serves as a backup facility. RTSE: The Reliable Transfer Service Element supports applications that wish to transfer large amounts of data and do not wish to restart the transfer from scratch if the connection fails before the transfer is complete. Other protocols are: API sockets, TCP and IP, SSL, Ethernet (IEEE 802.3), 10BASE-T. A small sketch after this slide shows several of these layers working together. Let us now move on to the next slide, which describes the use of a thin client.
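
Several of these layers can be seen working together in a minimal sketch: TCP at the transport layer, SSL/TLS on top of it, and an HTTP request at the application layer. The host name is just an example.

    import socket
    import ssl

    host = "example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as tcp:           # TCP over IP
        with ctx.wrap_socket(tcp, server_hostname=host) as tls:  # SSL/TLS layer
            request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            tls.sendall(request.encode("ascii"))                 # HTTP layer
            response = b""
            while chunk := tls.recv(4096):
                response += chunk
    print(response.split(b"\r\n", 1)[0].decode())  # e.g. HTTP/1.1 200 OK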

1.119 The use of a thin Client

The use of a Thin Client: A thin client (sometimes also called a lean or slim client) is a computer or a computer program which depends heavily on some other computer (its server) to fulfil its traditional computational roles. This stands in contrast to the traditional fat client, a computer designed to take on these roles by itself. The exact roles assumed by the server may vary, from providing data persistence to actual information processing on the client's behalf. Thin clients occur as components of a broader computer infrastructure, where many clients share their computations with the same server. As such, thin client infrastructures can be viewed as providing some computing service via several user interfaces. This is desirable in contexts where individual fat clients have much more functionality or power than the infrastructure either requires or uses. This can be contrasted, for example, with grid computing. Thin-client computing is also a way of easily maintaining computational services at a reduced total cost of ownership. The most common type of modern thin client is a low-end computer terminal which concentrates solely on providing a graphical user interface to the end user. The remaining functionality, in particular the operating system, is provided by the server. So, to sum it all up, the thin client is a simple network-enabled computer with no moving parts like a hard disk or a DVD drive. It boots from the network. It is comparatively cheaper than a PC as far as pricing and running costs are concerned. It is better for the environment, as it produces less heat and uses less electricity. As it boots from the network, the security aspect is heightened, since there is no local data and there is controlled access. Let us now look at categories of web applications.

1.120 Categories of Web applications for everyone

Categories of Web Applications (for everyone): We have been using web applications for decades now. Let us list a few examples: Gmail, Yahoo Mail, Hotmail, Facebook, Twitter, Salesforce, Dropbox, Skype, and Google Docs, just to name a few. These are now such a basic need in our lives that we are trying to make our mobile devices ever more functional so that we can be connected to these services 24x7. These services offer us a lot of freedom, but with them also come some issues. As always, the first and foremost is security. We have a lot of confidential and work-related information stored on these services, and we need to ensure that it is kept secure. For this to happen we need to work on the development of security measures. Not only do we need to check the security measures adopted by the provider, but we also need to ensure security from our end. Remember, most hacks occur from the user's end, not the provider's end. For example, most of us now have smartphones which keep us connected all the time. How many of us have good antivirus software and firewalls installed on these devices to ensure security? How secure are our internet and 3G connections? Security is a necessity that should not be taken lightly. Interoperability: Since we use multiple services, can these services be easily integrated for ease of use? For example, if I am using an online storage service to back up my data, I need to ensure that this data can be transferred to another utility just as easily, to ensure full functionality. Bandwidth: To access all these services we need a good internet connection which offers good and consistent bandwidth. If bandwidth is not available, the purpose of the services is defeated. Latency: Latency is most commonly measured as the round-trip time a data packet takes to travel from the source to the destination and back to the source; the lower the latency, the faster the data delivery (a small measurement sketch follows this slide). On DSL or cable Internet connections, latencies of less than 100 milliseconds (ms) are typical and less than 25 ms is desired. Satellite Internet connections, on the other hand, average 500 ms or higher latency. Design: A web application is only as useful as it is user friendly and easily operable. The design should be such that a normal web user can easily adapt to the design and format of the application. A few examples of web applications: Gmail, Yahoo Mail, Hotmail, Twitter, Zimbra, Salesforce, Dropbox, Skype etc. Let us look at the use of mobile devices in accessing the cloud, in the next slide.
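
Round-trip latency can be estimated by timing a small request. This is a rough sketch: it includes connection and TLS setup, so it overstates pure network round-trip time.

    import time
    import urllib.request

    def round_trip_ms(url: str) -> float:
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read(1)  # one small round trip
        return (time.perf_counter() - start) * 1000

    # Under ~100 ms is typical on DSL/cable; satellite links often exceed 500 ms.
    print(f"{round_trip_ms('https://example.com'):.1f} ms")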

1.121 Overview of the use of Mobile Devices in accessing the Cloud

Overview of the use of Mobile Devices in accessing the Cloud: This figure shows us the usage and operability of a smartphone. Smartphones offer the user the freedom to access data from anywhere. They are mobile devices meant to be carried around with ease. The operating system depends on the type of smartphone that you have. There are four major OSes in the market right now competing with each other: 1) iOS for Apple iPhones; 2) Android OS, which is used by many smartphone manufacturers; 3) Windows OS, currently being marketed by Nokia; 4) BlackBerry OS for BlackBerry business phones. The flipside is that none of these OSes are interoperable: a particular application running on iOS will not work on Android, and vice versa, and so on. But by using an application built in collaboration with your cloud service provider, you can access multiple services. The most common features still used on smartphones are text messaging and multimedia messaging, though multimedia messaging has never caught on in India. In the next slide we will discuss Mobile Web Enabled Devices.

1.123 Mobile web enabled devices

Mobile Web Enabled Devices: As already discussed, there are four major operating systems in use: 1) iOS for Apple iPhones; 2) Android OS, which is used by many smartphone manufacturers; 3) Windows OS, currently being marketed by Nokia; 4) BlackBerry OS for BlackBerry business phones. iOS is a proprietary operating system which is only functional on Apple iPhones. These are very popular smartphones but are not entirely compatible with the work environments we use, especially with Microsoft and Linux products. The main issue here is that of interoperability: none of these operating systems interact with each other, hence not all applications can work on all the systems. Also, cell phone selection is a matter of personal choice. Unless the enterprise or small business provides cell phones for employee use, there will continue to be challenges between the different software and hardware platforms. We have recently seen a lot of development on the smartphone front, and smartphone makers have now entered the tablet market; a tablet is essentially a smartphone with better hardware but the same operating system. With the launch of Samsung's latest product (Samsung Note 800) we are bridging the gap between laptops and tablets; the tablet has a quad core processor with 2 GB RAM. Now, let us go through a few questions.

1.124 Questions

Questions: Let's look at a few questions to gauge whether we have understood everything so far: Thin clients are thin because? ANS) They have no moving parts and have bare minimum functionality. Gmail, Hotmail, Yahoo Mail are examples of? ANS) SaaS. HTTP, SSL, TLS etc. are examples of? ANS) Standard protocols. Let us move on to the next slide.

1.126 3.2 How Cloud Computing can support business processes

How Cloud Computing can support business processes: Let's see how Cloud Computing has an impact and can support business processes.

1.127 Impact of Cloud Computing on primary business processes

Impact of Cloud Computing on primary business processes: When we talk about businesses, we are not only talking about the IT industry but other businesses as well. The Cloud and its services have been developed for all types of business. In a normal business the primary processes would be: Sales, Manufacturing, Purchasing, and Advertising and Marketing. Let's see what the contribution of a hybrid or public cloud could look like. In purchasing or manufacturing, businesses could collaborate with their suppliers and exchange or share platforms, making accessibility and overall compatibility possible. In sales, advertising and marketing, they could interact with potential customers, conduct surveys, and get feedback using social media. Businesses can directly communicate with their customers, making for good customer and after-sales service, and can use CRM software online for customer registration and accounts. In the next slide we will learn about the role of standard applications in collaboration.

1.128 Role of standard applications in collaboration

Role of standard applications in collaboration: Let us look at examples of standard applications that can be used for business purposes too: social media like LinkedIn, Facebook, Twitter, etc.; email/webmail applications like Gmail, Yahoo Mail, Hotmail, etc.; Skype for videoconferencing and calls; online storage and file-sharing applications like Dropbox, SkyDrive and Google Drive; and Salesforce for sales and CRM.

1.131 Impact on Relationship Vendor Customer

Impact on Relationship - Vendor and Customer: Once you start using the cloud, the relationship between the vendor and the customer changes. The vendor will now be running the entire infrastructure of the customer and his whole supply chain; the two will be working hand in hand, allowing for more innovation and excellence. The requirements in the SLAs will be very clear, leaving both parties satisfied. Audit trails will exist, as the vendor will have to provide them to the customer, which translates into greater transparency. Compliance will be adhered to, as the vendor will help with compliance with legislation, regulation and international audit standards. Next, we will learn about the benefits and risks of providing cloud-based services.

1.132 Benefits and Risks of providing Cloud based Services

Benefits and Risks of providing Cloud Based Services: Some of the benefits of providing cloud-based services are: Old datacenters get a new lease of life; they were costly because they did not employ virtualization, but now they can, and can use their resources in a more sustainable and manageable manner. The cloud also offers multi-tenancy, which makes resource management more efficient. There is a huge cost reduction in maintenance, overheads, etc. It allows for quick development and deployment of applications thanks to services like PaaS and SaaS. However, there are quite a few risks as well: Compliance can be an issue, as a cloud is international in nature, which brings into the picture various privacy acts, different standards to maintain, and a great many different laws and regulations to take care of. Performance issues can arise with availability, capacity, flexibility and scalability; failure on any one of these factors could spell doom for the customers. Security is also a factor everyone is worried about right now; the vendor should offer multiple security solutions catering to the various needs of its customers. Privacy is a major concern as well: almost all countries have their own privacy acts that govern and describe what constitutes Personal Identifiable Information (PII) and how it should be handled. The USA has GLBA and HIPAA, where GLBA is a financial act and HIPAA is a health insurance act; both deal with the management and handling of personal financial data and health-related data. When talking about a cloud we are essentially talking globally, so we have to consider the international ramifications and comply with the law of the land where our data resides.

1.133 4 Security and Compliance

Security & Compliance: Let us look at the security and compliance risks that the cloud poses.

1.135 Is the Cloud safe

Is the Cloud Safe: Safety and security have always been a substantial issue with the cloud. Since it is new and in its infancy, we keep finding new issues every now and then. It is also a high-value target for hackers, as it poses a new challenge to them, and new exploits and hacks keep cropping up. Though this is a security issue, the attackers are also inadvertently helping to make the cloud more secure. In the past, and recently, quite a few vulnerabilities were found in Amazon Web Services, VMware and Microsoft Hyper-V. These issues have now been patched, but new issues could still be found. Security experts are making a concerted effort to keep up with the challenges and make the cloud as secure as possible. On October 6, 2011, Tim Greene of Network World reported that researchers had found massive security flaws in cloud architectures; vulnerabilities in Amazon Web Services were found and fixed, and other providers were likely to be susceptible. Robert Lemos of www.darkreading.com reported on May 02, 2011 that recent breaches were spurring new thinking on cloud security; he said that cloud providers might be attractive targets for attackers, but that liability cannot be outsourced. Now, we will discuss security risks and mitigating measures.

1.136 4.1 Security risks and mitigating measures

4.1 Security Risks and Mitigating Measures: Let us look at some security risks and how to mitigate them.

1.137 Security risks in the Cloud

Security Risks in the Cloud: Security issues have been plaguing the cloud for a long time now. The main security risks are: Data loss/leakage: since data is no longer in-house but at the service provider's end, the security measures the provider offers should be extensive; data loss due to hacking, or leakage, could take place. On a public cloud the underlying technologies are shared, so proper account-isolation measures should be in place. APIs have been instrumental in attacks on web technologies; the cloud is no different, and APIs should be securely coded. The threat of malicious insiders is just as prevalent as in our physical infrastructures, so hiring employees should be a well-documented and policed task. Other risks are: abuse and nefarious use of cloud computing; unknown risk profile; and account, service and traffic hijacking. Let us now move on to the next slide.

1.139 Security is generally perceived as a huge issue for the cloud

Security is generally perceived as a huge issue for the cloud: During a keynote speech to the Brookings Institution policy forum, “Cloud Computing for Business and Society,” [Microsoft General Counsel Brad] Smith also highlighted data from a survey commissioned by Microsoft measuring attitudes on cloud computing among business leaders and the general population. The survey found that while 58 percent of the general population and 86 percent of senior business leaders are excited about the potential of cloud computing, more than 90 percent of these same people are concerned about the security, access and privacy of their own data in the cloud. In the next slide, let us look at another data point for clouds and security.

1.140 Another Data Point for Clouds and Security

Another Data point for Clouds and Security: This graph shows the responses received from users regarding the challenges and issues of the cloud. The statistics only include responses with ratings of 4 or 5, on a scale where 1 is not significant and 5 is very significant. Looking at the graph we can clearly see that the majority of respondents were worried about security, followed by performance, availability and so on. Let us proceed to the next slide now.

1.141 In Some Ways Cloud Computing Security Is No Different Than Regular Security

In some ways, "Cloud Computing Security" is no different than "Regular Security": For example, many applications interface with end users via the web. All the normal OWASP web security vulnerabilities -- things like SQL injection, cross-site scripting, cross-site request forgeries, etc. -- are just as relevant to applications running on the cloud as they are to applications running on conventional hosting. Similarly, consider physical security: a data center full of servers supporting cloud computing is internally and externally indistinguishable from a data center full of "regular" servers. In each case, it is important for the data center to be physically secure against unauthorized access and potential natural disasters, but no special new physical security requirements suddenly appear simply because one of those facilities is supporting cloud computing.
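As a concrete illustration of the first point, the classic defence against SQL injection applies unchanged whether an application is cloud-hosted or conventionally hosted. Below is a minimal sketch in Python, using the built-in sqlite3 module; the table and data are made up for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable style: string concatenation lets the attacker rewrite the query.
# query = "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safe style: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- the injection string matches no user

The parameterized form neutralizes the attack on cloud and conventional hosting alike. We will now learn about the CIA security objectives.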

1.142 The CIA Security Objectives

The CIA Security Objectives: This is the first requirement of security: CIA (Confidentiality, Integrity and Availability). Confidentiality means keeping the data classified and allowing access only to authorized users; it focuses on protecting the data. Depending on the position you hold in the company, you will be given access to certain data. This includes creating user groups and accounts, and security and access policies that grant one group certain access while denying it to another. This prevents unauthorized access and ensures privacy and data protection. During the architectural stages one needs to plan what kind of encryption protocols to employ and how physical security will be maintained. WikiLeaks is a good example of what happens when confidentiality fails: it was responsible for leaking masses of confidential documents. Integrity is ensuring the accuracy of the data: it ensures that the information presented is authentic and accurate. Integrity seeks to make sure that all processing in the system goes through a set of well-formed transactions; no direct access to data is allowed without the edits provided by application programs. Availability focuses on access to the information: when enforcing availability, the system must allow access to the data by authorized users where and when it is needed. A common standard for availability is that found in most service-level agreements, 99.999% or "five nines".
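To make the five-nines figure concrete, the small sketch below converts an availability percentage into the downtime it permits per year; the constants are the only inputs:

# Allowed downtime per year for a given number of "nines" of availability.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

for nines in range(2, 6):
    availability = 1 - 10 ** -nines    # e.g. 5 nines -> 0.99999
    downtime_s = SECONDS_PER_YEAR * (1 - availability)
    print(f"{availability:.5%} availability -> {downtime_s / 60:.1f} minutes of downtime/year")

Five nines works out to barely five minutes of downtime per year, which is why it is such a demanding SLA target. Let us now look at some real-world availability failures, starting with maintenance-induced cascading failures.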

1.144 Maintenance Induced Cascading Failures

Maintenance Induced Cascading Failures: This example comes from Gmail's blog: Gmail services were down for a full 100 minutes. This happened due to a miscalculation on Google's part during routine maintenance. They took a few servers offline for maintenance but failed to correctly calculate the workload; the workload bogged down almost all the online servers, essentially creating a self-induced denial of service. Let us move on to the next slide now.

1.145 Storage related failure

Storage related failures: T-Mobile, which had stored data on Microsoft's/Danger's servers, had a huge issue on its hands when the servers failed and almost all the data was lost.

1.146 Natural Disaster Causing Power Failure

Natural disaster causing power failure: This is where nature shows us how helpless we are in the face of its fury. An electrical storm damaged power equipment at one of Amazon's data centers, and Amazon customers were offline for a full four hours, causing huge losses. Next we will talk about mitigating data loss risks.

1.148 Mitigating Data Loss Risks

Mitigating Data Loss Risks: The risk of data loss (as in the T-Mobile Sidekick case) is distinct from the availability issues discussed above. Users may be able to tolerate an occasional service interruption, but non-recoverable data loss can kill a business. Most cloud computing services use distributed and replicated global file systems which are designed to ensure that hardware failures (or even the loss of an entire data center) will not result in any permanent data loss, but there is still value in doing a traditional off-site backup of one's data, whether that data is in use by traditional servers or cloud computing servers. When looking for solutions, make sure you find ones that back up data FROM the cloud (many backup solutions are meant to back up local data TO the cloud!).
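As an illustration of pulling a copy OUT of the cloud, here is a minimal sketch using Amazon's boto3 Python SDK; the bucket name and backup directory are assumptions, and credentials are assumed to come from the environment:

import os
import boto3

s3 = boto3.client("s3")
bucket = "my-production-bucket"   # assumption: your bucket name
backup_dir = "/backups/s3"        # assumption: an off-site backup target

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):   # skip folder placeholder keys
            continue
        dest = os.path.join(backup_dir, obj["Key"])
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file(bucket, obj["Key"], dest)   # copy data FROM the cloud

Run on a schedule against storage you control, this works in the opposite direction to most consumer backup tools. Let us now gain an understanding of cloud computing and perimeter security.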

1.149 Cloud Computing And Perimeter Security

Cloud Computing and Perimeter Security: There may be a misconception that cloud computing resources can't be sheltered behind a firewall (see for example "HP's Hurd: Cloud computing has its limits (especially when you face 1,000 attacks a day)," Oct 20th, 2009, http://blogs.zdnet.com/BTL/?p=26247). Contrast that with "Amazon Web Services: Overview of Security Processes": AWS has a mandatory inbound firewall configured in default-deny mode, and customers must explicitly open inbound ports. Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host system, the virtual instance operating system or guest OS, a stateful firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to ensure that data contained within Amazon EC2 cannot be intercepted by non-authorized systems or users, and that Amazon EC2 instances themselves are as secure as possible without sacrificing the configuration flexibility that customers demand.
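For a flavour of what "explicitly open ports inbound" looks like in practice, the sketch below uses boto3 to add a single HTTPS rule to an EC2 security group; the group ID and CIDR range are placeholder assumptions:

import boto3

ec2 = boto3.client("ec2")

# Everything not explicitly allowed stays blocked (default deny);
# this call opens exactly one inbound port.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # assumption: your security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
    }],
)

We will discuss cloud computing and host-based intrusion detection in the next slide.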

1.150 Cloud Computing Host Based Intrusion Detection

Cloud Computing and Host Based Intrusion Detection: A common misconception is that cloud networks cannot be secured like physical networks. Remember that a cloud is nothing but a physical network being virtualized: it works the same way, and communications happen using the same protocols. We even have a free host-based intrusion detection system well suited to virtualized environments, viz. OSSEC. OSSEC is a full platform to monitor and control your systems. It mixes all the aspects of HIDS (host-based intrusion detection), log monitoring and SIM/SIEM together in a simple, powerful and open-source solution. It is also backed and fully supported by Trend Micro. Next, we will look at the Cloudburst report, which shows why cloud computing also relies on the security of virtualization.

1.153 Cloudburst Report contd

Cloudburst Report contd: The "VMware SVGA II" device is a virtualized PCI display adapter found in virtual machines run within any of the VMware products: VMware Workstation, VMware Server, VMware ESX and so on. This device has a PCI vendor ID of 0x15ad and a PCI product ID of 0x0405. Some time ago it replaced an older device, "VMware SVGA", which had a PCI product ID of 0x0710. This SVGA-compatible controller is emulated on the host and carries out the graphical operations requested by the guest. Let us now discuss the choice of cloud provider in the next slide.

1.154 Choice of Cloud Provider

Choice of Cloud Provider: When moving to a public cloud or renting services from one, you are essentially outsourcing your infrastructure and applications. What you need at this point is a high level of trust in the capability of the cloud provider you will be partnering with. At first it will be daunting and you will ask yourself again and again whether it is the correct choice, but remember that what you are doing is not something new. Even without a cloud, you already outsource some of your requirements: network service providers, hardware vendors, software vendors, service providers, data sources, etc. All you are doing is adding another entry to that long list. Also, a cloud service provider runs a data center as its core business, so it tends to be well organized and highly professional. The next slide contains a diagram describing the provider-tenant responsibility matrix.

1.157 Provider Tenant Responsibility Matrix

Provider Tenant Responsibility Matrix: This figure shows the cloud layers along with the service models, and who is responsible for maintaining security at each layer. Now, we will learn about hypervisor viruses.

1.158 Hypervisor Virus

Hypervisor Virus: Viruses have been the bane of everybody; anybody who uses a computer has been affected by a virus at one time or another, and the cloud is no different. We have already seen in the Cloudburst report how a user can invade the host OS from the guest OS. Now, hosting a private cloud can be done in two different ways. One is to install the hypervisor as the host OS, which is much more secure than the second method. The second method is to install the hypervisor on top of a host OS, e.g., Microsoft Windows Server 2008 R2 with Hyper-V installed as a role. This leaves your system vulnerable to all known Windows viruses. Regular viruses will not infect the hypervisor layer, but they will still cause instability in the host OS, through which your virtualized environment can be threatened. There are also a few viruses that affect the hypervisor layer or act as a hypervisor layer themselves. Let us look at one such threat, the Crisis malware, in the next slide.

1.161 What is Crisis Malware

Crisis Malware: Crisis, also known as Morcut, is a rootkit which infects both Windows and Mac OS X machines using a fake Adobe Flash Player installer. Discovered in July, the trojan OSX.Crisis targets Windows and Mac OS users and is able to record Skype conversations, capture traffic from instant messaging, and track websites visited in Firefox or Safari. It has now come to light that the malware can spread in four different environments, including virtual machines. Let's see how the Crisis malware spreads, in the next slide.

1.162 Crisis Malware Contd

Crisis Malware: It is spread through "social engineering attacks"; in other words, it tricks a user into running a Java applet posing as a Flash installer, detects the operating system, and runs the suitable trojan installer through a JAR file. The dropped .exe files open a back door, compromising the computer. Originally, it was believed the malware could only spread on these two operating systems. However, Symantec has found a number of additional means of replication: one method is to copy itself and create an autorun.inf file on a removable disk drive, another is to insinuate itself onto a VMware virtual machine, and the final one is to drop modules onto a Windows Mobile device. The next slide contains more information about the Crisis malware.

1.163 Crisis Malware Contd

Crisis Malware: This is the first time malware targeting virtual machines has been exposed, but Symantec insists this is not due to security loopholes or vulnerabilities in the VMware software itself being exploited; rather, the Crisis trojan takes advantage of the form, namely that the VM is nothing more than one or more files on the disk of a machine. Even if the virtual machine is not running, these files can still be mounted or manipulated by malicious code. "Many threats will terminate themselves when they find a virtual machine monitoring application, such as VMware, to avoid being analyzed, so this may be the next leap forward for malware authors," Katsuki writes. However, there is good news for iOS and Android device users: because the malware uses the Remote Application Programming Interface (RAPI), these systems are not held hostage by the same vulnerabilities as Windows phone models. Symantec software detects the JAR file as Trojan.Maljava, the threat for Mac as OSX.Crisis, and the threat for Windows as W32.Crisis. Crisis was first discovered by Kaspersky Lab researchers last month. Computerworld reports that security researchers from Intego have suggested Crisis is connected to a trojan program originally licensed to authorities for surveillance use.

1.165 Blue Pill

Blue Pill: The Blue Pill concept is to trap a running instance of the operating system by starting a thin hypervisor and virtualizing the rest of the machine under it. The previous operating system would still maintain its existing references to all devices and files, but nearly anything, including hardware interrupts, requests for data and even the system time, could be intercepted (and a fake response sent) by the hypervisor. The original concept of Blue Pill was published by another researcher at IEEE Oakland in May 2006, under the name VMBR. Let's see more of Blue Pill in the next slide.

1.166 Blue Pill

Blue Pill: Joanna Rutkowska claims that, since any detection program could be fooled by the hypervisor, such a system could be "100% undetectable". Since AMD virtualization is seamless by design, a virtualized guest is not supposed to be able to query whether it is a guest or not. Therefore, the only way Blue Pill could be detected is if the virtualization implementation were not functioning as specified. In the next slide let's see the best practices to observe to prevent malware infection.

1.167 Best practices to prevent malware infection

Best Practices to Prevent Malware Infection: Isolate the management interfaces of, and connections to, the hypervisor to only the systems that need access; do not run untrusted code on the hypervisor, such as software not provided by the hypervisor vendor; and keep the hypervisor software up to date. This list excludes the security measures that should be taken on the guest OSes in the virtual infrastructure to ensure the guests cannot be used to attack the hypervisor.

1.169 Hypervisor Security Contd

Researchers Develop Malware Detection for Hypervisor Security: The hypervisor, or virtual machine manager, is the brains of a virtual machine and manages the sharing of hardware between multiple guest systems. Initially the code-base of hypervisors was small and seen as relatively secure, but it has been growing to support more systems, and as a result there have been increased vulnerabilities, Ning said. Threats against the hypervisor have so far been largely theoretical, but some security researchers have demonstrated ways attackers can defeat the hypervisor, creating a backdoor to gain control of the guest machines.

1.170 Hypervisor Security Contd

Researchers Develop Malware Detection for Hypervisor Security: The software resides in the memory in the platform management interface of a server and uses the system management mode of the processor. An agent that remains undetectable is used to examine the hypervisor. It inspects the program memory and the registers inside the CPU for any anomalies that could be malware. If anything out of the ordinary is detected, the software sends an alert to an IT administrator. "It looks at the code of the hypervisor to see if any part of the software has been changed," Ning said. "It also looks to see if the hypervisor has enforced isolation between different virtual machines as it should have."

1.171 Hypervisor Security Contd

Researchers Develop Malware Detection for Hypervisor Security: The HyperSentry software runs on existing hardware and firmware and remains isolated from the hypervisor, Ning said. This keeps a compromised hypervisor from detecting the software's measuring process, he said.

1.172 Questions

Questions: Confidentiality, Integrity and __________? ANS) Availability. Cloud computing also relies on the security of virtualization: true or false? ANS) True. What is Cloudburst? ANS) A report presenting the results of auditing work carried out against VMware virtualization products in an attempt to find a way to execute code on the host from the guest. What are hypervisor viruses? ANS) Viruses that affect, or behave like, the hypervisor layer. Now, we will understand how to manage identity and privacy.

1.174 Overview of Managing identity and privacy

Overview of Managing Identity and Privacy: Federation and Presence are two ways to implement identity management. Federation is an authentication method that offers us single sign-on to multiple sites. Presence offers us claim-based solutions, Identity as a Service, and Compliance as a Service.

1.175 Authentication

Authentication: Authentication can be looked at in two different ways: non-cloud authentication and authentication in the cloud. In our regular infrastructure, non-cloud authentication happens in one of two ways: simple authentication, and Active Directory authentication. Simple authentication is our day-to-day login with a user ID and password; we do this every day. Active Directory authentication is used when you are in a domain: authentication is done against user accounts and groups stored on the domain server, using the Kerberos protocol, which makes it more secure, as no data is transmitted in clear text; it is encrypted. When it comes to authentication in the cloud, we must talk about Active Directory authentication where VMware plays the role of the domain controller and/or security server, and about LDAP (Lightweight Directory Access Protocol), which also uses Kerberos.
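To make directory-based authentication concrete, here is a minimal sketch using the third-party Python ldap3 package; the server address and user DN are placeholder assumptions, and a real deployment would not hard-code a password:

from ldap3 import Server, Connection, ALL

server = Server("ldaps://dc01.example.com", get_info=ALL)   # assumption: your directory server
conn = Connection(
    server,
    user="uid=jsmith,ou=people,dc=example,dc=com",          # assumption: the user's DN
    password="s3cret",                                      # placeholder credential
)
if conn.bind():   # a successful bind means the directory accepted the credentials
    print("authenticated")
conn.unbind()

The same bind-to-authenticate pattern applies whether the directory lives in your own data center or in the cloud. Now, let us look at Triple-A authentication.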

1.176 Triple A Authentication

Triple-A Authentication: A common thread in CIA was the use of the term authorized user. To determine whether a user is authorized, the AAA (authentication, authorization, and accountability) standard is used. Authentication: the first phase is authentication. What identity does the user claim, and how can we verify that the user is who he says he is? The user ID is a common form of identification. Authentication of the identity is done through one or more mechanisms: we can verify the identity by evaluating something that the user knows, something that the user has, or something that the user is. Something the user knows could be a password. Something the user has could be some type of physical device like an RSA token. Something the user is could be a biometric measurement; common biometrics include fingerprints, retinal scans, and hand geometry. Authorization: once the user's identity is verified and authenticated, the next step is to authorize the user and determine what the user is allowed to do. This is often done through limited permissions for the user in various operating systems; administrators have, in some cases, unlimited permissions. Once the limitations placed on the user are identified, the user is allowed to perform his tasks. Accountability: once the user is verified and what the user can do is determined, what the user actually did must be recorded. Through auditing functions, user activities can be tracked, and as transactions are processed, a pathway can be followed. Authentication and authorization are proactive; they are both done during the process. Accountability is reactive: it is an after-the-fact review of activities. To enforce accountability, periodic review of logs and other audit data is necessary.
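The toy sketch below separates the three A's into three functions; all names and data are illustrative and not any particular product's API:

import logging

logging.basicConfig(level=logging.INFO)
USERS = {"alice": "correct-horse"}          # authentication store (toy)
PERMISSIONS = {"alice": {"read_report"}}    # authorization store (toy)

def authenticate(user, password):           # 1. who are you?
    return USERS.get(user) == password

def authorize(user, action):                # 2. what may you do?
    return action in PERMISSIONS.get(user, set())

def account(user, action, allowed):         # 3. record what happened
    logging.info("user=%s action=%s allowed=%s", user, action, allowed)

if authenticate("alice", "correct-horse"):
    ok = authorize("alice", "read_report")
    account("alice", "read_report", ok)     # the audit trail, written after the fact

Note how accounting is just an audit record written after the authorization decision, mirroring its reactive, after-the-fact nature. Let us now look at the main aspects of identity management in the next slide.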

1.178 Single Sign On (SSO) for web services

Single Sign on (SSO) for Web Services: SSO is not a new concept; it has been in existence for quite a long time, and it is just that its role has been enhanced. Single sign-on is a convenience process where, with a single authentication, you are logged on to multiple sites and services. If anyone remembers Microsoft Passport, you will understand the concept. Even with Google these days, if you sign in to Gmail and then try to access YouTube or any other service, it automatically logs you in with your Gmail credentials; this is SSO in a single domain. The role has now been enhanced so that SSO is available across multiple domains. The issue is that in a cloud infrastructure the sign-on processes are distributed, which gives rise to a lot of security issues. The answer to that is SSO, where all the distributed elements are consolidated on a single SSO server. SSO uses the SOAP protocol (Simple Object Access Protocol), and the credentials are offered by Active Directory accounts, tokens or smart cards.
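To show the core idea, here is a toy sketch of token-based SSO: one sign-on server issues a signed token, and every participating service verifies it without asking the user to log in again. The HMAC scheme and all names are purely illustrative; real deployments use standards such as SAML assertions or Kerberos tickets rather than a home-grown signature:

import hashlib
import hmac

SSO_KEY = b"shared-secret"   # known to the SSO server and the services (toy)

def issue_token(user):       # done once, at sign-on
    sig = hmac.new(SSO_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_token(token):     # done by each service, with no fresh login
    user, sig = token.rsplit(":", 1)
    expected = hmac.new(SSO_KEY, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None

token = issue_token("jsmith")
print(verify_token(token))   # 'jsmith', accepted by every participating service

Now, we will learn about privacy, compliance issues and safeguards in cloud computing.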

1.179 Privacy compliance issues and safeguards in Cloud computing

Privacy, Compliance Issues and Safeguards in Cloud Computing: With the cloud being an international service, a lot of privacy and compliance issues arise. Every country has its own privacy laws and regulations, and you have to comply with all of their requirements. How would you handle Personal Identifiable Information (PII) in a different country? In India, as we speak, there are effectively no privacy laws, and where provisions exist, they are not implemented properly; we do not even have a proper definition of what constitutes PII, whereas the rest of the world has differing definitions depending on the country concerned. With cloud computing, where data is fragmented over multiple locations, how can we comply with the laws and regulations of every location? The cloud does have many positive points too: configured properly, it offers effective access control and auditing mechanisms, secure cloud storage, and a secure network infrastructure. Now, let us define Personal Identifiable Information.

1.180 Personal Identifiable Information (PII)

Personal Identifiable Information (PII): PII is information that can be used to uniquely identify, contact or locate a single person, or that can be combined with other sources to uniquely identify a single individual. The abbreviation PII is widely accepted, but the phrase it abbreviates has four common variants based on personal, personally, identifiable, and identifying. Not all are equivalent, and for legal purposes the effective definitions vary depending on the jurisdiction and the purposes for which the term is being used. What types of information are considered personal identifiable information? The following list contains things that might be considered common knowledge: • Contact information: name, e-mail address, phone, and postal address • Forms of identification: Social Security number, driver’s license, passport, and fingerprints • Demographic information: age, gender, ethnicity, religious affiliation, sexual orientation, and criminal record • Occupational information: job title, company name, and industry • Health care information: plans, providers, history, insurance, and genetic information • Financial information: bank, credit, or debit card account numbers, purchase history, and credit records • Online activity: IP address, cookies, flash cookies, and log-in credentials. Much of this information may be casually placed on a person’s social networking page without concern for privacy or secrecy; place the same information in the hands of a hospital, financial institution, corporate human resources system, university student database, or similar repository, and it becomes information that must be protected under penalty of law. Information stored on internationally accessible clouds presents one of the more difficult protection environments: without common information protection laws in different countries, or equal enforcement of privacy laws, it is difficult to obtain or maintain privacy for the information that is collected.
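One common safeguard before PII leaves your control is field-level masking. The sketch below is a toy illustration in Python; the record and the choice of which fields to mask are made up:

import re

record = {"name": "Jane Roe", "ssn": "123-45-6789", "email": "jane@example.com"}

def mask(value):
    # Replace every letter and digit except the last four characters with '*'.
    return re.sub(r"[A-Za-z0-9]", "*", value[:-4]) + value[-4:]

SENSITIVE = {"ssn", "email"}   # assumption: which fields count as PII here
masked = {k: (mask(v) if k in SENSITIVE else v) for k, v in record.items()}
print(masked)   # {'name': 'Jane Roe', 'ssn': '***-**-6789', 'email': '****@*******.com'}

In practice, deciding which fields count as PII is exactly the jurisdiction-dependent question discussed above. Now, we will talk about international privacy and compliance in the next slide.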

1.181 International Privacy Compliance

International Privacy/Compliance: International privacy standards are just as extensive as those in the United States. The USA has the Privacy Act of 1974, and federal laws such as HIPAA (the Health Insurance Portability and Accountability Act), the GLBA (the Gramm-Leach-Bliley Act, also known as the Financial Services Modernization Act of 1999) and the Safe Harbor framework. The EU (European Union) has strong personal data rules; the EU itself has protection standards and also requires each member state to implement its own privacy standards: • EU Data Protection Directive of 1998 • EU Internet Privacy Law of 2002 (Directive 2002/58/EC). Similarly, Japan and Canada have strong privacy protection laws. • Japan: the Personal Information Protection Law (Act), and the Law for the Protection of Computer Processed Data Held by Administrative Organs, December 1988. • Canada: the Privacy Act of July 1983, and PIPEDA (the Personal Information Protection and Electronic Documents Act). Let us check our understanding with a few questions.

1.183 Questions

Questions: Triple-A stands for? ANS) Authentication, Authorization and Accountability. What is PII? ANS) Personal Identifiable Information. What are the safeguards for identity management? ANS) Effective access control and audit, secure cloud storage, a secure network infrastructure, and legal advice.

1.184 5 The business case

5.0 The Business Case: Let's look at what kind of a business case a Cloud makes. First, we will look into the contents of module 5.

1.185 The business case for Cloud computing

The Business Case for Cloud Computing: This diagram illustrates the SLA (Service Level Agreement) signed with a service provider or an infrastructure supplier. The SLA will have clauses and penalties in case of non-performance, which take care of the benefits, savings and cost factors of the agreement. Based on this, you decide what staff to hire for operations and what hardware and software are required to maintain the infrastructure at your end. Now, we will look at a compelling business driver: quicker time to market.

1.187 Compelling feature quicker time to market

Compelling Feature: Quicker Time to Market: Before we make a decision we need to answer a few questions, and we need to give them careful consideration, as they could make or break us. Can the cloud provide the resources faster than when they are hosted locally in your company? What do we give up? What do we gain? Is your organization willing to compromise? Are the organization, employees, IT staff, and other interested parties willing to make the change without delay? Let us learn about TCO 'and all that stuff' in the next slide.

1.188 TCO and all that stuff

TCO 'and All that Stuff': We have seen that TCO decreases when moving to a cloud, and that CAPEX gets converted into OPEX, since the cloud is a pay-per-use model and is classified in the accounts as direct expenditure instead of capital expenditure. However, before moving to a cloud, a detailed analysis should be done of the capital costs and of what, and how much, would be saved after the move. Cloud expenses should also be calculated and compared with the money saved in both the short run and the long run. What we need to look at is the expenditure in the case of a cloud, which includes subscriptions, support contracts, etc., compared with your current in-house expenditure. We will look at an example, Total Cost of Application Ownership (TCAO), in the next slide.

1.189 Example Total cost of application ownership (TCAO)

Example: Total Cost of Application Ownership (TCAO): Let's see a few cost components that application ownership requires: server costs, storage costs, network costs, backup and archive costs, disaster recovery costs, data center infrastructure costs, platform costs, software maintenance costs (packaged software), software maintenance costs (in-house software), help desk support costs, operational support personnel costs, and infrastructure software costs. All these components are required for running applications on a physical infrastructure; a rough comparison sketch follows below.
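A back-of-the-envelope TCAO comparison can be as simple as summing the two cost structures side by side; all figures below are made-up placeholders, not benchmarks:

# Annual costs in your currency; every number here is invented for illustration.
on_premises = {
    "servers": 40_000, "storage": 15_000, "network": 8_000,
    "backup_and_dr": 12_000, "datacenter_floor": 10_000,
    "software_maintenance": 20_000, "help_desk": 9_000, "ops_staff": 60_000,
}
cloud = {"subscriptions": 90_000, "support_contracts": 12_000, "ops_staff": 20_000}

print("on-premises TCAO:", sum(on_premises.values()))   # 174000
print("cloud TCAO:      ", sum(cloud.values()))         # 122000

The point of the exercise is not the exact numbers but making every cost component explicit on both sides before deciding. Next, we will discuss legal issues in cloud computing.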

1.192 Legal Issues in Cloud Computing

Legal Issues in the Cloud: Legal issues in the cloud can be classified, as shown in the figure, into: liability, law, data portability, copyright, and compliance. In the next slide we will learn about the law for cloud computing services.

1.193 Law for Cloud Computing Service

Law for Cloud Computing Service: As per the IT Act 2000, cloud computing service providers are intermediaries. They are described as follows: S2(1)(w) "Intermediary", with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record, and includes telecom service providers, network service providers, internet service providers, web hosting service providers, search engines, online payment sites, online auction sites, online marketplaces and cyber cafes. Next, let us go through indemnity issues in cloud computing.

1.196 Indemnity Issues in Cloud Computing

Indemnity Issues in Cloud Computing: Indemnity, in legal parlance, is to compensate for loss or damage; to provide security for financial reimbursement to an individual in case of a specified loss incurred by that person. Whenever we sign up for a service we see agreements with similar verbiage: We and our licensors shall not be responsible for any service interruptions, including, without limitation, power outages, system failures or other interruptions, including those that affect the receipt, processing, acceptance, completion or settlement of any payment services. (...) Neither we nor any of our licensors shall be liable to you for any direct, indirect, incidental, special, consequential or exemplary damages, including, but not limited to, damages for loss of profits, goodwill, use, data or other losses (...) In short, the provider of the service is saying that it will not be responsible for the effects of any losses that occur for the reasons mentioned. What is it, then, that will keep the customer safe and grant him indemnity if such an event occurs? Note that such an event would in no way be the customer's fault. Let us see the agreement clauses in cloud services.

1.197 Agreement Clauses in Cloud Service

Agreement Clauses in Cloud Service: On a cloud you are essentially using a shared-disk model, and there is a risk that a third party may interfere with other clients using the same platform. This is a risk not everyone is willing to take, and it has serious repercussions; it has to be met and mitigated. Next, we will look at letters rogatory as an option.

1.200 Letter rogatory an option

Letter Rogatory an Option: A letter rogatory, or letter of request, is a formal request from a court to a foreign court for some type of judicial assistance. The most common remedies sought by letters rogatory are service of process and the taking of evidence. Courts may serve documents only on individuals within the court's jurisdiction; one exception to this rule is states that invoke universal jurisdiction, granting their courts ubiquitous domain. Therefore a person seeking to take action against a person in another country will need to seek assistance from the judicial authorities in the other country, assuming of course that the court in his own country has jurisdiction to hear the matter. As a hypothetical example, Alice in the United States wishes to sue Roberto in Argentina. Alice issues her summons in a U.S. court, and must then petition a court in Argentina, by means of a letter rogatory, to serve the process on Roberto. The use of letters rogatory for service of process to initiate court action is now largely confined to the Americas; between countries in Europe, Asia, and North America, service of process is effected without resort to letters rogatory, under the provisions of the Hague Service Convention. We will learn about data portability on the cloud in the next slide.

1.201 Data Portability on Cloud

Data Portability on the Cloud: These are a few questions you might want to ask yourself, and find the answers to, to keep you, your organization and your data safe. Who is really managing my company’s sensitive information? What are their internal security practices? How well do they handle incident response? How reliable is the infrastructure that provides the service? Are they prone to service outages? How can my service provider recover my cloud data? What is the hardware and software portability of my data? Next, let us move on to cloud integration and Green IT.

1.203 5.2 Cloud Integration and Green IT

5.2 Cloud Integration and Green IT: Moving to a cloud may reduce your carbon footprint and help you go green.

1.204 It is easy being green

It is Easy being Green: Moving to a cloud will save you money on a lot of fronts: capital investment; overheads like electricity and floor space; less stock of inventory; and so on. Talking about electricity alone, cloud computing can reduce your usage by almost 91 percent; Salesforce customers saved energy equivalent to 11 barrels of oil every hour, which is a significant saving. Let us learn about green data centers in the next few slides.

1.205 Green Data Centers

Green Data Centers: To help customers transform from traditional DCs to cloud data centres, Huawei conducted in-depth research into new information technologies like cloud computing, cloud storage, and virtualization. Drawing upon practical experience from dozens of cloud data centres around the world, Huawei developed a DC solution featuring a flexible, mixed, and modular design philosophy. The solution is used to build green DCs that adapt to cloud computing for customers, striking an optimal balance between total cost of ownership (TCO) and the DC's availability, security, flexibility, and scalability.

1.207 Green Data Centers

Green Data Centers: Mixed: striking an optimal balance between availability and TCO. Not only does the mixed DC deployment solution support a mix of different power densities, to address high-density deployment and hotspot drift, it also supports a mix of different tiers. By determining availability requirements according to business type, the solution avoids excessive investment, thereby striking an optimal balance between availability and TCO.

1.208 Green Data Centers

Green Data Centers: Modular: on-demand deployment and efficient expansion. Allowing for on-demand deployment and efficient expansion, Huawei's DC solution can flexibly meet fast-growing demands and effectively improve equipment utilization and return on investment (ROI) through phased investment.

1.209 Green Data Centers

Green Data Centers: Full life-cycle integration implementation is the core of building fast, high-quality data centres. Huawei's DC integration service provides a comprehensive integration solution covering physical infrastructure, network platform, and business application. The solution provides end-to-end professional services for data centres, such as planning, design, construction implementation, project management, and data migration at different levels, as well as customized turnkey engineering services.

1.210 Green Data Centers

Green Data Centers: As per experience data, with Huawei's data centre integration service, customers can significantly reduce the data centre construction period. On average, the construction period is reduced by 20 to 30 percent, while the service provisioning efficiency of data centres increases by 65 percent and the related investment is reduced by 30 percent. In some areas, Huawei is able to design super energy-saving data centres with a PUE rating of less than 1.3, which can dramatically reduce OPEX, increase customer revenue, and give customers an extra competitive edge.
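For reference, PUE (Power Usage Effectiveness) is total facility energy divided by IT equipment energy, so a PUE of 1.0 would mean zero overhead. A quick worked example, with made-up figures:

# PUE = total facility energy / IT equipment energy.
# A PUE of 1.3 means 0.3 units of overhead (cooling, lighting,
# power-distribution losses) for every unit of useful IT load.
total_facility_kwh = 1_300_000   # made-up annual figure
it_equipment_kwh = 1_000_000     # made-up annual figure

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")        # 1.30 -- at Huawei's quoted target

Now, let us look at an overview of evaluating cloud computing implementations.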

1.212 Overview of Evaluating Cloud Computing Implementations

Overview of Evaluating Cloud Computing Implementations: This diagram shows the way we will look at and evaluate a cloud implementation: everything from the supplier and the service to the SLA, the business case, and the benefits and cost savings will be evaluated. Now, we will discuss the evaluation of performance factors, management requirements and satisfaction factors.

1.213 The evaluation of performance factors management requirements and satisfaction factors

The Evaluation of Performance Factors, Management Requirements and Satisfaction Factors: As already discussed, we need to ask ourselves, and the cloud service providers, a few questions based on a parameter review, and analyse the data gathered. It is also important to do a comparative study of several providers before signing a contract. Typical questions to ask are: How long does it take to resolve incidents and problems? How good is the security of the cloud data center? How does system performance (i.e., connection and transaction speeds) compare to your own data center and private network? We will look at evaluating cloud implementations in the next slide.

1.214 Evaluating Cloud Implementations

Evaluating Cloud Implementations: These are a few parameters on which you are going to save a good deal of money, or that otherwise favor the cloud: power savings, floor space savings, network infrastructure, maintenance, software licensing, time to value, trial period, service, wiser investment, security, compliance, faster delivery of what you want, less capital expense, and short-term needs. Next, we will evaluate service providers and services: what you get for the money.

1.216 Evaluation of service providers and services what you get for the money

Evaluation of Service Providers and Services: What You Get for the Money: When evaluating services in the cloud, it is important to really understand what you are getting for your money. Are installation and conversion services provided? What is the offered Service Level Agreement, and are its terms reasonable? What are the penalties? What kind of storage is available? What type of support are you going to receive? Do you have an alternative or backup plan? Do you really, fully understand the offering and the expected outcome? If you do not understand, or if you still have unanswered questions, take a step back and reflect on what the cloud is going to mean to your organization. In reviewing a number of cloud service provider web sites, it is often not clear what is being provided. What we need here is a governance framework: a management folio should be drafted and best practices laid out for others to follow, so that they can understand the effects of their actions. Also check for third-party attestations such as SAS 70, ISO/IEC 20000, ISO/IEC 27001, ISO 9001, etc. Now, we will look into the case studies.

1.217 5.4 Case Studies

Case Studies: Let's look at a few cases where companies have moved successfully to the cloud.

1.218 Amazon Cloud Users: New York Times and Nasdaq

Amazon Cloud Users: New York Times and Nasdaq: New York Times: NYT needed to convert 15 million scanned articles to PDF files. It would have taken a huge amount of time had they decided to do it on their own infrastructure; quite apart from the time, it would have consumed a lot of their hardware and network resources to do it in house, which would also have affected their daily work. What they did was log directly in to the Amazon cloud using a credit card (they did not have to contact Amazon about any requirements) and create an account. They deployed 100 Linux machines using the hardware and storage resources offered by Amazon, and in a span of 24 hours converted all that data (which totaled 4 TB) to PDFs using EC2 and S3. It would have taken months had they decided to use their own resources. So not only was it cost-effective, it saved a lot of time without hindering their existing network. At the end of the exercise they deleted the 100 Linux machines they had created, and thus only had to pay for the resources consumed during the exercise. Nasdaq: Everyone knows NASDAQ and the voluminous data they go through every day. They deal with hundreds of thousands of scrips on a daily basis and give second-to-second updates; there are millions of files showing the price changes of entities over 10-minute segments. Storing all that data on their own hardware would have required a humongous investment in computing resources, with a lot of maintenance costs on top. So they used the S3 service offered by AWS for storage, integrated that storage into their infrastructure, and used S3 to host all that data while staying in sync at all times. They also created a lightweight Adobe AIR application with which users could view the required data. These two examples show us to what extent a cloud infrastructure can help us: a businessman no longer has to invest in infrastructure and maintain a team of specialists to service it. He can create everything on the cloud, let the provider worry about the infrastructure, and focus entirely on his own business; all he is doing is outsourcing his IT needs. A rough sketch of the NYT-style rent-and-return pattern follows.
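Here is what that rent-and-return elasticity looks like with Amazon's boto3 Python SDK; the AMI ID and instance type are placeholder assumptions, and the batch job itself is elided:

import boto3

ec2 = boto3.resource("ec2")

# Rent 100 Linux workers for the duration of the batch job...
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # assumption: a Linux AMI in your region
    InstanceType="t3.medium",          # assumption: any suitable size
    MinCount=100,
    MaxCount=100,
)

# ... run the conversion job on them, then hand the hardware back:
for inst in instances:
    inst.terminate()   # billing stops; there is nothing left to maintain

Paying only for the hours actually used is exactly why the NYT job cost so little. Let us take another example: an engineering firm's switch to Hyper-V.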

1.221 Engineering Firm Virtualizes Field Servers, Saves $3.2 Million with Switch to Hyper-V

Engineering Firm Virtualizes Field Servers, Saves $3.2 Million, with Switch to Hyper-V: Engineering firm CH2M HILL was an early user of virtualization software to lower server costs. However, when the global economy began to slump in 2007, the company sought a more cost-effective virtualization solution than VMware. It switched to the Windows Server 2008 R2 Datacenter operating system with Hyper-V virtualization technology, and also deployed Microsoft System Center data center solutions to simplify server management. With the switch to Hyper-V, CH2M HILL projects software savings of up to U.S.$280,000 over the next three to five years, and hardware savings of up to $3 million from virtualizing field servers, which was cost-prohibitive with VMware. Additionally, the company foresees reducing server management work by 30 percent, giving staff more time to focus on strategic work. CH2M HILL will also be able to extend high availability to field servers by using Hyper-V. Let us proceed to the next slide, which discusses how an appliance manufacturer saved €1.5 million with a global virtualization solution.

1.222 Appliance Manufacturer Saves €1.5 Million with Global Virtualization Solution

Appliance Manufacturer Saves €1.5 Million with Global Virtualization Solution: Miele & Cie is a manufacturer of premium household appliances that are distributed worldwide. To reduce server sprawl and data center costs, Miele had deployed a large virtualization solution based on VMware. However, to expand its virtualized environment, the company evaluated and then deployed a Microsoft solution based on Windows Server 2008 R2 Datacenter with Hyper-V technology, managed with Microsoft System Center products. So far, Miele has migrated 200 virtual machines from VMware to Hyper-V, and plans to migrate 350 more by mid-2011. To date, Miele estimates saving €1.5 million (approximately U.S.$1.8 million) with global virtualization, by decreasing physical server needs by more than 50 percent, improving administrators’ productivity, and reducing licensing costs.

1.223 NASA BeAMartian Website

NASA BeAMartian Website: Researchers at the NASA Jet Propulsion Laboratory (NASA/JPL) wanted to solve two different challenges: providing public access to vast amounts of Mars-related exploration images, and engaging the public in activities related to NASA’s Mars Exploration Program in order to encourage learning in science, technology, engineering, and mathematics. Using a variety of technologies, NASA/JPL created its new BeAMartian Web site. The site provides entertaining and engaging ways to view and interact with information delivered by Mars-based rovers and orbiters. The goal is to let the public participate in exploration, making contributions to data processing and analysis. It also provides a platform that lets developers collaborate with NASA on solutions that can help scientists analyze vast amounts of information that can be used to understand the universe and support future space exploration.

1.225 NASA

NASA: The BeAMartian site gives citizens a chance to view hundreds of thousands of large, high-resolution Mars images. Site visitors can pan, zoom, and explore the planet through images from Mars landers, roving explorers, and orbiters. The images are stored in the Planetary Data System, a huge data repository maintained by NASA/JPL.

1.226 NASA

NASA: Although the tools for retrieving the data from the Planetary Data System are largely geared for scientists and other experts, the BeAMartian site makes it much easier for the general public to work with the Mars data. To do this, Microsoft and NASA—working with Mondo Robot, a Colorado-based design firm, and the Arizona State University Mars Space Flight Facility—developed a way for citizens to participate in science using casual game-like experiences. For example, “Mapping Mars” lets citizen scientists perform “map stitching” activities in which they align images from different orbiters, but with the same geo-coordinates, to build a more accurate global map of the planet than can be achieved by computers alone.

1.227 NASA

NASA: The BeAMartian site has successfully demonstrated how Web technology can help an organization engage with a large, dispersed group of users to view graphically rich content and participate in activities that involve massive amounts of data. The site has helped NASA/JPL raise awareness of its Mars-related missions and research activities. It has also helped NASA/JPL engage with a large international audience and, in the process, promote its goal of generating excitement around the technical skills needed for future space exploration, particularly the STEM disciplines. Additionally, the site is helping NASA/JPL fulfil its obligations to make its data more accessible to the general public while assisting NASA/JPL scientists in their work.

1.229 LexisNexis

LexisNexis: As executives at LexisNexis looked at ways to extend the value of Time Matters, they noted two related industry trends. First, lawyers rely increasingly on their mobile devices to stay productive while away from the office; they want the ability to securely access their firm’s data and documents from anywhere. Second, law firms are providing staff with greater freedom of choice when it comes to which mobile devices they can use for work.

1.230 LexisNexis Solution

LexisNexis Solution: LexisNexis weighed several options, including using Amazon Elastic Compute Cloud services and hosting the application in its own data center. The company ultimately decided to use Windows Azure, the Microsoft cloud services development, hosting, and management environment, to deliver its new Time Matters app for mobile devices, which is called Time Matters Mobility. Time Matters Mobility from LexisNexis supports numerous mobile operating systems, including Android, BlackBerry OS, iOS, and Windows Phone 7.

1.231 LexisNexis Solution

LexisNexis Solution: LexisNexis executives considered a number of factors in making the decision to adopt Windows Azure over other alternatives. “We quickly realized that, with Windows Azure, we could gain the on-demand scalability we needed in a much more cost-effective way than if we attempted to build out our own multitier infrastructure,” says Paransky. “Plus, the Microsoft solution offers much more than just redundant hardware; it provides a complete set of familiar tools to manage the entire development lifecycle. And it gave us the chance to work directly with the people who know the technology best to make sure we got our application to market as fast as possible.”

1.234 Mahindra Satyam

Mahindra Satyam: When Mahindra Satyam began exploring SQL Server products, it was already looking forward to implementing its BI solutions in the cloud with the Windows Azure platform and services such as Microsoft SQL Azure. Some customers could go directly to the cloud to build new infrastructure and take advantage of cloud-based scalability, while others might choose a hybrid cloud solution. “One of the things we liked about SQL Server 2012 is that it’s a cloud-ready version,” says Koona. “We could deploy our BI solutions on-premises and later migrate the same solution to the Windows Azure platform.”

1.235 Large American Retail Chain

Large American Retail Chain: “When you’re approaching the end-of-life cycle on hundreds of servers and start calculating the cost of replacement for all of that physical hardware, it comes out to a very large number,” says the technical architecture manager of a large American retail chain. “That’s one of the main challenges of maintaining a physical IT infrastructure for a company as large as ours. We started looking at alternatives to a 1:1 replacement, and that naturally led us to VMware.”

1.238 EDS Turns to VMware Server Virtualization to Support Australian Customers

EDS Turns to VMware Server Virtualization to Support Australian Customers: “The increasing maturity of virtualization and the growing support from independent software vendors is helping fuel deployment of VMware virtualization across production environments. This means more businesses are redeploying or disposing of inefficient hardware and consolidating their data center infrastructure onto multiple virtual server environments, thus contributing to a reduction in power consumption.” So said David Simpfendorfer, Asia-Pacific ITO Product Marketing Manager, EDS.

1.239 EDS Turns to VMware Server Virtualization to Support Australian Customers

EDS Turns to VMware Server Virtualization to Support Australian Customers: VMware virtualization technologies provide one of the major avenues for EDS and its customers to reduce their environmental footprint. Some large Australian businesses are taking advantage of these technologies to consolidate multiple virtual servers onto a reduced number of physical servers. One key EDS client has elected to use a virtualized environment based on VMware ESX Server. The environment incorporates: VMware Virtual SMP, to enable virtual machines to exploit multiple processors; VMware VMotion, to move virtual machines from one physical server to another while the virtual environment is running; VMware P2V, to migrate physical servers to virtual machines; and VMware High Availability, to ensure the availability of applications running on virtual machines. Now, let us look at some important information regarding the EXIN Cloud Computing Foundation exam.

1.240 EXIN Cloud Computing Foundation exam

EXIN Cloud Computing Foundation Exam: Here are the details of the EXIN Cloud Computing Foundation exam and its format. The exam contains 40 questions in total, all multiple choice, delivered via a web-based or paper-based tool. The pass rate is 65 percent, which corresponds to a pass mark of 26 out of 40. The duration of the exam is one hour, and it is not an open-book exam. Participants can visit www.exin.com for a sample exam.
