Cloud Comp Unit 2 Part 2


VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

A physical cluster is a collection of servers (physical machines) interconnected by a
physical network such as a LAN. This section covers three critical design issues of virtual
clusters: live migration of VMs, memory and file migrations, and dynamic deployment of
virtual clusters.
Physical versus Virtual Clusters

Virtual clusters are built with VMs installed at distributed servers from one or more physical
clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across
several physical networks. Figure 3.18 illustrates the concepts of virtual clusters and physical
clusters. Each virtual cluster is formed with physical machines or VMs hosted by multiple
physical clusters, and the virtual cluster boundaries are shown as distinct boundaries in the
figure. VMs are provisioned to a virtual cluster dynamically, which gives it the following
interesting properties:
- The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running
different OSes can be deployed on the same physical node.
- A VM runs with a guest OS, which is often different from the host OS that manages the
resources of the physical machine on which the VM is implemented.
- The purpose of using VMs is to consolidate multiple functionalities on the same server,
which greatly enhances server utilization and application flexibility.
- VMs can be colonized (replicated) in multiple servers to promote distributed parallelism,
fault tolerance, and disaster recovery.
- The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to the
way an overlay network varies in size in a peer-to-peer (P2P) network.
- The failure of any physical node may disable some of the VMs installed on it, but the failure
of a VM will not pull down the host system.

Fast Deployment and Effective Scheduling


The concept of green computing has attracted much attention recently. However, previous
approaches have focused on saving the energy cost of components in a single workstation
without a global vision. Consequently, they do not necessarily reduce the power consumption of
the whole cluster. Other cluster-wide energy-efficient techniques can only be applied to
homogeneous workstations and specific applications. The live migration of VMs allows
workloads of one node to transfer to another node. However, it does not guarantee that VMs can
randomly migrate among themselves. In fact, the potential overhead caused by live migrations of
VMs cannot be ignored.
High-Performance Virtual Storage
Basically, there are four steps to deploy a group of VMs onto a target cluster: preparing the disk
image, configuring the VMs, choosing the destination nodes, and executing the VM deployment
command on every host. Many systems use templates to simplify the disk image preparation
process. A template is a disk image that includes a preinstalled operating system with or without
certain application software.
Users choose a proper template according to their requirements and make a duplicate of it as
their own disk image. Templates can use the COW (copy-on-write) format, in which a new
COW file stores only the blocks that differ from the backing template. Such a file is very small
and easy to create and transfer, so it substantially reduces disk space consumption. In addition,
VM deployment time is much shorter than when copying a whole raw image file.
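As a concrete illustration of the template-plus-COW approach, the sketch below clones per-VM
overlay images from a shared base template using qemu-img's qcow2 backing-file mechanism.
The file names and the number of VMs are hypothetical; this is a minimal sketch of the
image-preparation step, not a complete deployment tool.

import subprocess

BASE_TEMPLATE = "ubuntu-template.qcow2"  # hypothetical preinstalled-OS template

def clone_from_template(vm_name: str) -> str:
    """Create a small copy-on-write overlay backed by the shared template.

    Only the blocks the VM later writes are stored in the overlay, so the
    new image is tiny and fast to create compared to a full raw copy.
    """
    overlay = f"{vm_name}.qcow2"
    subprocess.run(
        ["qemu-img", "create",
         "-f", "qcow2",        # format of the new overlay
         "-b", BASE_TEMPLATE,  # backing (template) image
         "-F", "qcow2",        # format of the backing image
         overlay],
        check=True,
    )
    return overlay

if __name__ == "__main__":
    # Step 1 of the four deployment steps: prepare one disk image per VM.
    for name in ["vm01", "vm02", "vm03"]:
        print("created", clone_from_template(name))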
Live VM Migration Steps and Performance Effects
There are four ways to manage a virtual cluster. First, you can use a guest-based manager, by
which the cluster manager resides on a guest system. In this case, multiple VMs form a virtual
cluster. For example, openMosix is an open source Linux cluster running different guest systems
on top of the Xen hypervisor. Another example is Sun's cluster Oasis, an experimental Solaris
cluster of VMs supported by a VMware VMM. Second, you can build a cluster manager on the
host systems. The host-based manager supervises the guest systems and can restart a guest
system on another physical machine. A good example is the VMware HA system, which can
restart a guest system after failure. These two cluster management systems are either guest-only
or host-only; they do not mix. A third way to manage a virtual cluster is to use an independent
cluster manager on both the host and guest systems. This makes infrastructure management
more complex, however. Finally, you can use an integrated cluster manager on the guest and
host systems. This means the manager must be designed to distinguish between virtualized
resources and physical resources.

Various cluster management schemes can be greatly enhanced when VM live migration is
enabled with minimal overhead. VMs can be live-migrated from one physical machine to
another; in case of failure, one VM can be replaced by another VM. Virtual clusters can be
applied in computational grids, cloud platforms, and high-performance computing (HPC)
systems. The major attraction of this scenario is that virtual clustering provides dynamic
resources that can be quickly put together upon user demand or after a node failure. In
particular, virtual clustering plays a key role in cloud computing. When a VM runs a live
service, a trade-off must be made so that the migration minimizes three metrics at once: the
motivation is to design a live VM migration scheme with negligible downtime, the lowest
network bandwidth consumption possible, and a reasonable total migration time.
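For example, on KVM hosts managed through libvirt, a live migration following the steps below
can be triggered from the libvirt Python binding. This is a minimal sketch under assumed names:
the domain web-vm and the destination host dest-host are hypothetical, and error handling is
omitted.

import libvirt

# Connect to the source and destination hypervisors (hypothetical hosts).
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://dest-host/system")

# Look up the running VM to migrate (hypothetical domain name).
dom = src.lookupByName("web-vm")

# Live-migrate: the guest keeps running while its memory is pre-copied.
new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
print("migrated:", new_dom.name())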

Steps 0 and 1: Start migration. This step makes preparations for the migration, including
determining the VM to migrate and the destination host. Although a user can manually direct a
VM to migrate to an appointed host, in most circumstances the migration is started automatically
by strategies such as load balancing and server consolidation, as in the sketch below.
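As an illustration of such a strategy, the following sketch picks the VM to migrate from the most
loaded host and a destination from the least loaded one. The load metric, threshold, and host
inventory are all assumptions made for the example, not part of any particular scheduler.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_load: float                      # fraction of CPU in use, 0.0-1.0 (assumed metric)
    vms: list = field(default_factory=list)

def pick_migration(hosts, threshold=0.85):
    """Return (vm, source, destination) if some host is overloaded, else None."""
    src = max(hosts, key=lambda h: h.cpu_load)
    dst = min(hosts, key=lambda h: h.cpu_load)
    if src.cpu_load < threshold or not src.vms or src is dst:
        return None                      # nothing to do: no host is overloaded
    return src.vms[0], src, dst          # naive choice: first VM on the hot host

hosts = [Host("node1", 0.95, ["web-vm"]), Host("node2", 0.30), Host("node3", 0.55)]
decision = pick_migration(hosts)
if decision:
    vm, src, dst = decision
    print(f"migrate {vm} from {src.name} to {dst.name}")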
Step 2: Transfer memory. Since the whole execution state of the VM is stored in memory,
sending the VM's memory to the destination node ensures continuity of the service provided by
the VM. All of the memory data is transferred in the first round, and then the migration
controller recopies the memory data that was changed during the previous round. These steps
keep iterating until the dirty portion of the memory is small enough to handle in the final copy.
Although memory is precopied iteratively, the execution of programs is not noticeably
interrupted.
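To see why the iteration converges, consider illustrative numbers (not from the source): a VM
with 4 GiB of memory, a 1 GiB/s migration link, and a dirty rate of 256 MiB/s. Round one
transfers 4 GiB in 4 s, during which about 1 GiB is dirtied; round two resends that 1 GiB in 1 s,
dirtying 256 MiB; round three resends 256 MiB in 0.25 s, dirtying 64 MiB. Each round shrinks
the dirty set by the ratio of dirty rate to bandwidth (here 1/4), until the remainder is small
enough for the final stop-and-copy in Step 3.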
Step 3: Suspend the VM and copy the last portion of the data. The migrating VM's execution is
suspended when the last round's memory data is transferred. Other non-memory data such as
CPU and network states should be sent as well. During this step, the VM is stopped and its
applications no longer run. This service-unavailable time is called the downtime of migration,
which should be short enough to be negligible to users.
Steps 4 and 5: Commit and activate the new host. After all the needed data is copied, the VM
reloads its state on the destination host, recovers the execution of its programs, and the service
provided by this VM continues. Then the network connection is redirected to the new VM and
the dependency on the source host is cleared. The whole migration process finishes by removing
the original VM from the source host.
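Putting Steps 2 and 3 together, the following sketch simulates the pre-copy loop under the same
illustrative parameters used above (memory size, link bandwidth, dirty rate, and stop-copy
threshold are assumptions; real dirty rates vary per workload), reporting the number of rounds,
the final downtime, and the total migration time.

def simulate_precopy(mem_mib=4096, bw_mib_s=1024, dirty_mib_s=256,
                     stop_copy_threshold_mib=64, max_rounds=30):
    """Simulate iterative pre-copy: resend dirtied memory until the
    remaining dirty set is small enough for the final stop-and-copy."""
    to_send = mem_mib                    # round 1 sends all memory (Step 2)
    total_time = 0.0
    rounds = 0
    while to_send > stop_copy_threshold_mib and rounds < max_rounds:
        t = to_send / bw_mib_s           # time to transfer this round
        total_time += t
        to_send = min(mem_mib, dirty_mib_s * t)  # memory dirtied meanwhile
        rounds += 1
    downtime = to_send / bw_mib_s        # Step 3: suspend and copy the rest
    total_time += downtime
    return rounds, downtime, total_time

rounds, downtime, total = simulate_precopy()
print(f"{rounds} pre-copy rounds, downtime = {downtime*1000:.0f} ms, "
      f"total migration time = {total:.2f} s")

Note that if the dirty rate approaches the link bandwidth, the dirty set never shrinks; this is why
the sketch caps the number of rounds (max_rounds) before forcing the stop-and-copy, mirroring
the trade-off among downtime, bandwidth, and total migration time discussed above.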
