
Virtualization


Let’s start with the very basics of what makes up a computer.  Every computer has four core computing elements: a CPU, or Central Processing Unit, for processing; memory, or RAM; a hard disk for storage; and a network interface for connections to send and receive data.  A good early example of virtualization at work is the RAID controller, the piece of software (or firmware) that manages read/write operations across a set of disks.  The RAID controller sits between the hard disks and the Windows operating system: Windows sees only one drive, yet behind the RAID controller many virtual disks can be created on the fly or replaced.
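
To make the RAID example concrete, here is a minimal sketch, not any vendor’s actual controller logic, of how a RAID 0 controller could map the single drive the OS sees onto several physical disks; the disk count and stripe size are illustrative assumptions.

```python
# Minimal sketch of RAID 0 striping: one logical drive, many physical disks.
# The disk count and stripe size below are illustrative assumptions.

STRIPE_SIZE = 64 * 1024   # 64 KiB stripes, chosen here only for illustration
NUM_DISKS = 4             # physical disks hidden behind the controller

def locate_block(logical_byte_offset: int) -> tuple[int, int]:
    """Map a logical byte offset (what the OS sees) to (disk, physical offset)."""
    stripe_index = logical_byte_offset // STRIPE_SIZE
    disk = stripe_index % NUM_DISKS                  # stripes rotate across disks
    physical_offset = ((stripe_index // NUM_DISKS) * STRIPE_SIZE
                       + logical_byte_offset % STRIPE_SIZE)
    return disk, physical_offset

# The OS asks for one address; the controller quietly spreads it over 4 disks.
for offset in (0, 65536, 131072, 262144):
    print(offset, "->", locate_block(offset))
```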

In most organizations there are hundreds of PCs, laptops, and servers that all call for the same four elements.  In a virtual environment there is a host computer, and within the host computer you have plenty of processing power, memory, storage, and data connections.  Installed on the host computer is a piece of software called a hypervisor, or the abstraction layer.  It’s called an abstraction layer because it allows an administrator to abstract the resources of the host computer to create virtual machines, or guest computers.

A virtual machine is just a large file that runs on the host computer.  The key is the sending and receiving of data.  When data is sent from the guest computer, or virtual machine, it travels from the virtual machine through the hypervisor to the network connection and out.  Receiving works the opposite way: data comes in through the network connection, passes back through the hypervisor, and arrives at the guest computer, or virtual machine.
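
As a rough illustration of both ideas, carving guests out of a host’s resources and routing guest traffic through the hypervisor, here is a simplified Python model; every class and method name is invented for this sketch and is not taken from any real hypervisor’s API.

```python
# Simplified model of a hypervisor abstracting host resources into guests
# and relaying guest network traffic. All names are invented for the sketch.

class Host:
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus, self.free_ram = cpus, ram_gb

class Hypervisor:
    def __init__(self, host: Host):
        self.host, self.guests = host, []

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> dict:
        # "Abstraction": carve a guest out of the host's physical pool.
        if cpus > self.host.free_cpus or ram_gb > self.host.free_ram:
            raise RuntimeError("not enough host resources")
        self.host.free_cpus -= cpus
        self.host.free_ram -= ram_gb
        vm = {"name": name, "cpus": cpus, "ram_gb": ram_gb}
        self.guests.append(vm)
        return vm

    def send(self, vm: dict, data: bytes) -> None:
        # Outbound path: guest -> hypervisor -> physical network interface.
        print(f"{vm['name']} -> hypervisor -> NIC: {data!r}")

hv = Hypervisor(Host(cpus=16, ram_gb=64))
web = hv.create_vm("web01", cpus=4, ram_gb=8)
hv.send(web, b"GET / HTTP/1.1")
```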

So why would you do this?

There are several factors, but let’s discuss the most important reasons.  Above all, you can run many virtual machines simultaneously on one physical piece of hardware.  From an economy-of-scale perspective, you can save money on hardware, consolidate management, and decrease energy consumption.

A typical computer uses less than 10% of its available resources.  By virtualizing your environment, you can get many users onto one host.  Immediately, you will see a reduction in physical hardware, in floor space, and in the number of resources to manage.  When a business requires a new server or needs to add a user, the administrator simply allocates the resources through the hypervisor; there is no need to spend money on a new server or computer.
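
A quick back-of-the-envelope calculation shows why consolidation pays off; the server count, utilization, and target load below are illustrative assumptions, not measurements.

```python
# Illustrative consolidation math (all figures are assumptions for the example).
import math

servers = 20             # existing physical servers
utilization = 0.10       # each uses ~10% of its resources
host_target = 0.70       # keep a consolidated host at ~70% load for headroom

used_capacity = servers * utilization              # 2.0 "servers' worth" of work
hosts_needed = math.ceil(used_capacity / host_target)
print(f"{servers} servers -> {hosts_needed} virtualization hosts")
```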

So what can you virtualize?

Any device that has the four attributes of a computer.  Even devices such as switches and routers can be consolidated onto the same platform.

When you have two host servers, you can reallocate virtual machines between them to make your environment more efficient.  Even powered-on guest computers can be moved between hosts, keeping your virtual environment continuously optimized.  An administrator might do this because one host is overloaded or has failed.
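
Below is a toy sketch of the kind of rebalancing decision an administrator or automated scheduler might make; the 80% threshold and the greedy “move the biggest VM first” rule are invented for illustration and are far simpler than a production scheduler.

```python
# Toy rebalancer: move VMs off an overloaded host. The 80% threshold and the
# greedy "move the biggest VM first" rule are assumptions for this sketch.

def rebalance(hosts: dict[str, list[int]], threshold: int = 80) -> None:
    """hosts maps host name -> list of VM loads (percent of one host)."""
    for name, vms in hosts.items():
        while sum(vms) > threshold:
            vm = max(vms)                                   # biggest VM first
            target = min(hosts, key=lambda h: sum(hosts[h]))
            if target == name:
                break                                       # nowhere better to go
            vms.remove(vm)
            hosts[target].append(vm)
            print(f"migrate VM ({vm}%) from {name} to {target}")

hosts = {"host-a": [40, 30, 25], "host-b": [20]}
rebalance(hosts)   # host-a is at 95%, so one VM moves to host-b
print(hosts)
```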

There are several areas of virtualization; the sections below cover each of them in turn.

Network

Before I go into network virtualization, let’s define what a network is.  A network is the set of connections between computing devices; it is like the nervous system in the human body.  A network is made up of interconnected nodes that communicate with one another via predetermined protocols.  The Transmission Control Protocol (TCP) and Internet Protocol (IP) dictate how data will be transported and handled by devices such as:

  1. Routers
  2. Security Gateways

It is the responsibility of the Transmission Control Protocol to make sure traffic moves reliably between all nodes within the network.
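
To see TCP/IP at work, the short script below uses Python’s standard socket library to run a tiny echo exchange over the loopback interface; TCP takes care of delivering the bytes in order and intact.

```python
# Minimal TCP demo using Python's standard socket library: an echo exchange
# over the loopback interface. TCP handles ordering and delivery for us.
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()   # wait for the client's TCP handshake
    with conn:
        conn.sendall(conn.recv(1024))    # echo back whatever arrives

server = socket.create_server(("127.0.0.1", 0))   # port 0 = pick a free port
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))             # -> b'hello over TCP/IP'
```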


Network Virtualization (NV)

Network virtualization, or NV, creates a logical virtual network by separating network functions from the hardware that delivers them.  All network functionality is decoupled from the base hardware and simulated as a “virtual instance” that can be loaded onto a general-purpose hardware platform.  That hardware platform can then be used to support multiple virtual network instances.

NV has been around for years; the 802.1Q standard, present in switches, can establish VLANs, or Virtual Local Area Networks.  VLANs are designed to abstract the network by allowing multiple VLANs to share a single physical link while keeping them logically separate from one another.  Services such as Internet Protocol Security (IPsec) Virtual Private Networks (VPNs), Secure Sockets Layer (SSL) VPNs, Virtual Private LAN Service (VPLS), and Multiprotocol Label Switching (MPLS) all have elements of network virtualization, and all were developed for particular use cases.  These services have limitations, because fundamentally none of them changes the network topology; their NV capability depends on proprietary hardware network appliances to deliver the functionality.
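
To make the 802.1Q mechanism concrete, here is a minimal sketch in Python that inserts a VLAN tag into a raw Ethernet frame; the MAC addresses and VLAN ID are invented for the example, and a real switch does this in hardware.

```python
# Sketch of 802.1Q tagging: insert a 4-byte VLAN tag after the source MAC.
# MAC addresses and VLAN ID below are made up for the example.
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100 + 16-bit TCI) into an Ethernet frame."""
    assert 0 <= vlan_id < 4096           # VID is a 12-bit field
    tci = (priority << 13) | vlan_id     # 3-bit PCP, 1-bit DEI (0), 12-bit VID
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:] # after dst MAC (6) + src MAC (6)

dst = bytes.fromhex("ffffffffffff")      # broadcast destination
src = bytes.fromhex("020000000001")      # locally administered source MAC
frame = dst + src + struct.pack("!H", 0x0800) + b"payload"
print(tag_frame(frame, vlan_id=10).hex())
```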

NV today is designed to create virtual networks within a virtualized infrastructure, making the system more portable and flexible.  The physical devices remain responsible for forwarding packets, while the management of the system is software-driven.

Network Virtualization in the WAN

NV is nothing new in WAN topology; carriers have been running and selling network virtualization for quite a while.  Carriers today are moving away from dedicated links, which is evidence that NV is present in today’s [glossary]WAN[/glossary].

Providers today achieve virtualization using L2 or L3 VPN technology, such as MPLS VPNs (L2 and L3), VPLS, L2TP, and OTV, to name a few.

Not far off from data center network virtualization, WAN virtualization techniques are segmented into direct fabric programming, such as running an MPLS VPN over L2, and overlay approaches. Direct fabric programming enables providers to attain tight control over quality of service (QoS), while overlays require more advanced handling to achieve those checks. Overlays, however, can provide a little more flexibility, employing WAN virtualization techniques such as MPLS VPN over GRE (or multipoint GRE) to allow traffic to traverse networks that are not MPLS-ready via tunneling. Regarding categorization, IPsec VPNs could be considered an overlay WAN virtualization technology, since they provide a means of running multiple private networks over a shared WAN infrastructure.
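
The overlay idea is easiest to see in the encapsulation itself. Below is a minimal sketch of basic GRE encapsulation as defined in RFC 2784: a 4-byte header (flags/version plus the inner protocol type) wraps the inner packet. The inner bytes are a stand-in, and the outer IP header a real tunnel would add is omitted.

```python
# Minimal sketch of basic GRE encapsulation (RFC 2784): a 4-byte GRE header
# (flags/version = 0, then the inner protocol's EtherType) wraps the payload.
# The inner packet below is a stand-in; the outer IP header is omitted.
import struct

GRE_PROTO_IPV4 = 0x0800   # EtherType for an encapsulated IPv4 packet
GRE_PROTO_MPLS = 0x8847   # EtherType for MPLS unicast (MPLS over GRE)

def gre_encapsulate(inner_packet: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    header = struct.pack("!HH", 0x0000, proto)  # no checksum/key/seq bits set
    return header + inner_packet

inner = b"\x45\x00"       # stand-in for the start of an inner IPv4 packet
print(gre_encapsulate(inner).hex())
```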

Network Virtualization as a Service

The Table below outlines benefits and challenges of the different approaches:

Summary of Benefits of Each Approach

Storage

Storage in a computer is the place where data is held in electromagnetic or optical form for access by a computer processor.  The term is used in two ways:

  1. Storage often means the devices and data connected to the computer through input/output operations, that is, hard disk and tape systems and other forms of storage that do not include memory or other in-computer storage.  For the enterprise, the options for this kind of storage are of far greater variety and expense than those related to memory.
  2. Storage is commonly divided into two separate areas.  The first is primary storage, which holds data in memory, referred to as Random Access Memory (RAM), and in other built-in devices such as the processor’s L1 cache.  The second is secondary storage, which holds data on hard disks, tapes, and other devices requiring input/output operations.

Storage Virtualization

Storage virtualization is the separation of storage management software from the hardware infrastructure, to provide better flexibility and scalability.  It is implemented to create pools of storage resources.

Storage virtualization involves pooling the hardware storage resources on [glossary]SAN[/glossary]s, or Storage Area Networks.  This is helpful for integrating hardware resources from different networks and data centers into one clear view.


There is a difference between software-defined storage (SDS) and virtualized storage.  Although SDS is considered a form of storage abstraction, it should not be confused with virtualized storage.

In 2001, the Storage Networking Industry Association (SNIA) made an effort to describe the significant attributes of storage virtualization.

It defined storage virtualization in two parts:

  • The act of abstracting, hiding, or isolating the internal functions of a storage (sub)system or service from applications, host computers, or general network resources, for the purpose of enabling implementation- and network-independent management of storage.
  • The application of virtualization to storage services or devices for the purpose of aggregating functions or devices, hiding complexity, or adding new capabilities to lower-level storage resources.

Storage virtualization has been implemented to solve many issues in scaling and managing vast amounts of storage.  Keep in mind that this challenge continues to grow over time as the amount of structured and unstructured data increases.  This type of storage is used to improve scalability, redundancy, and performance, and, most of all, to combat rising costs.

There are a few technology techniques that can be introduced to virtualize storage functions.  The following are examples of techniques for implementing virtualized storage:

  • [glossary]Masking[/glossary]
  • [glossary]Zoning[/glossary]
  • [glossary]Host bus adapters[/glossary]
  • [glossary]Creating Logical Volumes[/glossary]
  • [glossary]RAID[/glossary]
  • Distributed File Systems or Objects

Storage virtualization utilizes storage hardware, typically arrays of disks, flash memory, or a combination of the two.  The idea behind virtualization is to aggregate and manage data across a broad range of physical assets in enterprise networks and data centers. The aggregated data can be used to isolate performance issues, predict and troubleshoot problems, and plan for future capacity needs.
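
Here is a small sketch of the pooling idea in the spirit of a logical volume manager: several physical devices are aggregated into one pool, and logical volumes are carved out without the consumer knowing which device holds the bytes. The class names and sizes are invented for the illustration.

```python
# Sketch of storage pooling in the spirit of a logical volume manager.
# Class names and sizes are invented for this illustration.

class StoragePool:
    def __init__(self, device_sizes_gb: dict[str, int]):
        self.devices = dict(device_sizes_gb)      # physical capacity per device
        self.free_gb = sum(device_sizes_gb.values())
        self.volumes: dict[str, int] = {}

    def create_volume(self, name: str, size_gb: int) -> None:
        # The consumer sees one pool; which disks back the volume is hidden.
        if size_gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        self.free_gb -= size_gb
        self.volumes[name] = size_gb

pool = StoragePool({"disk0": 500, "disk1": 500, "disk2": 1000})
pool.create_volume("db-data", 800)   # spans disks; the caller never knows
print(f"volumes={pool.volumes}, free={pool.free_gb} GB")
```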

Server

Server virtualization is the [glossary]masking[/glossary] of server resources, such as the identity of individual physical servers, processors, and operating systems, from server users.  So what happens?  An administrator uses a software application to divide one physical server into multiple isolated virtual environments.  The virtual environments are sometimes called virtual private servers, but they are also known as guests, instances, containers or emulations.

There are three popular approaches to server virtualization: the virtual machine model, the [glossary]paravirtual[/glossary] machine model, and virtualization at the operating system ([glossary]OS[/glossary]) layer.

Virtual machines, or [glossary]VMs[/glossary], are based on the host/guest paradigm. Each guest runs on a virtual imitation of the hardware layer. This approach allows the guest operating system to run without modification, and it allows the administrator to create guests that use different operating systems. The guest has no knowledge of the host’s operating system, because it is not aware that it isn’t running on real hardware. It does, however, require real computing resources from the host, so it uses a hypervisor to coordinate instructions to the CPU. This hypervisor is called a virtual machine monitor (VMM). It validates all the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Virtual Server both use the virtual machine model.
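
The VMM’s validation role can be caricatured in a few lines: unprivileged guest instructions run straight through, while privileged ones trap to the monitor, which emulates them safely. The instruction names and dispatch logic below are invented for this sketch.

```python
# Caricature of trap-and-emulate: the VMM passes safe instructions through
# and intercepts privileged ones. Instruction names are invented for the sketch.

PRIVILEGED = {"HLT", "OUT", "LGDT"}   # illustrative privileged instructions

def vmm_execute(guest: str, instruction: str) -> str:
    if instruction in PRIVILEGED:
        # Trap: the guest may not touch real hardware; emulate the effect.
        return f"VMM traps {instruction!r} from {guest} and emulates it"
    return f"{instruction!r} runs directly on the CPU for {guest}"

for insn in ("ADD", "OUT", "MOV", "HLT"):
    print(vmm_execute("guest-1", insn))
```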

The paravirtual machine (PVM) model is also based on the host/guest paradigm, and it uses a virtual machine monitor too. In the paravirtual machine model, however, the VMM actually modifies the guest operating system’s code. This modification is called porting. Porting supports the VMM so that it can utilize privileged system calls sparingly. Like virtual machines, paravirtual machines are capable of running multiple operating systems. Xen and UML both use the paravirtual machine model.

Virtualization at the OS level works a little differently. It isn’t based on the host/guest paradigm. In the OS level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests. Guests must use the same operating system as the host, although different distributions of the same system are allowed. This distributed architecture eliminates system calls between layers, which reduces CPU usage overhead. It also requires that each partition remain strictly isolated from its neighbors so that a failure or security breach in one partition isn’t able to affect any of the other partitions. In this model, common binaries and libraries on the same physical machine can be shared, allowing an OS level virtual server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-level virtualization.
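
The shared-kernel idea can be sketched the same way: one kernel object serves every guest, while each guest keeps a strictly private process table, so nothing in one partition can reach another. All names here are invented; real systems such as Virtuozzo and Solaris Zones implement this inside the kernel.

```python
# Sketch of OS-level virtualization: one shared kernel, strictly isolated
# per-guest namespaces. All names are invented; real systems do this in-kernel.

class SharedKernel:
    def syscall(self, guest: str, name: str) -> str:
        return f"kernel handles {name!r} for {guest}"   # one kernel for all

class Container:
    def __init__(self, name: str, kernel: SharedKernel):
        self.name, self.kernel = name, kernel
        self.processes: list[str] = []   # private process table per guest

    def run(self, program: str) -> None:
        self.processes.append(program)   # invisible to every other container
        print(self.kernel.syscall(self.name, f"exec {program}"))

kernel = SharedKernel()
web, db = Container("web", kernel), Container("db", kernel)
web.run("nginx")
db.run("postgres")
print(web.processes, db.processes)   # isolated views, same kernel underneath
```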

Server virtualization can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization, and workload management. This trend is one component in the development of autonomic computing, in which the server environment will be able to manage itself based on perceived activity. Server virtualization can be used to eliminate server sprawl, to make more efficient use of server resources, to improve server availability, to assist in disaster recovery, testing and development, and to centralize server administration.