Sunday, May 17, 2020

A look inside the architecture and history of virtualization for beginners


Hello everybody, in this post I want to review the history of a transformative technology called virtualization. But when did it really begin? Let's go back to the mainframe era, driven by IBM at the end of the 1960s and the beginning of the 1970s, when there were two big concerns: how much would the physical infrastructure of the IT world grow, and, in most Unix-based systems, how could computing resources be shared between many services or even users?
So the idea originated from the following challenges:
  1. How can extra capacity, such as large amounts of unused memory or idle processors, be used effectively?
  2. With respect to one of the concerns mentioned above, how can physical servers be used without owning the physical hardware?
  3. And when a disaster occurs, can we move or migrate our services?
Beyond the efficient usage of physical resources, these challenges bring up some goals for us:
  1. Faster service provisioning, including all steps of installation and implementation, compared to traditional methods of providing services.
  2. Better methods of managing the data center and its critical components, such as servers and databases.
  3. Full IT mobility in support of business continuity.
A good answer to these challenges is virtualization, and this concept is realized through hypervisor technology. A hypervisor, or virtual machine monitor, has one primary purpose: constructing virtual components such as virtual machines, virtual adapters or controllers, virtual switches, and so on.
By deploying a hypervisor platform, it is possible to run multiple guest operating systems concurrently inside a single physical host by running virtual machines. The host, in our terminology, is the physical system on which the hypervisor software is installed.
A hypervisor is a software process that acts as a new abstraction layer, separating the OS and its applications from the underlying physical hardware. Besides running separate virtual machines, the hypervisor lets the host share its resources between those VMs and also provides an infrastructure for managing resource requests.
But which physical resources exactly do we mean as the shared components?
By default we can say everything, but in design discussions we usually mean these important hardware components: CPU (processor), RAM (memory), disk (storage), and network (bandwidth).
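As an illustration, these four shared resources can be modeled in a short Python sketch. The `VMSpec` and `HostCapacity` names here are hypothetical, not taken from any real hypervisor API; the point is only to show what a hypervisor has to account for when it shares a host between VMs.

```python
from dataclasses import dataclass

@dataclass
class VMSpec:
    # The four resources a hypervisor typically shares between VMs.
    vcpus: int           # virtual CPUs
    memory_mb: int       # RAM in megabytes
    disk_gb: int         # virtual disk capacity in gigabytes
    bandwidth_mbps: int  # network bandwidth

@dataclass
class HostCapacity:
    cpus: int
    memory_mb: int
    disk_gb: int
    bandwidth_mbps: int

def fits(host: HostCapacity, vms: list) -> bool:
    """Check whether the combined VM requests fit within host RAM and disk.
    CPU and bandwidth are commonly overcommitted, so they are not checked here."""
    return (sum(v.memory_mb for v in vms) <= host.memory_mb
            and sum(v.disk_gb for v in vms) <= host.disk_gb)

host = HostCapacity(cpus=16, memory_mb=65536, disk_gb=2000, bandwidth_mbps=10000)
vms = [VMSpec(4, 8192, 100, 1000), VMSpec(8, 16384, 500, 1000)]
print(fits(host, vms))  # True: 24 GB of RAM and 600 GB of disk fit on this host
```

A real hypervisor does far more than this (scheduling, memory ballooning, I/O queuing), but the bookkeeping above is the starting point of resource sharing.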
There are two primary types of hypervisor:
Type 1, or bare metal: the hypervisor is installed and runs directly on top of the hardware layer, manages the host's resources, and handles the virtual machines' requests without any additional software. In this type there is no need to install a separate OS on the physical host, because the hypervisor itself responds to all operations in this area and acts as both the operating system and the virtualization software. VMware ESXi, Citrix XenServer, Red Hat KVM, Oracle VM Server, Microsoft Hyper-V, and Nutanix AHV are the most popular type 1 hypervisor solutions.
Unlike bare metal, in type 2, or hosted, we need to set up an OS on the physical host before the hypervisor is installed on that OS. In this type the hypervisor is installed as ordinary software and requires an independent OS on top of the hardware layer, acting as virtualization software on top of that OS. Any hardware problem or software issue that leads to a host OS failure will stop the hypervisor software, and all of its VMs will stop working as well. VMware Workstation, Oracle VM VirtualBox, and Microsoft Virtual PC are some well-known type 2 hypervisors.
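Both hypervisor types perform best when the CPU offers hardware virtualization extensions. On Linux these show up in /proc/cpuinfo as the vmx flag (Intel VT-x) or the svm flag (AMD-V). A minimal sketch that parses such a flags line (the sample string below is illustrative, not real host output):

```python
from typing import Optional

def hw_virt_support(cpuinfo_text: str) -> Optional[str]:
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if neither flag is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# Illustrative sample; on a real Linux host you would pass
# open("/proc/cpuinfo").read() instead.
sample = "processor : 0\nflags : fpu vme de pse tsc msr pae vmx sse2"
print(hw_virt_support(sample))  # vmx
```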
But what is a virtual machine? And which components exactly does it consist of?
First of all, you should know that a virtual machine's functionality must not differ from a physical machine's, and when we run multiple machines inside a host, each of them has its own guest OS and applications. However much the VM acts like an independent physical system, from the host's perspective a VM is really nothing more than a set of files residing on the host's storage: its dedicated configuration and log files, plus its hardware resources, including NVRAM, virtual disks, and so forth. When you power on a VM, one or more processes are added at the hypervisor OS level and some new VM files are generated in the VM's directory.
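To make "a VM is just files" concrete, here is a sketch that groups the files of a VMware-style VM directory by their role. The extension-to-role mapping reflects VMware's conventions (.vmx configuration, .vmdk virtual disks, .nvram BIOS state, .log logs, .vswp swap created at power-on); other hypervisors use different formats, e.g. .qcow2 for KVM/QEMU or .vhdx for Hyper-V. The file names are made up for the example.

```python
from collections import defaultdict
from pathlib import PurePath

# VMware-style VM file extensions mapped to their role in the VM.
FILE_ROLES = {
    ".vmx": "configuration",
    ".vmdk": "virtual disk",
    ".nvram": "BIOS/NVRAM state",
    ".log": "log",
    ".vswp": "swap",  # appears once the VM is powered on
}

def classify_vm_files(filenames):
    """Group a VM directory listing by the role each file plays."""
    roles = defaultdict(list)
    for name in filenames:
        role = FILE_ROLES.get(PurePath(name).suffix, "other")
        roles[role].append(name)
    return dict(roles)

files = ["web01.vmx", "web01.vmdk", "web01-flat.vmdk", "web01.nvram", "vmware.log"]
print(classify_vm_files(files))
```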
In comparison to old, traditional server deployment methods, running virtual machines has many benefits, including the following:
  1. Using existing hardware resources at a higher rate of efficiency.
  2. Improved speed of service provisioning thanks to virtual machine deployment.
  3. Simplified server and data center management operations inside a unified management console.
  4. Protection against disasters at any level of software or hardware in a data center.
  5. Because we don't need to buy more physical servers for new services and applications, less budget and fewer financial resources are required.
There is another method of deploying virtualization solutions, called nested virtualization, which is more complex to set up: a VM is itself a hypervisor, or has hypervisor software installed on its guest OS. In this structure we deploy two hypervisors, one of which is set up as a virtual machine inside the other; while it can act as the host for some VMs, it is still a virtual machine from the outer host's point of view. The two hypervisors can be the same or different virtualization products, but remember to read their installation documents before planning to deploy them.
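As a concrete example of one such installation detail: on a Linux/KVM host, nested virtualization is exposed through a kernel module parameter, /sys/module/kvm_intel/parameters/nested on Intel hosts or /sys/module/kvm_amd/parameters/nested on AMD hosts, which contains Y (newer kernels) or 1 (older kernels) when enabled. A small sketch that interprets that value:

```python
from pathlib import Path

# KVM exposes nested-virtualization support as a module parameter;
# the path differs for Intel (kvm_intel) and AMD (kvm_amd) hosts.
NESTED_PARAMS = [
    Path("/sys/module/kvm_intel/parameters/nested"),
    Path("/sys/module/kvm_amd/parameters/nested"),
]

def nested_enabled_value(raw: str) -> bool:
    """Interpret the parameter: newer kernels report 'Y'/'N', older ones '1'/'0'."""
    return raw.strip() in ("Y", "1")

def kvm_nested_enabled() -> bool:
    for param in NESTED_PARAMS:
        if param.exists():
            return nested_enabled_value(param.read_text())
    return False  # KVM module not loaded, or not a KVM host

print(nested_enabled_value("Y\n"))  # True
```

Other hypervisors have their own switches for the same capability (for example, ESXi and Hyper-V each expose a per-VM setting), which is exactly why reading the product documentation first matters.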
Running many large virtual machines on a single physical host can lead to unstable performance, so keep in mind that we should always maintain a suitable balance between the total number of running virtual machines and the existing physical hosts.
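One common way to keep an eye on that balance is the vCPU-to-physical-core ratio (often called the CPU overcommit ratio). The arithmetic is trivial, as the sketch below shows; what counts as an acceptable ratio is workload-dependent, so the numbers in the example are illustrative only.

```python
def vcpu_overcommit_ratio(vcpus_allocated: int, physical_cores: int) -> float:
    """Ratio of virtual CPUs handed out to VMs versus physical cores on the host."""
    return vcpus_allocated / physical_cores

# Example: 10 VMs with 4 vCPUs each running on a 16-core host.
ratio = vcpu_overcommit_ratio(10 * 4, 16)
print(f"{ratio:.1f}:1 vCPU-to-core ratio")  # 2.5:1
# CPU-hungry workloads may need close to 1:1, while light workloads
# can tolerate a noticeably higher overcommit.
```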




