Saturday, May 30, 2020

Unable to create an ESXi bootable device with Rufus!

Today, one of our network staff wanted to prepare ESXi installation media on a USB flash disk. He tried to create it with Rufus v2.8, but unfortunately it did not complete successfully: the device failed to boot and prompted the following error:
menu.c32: not a COM32R image 
boot:
We tested again with other versions of the application, but there was no success :( After some searching, I found a thread on reddit.com where a user had attached a correct menu.c32 file. When I overwrote the old file with it, the device booted and the mentioned issue was solved.
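If you hit the same error, the fix is simply to replace menu.c32 on the USB media with a known-good copy. A minimal sketch on Windows, assuming the stick is mounted as drive E: and the downloaded menu.c32 sits in the current folder:

  copy /Y menu.c32 E:\menu.c32

This kind of "not a COM32R image" error typically points to a version mismatch between the bootloader written to the disk and the COM32 modules next to it, which is why swapping in a matching menu.c32 is enough.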

Sunday, May 17, 2020

An investigation into the architecture and history of virtualization for beginners


Hello everybody, in this post I want to review the history of a transformative technology called virtualization. But when did it really begin? Let's go back to the mainframe development era of IBM, at the end of the 1960s and the beginning of the 1970s, when there were two big concerns: how much would the physical infrastructure of the IT world grow, and, in most Unix-based systems, how could computing resources be shared between many services or even users?
So the idea originated from the following challenges:
  1. How to effectively use extra capacity such as large amounts of unused memory or idle processors?
  2. With respect to the concerns mentioned above, how to use physical servers without owning the physical hardware?
  3. And when a disaster occurs, can we move or migrate our services?
Beyond the efficient usage of physical resources, these challenges bring up some goals for us:
  1. Faster service provisioning, covering all steps of installation and implementation, compared to traditional methods of service delivery.
  2. Better methods for managing the data center and its critical components such as servers and databases.
  3. And solid IT mobility in support of business continuity.
A good answer to these challenges is virtualization, and this concept is realized by hypervisor technology. A hypervisor, or virtual machine monitor, has one primary job: constructing virtual components such as virtual machines, virtual adapters and controllers, virtual switches, and so on.
By deploying a hypervisor platform, it's possible to concurrently run multiple guest operating systems inside a single physical host by running them as virtual machines. A host, in our terminology, is the physical system on which the hypervisor software is installed.
The hypervisor is a software layer that acts as a new abstraction layer to separate the OS and its applications from the underlying physical hardware. Besides running separate virtual machines, the hypervisor lets the host share its resources among those VMs and also provides an infrastructure for managing resource requests.
But which physical resources exactly do we mean as the shared components?
By default we can say everything, but at the design level we usually mean these important hardware components: CPU (processor), RAM (memory), disk (storage), and network (bandwidth).
There are two primary types of Hypervisor:
Type 1 or Bare Metal: the hypervisor is installed and runs directly on top of the hardware layer, manages the host's resources, and handles virtual machine requests without any additional software. In this type there is no need to install a separate OS on the physical host, because the hypervisor itself responds to all operations in this area and acts as both the operating system and the virtualization software. VMware ESXi, Citrix XenServer, Red Hat KVM, Oracle VM Server, Microsoft Hyper-V, and Nutanix AHV are the most popular Type 1 hypervisor solutions.
Unlike Bare Metal, in Type 2 or Hosted we need to set up an OS on the physical host before the hypervisor is installed on that OS. In this type the hypervisor is set up as ordinary software and requires an independent OS on top of the hardware layer, so it acts as virtualization software on top of the OS. Any hardware problem or software issue that leads to a host OS failure will make the hypervisor software stop working, and all VMs will stop with it. VMware Workstation, Oracle VM VirtualBox, and Microsoft Virtual PC are some of the well-known Type 2 hypervisors.
But what is a virtual machine? And which components exactly does it consist of?
First of all, you should know that a virtual machine's functionality must not differ from a physical machine's, and when we run multiple machines inside a host, each of them has its own guest OS and applications. Although the VM acts like an independent physical system, from the host's perspective a VM is really nothing more than a set of files residing on the host's storage, with its own configuration and log files as well as hardware resources such as NVRAM and virtual disks. When you power on a VM, one or more processes are added at the hypervisor OS level and some new VM files are generated in the VM's directory.
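To make that concrete, here is roughly what such a directory looks like from the ESXi shell (a minimal sketch; the datastore name, VM name, and exact file list are hypothetical and vary per VM):

  ls /vmfs/volumes/datastore1/MyVM/
  MyVM.vmx         # VM configuration file
  MyVM.vmdk        # virtual disk descriptor
  MyVM-flat.vmdk   # virtual disk data
  MyVM.nvram       # virtual BIOS/EFI settings (NVRAM)
  MyVM.vmsd        # snapshot metadata
  vmware.log       # VM log file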
In comparison to old and traditional server deployment methods, there are many benefits of running virtual machines, including the following:
  1. Using existing hardware resources with a higher rate of efficiency.
  2. Improving the speed of service provisioning thanks to virtual machine deployment.
  3. Simplifying server and datacenter management operations within a unified management console.
  4. Better protection against disasters at any level of software or hardware resource in a datacenter.
  5. Because we don't need to provide more physical servers for new services and applications, less budget and fewer financial resources are needed.
There is another method of deploying virtualization solutions, called nested virtualization, which is more complex to establish: a VM is itself a hypervisor, or has hypervisor software installed on its guest OS. In this structure we deploy two hypervisors, one of which is set up as a virtual machine inside the other; while it can act as the host for some VMs, it is still a virtual machine on another host. Both hypervisors can be the same or different virtualization products, but remember to read their installation documents before planning to deploy them.
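As a minimal sketch of what this usually requires on an ESXi host (the VM is hypothetical, and you should confirm the setting against your product's documentation), hardware-assisted virtualization has to be exposed to the guest, which corresponds to this line in the VM's .vmx configuration file:

  vhv.enable = "TRUE"

In the vSphere Client the same thing is done by ticking "Expose hardware assisted virtualization to the guest OS" in the VM's CPU settings while the VM is powered off.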
Running many large virtual machines on a single physical host can lead to unstable performance. So keep in mind that we should always maintain a suitable balance between the total number of running virtual machines and the existing physical hosts.




Friday, May 15, 2020

ESXCLI System - Part 2

This is the 2nd part of the ESXCLI System video series, in which I talk about process management, the UUIDs of the boot device and the system, and the DCUI welcome message ...
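For reference, these are the kinds of commands covered; a quick sketch to run in the ESXi shell (see esxcli system --help for the full namespace, and note the welcome message text below is just an example):

  esxcli system process list        # list the processes running on the host
  esxcli system uuid get            # show the system UUID
  esxcli system boot device get     # show the boot device UUIDs
  esxcli system welcomemsg get      # show the current DCUI welcome message
  esxcli system welcomemsg set -m "Managed by the IT team"   # change the DCUI welcome message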

Thursday, May 14, 2020

Introduction to VMware Virtual Volumes (vVols)


One of the greatest technologies and most impressive solutions provided by VMware, especially among Software-Defined Storage (SDS) systems, is vVols (Virtual Volumes). For a better understanding of SDS technologies, first consider their primary goal: instead of relying only on physical storage equipment, we can bring storage provisioning capabilities up to the hypervisor level, just like the compute facilities, with nearly similar throughput, efficiency, and performance compared to physical storage devices. VMware Virtual Volumes (vVols) and VMware vSAN are two of these technologies.
To provide capacity pools in the vSphere environment, vVols is an abstract representation of the existing underlying storage (physical disks, arrays, LUNs, and so on); it consumes these resources and configures them as virtual datastores that define capacity boundaries, access logic, and a set of data services accessible to the VMs provisioned in the pool. By using this feature we can deliver the right storage service levels, with more flexibility, according to the virtual machines' requirements. One of the greatest benefits of these virtual datastores is provisioning LUNs without risk of disruption and without the need to format them with a specific file system.
vVols is a storage policy-based management (SPBM) framework that presents virtual arrays (virtualized SAN & NAS) to enable an efficient and optimized operational model in the vSphere environment, so storage arrays are aligned to applications rather than to infrastructure. This means that instead of implementing a specific vendor's array as the storage infrastructure, we can create application-based volumes. With the release of vSphere 6.0, VMware published Virtual Volumes and announced which of its products are compatible with the first version of vVols in KB2112039.
The VMware SDS perspective has always been to give storage admins administrative platforms for providing and adding virtual datastores (volumes), which define capacity-related services and how data is stored. So Virtual Volumes simplifies management operations over the existing storage infrastructure. As VMware defines it, this feature introduces a SAN/NAS management framework in the vSphere environment that exposes virtual disks as native storage objects and enables array-based operations at the virtual-disk level. vVols are VMDK-granular storage entities exported by storage arrays. vVols also encapsulate virtual disks and other virtual machine files, and natively store those files on the storage system.
Virtual Volumes technology lets storage devices sense and detect virtual machine disk requests, and it unlocks the ability to leverage array-based data services with a VM-centric approach down to the single virtual disk. From the virtual machine perspective, vVols makes it possible to provide and present storage services independently of their usage and consumption by VMs (it separates the presentation of storage from its consumption by VMs).
vVols are exported to the ESXi host through a small set of Protocol Endpoints (PE), which are part of the physical storage fabric (iSCSI & NFS) and are created by the storage admins. A PE acts as the I/O access point and is responsible for handling all paths from the ESXi host to the storage system, as well as the related policies. Basically, a PE is similar to a LUN and can be discovered and mounted by multiple ESXi hosts. PEs are also required to establish a data path from virtual machines to their respective vVols on demand.
Each layer of management, configuration, and data services in vVols is handled separately with an out-of-band (OOB) mechanism. For management purposes, vVols can be grouped into logical entities called storage containers (SC), which act as a logical abstraction for mapping the vVols and specifying how they are stored. vSphere then maps them to vVols datastores to provide the applicable datastore-level functionality. But you should know that vVols are not accessible by default after creation; there are some vSphere administrative operations for using them, such as bind/unbind requests that establish or tear down the path through a PE (with a limit of 256 SCs per ESXi host in vVols v1). PEs are associated with arrays (managed per array), and each PE is associated with only one array, while an array can be associated with multiple PEs (a single PE can be the I/O access point for multiple vVols).
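As a quick sketch, an ESXi host's view of these objects can be checked from the shell with the esxcli storage vvol namespace (the actual output depends on your array and VASA provider):

  esxcli storage vvol storagecontainer list   # storage containers (SC) visible to the host
  esxcli storage vvol protocolendpoint list   # protocol endpoints (PE) discovered by the host
  esxcli storage vvol vasaprovider list       # registered VASA providers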
With the vSphere 6.5 release there were many storage enhancements and improvements, and one of them was the publication of vVols v2. One of the shortcomings of v1 was the lack of Array-Based Replication (ABR) protection for vVols datastores, and of course this was fixed in v2. So in vVols v2 it's possible to replicate a group of VMs or a specific VM. Besides ABR, there are other vVols v2 capabilities, including policy-based, VM-centric storage for business-critical applications such as Oracle RAC.

In vVols v1, Storage I/O Control (SIOC) is supported, while Storage DRS (SDRS) is not. Also, many storage providers that support this technology have their own integration software for vVols, so to use it, visit your storage vendor's website. vSphere APIs for Storage Awareness (VASA) is a set of vSphere storage APIs that is also required for vVols operation, because of its role in providing communication between vCenter, the hosts, and the storage arrays. Some vSAN functionalities are similar to vVols, such as providing virtual storage; I will write about the vSAN features in another post. Finally, you can read about the vVols improvements in vSphere 7.0, such as support for SRM 8.3, in vSphere 7 Core Storage.
For more information about the Virtual Volumes feature, visit the following links:




Friday, May 8, 2020

Kubernetes Cheat Sheet: Useful guide for kubectl

Thanks to LinuxAcademy.com for publishing two perfect and useful cheat sheets about Kubernetes and how to work with the kubectl command.
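A few of the everyday kubectl commands such cheat sheets cover, as a minimal sketch (the pod, namespace, and file names below are hypothetical):

  kubectl get pods -n my-namespace     # list pods in a namespace
  kubectl describe pod my-pod          # show detailed state and events for a pod
  kubectl logs -f my-pod               # stream a pod's logs
  kubectl apply -f deployment.yaml     # create or update resources from a manifest
  kubectl delete -f deployment.yaml    # remove those resources again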





I will start a new journey soon ...