One of the greatest and most impressive technologies VMware provides, especially in Software-Defined Storage (SDS), is Virtual Volumes (vVols). To better understand SDS technologies, first consider their primary goal: instead of relying on dedicated storage equipment, we bring storage provisioning capabilities up to the hypervisor level, just like compute resources, with throughput, efficiency, and performance close to that of physical storage devices. VMware Virtual Volumes (vVols) and VMware vSAN are two such technologies.
To provide capacity pools in the vSphere environment, vVols abstracts the underlying physical storage (physical disks, arrays, LUNs, and so on) so that these resources can be consumed and configured as Virtual Datastores, which define capacity boundaries, access logic, and a set of data services accessible to the VMs provisioned in the pool. With this feature we can deliver the right storage service levels with more flexibility, according to each virtual machine's requirements. One of the greatest benefits of these Virtual Datastores is that LUNs can be provisioned without risk of disruption and without needing to format them with a specific file system.
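As a quick illustration of how these virtual datastores surface in the environment, here is a minimal pyVmomi sketch (my own example, not something taken from the vVols documentation) that lists the vVols datastores registered in vCenter along with their capacity; the vCenter address and credentials are placeholders for your own environment.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with your own vCenter.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # Walk every datastore object in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # vVols datastores report their summary type as "VVOL".
        if ds.summary.type == "VVOL":
            capacity_gb = ds.summary.capacity / (1024 ** 3)
            free_gb = ds.summary.freeSpace / (1024 ** 3)
            print(f"{ds.name}: {capacity_gb:.0f} GB total, {free_gb:.0f} GB free")
    view.DestroyView()
finally:
    Disconnect(si)
```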
vVols builds on the Storage Policy-Based Management (SPBM) framework and virtualizes SAN and NAS arrays, enabling an efficient and optimized operational model in the vSphere environment that is centered on applications rather than infrastructure. In other words, instead of building the storage infrastructure around a specific vendor, we can create application-based volumes. VMware released Virtual Volumes with vSphere 6.0 and announced which of its products are compatible with the first version of vVols in KB2112039.
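Because vVols relies on SPBM, the policies themselves are visible through the PBM endpoint of vCenter. The sketch below follows the pattern used in the pyVmomi community samples to list the existing VM storage policies; treat it as an assumption-laden example ("si" is the vim.ServiceInstance from the previous snippet), not the only or official way to do it.

```python
import ssl
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

def get_pbm_content(si):
    # Reuse the vCenter session cookie to talk to the Storage Policy (PBM) endpoint.
    stub = si._stub
    VmomiSupport.GetRequestContext()["vcSessionCookie"] = stub.cookie.split('"')[1]
    pbm_stub = SoapStubAdapter(host=stub.host.split(":")[0],
                               version="pbm.version.version1",
                               path="/pbm/sdk",
                               poolSize=0,
                               sslContext=ssl._create_unverified_context())
    return pbm.ServiceInstance("ServiceInstance", pbm_stub).RetrieveContent()

pbm_content = get_pbm_content(si)
pm = pbm_content.profileManager
# Query the requirement profiles (the VM storage policies) and print their names.
profile_ids = pm.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
if profile_ids:
    for profile in pm.PbmRetrieveContent(profileIds=profile_ids):
        print(profile.name)
```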
The VMware SDS perspective has always been about giving Storage Admins administrative platforms for provisioning and adding virtual datastores (volumes), which define capacity-related services and how data is stored; Virtual Volumes therefore simplifies management operations on top of the existing storage infrastructure. As VMware describes it when introducing this feature, it is a SAN/NAS management framework in the vSphere environment that exposes virtual disks as native storage objects and enables array-based operations at the virtual disk level. vVols are VMDK-granular storage entities exported by storage arrays: they encapsulate virtual disks and other virtual machine files and store those files natively on the storage system.
Virtual Volumes lets storage devices sense and detect the disk requests of individual virtual machines, and it unlocks the ability to leverage array-based data services with a VM-centric approach down to a single virtual disk. From the virtual machine perspective, vVols makes it possible to provide and present storage services separately from how VMs use and consume them (it separates the presentation of storage from its consumption by VMs).
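To see why this per-disk granularity matters, remember that each virtual disk of a VM is an individually addressable object; on a vVols datastore every one of these backing files maps to its own vVol, so array-side services can be applied per disk rather than per LUN. The small pyVmomi sketch below (with a hypothetical VM name) simply enumerates those disks.

```python
from pyVmomi import vim

def list_vm_disks(si, vm_name):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next((v for v in view.view if v.name == vm_name), None)
    if vm is None:
        raise ValueError(f"VM {vm_name!r} not found")
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            # On a vVols datastore, each backing file corresponds to its own vVol.
            print(device.deviceInfo.label, "->", device.backing.fileName)

list_vm_disks(si, "app-server-01")  # "app-server-01" is a hypothetical VM name
```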
vVols are exported to the ESXi host through a small set of Protocol Endpoints (PEs), which are part of the physical storage fabric (iSCSI and NFS) and are created by the Storage Admins. A PE acts as the I/O access point and is responsible for handling all paths from the ESXi host to the storage system, along with the related policies. Essentially, a PE is similar to a LUN and can be discovered and mounted by multiple ESXi hosts. PEs are also required to establish data paths from virtual machines to their respective vVols on demand.
The Management, Configuration, and Data services layers in vVols are each handled separately through an out-of-band (OOB) mechanism. For management purposes, vVols can be grouped into logical entities called storage containers (SC), which act as a logical abstraction for mapping vVols and specifying how they are stored; vSphere then maps these containers to vVols Datastores to provide the applicable datastore-level functionality. Note that vVols are not accessible by default after creation: some vSphere administration operations are needed first, such as Bind/Unbind requests that connect a vVol to a PE or remove that binding (vVols v1 is limited to 256 storage containers per ESXi host). PEs are associated with arrays and managed per array; each PE is associated with only one array, while an array can be associated with multiple PEs, and a single PE can be the I/O access point for multiple vVols.
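If you have SSH access to an ESXi host, you can check which protocol endpoints and storage containers the host currently sees. The sketch below shells into the host with paramiko and runs the esxcli "storage vvol" listing commands; the host address and credentials are placeholders, and the exact esxcli output format may differ between ESXi builds.

```python
import paramiko

HOST, USER, PASSWORD = "esxi01.example.com", "root", "********"  # placeholders

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)
try:
    for cmd in ("esxcli storage vvol protocolendpoint list",
                "esxcli storage vvol storagecontainer list"):
        _, stdout, _ = client.exec_command(cmd)
        print(f"### {cmd}\n{stdout.read().decode()}")
finally:
    client.close()
```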
The vSphere 6.5 release brought many storage enhancements and improvements, one of which was vVols v2. One of the shortcomings of v1 was the lack of Array-Based Replication (ABR) protection for vVols datastores, and this was fixed in v2: with vVols v2 it is possible to replicate a group of VMs or a specific VM. Besides ABR, vVols v2 offers other functionality, including policy-based, VM-centric storage for Business Critical Applications such as Oracle RAC.
In vVols v1, Storage I/O Control (SIOC) is supported, while Storage DRS (SDRS) is not. Also, many storage providers that support this technology have their own integration software for vVols, so visit your storage vendor's website before using it. vSphere APIs for Storage Awareness (VASA) is a set of vSphere storage APIs that is required for vVols to operate, because it provides the communication between vCenter, hosts, and storage arrays. Some vSAN functionalities, such as providing virtual storage, are similar to vVols; I will write about vSAN features in another post. Finally, you can read about the vVols improvements in vSphere 7.0, such as support for SRM v8.3, in vSphere 7 Core Storage.
For more information about the Virtual Volumes feature, visit the following links: