After VMFS 3 and VMFS 5, VMware has introduced VMFS 6 with the storage enhancements that come with vSphere 6.5. Here we will look at the new features introduced in VMFS 6.
Thanks to the VMware vSphere 6.5 Storage Guide, which I went through to get more depth on these new features.
What's New in VMFS 6 Datastore
Here are the enhancements that come with VMFS 6 volumes:
- Support for 4K Native Drives in 512e mode
- Space Efficient Virtual Disk (SE Sparse disk) is now default
- Automatic Space Reclamation
- Support for 512 devices and 2000 paths
- View Storage Accelerator (Content Based Read Cache (CBRC))
Support for 4K Native Drives in 512e mode
- Sizes of spinning disks keep growing, and these new "advanced format" drives come with a 4 KB sector instead of the usual 512-byte sector, primarily for better handling of media errors.
- Industry standard disk drives have been using a native (physical) 512 bytes sector size. However, due to the increasing demand for larger capacities, the storage industry recently introduced new advanced formats that use 4KB (4096 bytes) physical sectors.
- The disk sector size is an important factor in the design of Operating System and Hypervisor (collectively called OS here) software such as device drivers and file systems, because it represents the atomic unit of I/O operations on a disk drive. Not all OS versions have been modified to utilize 4KB sectors in the disk drives. Thus, the firmware of these newer devices may expose a logical sector size, which is either 4KB Native (4Kn) or 512B Emulation (512e).
- 4K native drive is the advanced format in which the physical sectors and logical sectors are both 4,096 bytes in size.
- 512e is the advanced format in which the physical sector size is 4,096 bytes, but the logical sector size emulates 512 bytes sector size.
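To see which format a given device reports, ESXi 6.5 adds a capacity listing to esxcli. A quick sketch (run from the ESXi shell; the output columns may vary slightly by build):

```shell
# List logical and physical block sizes for all storage devices.
# A 512e drive reports Logical Block Size 512 and Physical Block Size 4096;
# a 512n drive reports 512 for both.
esxcli storage core device capacity list
```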
Space Efficient Virtual Disk (SE Sparse disk) is now default
- SE Sparse disks give better space efficiency to virtual desktop infrastructure (VDI) deployed on this virtual disk format because they have the ability to reclaim stranded space from within the guest OS automatically. SE Sparse is used primarily by View and for disks larger than 2 TB. On VMFS 6, the default will be SE Sparse.
Automatic Space Reclamation
- VMFS 6 enables Automatic Space Reclamation based on VAAI UNMAP, which has been around for a while and allows you to unmap previously used blocks and reclaim dead or stranded space on thinly provisioned VMFS volumes.
- Storage capacity is reclaimed and released to the array so that other volumes can use these blocks when needed. In vSphere 6.0 you had to do this manually via the command-line interface; it has now been integrated into the UI and can simply be turned on or off.
- vSphere 6.5 automates the process by tracking the deleted VMFS blocks and reclaiming this space from the storage array in the background every 12 hours. The effect on storage I/O is minimal.
- UNMAP works not only at the array level, but also at the guest OS level with newer versions of Windows and Linux. The benefit of UNMAP is obvious: you can reclaim disk space from within your provisioned VMs on a regular basis with the scrubber. The space reclamation priority can further be adjusted per datastore, which means you can directly specify the priority with which deleted or unmapped blocks are reclaimed on the LUN.
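From inside a Linux guest, issuing the trim manually can be sketched like this (assuming the virtual disk is thin-provisioned and exposed to the guest as trim-capable; the device name sda is an example):

```shell
# Check whether the guest's disk advertises discard (TRIM/UNMAP) support;
# a non-zero granularity means discard requests can be issued.
cat /sys/block/sda/queue/discard_granularity

# Manually trim free space on the root filesystem.
# Many distributions schedule this periodically via fstrim.timer instead.
sudo fstrim -v /
```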
- If you are comfortable with esxcli, you can use the following to get the reclamation settings of a particular datastore (here I used "vi-ds-03" as the datastore name):
esxcli storage vmfs reclaim config get -l vi-ds-03
Reclaim Granularity: 1048576 Bytes
Reclaim Priority: low
- Or set the datastore to a particular priority; note that with esxcli you can also set the priority to medium or high if desired:
esxcli storage vmfs reclaim config set -l vi-ds-03 -p high
- Please note that ESXi 6.0 cannot access a VMFS 6-formatted datastore. ESXi 6.5, however, can read and write both VMFS 6 and VMFS 5 datastores.
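You can check which VMFS version each mounted datastore uses from the ESXi shell. A sketch:

```shell
# Lists volume name, UUID, mount status and filesystem type for each datastore;
# VMFS 6 datastores show "VMFS-6" in the Type column, VMFS 5 shows "VMFS-5".
esxcli storage filesystem list
```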
Support for 512 devices and 2000 paths
- This is one of the features that allows placing large-capacity drives in a vSAN; however, VMware will only support 512e drives for high-capacity and high-density configurations.
- The main difference between 512n and 512e is the sector size on the drive. Traditional 512n storage devices use a native 512-byte sector size, whereas 512e is the advanced format in which the physical sector size is 4,096 bytes. The logical sector size is the same 512 bytes in both 512n and 512e.
- 512n supports both VMFS 5 and VMFS 6, while 512e does not support VMFS 5 in the case of local storage devices.
- Earlier versions of vSphere had a limit of 256 devices and 1024 paths per host; vSphere 6.5 extends this to 512 devices and 2000 paths. The old limits were painful, especially when RDMs are used, when 8 paths to each device are configured, or when the number of VMs per datastore is limited.
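If you want to see how close a host is to those limits, counting devices and paths with esxcli is a reasonable sketch (the grep patterns assume SAN LUNs with NAA identifiers; locally attached devices may use other prefixes such as mpx or t10):

```shell
# Number of storage devices seen by this host (assumes NAA-named LUNs)
esxcli storage core device list | grep -c '^naa.'

# Total number of paths across all devices
# (each path entry in the output carries one "Device:" line)
esxcli storage core path list | grep -c 'Device:'
```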
View Storage Accelerator (Content Based Read Cache (CBRC))
- The View Storage Accelerator reduces storage loads generated by peak VDI storage reads by using the Content Based Read Cache (CBRC) to store common blocks of desktop images in local host memory.
- This significantly improves the desktop performance, especially during boot storms or anti-virus scanning storms when a large number of blocks with identical contents are read.
- CBRC also improves application performance. In environments where several users load the same application on an ESXi host (for example, MS Word), the corresponding blocks of the application are cached once and all users are then served directly from the cache.
Let's move to the configuration part and see how we can add a VMFS 6 datastore to an ESXi host. It's quite similar to earlier versions of vSphere, just with the new features we discussed above.
How to Add VMFS 6 Datastore in vSphere 6.5 ESXi Host
- Log in to the vSphere Web Client. Go to the host inventory and select the host.
- Go to the Configuration tab and click on Datastores under Storage.
- Click the + icon to create a new datastore.
- The New Datastore wizard will open. Select VMFS and click Next.
- Select the specific LUN that you want to use to create this datastore and give it an appropriate name. Here I used "vi-ds-03".
- Under the VMFS Version option, you can see the newly introduced VMFS 6, which was not available in earlier versions. VMFS 3 has been removed and can no longer be used in this version.
- You may also notice the note under VMFS 6, which we discussed above: "VMFS 6 enables advanced format (512e) and automatic space reclamation support".
- VMFS 6 - This option is the default for 512e storage devices. ESXi hosts of version 6.0 or earlier cannot recognize a VMFS 6 datastore. If your cluster includes ESXi 6.0 and ESXi 6.5 hosts that share the datastore, this version might not be appropriate.
- VMFS 5 - This option is the default for 512n storage devices. A VMFS 5 datastore supports access by ESXi hosts of version 6.5 or earlier.
- Select VMFS 6 and Click on Next.
- Specify Partition Configuration.
- Use all available partitions - Dedicates the entire disk to a single VMFS datastore. If you select this option, all file systems and data currently stored on this device are destroyed.
- Use Free Space - Deploys a VMFS datastore in the remaining free space of the disk.
- Specify the block size and define space reclamation parameters.
- Block Size - The block size on a VMFS datastore defines the maximum file size and the amount of space a file occupies. VMFS 6 supports a block size of 1 MB.
- Space reclamation granularity - Specify the granularity for the unmap operation. Unmap granularity equals the block size, which is 1 MB. Storage sectors smaller than 1 MB are not reclaimed.
- Space reclamation priority - Select one of the following options:
- Low (default). Process the unmap operations at a low rate.
- None. Select this option if you want to disable the space reclamation operations for the datastore.
- Review the configuration and click Finish. The new datastore will now be visible on the ESXi host.
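For reference, the same datastore can also be created from the ESXi shell with vmkfstools. A minimal sketch (the NAA identifier below is a placeholder, the disk is assumed to already have a partition 1, and formatting the wrong device destroys its data):

```shell
# Identify the target device first
ls /vmfs/devices/disks/

# Create a VMFS 6 filesystem labeled "vi-ds-03" on partition 1 of the device
# (replace the placeholder NAA identifier with your own LUN)
vmkfstools -C vmfs6 -S vi-ds-03 /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```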