Thursday, June 6, 2013

Hello! VMware vSphere 5.1 is here!

New Features in vSphere 5.1

  • User Access – There is no longer a dependency on a shared root account. Local users assigned administrative privileges automatically get full shell access.
  • Auditing – All host activity from both the shell and the Direct Console User Interface is now logged under the account of the logged in user
  • Monitoring – Support has been added for SNMPv3. The SNMP agent has been unbundled from the VMkernel and can now be updated independently.
  • vMotion – A vMotion and a Storage vMotion can now be combined into a single operation. This allows a VM to be moved between two hosts or clusters that do not share any storage (a rough pyVmomi sketch of this call follows the list).
  • New Windows Support – Support for both the Desktop and Server Editions of Windows 8/2012
  • Hardware Accelerated 3D Graphics – Through a partnership with NVIDIA, vSphere can now map a vGPU to each VM on a host. This not only accelerates 3D graphics but also provides a GPU for high-performance computing needs.
  • Improvements in Virtual Hardware Virtualization Support – Intel VT/AMD RVI features are now exposed further into the virtual machine, which improves virtualization within virtualization (nested virtualization). In addition, more low-level CPU counters are exposed, which can be used for high-performance computing and real-time-style applications.
  • Agentless Antivirus and Antimalware – vShield Endpoint is now included in vSphere 5.1. It offloads antivirus and antimalware processing from inside virtual machines to a secure, dedicated virtual appliance delivered by VMware partners. This change lowers the cost of entry for agentless antivirus and antimalware.
  • New 64-vCPU Support – Virtual machines running on a vSphere 5.1 host can be configured with up to 64 virtual CPUs and 1TB of RAM (see the reconfiguration sketch after this list).
  • Auto-Deploy – Auto-Deploy is extended with two new modes, “stateless caching” and “stateful installs”. In addition, the number of concurrent reboots per Auto-Deploy host has been increased to 80.
  • SR-IOV Support – Single Root I/O Virtualization allows certain Intel NICs to transfer data directly into the memory space of a virtual machine without any involvement from the hypervisor. See this Intel video for more detail; a hedged enablement sketch also follows this list.
  • Space Reclaiming Thin Provisioned Disks – These disks add the ability to reclaim deleted blocks from existing thin provisioned disks while the VM is running. Reclaiming space is a two-part operation: first the disk is wiped, marking unused blocks as free, and then the disk is shrunk. Both features have been part of VMware Tools for a number of years, but they now behave differently for thin provisioned disks. The underlying hardware is not initially part of the reclamation process; instead, the vSCSI layer within ESX reorganizes unused blocks to keep the used part of the thin provisioned disk contiguous. Only once the unused blocks are at the end of the thin provisioned disk does the hardware become involved.
  • Tunable Block Size – Thin provisioned disks normally use a fixed 4KB block size. This block size can now be tuned indirectly, as it is based on the requirements of the underlying storage array; there is no method to tune it by hand.
  • All Paths Down Improvements – Previously, in an all paths down (APD) situation, the vSphere management service would hang waiting on disk I/O, which would cause the vSphere host to disconnect from vCenter and in effect become unmanageable. APD handling has been improved in three ways: transient APD events no longer cause the vSphere management service to hang waiting on disk I/O; vSphere HA can move workloads to other hosts when APD handling detects a permanent device loss (PDL) situation; and there is now a way to detect PDL for iSCSI arrays that present only one LUN.
  • Storage Hardware/Software Improvements – These improvements include the ability to boot from software FCoE, the addition of jumbo frame support for all iSCSI adapters (software or hardware), and support for 16Gb FC.
  • VAAI Improvements – VAAI now allows vCloud Director fast-provisioned vApps to make use of VAAI-enabled, NAS array-based snapshots.
  • vSphere S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) Implementation – vSphere has implemented S.M.A.R.T. reporting via the esxcli commands, so that SSDs and other disks can report back on their status (see the sketch after this list). In addition, esxcli has been upgraded to include ways to reset specific FC adapters directly as well as methods to retrieve cached event information such as link-up and link-down.
  • Storage IO Control Statistics and Settings Improvements – Finding the proper value for SIOC has been problematic; it is now possible to set a percentage instead of a millisecond value to determine when SIOC should fire. In addition, SIOC reports statistics immediately instead of waiting, so Storage DRS has statistics available right away, which improves its decision process. Finally, the observed latency of a VM (a new metric) is available within the vSphere Client performance charts. Observed latency measures latency within the host, not just latency after storage packets leave the host.
  • Storage DRS Improvements – Storage DRS has been improved for workloads using vCloud Director. Linked clones can now be migrated between datastores provided the destination holds either the base disk or a shadow copy of the base disk. Storage DRS is also now used for initial placement of workloads when using vCloud Director.
  • Improvements in Datastore Correlation for Non-VASA Enabled Arrays – For storage devices that do not support VASA, it is difficult to correlate datastores against disk spindles on an array. Datastore correlation has been improved so that vSphere can now detect whether spindles are shared by datastores on the array, regardless of VASA support.
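
A rough illustration of the combined vMotion/Storage vMotion mentioned above, using pyVmomi (the open-source Python bindings for the vSphere API). This is a minimal sketch, not an official procedure: it assumes pyVmomi is installed, that a vCenter connection already exists, and that the vm, target_host, target_pool, and target_datastore objects have been looked up elsewhere.

    # Minimal pyVmomi sketch: move a VM's compute and storage in one task.
    # Assumes the managed objects below were retrieved earlier (for example
    # via a container view); the names here are placeholders.
    from pyVmomi import vim

    def shared_nothing_vmotion(vm, target_host, target_pool, target_datastore):
        spec = vim.vm.RelocateSpec()
        spec.host = target_host            # vim.HostSystem that will run the VM
        spec.pool = target_pool            # vim.ResourcePool on the destination
        spec.datastore = target_datastore  # vim.Datastore for the VM's files
        # A single RelocateVM_Task performs both the host and storage move.
        return vm.RelocateVM_Task(spec,
                                  vim.VirtualMachine.MovePriority.defaultPriority)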
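Along the same lines, the new 64-vCPU/1TB maximums are simply virtual hardware settings. The sketch below, again with pyVmomi and again assuming the vm object has already been retrieved, reconfigures a powered-off VM to those limits; the VM must be at a hardware version that supports them (version 9, introduced with 5.1, for 64 vCPUs).

    from pyVmomi import vim

    def scale_to_51_maximums(vm):
        # Reconfigure an existing VM to 64 vCPUs and 1TB of RAM.
        spec = vim.vm.ConfigSpec()
        spec.numCPUs = 64            # new vSphere 5.1 vCPU maximum
        spec.memoryMB = 1024 * 1024  # 1TB expressed in MB
        return vm.ReconfigVM_Task(spec)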
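For SR-IOV, exposing virtual functions on a supported Intel NIC is done per driver module rather than through the vSphere API. The sketch below wraps the relevant esxcli call from a Python script run in the ESXi shell; the ixgbe module name and the max_vfs values are assumptions for illustration, and the exact parameter depends on the NIC and driver.

    import subprocess

    def enable_sriov_vfs(module="ixgbe", vfs_per_port="10,10"):
        # Ask the NIC driver to expose virtual functions on its ports.
        # The host must be rebooted before the VFs appear.
        return subprocess.check_call([
            "esxcli", "system", "module", "parameters", "set",
            "-m", module, "-p", "max_vfs=" + vfs_per_port,
        ])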
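Finally, the S.M.A.R.T. data mentioned above is exposed through esxcli rather than the API. A minimal sketch, again as a Python wrapper run in the ESXi shell: the NAA identifier is a placeholder (list real devices first with "esxcli storage core device list"), and the fields returned vary by drive.

    import subprocess

    def smart_status(device_id):
        # Return the raw S.M.A.R.T. table for one device, e.g. an SSD.
        return subprocess.check_output([
            "esxcli", "storage", "core", "device", "smart", "get",
            "-d", device_id,
        ])

    print(smart_status("naa.xxxxxxxxxxxxxxxx"))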