Monday, November 20, 2017

Upgrade VMware ESXi from v6.0 to v6.5

To upgrade ESXi 6.0 to 6.5, you first need to download the ESXi 6.5 ISO.
Let's start.
As you can see, I have ESXi 6.0 installed.
Attach the ESXi 6.5 ISO and set your VM to boot from CD-ROM.
I will restart my ESXi 6.0 host to start the upgrade.
When restarting the VM, ensure that you are booting from the CD-ROM.
Boot from the ESXi 6.5 CD-ROM.
Follow the steps below:

Important: You have 3 choices:

  1. Upgrade ESXi and preserve the VMFS datastore (everything is preserved: VMs, vSwitches, and the data uploaded to your datastore)
  2. Install ESXi and preserve the data of your VMFS datastore (to continue using your old VMs, you will have to re-register them manually in your inventory)
  3. Install ESXi and overwrite the VMFS datastore (you will lose all your data)
To upgrade our ESXi 6.0, we will choose the first option:
To continue, press F11.
Remove your bootable CD-ROM and restart your ESXi
Congratulations, your ESXi host is now on 6.5.
Starting with ESXi 6.5, the legacy Windows vSphere Client is no longer supported; you now manage and control your ESXi host through the web-based client.
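You can also confirm the upgrade remotely. Here is a minimal VMware PowerCLI sketch, assuming PowerCLI is installed on your workstation and that esxi01.lab.local stands in for your host's name (you will be prompted for credentials):

Connect-VIServer -Server esxi01.lab.local
Get-VMHost | Select-Object Name, Version, Build    # Version should now report 6.5.0
Disconnect-VIServer -Confirm:$false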
Now it's v6.5 ;)

Sunday, March 26, 2017

How to create Hyper-V clusters on Nano Servers (Windows Server 2016)


Windows Server 2016 comes with many brand-new features and options. One of the new deployment options is Nano Server, a headless installation option for Windows Server 2016. Nano Server is a highly minimized installation, 20-25x smaller than traditional Windows, which includes only the bits required to run the OS. A unique aspect of Nano Server is that it has no Graphical User Interface (GUI) and no built-in management tools; the only local interface is a Recovery Console, where you can change network settings and firewall rules and reset WinRM. Administration must be performed remotely, via tools such as PowerShell or Server Manager. The main premise behind this is to let the server and its applications make better use of resources while providing higher security thanks to a much smaller attack surface.
If you have specific requirements, such as Hyper-V, failover clustering, IIS, Scale-Out File Server or the DNS role, then you need to add those specific packages during (or after) the Nano Server deployment. The image generation process is when you set the required roles that will be present.
Today I will walk you through the process of creating a Nano Server host that will serve as a Hyper-V node. This Hyper-V node will become a member of a Windows Failover Cluster as well.
There are several ways to build Nano Server. You can use the Nano Server Image Builder with a graphical interface, or PowerShell. In this post, I'll concentrate on the PowerShell deployment. Unlike traditional Windows Server installations, the Nano Server install is initiated from a folder located on the Windows Server 2016 ISO. Within this folder are all of the required components to get up and running. To start, download the Windows Server 2016 ISO image and mount it on a Windows Server or Windows 10 machine already deployed in your environment. The first step is to fire up PowerShell ISE in administrator mode and then load the Nano Server module. Below you'll find all the steps for the new deployment.
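If you want to mount the ISO from PowerShell as well, here is a minimal sketch (the path C:\ISO\WindowsServer2016.iso is just an example; adjust it to wherever you saved the download):

Mount-DiskImage -ImagePath 'C:\ISO\WindowsServer2016.iso'
# Check which drive letter the mounted ISO received:
(Get-DiskImage -ImagePath 'C:\ISO\WindowsServer2016.iso' | Get-Volume).DriveLetter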
Once we have our PowerShell ISE session running, let's set the PowerShell execution policy. This will let us run PowerShell scripts without any restrictions. Without this, only single commands or digitally signed scripts can be run (the default depends on the Windows version you are running):
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
Next, we'll need to create the image. In my environment, the Windows Server 2016 ISO is mounted to d:\, so I'll start by changing to the NanoServer folder on the install ISO:

cd d:\NanoServer
Import-Module .\NanoServerImageGenerator\NanoServerImageGenerator.psm1
New-NanoServerImage -Edition Datacenter -DeploymentType Host -MediaPath d:\ -TargetPath E:\Temps\nano\NANO_SRV.vhd -DomainName demo.local -ComputerName NANO_SRV -OEMDrivers -Compute -Storage -Clustering -EnableRemoteManagementPort -InterfaceNameOrIndex Ethernet -Ipv4Address 10.11.0.30 -Ipv4SubnetMask 255.255.255.0 -Ipv4Gateway 10.11.0.1 -Ipv4Dns 10.11.0.10
  • Edition allows you to select the Nano Server flavor: Standard or Datacenter. If you plan to use Shielded VMs or Storage Spaces Direct, then Datacenter it is. A Datacenter license also includes the right to run an unlimited number of Windows Server VMs on that host.
  • DeploymentType Host prepares the image for a physical machine (the Guest type is meant for Nano Server running inside a VM).
  • DomainName: If you specify this parameter, Nano Server will use offline domain provisioning and its computer account will appear in the Active Directory domain. The whole process completes during the first boot. If you are redeploying this image, the additional parameter -ReuseDomainNode may be required. This option can only be used when the computer we use to prepare Nano Server is joined to the same domain as our future Nano Server. In other cases, a domain blob harvest can be used.
  • OEMDrivers adds the basic set of drivers, mainly for network and storage adapters.
  • Compute deploys the necessary Hyper-V bits.
  • Storage adds the storage components.
  • Clustering adds failover clustering.
  • EnableRemoteManagementPort enables WinRM (including access from different subnets).
  • Another interesting parameter (not used in my example) is MaxSize, e.g. MaxSize 100GB. It allows the Nano VHD or VHDX to grow up to the specified value. This is important if you plan to store additional files on the c:\ path of Nano Server (i.e. local virtual machines). See the sketch after this list.
  • If you have specific physical hardware drivers, you can add them with the -DriversPath <path:\Drivers> parameter.
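For illustration, here is a hedged variant of the image-generation command that also uses -MaxSize and -DriversPath (the driver folder E:\Drivers is a made-up example; everything else matches the command above):

New-NanoServerImage -Edition Datacenter -DeploymentType Host -MediaPath d:\ -TargetPath E:\Temps\nano\NANO_SRV.vhd -ComputerName NANO_SRV -OEMDrivers -Compute -Storage -Clustering -MaxSize 100GB -DriversPath 'E:\Drivers'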




At this stage, we should have the Nano Server image ready. Next we will deploy it to a physical server.

We can now use the generated VHD/VHDX image to boot physical computers. There are several ways to do this. We can use, for example, System Center 2012/2012 R2/2016 Virtual Machine Manager for bare-metal deployment, or we can use the bcdboot command to configure boot from VHD/VHDX on that machine. In my case, I will use bcdboot.
For a fresh server, you should copy the VHDX with Nano Server to a USB stick. Then boot the new server with Windows Server 2016 ISO and choose Repair your computer > Troubleshoot > Command prompt.


Nano Server physical deployment

The next steps are to create the partition, assign a logical letter and format the boot partition where the VHDX will be stored. Diskpart can help with this.

diskpart
select disk 0
create partition primary
active
assign letter=C
format fs=ntfs quick
exit


Nano Server physical deployment

You can now copy the VHDX file from the USB stick to the C drive (for example, copy e:\nano_srv.vhdx c:\).
We complete the process by mounting our VHDX file:
diskpart
select vdisk file="c:\nano_srv.vhdx"
attach vdisk
list volume (to verify where our vdisk is mounted; the f: drive, in my case)
exit


Nano Server physical deployment 5

We will now prepare the VHDX for booting:

f:
cd windows\system32
bcdboot f:\windows

Finally, some cleanup. We detach the VHDX:
diskpart
select vdisk file="c:\nano_srv.vhdx"
detach vdisk
exit


Nano Server physical deployment

The server can be rebooted now. When Nano Server starts, you should be able to log on with domain credentials.


Nano Server physical deployment

At this stage, you should be able to ping Nano Server by name, browse its shares, and add it to Hyper-V Manager; a quick check is sketched below. Next we will configure our Nano Server.
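Here is a quick sanity-check sketch you can run from the management machine (standard cmdlets; NANO_SRV is the name we chose earlier):

Test-Connection -ComputerName NANO_SRV -Count 2                  # ping by name
Test-WSMan -ComputerName NANO_SRV                                # is WinRM responding?
Test-NetConnection -ComputerName NANO_SRV -CommonTCPPort SMB     # are shares reachable?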

We've created a new Nano Server image and deployed it to a physical server. Now we will go through its configuration.
First we need to connect to that machine remotely over PowerShell for further configuration.
$cred=get-credential
$nanoname='NANO_SRV'
Enter-PSSession -ComputerName $nanoname -Credential $cred


Setting up Nano Server with Hyper-V Role 1

Now that we are connected to Nano Server remotely, we should set the PowerShell execution policy to avoid potential issues with scripts later.
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
Set-Item WSMan:\localhost\Client\TrustedHosts * -force
In my case this is a test environment, so I set the execution policy to Unrestricted with the first command. Be careful with this setting in production, as it allows unsigned scripts to run. Later, you can lock the execution policy back down with the Restricted or RemoteSigned value.
The second command removes the restriction on PowerShell remoting. I used an asterisk to allow any computer in my lab to connect remotely. Don't do this in production; instead, limit the allowed computers by name or IP, as shown below.
Again, I recommend using both commands carefully in production. Opening a computer this widely is NOT a security best practice.
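For example, a more restrictive TrustedHosts entry could look like this (the name and address come from my lab; adapt them to yours):

Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'NANO_SRV,10.11.0.30' -Force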


Setting up Nano Server with Hyper-V Role 2

In the following steps, let's add additional storage to hold the VMs.
Get-Disk
$additional_disk = Get-Disk | Where-Object PartitionStyle -eq "RAW"
Initialize-Disk $additional_disk.Number
New-Partition -DiskNumber $additional_disk.Number -UseMaximumSize -AssignDriveLetter | Format-Volume -NewFileSystemLabel "VMs" -FileSystem ReFS
  • The first command gets information about all connected disks.
  • The second line selects the disk with a RAW partition style.
  • The third line initializes the selected disk.
  • The fourth command creates a new partition of maximum size and formats it with the ReFS file system (see the quick check after this list).
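As a quick check of the result (a small sketch; the label matches what we just set):

Get-Volume -FileSystemLabel 'VMs'    # should show the new ReFS volume and its drive letter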


Setting up Nano Server with Hyper-V Role 3

Now let's create a switch. I'll start with the internal one. Creating an internal switch is optional; I'm creating it for future tests only. So, if you are working in a production environment and don't need such a switch, you can skip this step.
Get-NetAdapter
New-VMSwitch -Name "Internal Switch" -SwitchType Internal


Setting up Nano Server with Hyper-V Role 4

And now the normal, external switch. You can create it on any network interface. In my example I have only one, so I'll create it on Ethernet0. The AllowManagementOS parameter enables management traffic over that switch.
$net = Get-NetAdapter -Name 'Ethernet0'
New-VMSwitch -Name "External VM Switch" -AllowManagementOS $True -NetAdapterName $net.Name


Setting up Nano Server with Hyper-V Role 5

This process will create an external switch linked to my Ethernet0 adapter. As part of this move, the IP settings are transferred to a new (virtual) adapter connected to that switch.
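To see where the IP settings landed, a small sketch; the alias follows Hyper-V's usual 'vEthernet (<switch name>)' naming convention:

Get-NetAdapter    # the new virtual adapter appears as 'vEthernet (External VM Switch)'
Get-NetIPAddress -InterfaceAlias 'vEthernet (External VM Switch)' -AddressFamily IPv4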


Setting up Nano Server with Hyper-V Role 6

Because of a known Microsoft bug, the DNS settings are cleared and we must set them again on the newly created switch interface:
Get-DnsClient
Set-DnsClientServerAddress -InterfaceIndex 6 -ServerAddresses "10.11.0.10"
  • The first command identifies the interface index of the newly created switch adapter.
  • The second command sets the new DNS server (an index-free alternative is sketched after this list).
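If you prefer not to hard-code the interface index, here is a hedged alternative that targets the virtual adapter by its alias pattern:

Get-DnsClient -InterfaceAlias 'vEthernet*' | Set-DnsClientServerAddress -ServerAddresses '10.11.0.10'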


Setting up Nano Server with Hyper-V Role 7

Now you should be able to log in to the Nano console with the domain credentials again. You can also check the newly added switches directly in the Nano Server Recovery Console.


Setting up Nano Server with Hyper-V Role 8

The last thing before we create the first VM is to set some Hyper-V host defaults, like the default paths for VMs and virtual hard disks, the migration authentication type, migration limits, etc.
Get-VMSwitch
Get-VMHost | Set-VMHost -MaximumStorageMigrations 2 -MaximumVirtualMachineMigrations 2 -VirtualMachineMigrationAuthenticationType Kerberos -VirtualMachinePath e:\VMs\ -VirtualMachineMigrationPerformanceOption SMB -VirtualHardDiskPath "e:\VMs\Virtual Hard Disks"
Enable-VMMigration


Setting up Nano Server with Hyper-V Role 9

In the next step, we'll create a new VM on the local storage. We can use PowerShell or Hyper-V Manager for this. Below is the PowerShell way:
New-VM -Name testVM -MemoryStartupBytes 1GB -SwitchName "Internal Switch" -Generation 2 -Path "E:\VMs\" -NewVHDPath "E:\VMs\testVM\Virtual Hard Disks\testVM.vhdx" -NewVHDSizeBytes 30GB
Set-VMProcessor -VMName testVM -Count 2
Start-VM testVM
We are creating a Generation 2 VM named testVM, with 2 vCPUs, 1 GB of memory, a new 30 GB VHDX, and a connection to my internal switch.
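A quick look at the result (standard Hyper-V cmdlet; the properties shown are part of the default VM object):

Get-VM -Name testVM | Select-Object Name, State, CPUUsage, MemoryAssigned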


Setting up Nano Server with Hyper-V Role 10
Setting up Nano Server with Hyper-V Role 11

At this moment we have a single Nano host with the Hyper-V role installed, and we should be able to run VMs from the local Nano storage. As there are no local management tools, we must use Hyper-V Manager from a remote server or workstation, or use PowerShell. Next, we'll go through the clustering of Hyper-V Nano Servers.

In the previous parts of this Nano Server deployment series, we learned how to create, deploy and configure Nano Server as a Hyper-V host. In this part, we will look at the clustering option. We will create a Hyper-V cluster of 3 Nano Server nodes with Storage Spaces Direct on it.
Before we start clustering, I prepared 3 hosts, each with 2 NICs (the first for management, the second for cluster communication only) and 2 SATA disks.
Note: If you plan to build a test environment inside VMware VMs, then you may get the following error from Cluster Validation Wizard:
Disk partition style is GPT. Disk has a Microsoft Reserved Partition. Disk type is BASIC. The required inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported.
To avoid this, you can add SATA disks instead of SCSI. When using SATA disks, configure a different controller ID for the disks across all nodes: disks get their serial numbers based on the controller ID, so different disks on different nodes can end up with the same serial number. This will cause problems for S2D.
The following configuration is made from an external computer I used for management. To be able to perform configuration remotely (by Invoke-Command or by CimSession parameter), I added all the required PowerShell modules for Hyper-V and Failover Clustering to my management machine.
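On a Windows Server management machine, adding those modules might look like the following sketch (feature names as they appear in Server 2016; on Windows 10 you would install the matching RSAT tools instead):

Install-WindowsFeature RSAT-Clustering-PowerShell, Hyper-V-PowerShell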
So, let’s start with cluster validation.
Test-Cluster -Node nano_srv1, nano_srv2, nano_srv3 -Include "Storage Spaces Direct", Inventory, Networking, "System Configuration"
I got a warning regarding the SCSI 83h VPD descriptor we mentioned earlier. We can ignore this warning, since it concerns the Nano Server system disk, which will not be part of Storage Spaces Direct.



Now we will create a new cluster with three nodes.
New-Cluster -Name "nanocluster" -Node nano_srv1, nano_srv2, nano_srv3 -NoStorage -StaticAddress 10.11.0.40
The NoStorage parameter skips the cluster storage configuration; we will set up Storage Spaces Direct later. I received a warning here relating to a missing witness.



Yes, only two commands are required for cluster creation. We can check it in Failover Cluster Manager.



Now some network cosmetics: I'll change the cluster network names for easier identification.
Get-ClusterNetwork -Cluster nanocluster
(Get-ClusterNetwork -Cluster nanocluster | Where-Object {$_.Address -eq "10.11.0.0"}).Name = "Management_Network"
(Get-ClusterNetwork -Cluster nanocluster | Where-Object {$_.Address -eq "10.11.1.0"}).Name = "Cluster_Network"
Get-ClusterNetwork -Cluster nanocluster



Before we take care of storage, let's configure the cluster quorum. I'll use node majority here.
Set-ClusterQuorum -Cluster nanocluster -NoWitness
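If you later want a witness instead, a hedged example with a file share witness would be something like this (the share path is hypothetical):

Set-ClusterQuorum -Cluster nanocluster -FileShareWitness \\fileserver\witness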



Now we will create Storage Spaces Direct (S2D). S2D can have from 2 up to 16 nodes, and it makes local server disks (SATA, SAS, NVMe) visible across the whole cluster. In my environment, I have 3 nodes with 2 disks each available for pooling. I can verify that I can see all of them with:
$nodes = @("nano_srv1","nano_srv2","nano_srv3")
0..2 | foreach { Get-PhysicalDisk -CanPool $true -CimSession $nodes[$_] }
The first command creates an array with all my nodes; I'll use it to execute commands on specific hosts (0, 1 or 2). The second command gets all disks that can be pooled, from all nodes. The CanPool property means that a disk can be used in a storage pool, which is exactly what S2D needs.



So, let's enable S2D.
Enable-ClusterStorageSpacesDirect -CimSession $nodes[0]
All the disks I have are SSD, so no disk was chosen for cache.



Now, we can create a storage pool from all the disks available for pooling. I chose the default settings for the provisioning type and for resiliency. The second command creates a new virtual disk based on the new storage pool; it will have the maximum size and the cluster-shared-volume variant of ReFS.
New-StoragePool -StorageSubSystemName *Cluster* -FriendlyName NanoS2D -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Parity -Physicaldisks (Get-PhysicalDisk -CanPool $true -CimSession $nodes[0]) -CimSession $nodes[0]
New-Volume -StoragePoolFriendlyName *S2D* -FriendlyName vDisk -FileSystem CSVFS_ReFS -UseMaximumSize -CimSession $nodes[0]



Now we have a few options for VM storage. We can use:
- CSV: volume(s) located in c:\ClusterStorage. If you choose this option, you can create a few smaller volumes instead of one big volume (as above) and balance volume ownership among all cluster nodes.
- Scale-Out File Server (SOFS): I'll choose this option.
We will create the Scale-Out File Server role for hosting VMs here.
New-StorageFileServer -StorageSubSystemName *Cluster* -FriendlyName NanoSOFS -HostNames NanoSOFS -Protocols SMB -CimSession $nodes[0]



The SOFS role is created, but there isn't a share for the VMs yet.



So, we will create one. The first command creates a dedicated folder for the VMs on our SOFS volume.
The second creates the SMB share on the previously created folder; we also grant Full Access rights to all cluster nodes.
The last command sets the ACL of the file system folder to match the ACL used by the SMB share.
Invoke-Command -ComputerName nano_srv1 -ScriptBlock {md c:\ClusterStorage\Volume1\VMs}
New-SmbShare -Name VMs -Path c:\ClusterStorage\Volume1\VMs -FullAccess nano_srv1$, nano_srv2$, nano_srv3$, "domain admins", nanocluster$, "domain Computers" -CimSession $nodes[0]
Invoke-Command -ComputerName nano_srv1 -ScriptBlock {Set-SmbPathAcl -ShareName VMs}



We can check the result in Failover Cluster Manager. Make sure the share is ready and that Continuous Availability is enabled.
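The same check from PowerShell, as a small sketch (ContinuouslyAvailable is a standard property of an SMB share):

Get-SmbShare -Name VMs -CimSession $nodes[0] | Select-Object Name, Path, ContinuouslyAvailable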



In our case, we have Hyper-V using SMB storage. To avoid access-denied errors during VM creation or VM migrations, we should configure SMB constrained delegation. We can set it up in Active Directory Users and Computers, in the computer account properties (Delegation tab), or with PowerShell.
First, we must add the Active Directory PowerShell module, then enable SMB delegation for each node. I also included the machine from which all the commands are executed.
Install-WindowsFeature RSAT-AD-PowerShell
Enable-SmbDelegation -SmbServer nanosofs -SmbClient nano_srv1
Enable-SmbDelegation -SmbServer nanosofs -SmbClient nano_srv2
Enable-SmbDelegation -SmbServer nanosofs -SmbClient nano_srv3
Enable-SmbDelegation -SmbServer nanosofs -SmbClient vbr
Enable-SmbDelegation -SmbServer nanosofs -SmbClient nanocluster
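You can verify the result afterwards with the matching Get cmdlet:

Get-SmbDelegation -SmbServer nanosofs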



The cluster is ready, and now we can create VMs with the same command as in the previous part of this blog series. The second command turns the new VM into a clustered role.
New-VM -Name testVM3 -MemoryStartupBytes 1GB -Generation 2 -Path "\\nanosofs\VMs\" -NewVHDPath "\\nanosofs\VMs\testVM3\Virtual Hard Disks\testVM3.vhdx" -NewVHDSizeBytes 30GB -CimSession $nodes[1]
Add-ClusterVirtualMachineRole -Cluster nanocluster -VMName testVM3
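To confirm the clustering really works, you could try a quick migration of the new VM to another node (a hedged test; for a running VM this performs a live migration):

Move-ClusterVirtualMachineRole -Cluster nanocluster -Name testVM3 -Node nano_srv2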



Done. Now we have a fully working clustered Hyper-V environment with Storage Spaces Direct.
If we run short of resources, we can add more nodes with the following commands. The first and last commands list the cluster members; the second command adds the additional node.
Get-ClusterNode -Cluster nanocluster
Add-ClusterNode -Cluster nanocluster -Name nano_srv4
Get-ClusterNode -Cluster nanocluster



Now we have four nodes, and the disks from the new node are visible to the whole cluster. We can create a new storage pool or add the disks to our current storage pool. We can do this by:
$PDToAdd = Get-PhysicalDisk -CanPool $True -CimSession $nodes[0]
Add-PhysicalDisk -StoragePoolFriendlyName NanoS2D -PhysicalDisks $PDToAdd -CimSession $nodes[0]



At this moment, we have new disks in the pool and we can extend our virtual disk. In the GUI, we can specify a new disk size or use the maximum available size, taking into account that the disk is in parity mode.
With PowerShell, the first command calculates the maximum available size: it takes the supported size of the new disks in parity mode (Get-VirtualDiskSupportedSize) and adds it to the current disk size (Get-VirtualDisk). The result is in bytes.
The second command resizes our disk.
$maxsize = ((Get-StoragePool nanos2d -CimSession $nodes[0]| Get-VirtualDiskSupportedSize -ResiliencySettingName Parity).VirtualDiskSizeMax + (Get-StoragePool nanos2d -CimSession $nodes[0]| Get-VirtualDisk).Size)
Resize-VirtualDisk -FriendlyName vDisk -Size $maxsize -CimSession $nodes[0]



That concludes our Nano Server deployment series: we went all the way from creating the image through deploying, configuring, clustering and adding a new node. Hope you enjoyed the ride and we'll see you next time!