Although Hyper-V Replica gives you a great setup, there are still many reasons to want a failover cluster. This won’t be a comparison of the benefits of Hyper-V Replica versus failover clustering; it will be a guide to configuring a Hyper-V cluster in Windows Server 2012. Part one covers the initial configuration and setup of the servers and the storage appliance.

The scope:

A 2-node Hyper-V failover cluster with iSCSI shared storage for a small, scalable, highly available network.

Equipment:

2x HP ProLiant DL360p Gen8 Server

- 64GB RAM

- 8x 1Gb Ethernet NICs (4-port 331FLR adapter, 4-port 331T adapter)

- 2x 146GB 15K SAS drives

1x HP StorageWorks P2000 MSA

- 1.7TB raw storage

Background:

When sizing your environment you need to take into consideration how many VMs you are going to need. This specific environment only required four virtual machines to start with, so it didn’t make sense to go with Datacenter. Windows Server 2012 differs from previous versions in that there is no feature difference between editions. Prior to 2012, if you needed failover clustering you had to go with Enterprise-level licensing or above; Standard didn’t give you the option to add the failover clustering feature (although the free Hyper-V Server edition did support failover clustering). This has changed in 2012: you no longer have to buy a specific edition to get roles or features, because all editions include the same feature set.

However, when purchasing your server license you still need to cost out your VM requirements. Server 2012 Standard includes two virtual use licenses, Datacenter includes unlimited, and the free Hyper-V Server includes none. Virtual use licenses only apply as long as the host server runs no role other than Hyper-V. Because there is no difference in feature set, you can start off with Standard and move to Datacenter if you scale out in the future. Although I see little reason to change editions, you can convert a Standard installation to Datacenter by entering the following command at the command prompt:

dism /online /set-edition:ServerDatacenter /productkey:48HP8-DN98B-MYWDG-T2DCC-8W83P /AcceptEULA

I have run into issues when trying to use a volume license key with the dism command above. The key shown is a well-documented public key that has always worked for me. After the upgrade completes, I enter my MAK or KMS key to activate the server, since the key above only gives you a trial.
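If you want to double-check before converting, DISM can also report the edition you are currently running and the editions you are allowed to convert to:

dism /online /get-currentedition

dism /online /get-targeteditions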

The next thing you need to determine is whether you want to go with the GUI or non-GUI (Server Core) installation. Again, thankfully Microsoft has given us the option to switch between the two with a PowerShell command, so you don’t need to stress over the choice:

To go “core”: Get-WindowsFeature *gui* | Uninstall-WindowsFeature -Restart

To go “GUI”: Get-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell | Install-WindowsFeature -Restart

Get Started:



Install your Windows operating system on each of the nodes, but don’t add any features or roles just yet. We will do that at a later stage.

Each server has a total of 8 NICs, and they will be used as follows:

1 - Dedicated to management of the nodes and the cluster heartbeat

1 - Dedicated to Hyper-V live migration

2 - Connected directly to the shared storage appliance

4 - Used for virtual machine network connections

We are going to use multipath I/O (MPIO) to connect to the shared storage appliance, and we will team the NICs dedicated to the VMs for redundancy. Always keep redundancy in mind: we have two 4-port adapters, so we will use one NIC from each for SAN connectivity, and when creating a team we will likewise use one NIC from each adapter.
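With eight identical ports it is easy to lose track of which NIC does what, so before going any further I find it helpful to rename the adapters to match their roles. A minimal sketch using the in-box networking cmdlets; the adapter names below are just examples, so substitute whatever Get-NetAdapter reports on your nodes:

# List the adapters so you can map physical ports to Windows names
Get-NetAdapter | Sort-Object Name

# Rename them to reflect their intended roles (names here are examples)
Rename-NetAdapter -Name "Ethernet" -NewName "Management"
Rename-NetAdapter -Name "Ethernet 2" -NewName "LiveMigration"
Rename-NetAdapter -Name "Ethernet 3" -NewName "iSCSI-1"
Rename-NetAdapter -Name "Ethernet 4" -NewName "iSCSI-2"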

The P2000 MSA has two controller cards, each with four 1Gb Ethernet ports. We will connect the controllers as follows:

Two iSCSI host ports will connect to the dedicated NICs on each of the Hyper-V hosts; use CAT6 cables here, since they are certified for gigabit traffic. Keep redundancy in mind: connect one port from the first controller card to a single NIC port on the 331FLR, and one port from the second controller card to a single NIC port on the 331T.

On our Hyper-V nodes we are going to configure the connecting Ethernet adapters with subnets that correspond to the SAN ports. I tend to use 172.16.1.1, 172.16.2.1, 172.16.3.1 and 172.16.4.1. When configuring your server adapters, be sure to uncheck the option to register the adapter in DNS so you don’t end up populating your DNS database with errant entries for your host servers.
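If you prefer to script the addressing, here is a minimal sketch using the built-in networking cmdlets; the adapter name and address are examples matching the scheme above, so substitute your own:

# Assign a static address on the first iSCSI subnet (example values)
New-NetIPAddress -InterfaceAlias "iSCSI-1" -IPAddress 172.16.1.1 -PrefixLength 24

# Keep the iSCSI adapters out of DNS
Set-DnsClient -InterfaceAlias "iSCSI-1" -RegisterThisConnection $false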

From each server, ping the SAN host ports to ensure connectivity.

HP used to ship a network configuration utility with their Windows servers. It is not yet supported on Windows Server 2012, but the NICs I am using are all Broadcom, and a quick look on Broadcom’s website led me to their Windows management application, BACS. This utility lets you fine-tune network adapter settings; what we need it for is to hard-set the MTU to 9000 on the adapters connecting to the SAN. There is a netsh command that will do this as well, but I found it unreliable in testing and the setting rarely stuck.

Download and install the Broadcom Management Applications Installer on each of your Hyper-V nodes. Once installed, there should be a management application called Broadcom Advanced Control Suite. This is where we want to set the jumbo frame MTU to 9000. The management application runs on the non-GUI version of Windows Server, and you can also use it to connect to remote hosts. Make sure you have the right adapter selected; if you are dealing with 8 NICs like I am this can get confusing, so take your time. Luckily you can see the configuration of each NIC in the application’s window.
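If you would rather script the jumbo frame setting than click through BACS, the in-box Set-NetAdapterAdvancedProperty cmdlet is an alternative worth testing; note that the advanced property name varies by driver (on some Broadcom drivers it shows up as “Jumbo Mtu”), so check Get-NetAdapterAdvancedProperty first. A sketch, assuming the example adapter names used earlier:

# See what the driver calls the jumbo frame property and its valid values
Get-NetAdapterAdvancedProperty -Name "iSCSI-1"

# Set it to 9000 (the display name may differ depending on the driver)
Set-NetAdapterAdvancedProperty -Name "iSCSI-1" -DisplayName "Jumbo Mtu" -DisplayValue 9000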

Verify connectivity to the SAN after you set the MTU by sending a large packet to the associated IP addresses of the SAN ports, using a ping command such as:

ping 172.16.1.10 -f -l 6000

If you don’t get a successful reply here then revisit your settings until you get it right.

Network Teaming

You could create a network team in the Broadcom utility as well; however, I ran into problems with it in testing. The team created fine but failed to initialize on one server, and removing the errant team proved to be a major hassle. Windows Server 2012 includes a native NIC teaming feature, so I prefer to configure the team directly in Windows. Again, since I am dealing with two different network cards, I create the team using one NIC port from each card on the server.

The new NIC teaming management interface can be invoked through Server Manager, or by running lbfoadmin.exe from a command prompt or the Run box. To create a new team, highlight the NICs involved by holding Ctrl while clicking on each. Once highlighted, right-click the group and choose the option “Add to New Team”.

This will bring up the new team dialog. Enter a name for the team, and try to stay consistent across your nodes, so remember the name you use. I typically go with “Hyper-V External #”.

There are three additional options under “Additional properties”:

Teaming mode is typically set to switch independent. With this mode you don’t have to worry about configuring your network switches; as the name implies, the NICs can be plugged into different switches, and as long as they have a link light they will work in the team. Static teaming requires you to configure the network switch as well. Finally, LACP is based on link aggregation, which requires a switch that supports the feature. The benefit of LACP is that you can dynamically reconfigure the team by adding or removing individual NICs without losing network communication on the team.

Load balancing mode should be set to Hyper-V switch port. Virtual machines in Hyper-V have their own MAC addresses, distinct from the physical adapter’s. When load balancing mode is set to Hyper-V switch port, each virtual switch port’s traffic is affinitized to a single team member, so the VMs as a group are distributed across the teamed NICs.

Standby adapter is used when you want to assign a standby adapter to the team. Selecting this option gives you a list of all adapters in the team, and you can assign one of them as the standby. The standby adapter is like a hot spare: it is not used by the team unless another team member fails. It’s important to note here that standby adapters are only permitted when the teaming mode is set to switch independent.
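All of the above can also be done from PowerShell if you would rather script the team identically across both nodes. A minimal sketch, assuming hypothetical member adapter names “VM-1” and “VM-2” (substitute the NICs you set aside for virtual machine traffic):

# Create the team with switch-independent mode and Hyper-V port load balancing
New-NetLbfoTeam -Name "Hyper-V External 1" -TeamMembers "VM-1","VM-2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Check that the team and its members are up
Get-NetLbfoTeam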

There is a lot to be learned about NIC teaming in Server 2012, and it is a very exciting feature. You can also configure teams inside of virtual machines. To read more, download the teaming documentation provided by Microsoft here: http://www.microsoft.com/en-us/download/details.aspx?id=30160

Once we have the network team in place, it is time to install the necessary roles and features on the nodes. Another fantastic new feature in Server 2012 is the ability to manage multiple servers by means of server groups. I won’t go into detail here, but if you are using Server 2012 you should investigate server groups when managing multiple servers with similar roles. In my case, I always create a server group called “Hyper-V Nodes” and assign the individual servers from the server pool to it.

Adding the roles and features:

Invoke the Add Roles and Features wizard by opening Server Manager, choosing the Manage option in the top right, then “Add Roles and Features”.

We want to add the Hyper-V role and the Failover Clustering and Multipath I/O features to each of the nodes. You will be prompted to select the network adapter to be used for Hyper-V; you don’t have to worry about setting this option at the moment, as I prefer to do it after installing the role. You will also be prompted to configure live migration. Since we are building a cluster this is not required here; that setting is for shared-nothing (non-SAN) live migration. Finally, you will be prompted to configure your default stores for virtual machine configuration files and VHD files. Since we will be attaching SAN storage we don’t need to be concerned with this step right now. Click Next to get through the wizard and Finish to install the roles and features. The installation requires a reboot to complete, and will actually take two reboots before the Hyper-V role is fully installed.
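If you would rather skip the wizard, the same role and features can be added from PowerShell on each node; a minimal sketch using the in-box feature names for Hyper-V, Failover Clustering and Multipath I/O:

Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart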

This covers part one of the installation. At this point we should have everything plugged in, initial configuration of the SAN completed, and initial configuration of the Hyper-V nodes complete as well. In part two we will be configuring the iSCSI initiator, and bringing up the failover cluster.

Part two here: http://alexappleton.net/post/69111063826/configuration-of-2-node-hyper-v-cluster-in-windows