Configuration of a 2 node Hyper-V Cluster in Windows Server 2012 – Part 2. Part one is here: http://alexappleton.net/post/44748523400/configuration-of-2-node-hyper-v-cluster-in-windows

I realized that in my prior post on configuring a 2 node Hyper-V cluster, I did not include the steps necessary for configuring the HP StorageWorks P2000. So here they are:

This unit has two controllers for redundancy: if one controller fails, the SAN remains operational on the remaining controller. My specific unit has 4 iSCSI ports for host connectivity, connected directly to the nodes. I am utilizing MPIO here, so I have two links from each server (on separate network adapters) to the SAN, as follows:

The cables I use to connect the links are standard CAT6 Ethernet cables.

You also want to plug both management ports into the network. Out of the box, both management ports should obtain an address via DHCP. There is no need to use a CAT6 cable for the management ports, so go ahead and use a standard CAT5e cable instead. You can also configure the device via the CLI by interfacing with the USB connection located on each of the management controllers. I have never had to use this for anything other than when the network port is not responding. This interface is a USB mini connection located just to the left of the Ethernet management port, and a cable is included with the unit.

Once plugged into your Windows PC, the device comes up as a USB-to-serial adapter and is given a COM port assignment. You will have to install the drivers to get the device recognized; they are not included with Windows.

I won’t be covering the CLI; all configuration will be conducted via the web-based graphical console.

The web-based console is accessed via your favourite Internet browser. I typically use Google Chrome, as I have run into issues logging into the console with later versions of Internet Explorer. The default username is manage, password !manage.

Once logged in, launch the initial configuration wizard by clicking Configuration – Configuration Wizard at the top:

This will launch the basic settings configuration wizard. The wizard should hopefully be self-explanatory, so I won’t go into much detail here.

For this example I will be creating a single VDisk encompassing the entire drive space available. To do this, click Provisioning – Create Vdisk:

Use your best judgement on what RAID level you want here. For my example I am going to be building a RAID 5 on 5x450GB drives:
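As a quick sanity check on the RAID level chosen above, RAID 5 sacrifices one drive's worth of capacity to parity, so usable space is (n − 1) × drive size. A small PowerShell sketch (the numbers match my example; plug in your own):

```powershell
# Usable capacity for RAID 5: one drive's worth of space holds parity,
# so (n - 1) drives remain usable.
$driveCount  = 5
$driveSizeGB = 450
$usableGB = ($driveCount - 1) * $driveSizeGB
"RAID 5 on $driveCount x $($driveSizeGB)GB drives: $usableGB GB usable"   # 1800 GB (raw, before formatting)
```

Note this is the raw figure; actual formatted capacity will be a little lower.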

Now I am going to create two separate volumes: one for the CSV file storage, and the other for Quorum. The Quorum volume will be 1GB in size for the disk witness required since we have 2 nodes, and the CSV volume will encompass the remaining space. To create the volumes, click on the VDisk created above, then click Provisioning – Create Volume. I don’t like to map the volumes initially; rather, I explicitly map them to the nodes after the nodes are connected to the SAN:

In part 1 we added the roles, configured the NICs for both Hyper-V VM access and SAN connectivity, and prepped the servers. Now we need to connect the nodes to the SAN by means of the iSCSI initiator.

Our targets on the P2000 are 172.16.1.10, 172.16.2.10, 172.16.3.10, and 172.16.4.10 for ports 1 and 2 on each controller. As you will recall from part one, the servers are directly connected without a switch in the middle.

To launch the iSCSI initiator just type “iSCSI” in the start screen:

I typically pin this to the start screen.

When you launch the iSCSI initiator for the first time you will be presented with an option to start the service and make it start automatically. Choose yes:

I don’t typically like using the Quick Connect option on the Targets screen, preferring to configure each connection separately. Click on the Discovery tab in the iSCSI Initiator Properties screen, then Discover Portal:

Next, we want to input the IP address of the SAN NIC that we are connecting to, then click on the advanced button.

Select the Initiator IP that will be connecting to the target:

Then do this again for the second connection to the SAN. When finished you should have two entries:

Now, back on the Targets tab your target should be listed as Inactive. Click on the Connect button, then in the window that opens check the “Enable Multi-Path” option:

Now it should show connected:
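If you prefer to script this instead of clicking through the GUI, Server 2012 ships iSCSI cmdlets that do the same thing. This is a sketch only: the target portal addresses are the P2000 ports from this setup, and the initiator addresses (172.16.1.1 and 172.16.2.1) are example addresses for the server's SAN NICs — substitute your own.

```powershell
# Make sure the iSCSI initiator service is running and starts automatically
Set-Service -Name msiscsi -StartupType Automatic
Start-Service -Name msiscsi

# Discover each portal, binding it to the initiator NIC on the matching subnet.
# Initiator addresses below are examples - use your SAN NIC addresses.
New-IscsiTargetPortal -TargetPortalAddress 172.16.1.10 -InitiatorPortalAddress 172.16.1.1
New-IscsiTargetPortal -TargetPortalAddress 172.16.2.10 -InitiatorPortalAddress 172.16.2.1

# Connect every discovered target with multipath enabled, persisting across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

Run the same commands (with the appropriate addresses) on the second node.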

Complete the same tasks on the other node as well.

Now, before we can attach a volume from the SAN we have to map the LUNs explicitly to each of the nodes, so open the web management utility for the P2000 again. Once in, expand Hosts in the left pane and we should now see our two nodes listed (I have omitted server names in this screenshot):

We need to map the two volumes created on the SAN to each of the nodes. Right-click on the volume and select Provisioning – Explicit Mappings:

Then choose the node, click the Map check box, give the LUN a unique number, check the ports assigned to the LUN on the SAN and apply the changes:

Complete the same explicit mapping for the other node, assigning the same LUN number. Then repeat the procedure for the other volume. I used LUN number 0 for the Quorum volume, and LUN number 1 for the CSV volume.

Jump back to the nodes and into the iSCSI initiator, click on the Volumes and Devices tab, and press the Auto Configure button; our volumes should show up here:

Complete the same procedure on the second node as well. If you are having difficulty with the volumes showing up, sometimes a disconnect and reconnect is required (don’t forget to check the “Enable Multi-Path” option).

Now we want to enable multipath for iSCSI. Fire up the MPIO utility from the start screen:

Click on the Discover Multi-Paths tab, check the box “Add support for iSCSI devices”, and finally click the Add button:

The server will prompt for a reboot, so go ahead and let it reboot. Don’t forget to complete the same tasks on the second node.
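The MPIO steps above also have PowerShell equivalents in Server 2012, sketched below for reference (Enable-MSDSMAutomaticClaim corresponds to the “Add support for iSCSI devices” checkbox):

```powershell
# Install the MPIO feature if it isn't already present
Install-WindowsFeature -Name Multipath-IO

# Equivalent of checking "Add support for iSCSI devices" in the MPIO utility
Enable-MSDSMAutomaticClaim -BusType iSCSI

# The claim only takes effect after a reboot
Restart-Computer
```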

After the reboot, fire up Disk Management and configure the two SAN volumes on the node, making sure each node can see and connect to them. When initializing your CSV volume I would suggest making it a GPT disk rather than MBR, since you are likely to go above the 2TB partition limit imposed by MBR.

I format both volumes with NTFS, and give them a drive letter for now:
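The bring-online, initialize, and format steps can also be scripted with the Storage cmdlets. A sketch, assuming the SAN LUNs appear as disks 1 and 2 — confirm the actual numbers with Get-Disk before running, and note the volume labels are just examples:

```powershell
# List disks to confirm which numbers the new SAN LUNs received
Get-Disk

# Quorum volume (disk number is an example - verify with Get-Disk)
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"

# CSV volume
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1"
```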

After configuring the volumes on the first node, I typically offline the disks, then online them on the second node to be sure everything is connected and working correctly. Don’t worry about the drive letters assigned to the volumes; they don’t matter.

Getting there slowly!

Next, before we create the cluster I always like to assign the Hyper-V External NICs in the Hyper-V configuration. Fire up Hyper-V Manager, selecting “Virtual Switch Manager” in the action pane. We are going to create the external virtual switches using the adapters we assigned for the Hyper-V VMs. I always dedicate the network adapters to the virtual switch, unchecking the option “Allow management operating system to share this network adapter”.
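The same switch can be created from PowerShell; a sketch, where the switch and adapter names are placeholders — substitute the adapter name you dedicated to VM traffic (see Get-NetAdapter):

```powershell
# Create an external virtual switch bound to the VM-traffic adapter,
# dedicated to VMs (not shared with the management OS).
# "External" and "VM NIC" are placeholder names - adjust to your setup.
New-VMSwitch -Name "External" -NetAdapterName "VM NIC" -AllowManagementOS $false
```

Do this on both nodes, keeping the switch name identical so VMs can fail over between them cleanly.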

At this point we have completed all the prerequisite steps required to fire up the cluster. Now we will form the cluster.

Fire up Failover Cluster Manager from the start screen:

Once opened, select the Create Cluster option in the action pane. This will fire up the wizard to form our cluster. The wizard should be self-explanatory, so walk through the steps required. Make sure you run the cluster validation tests, selecting the default option to run all tests. This is the best time to run them, since they take the cluster disks offline; you don’t want to discover issues once the cluster is in production, when running the validation tests would bring the cluster down. If we run into any issues here we can address them now, before the system is in production.

The P2000 on Windows Server 2012 will generate a warning on the Validate Storage Spaces Persistent Reservation test. This warning can be safely ignored, as noted here.

Hopefully when you run the validation tests everything will come back Success (other than the note above). If not, trace back through the steps and make sure you are not missing anything. Once you get a successful validation, save the report in case you need to reference it for future support.

Finish walking through the wizard to create your cluster. Assign a cluster name and static IP address to your cluster as requested from the wizard.
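For completeness, the validation and cluster creation can also be done with the FailoverClusters cmdlets. A sketch only — the node names, cluster name, and static IP below are placeholders (I omitted my server names earlier), so substitute your own:

```powershell
# Run the full validation suite first - same tests as the wizard
Test-Cluster -Node HV-NODE1, HV-NODE2

# Form the cluster with a name and static IP (placeholders - use yours)
New-Cluster -Name HVCLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.1.50
```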

That should do it. If you got this far you made it. Congratulations!

Continue with Part 3 here: http://bit.ly/1aVnDAj