By Renan Antonio Rodrigues

How to set up a Hyper-V Failover Cluster, Step by step

Updated: Mar 30, 2019

This post aims to guide you, step by step, through setting up a Hyper-V Failover Cluster in a highly available environment.


Overview


The configuration presented on this page was implemented in 2012, in a production environment, for a company of 100 employees.


The purpose of this project was to increase the availability of the services offered to network users and thus deliver a higher-quality service to the customer. In addition, it made it possible to reduce energy costs, free up rack space, and save on future server investments.


Slow performance, insufficient disk space, unplanned downtime, and a high level of business criticality were key factors in choosing and implementing the cluster.


Project


List of equipment:

1. 2x HP ProLiant ML350 G6 servers:

            - 1x Intel Xeon E5650 processor;

            - 16 GB DDR3 PC3-10600 Registered memory;

            - 2x 500 GB 7.2k SATA disks in RAID 1;

            - 2x redundant 750 W power supplies;

            - 2x onboard dual-gigabit NICs;

            - 6x HP NC112T offboard NICs.


2. 1x HP P2000 iSCSI 1 Gb storage:

                - HP P2000 LFF Modular Smart Array chassis;

                - HP P2000 G3 iSCSI MSA controller;

                - 6x 300 GB 15k SAS disks in RAID 10.


3. 3x HP ProCurve switches


Software:

- Microsoft Windows Server 2008 R2 SP1 Enterprise (Full installation);

- Symantec Endpoint Protection antivirus;


Roles and Features:

- iSCSI Initiator;

- MPIO;

- Failover Cluster;

- Hyper-V.


Network Interface Card:

In a cluster design, the network cards play a very important role, and the number of adapters must be chosen carefully. In this project, a Hyper-V Failover Cluster using CSV and Live Migration, I opted for the following configuration:


1x Heartbeat;

1x CSV;

1x Live Migration;

1x Management and LAN;

2x VMs (NIC Teaming);

2x iSCSI;

1x Virtual.


None of them can share the same network; otherwise, the configuration will not pass cluster validation.


Keep a naming pattern and rename all of the adapters, remembering that each one should have the same name on both nodes. It is also important that, on each server, Windows is installed on the same volume (usually C:).
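
If you prefer to rename the adapters from the command line instead of through Network Connections, netsh can do it on Windows Server 2008 R2. A minimal sketch, assuming the default "Local Area Connection" names and the role names used in this project (adjust both to your own environment):

# Run in an elevated PowerShell prompt on each node; repeat for every adapter.
netsh interface set interface name="Local Area Connection" newname="Management Node"
netsh interface set interface name="Local Area Connection 2" newname="Heartbeat"
netsh interface set interface name="Local Area Connection 3" newname="CSV"
netsh interface set interface name="Local Area Connection 4" newname="Live Migration"
netsh interface set interface name="Local Area Connection 5" newname="ISCSI 1"
netsh interface set interface name="Local Area Connection 6" newname="ISCSI 2"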


Step by Step


1. If the servers do not have disk RAID configured, do so as per the manufacturer's guidance; if you have any questions about how to do this, refer to the manual. In my case, I opted for RAID 1 (mirroring) because the local disks only hold the OS partition, so we bought just two disks. I recommend RAID 5 or 10 for partitions that store data; in this project the storage array takes care of that, so it is not necessary on the servers;


2. After the RAID is properly configured, install the operating system; in this case, Windows Server 2008 R2 Enterprise;


3. Install the antivirus, in this case, Symantec EndPoint Protection;


4. Install any drivers that were not automatically installed with the OS. It is very important that you go to your manufacturer's website and download the most current version of every driver; this prevents issues that can occur with outdated versions. In my case I updated the drivers for the network cards, SAS/SATA controller, and video, and installed an HP utility to manage the network cards and create the TEAM;


5. Update the servers' BIOS. In my case it was not necessary because the latest version was already installed;


6. Configure your storage. In my case, as stated earlier, I used the HP P2000 iSCSI 1 Gb. To do this, I connected a network cable to the management interface on the back and set my network card to the same network range. I also used the CLI, which is command-line based. I will not go into more detail because your project may use another storage model.

What is essential in this step is to configure the main resources needed for the cluster to operate, namely:

· Configuration of management network adapters;

· Configuration of the iSCSI network cards (IP address of each port, enable jumbo frames, set speed, etc.);

· Virtual disk configuration. In my case I created only one, using the full capacity of the RAID 10 disks, for performance and security;

· Configuration of volumes (LUNs). In my case I created four: the first for quorum, the second for the application server disk, the third for the file server disk (this server is kept separate from the others because of its higher demand for file access and writes), and the fourth and last for the other VMs. I emphasize that I am describing my case; you should choose the best option for your project;

· Configure LUN masking. This step maps each LUN to the cluster nodes; otherwise, when a node connects to the storage (via iSCSI Initiator), no volume will be available. To do so, you must know the IQN of each server: open iSCSI Initiator, then Configuration, Initiator Name (or read it from PowerShell, as shown after this list). Depending on your storage, when the node first connects, the server will automatically become visible in the storage with its IQN. Then just apply the masking to each LUN, granting read and write access.
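
If you prefer not to open the GUI on each node, the IQN can also be read from PowerShell via WMI on Windows Server 2008 R2; a minimal sketch:

# Prints the local node's iSCSI Qualified Name (IQN), the same value shown
# under iSCSI Initiator > Configuration > Initiator Name.
Get-WmiObject -Namespace root\wmi -Class MSiSCSIInitiator_MethodClass |
    Select-Object -ExpandProperty iSCSINodeName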


7. Create a TEAM between the two adapters dedicated to the VMs. In my case I used the HP utility; find out whether your manufacturer provides a tool for this.


8. Configure the network adapters identically on each node (a command-line sketch for the simpler adapters follows at the end of this step).


Heartbeat:

In the card's properties, disable all options except IPv4 and IPv6;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000 / Full;

Set only the IP address and subnet mask.


CSV Card:

In the properties of the adapter, enable only Client for Microsoft Networks, File and Printer Sharing, IPv4, and IPv6;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000 / Full;

Set only the IP address and subnet mask.


Live Migration Card:

In the card's properties, disable all options except IPv4 and IPv6;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000 / Full;

Set only the IP address and subnet mask.


Management / node communication card (physical server):

In the properties of the card, keep all options enabled;

Set the speed to 1000 / Full;

Define IP, Subnet Mask, Gateway and DNS.


VM Cards:

The two adapters were configured as a TEAM for load balancing and fault tolerance. When the TEAM is created, all options on the member adapters are disabled by default.


iSCSI (node > storage communication) cards:

In the card properties, disable all options except IPv4;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000 / Full and enable Jumbo Packet at 9014 bytes;

Enable Flow Control;

To test whether jumbo frames are supported, use the command: ping -l 8000 -f -n 5 <storage IP>.


Virtual TEAM card between the two VM adapters (it should already have been created):

This adapter will be selected when creating the virtual network in Hyper-V, so it will be configured automatically; in the end, only the Virtual Switch option will be enabled in its properties.
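
For the simpler adapters above (Heartbeat, CSV, and Live Migration), the IP settings can also be applied from the command line. A hedged sketch for the Heartbeat adapter, with example addressing; the native NetIPAddress cmdlets only arrived in Server 2012, so netsh is used here:

# Run in an elevated PowerShell prompt; the adapter name and addresses are examples.
# Static IP and subnet mask only - no gateway.
netsh interface ipv4 set address name="Heartbeat" source=static address=10.0.1.1 mask=255.255.255.0
# No DNS servers and no DNS registration on this adapter.
netsh interface ipv4 set dnsservers name="Heartbeat" source=static address=none register=none

NetBIOS over TCP/IP and the link speed still have to be set in the adapter properties (or through the NIC vendor's tool).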


9. Add the Hyper-V role and the Failover Clustering feature on both servers. During the Hyper-V installation, do not select any adapters for use by the VMs; we will do this next;
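
On Windows Server 2008 R2 both can also be added from PowerShell with the ServerManager module, if you prefer; a quick sketch:

# Run elevated on both nodes; Hyper-V requires a restart to finish installing.
Import-Module ServerManager
Add-WindowsFeature Hyper-V, Failover-Clustering -Restart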


10. Open Hyper-V Manager, click Virtual Network Manager, then External and Add. Choose a name and description for the network and, under External, select the TEAM adapter created from the two VM NICs. To avoid creating a virtual adapter for the physical host, uncheck Allow management operating system to share this network adapter; in my case I unchecked it. Click OK;


11. Change the binding order of the adapters on each node. Open Control Panel, Network and Internet, Network and Sharing Center, Change adapter settings, press Alt to show the menu bar, then Advanced, Advanced Settings.


Set the following order:

1º - Physical Node Management

2º - ISCSI 1

3º - ISCSI 2

4º - Live Migration

5º - CSV

6º - VM1 (TEAM)

7º - VM2 (TEAM)

8º - Heartbeat

9º - Virtual Team


12. Create an unprivileged domain account, just an ordinary user, and give it permission to create Computer objects in the domain (a command-line equivalent follows the list below). To do so, follow these steps:

1º On the domain controller, open Active Directory Users and Computers;

2º Right-click on the domain and then Delegate Control;

3º In the wizard that opens, click Next and add the newly created cluster account. Click Next once more;

4º Select Create a custom task to delegate and then Next;

5º Click this folder, and then click Next;

6º Check only the last box, and select the first option, Create Computer Objects;

7º Click Next. Confirm the summary of settings and then Finish.
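
If you prefer the command line, the same delegation can be granted with dsacls on a domain controller. A minimal sketch; the domain DN and the account name (CONTOSO\svc-cluster) are placeholders for your environment:

# Grants the account the right to create Computer objects in the domain
# and in all containers below it (/I:T = this object and sub-objects).
dsacls "DC=contoso,DC=local" /I:T /G "CONTOSO\svc-cluster:CC;computer"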


13. Join the two nodes to your domain;


14. Log on as a local administrator on both nodes and add the newly created domain account to the local Administrators group in Windows. To do so, do the following:


1º Open Server Manager;

2º Then Configuration, Local Users and Groups, Groups;

3º Double-click Administrators, Add, and enter the user name. You will need to enter a domain account to complete this step;

4º Click Ok on the open screens, and then log on to Windows with the respective user on both nodes.


15. Establish the connection between the nodes and the storage using the iSCSI Initiator tool. In addition, path redundancy between the network adapters of the devices will be configured through MPIO.


To add the two tools to the nodes (a command-line alternative follows this list):

1º For the iSCSI Initiator, open Control Panel and click it. A prompt asking whether the service should start automatically will appear; click Yes;

2º For MPIO, open Server Manager, click Features, Add Features, and check the Multipath I/O option. Click Next and Install.
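
The same two prerequisites can also be handled from an elevated PowerShell prompt, if you prefer; a sketch, noting that the mpclaim device string below is the one Microsoft documents for the iSCSI initiator:

# Start the iSCSI Initiator service and set it to start automatically
# (equivalent to clicking Yes at the Control Panel prompt).
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Install the Multipath I/O feature.
Import-Module ServerManager
Add-WindowsFeature Multipath-IO

# Claim iSCSI devices for MPIO (same as "Add support for iSCSI devices" in
# step 7º below); -r restarts the server automatically when it finishes.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"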


To configure each tool on the nodes, do the following:

1º Open the ISCSI Initiator tool;

2º Open the Discovery tab and click Discover Portal to add the first IP of the storage (remembering that in my case the storage has two controller modules, totaling four iSCSI ports, two for each node);

3º On the tab that opens, enter the storage IP address and port 3260 (default) and click on Advanced;

4º Under Advanced Settings, in the Local adapter option, select Microsoft iSCSI Initiator. Then, in Initiator IP, select the IP of the node that will connect to the storage IP entered previously, and click OK;

5º Navigate to the Targets tab. The newly created connection will be inactive; to connect, click the Connect button and then Advanced;

6º In the Local Adapter option, select Microsoft iSCSI Initiator. In Initiator IP select the IP of the node, and in Target portal IP select the IP of the storage. Click OK on the two open screens;

7º Open the MPIO tool and navigate to the Discover Multi-Paths tab. Click Add support for ISCSI devices and then Add. You will need to restart the server;

8º After logging back in to Windows, open the MPIO tool and check under MPIO Devices whether a new device ID has appeared, other than the default Vendor 8Product 16 entry;

9º Just for confirmation, open the ISCSI Initiator tool and then Devices. Check that the available disks are in the storage. They should point to Target 0;

10º Returning to the Targets tab, click Connect and check the Enable multi-path option. Then click Advanced ...;

11º In the Local Adapter option, select Microsoft iSCSI Initiator. In Initiator IP select the second IP of the node, which will connect to the second module of the storage, and in Target portal IP select the corresponding IP of the storage. Click OK on the two open screens;

12º Again in Targets, click on Devices and check if the disks now point each to Target 0 and 1;

13º Select the first Disk 1 and click MPIO. Under Load balance policy select the desired policy. Repeat the same procedure for the other disks;

14º If the MPIO policy is not what you want, you will need the following procedure to change it:

- Open the Disk Management tool in Server Manager;

- On each disk, right-click and choose Properties;

- Navigate to the MPIO tab, select the desired policy and click Apply;

- Confirm that Path ID 77030000 is Active/Optimized and Path ID 77030001 is Active/Unoptimized; their targets are 0 and 1 respectively.

15º Still in Disk Management, select each disk, bring it online, and initialize the partition style as MBR (if the disk is larger than 2 TB, select GPT). Create a simple volume and format it with NTFS; you do not need to assign drive letters. Then take the disks offline again (a diskpart sketch of this step follows below).

It is very important that you do this last step on only one of the nodes. After you finish creating and formatting the partitions, take the disks offline again. If they are kept online on both nodes simultaneously, the partitions will be corrupted and cluster validation will fail the storage tests.
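
A rough diskpart sketch of step 15º, to be run on one node only; disk number 1 is just an example, so confirm the right number in Disk Management first:

# Build a diskpart script and run it from an elevated PowerShell prompt.
$dp = @"
select disk 1
attributes disk clear readonly
online disk
convert mbr
create partition primary
format fs=ntfs quick
offline disk
"@
$dp | Out-File -Encoding ASCII "$env:TEMP\prep-lun.txt"
diskpart /s "$env:TEMP\prep-lun.txt"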


16. Open the Failover Cluster Manager tool and click Validate a Configuration. Enter the names of the nodes and keep the option to run all the tests.

At this point you have already configured all the network adapters and have a connection to the storage with path redundancy. If you have not completed all the previous steps, go back and be sure to do them.

You can only continue to the cluster creation phase if all the tests pass successfully.
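
The same validation can also be launched from PowerShell with the FailoverClusters module; the node names here are examples:

# The wizard's HTML report is saved to the current user's temp folder.
Import-Module FailoverClusters
Test-Cluster -Node HV-NODE1, HV-NODE2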


17. After all the cluster validation tests have passed, create the cluster by providing the nodes, the cluster name, and an IP. Check the summary of the configuration you created; the quorum should be Node and Disk Majority (Cluster Disk 1 - reserved for quorum) if you have only two nodes and one storage.
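
For reference, the same cluster can be created from PowerShell; the name, nodes, and IP below are placeholders:

Import-Module FailoverClusters
New-Cluster -Name HV-CLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.0.50

# Confirm the quorum model mentioned above:
Get-ClusterQuorum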


18. Now, with the cluster created and running, open the Failover Cluster Manager tool and click Networks. Only the networks whose adapters have TCP/IP enabled will be displayed, in my case six. Rename each one to match the name previously given in Windows. Then click Properties on each network and configure them as follows (a PowerShell equivalent follows the list):


· Heartbeat = Allow Cluster network communication on this network

· CSV = Allow Cluster network communication on this network

· Live Migration = Do Not Allow Cluster Network communication on this network

· Management Node = Allow Cluster network communication on this network / Allow Clients to connect through this network

· ISCSI 1 = Do Not Allow Cluster Network communication on this network

· ISCSI 2 = Do Not Allow Cluster Network communication on this network
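
A PowerShell equivalent of the settings above, assuming the cluster networks were renamed exactly as listed (adjust the names to match yours):

# Role values: 0 = not allowed for cluster use, 1 = cluster communication only,
# 3 = cluster communication plus client connections.
Import-Module FailoverClusters
(Get-ClusterNetwork "Heartbeat").Role = 1
(Get-ClusterNetwork "CSV").Role = 1
(Get-ClusterNetwork "Live Migration").Role = 0
(Get-ClusterNetwork "Management Node").Role = 3
(Get-ClusterNetwork "ISCSI 1").Role = 0
(Get-ClusterNetwork "ISCSI 2").Role = 0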


19. After configuring each network in Failover Cluster Manager, I recommend manually setting the Metric and AutoMetric values on each cluster network to ensure that traffic passes through the correct adapters. The values for each network are:

· Heartbeat = Metric = 1500. AutoMetric = False

· CSV = Metric = 500. AutoMetric = False

· Live Migration = Metric = 1000. AutoMetric = False

· Management Node = Metric = 10000. AutoMetric = True

· ISCSI 1 = Metric = 10100. AutoMetric = True

· ISCSI 2 = Metric = 10200. AutoMetric = True


To set the above values, open Windows PowerShell Modules in Administrative Tools and run the commands:


To check the current values enter:

Get-ClusterNetwork | Ft Name, Metric, AutoMetric

To set the value on the CSV card:

$csv = Get-ClusterNetwork "<CSV network name>"

$csv.Metric = 500


To set the value on the LM board:

$lm = Get-ClusterNetwork "<Live Migration network name>"

$lm.Metric = 1000


To set the value on the HB board:

$hb = Get-ClusterNetwork "<Heartbeat network name>"

$hb.Metric = 1500


Confirm the result:

Get-ClusterNetwork | Ft Name, Metric, AutoMetric


The other networks should already have the values stated above; if not, set them using the same commands, changing only the network name.


20. Enable Cluster Shared Volumes in Failover Cluster Manager. Accept the notice and click OK.

· Right-click the new Cluster Shared Volumes option, then Add storage;

· Add the disks you will use for the VMs (these are the LUNs made available by the storage);

· After enabling CSV, you will notice that a directory called ClusterStorage was created on drive C;

· Inside it, each disk is represented as Volume X, where X is the number of each volume.

It is in these volumes that all VMs must be stored, so that if one node stops due to a failure, the node that is still running takes over and keeps the business going.
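
Once CSV is enabled, the disks can also be added and checked from PowerShell; "Cluster Disk 2" is an example name, so check Available Storage for yours:

Import-Module FailoverClusters
Add-ClusterSharedVolume "Cluster Disk 2"
# Lists the CSV volumes; each one maps to a folder under C:\ClusterStorage.
Get-ClusterSharedVolume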


21. Now that your cluster has been created and configured and CSV is enabled, it is time to create a VM. To do this:


1º In Failover Cluster Manager, right-click Services and Applications, Virtual Machines, New Virtual Machine, select the node to manage it at the first moment;

2º In the storage option, enter the CSV path, that is, C:\ClusterStorage. The rest should be configured for each VM individually.


Your VM is now configured for high availability through the failover cluster.


A final tip: if for any reason all the nodes and the storage shut down at once, you may need to force the quorum so the cluster can start again, since all the nodes went down at the same time. Run the following command at an elevated prompt:


net start clussvc /fq


If necessary, run it on all servers in the cluster. To verify that you have quorum again, open Failover Cluster Manager and check the Quorum Configuration option; it should show Node and Disk Majority (or whichever quorum model applies to your cluster).
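
The PowerShell equivalent of the command above, if you prefer to work from the FailoverClusters module:

# Run elevated on one of the nodes; forces the cluster service to start
# and re-form quorum (same as net start clussvc /fq).
Import-Module FailoverClusters
Start-ClusterNode -FixQuorum
# Check the quorum configuration afterwards:
Get-ClusterQuorum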


For any doubts or suggestions, please leave a comment below.
