Category Archives: Uncategorized

Update Plex Plugin on FreeNAS 11

If you are rocking your own FreeNAS storage at home or the office, you'll know that FreeNAS's built-in plugins are rarely up to date. Fortunately, updating the Plex plugin is fairly straightforward.

1. SSH to your FreeNAS
2. type: jls
3. Take note of the jail number (#) of your Plex plugin
4. type: jexec # csh (where # is the jail number noted in the previous step)
5. type: fetch -o PMS_Updater.sh https://raw.githubusercontent.com/mstinaff/PMS_Updater/master/PMS_Updater.sh
6. type: chmod 755 PMS_Updater.sh
7. type: ./PMS_Updater.sh -u PlexPass_User -p PlexPass_password -a

 

vSphere Web Client Integration Plugin Not Working

When managing your vSphere environment with the web client (or being forced to in 6.5 and later), the Web Client Integration plugin is required to make use of many features the web client has to offer, like remote console, enhanced authentication, and deploying OVF appliances.

If you have downloaded and installed the plugin but IE, Chrome, or Firefox still do not activate it, the issue can usually be resolved by one of the following:

  1. Add the vCenter FQDN to the trusted site list:
    For vSphere 6.0-6.5: https://vCenter_FQDN
    For vSphere 5.5: https://vCenter_FQDN:9443
  2. Add the vCenter FQDN to the Local Intranet list (IE & Chrome)
  3. Uninstall the plugin, clear your browser cache/cookies, reinstall the plugin, and repeat option 1
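
If you need to push the trusted-site entry to several machines, the same setting can be made in the registry. Below is a minimal PowerShell sketch; the FQDN vcenter.example.com is just a placeholder, and zone 2 corresponds to IE's Trusted Sites list:

# Add https://vcenter.example.com to the Trusted Sites zone (zone 2) for the current user
$fqdn = "vcenter.example.com"   # placeholder - use your vCenter FQDN
$key  = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\$fqdn"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "https" -Value 2 -PropertyType DWord -Force | Out-Null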

 

HPE Proliant G7 Servers and vSphere 6.5 Purple Screen of Death

Upgrading to ESXi 6.5 on HP G7 servers will crash the host, cause you to scream, and require you to waste time building a custom ISO that HPE could easily have provided.
Best practice is to use the vendor's custom ISOs, which have the hardware drivers integrated, so I used HPE's latest custom ISO.

HPE G7 server support is being dropped by both HPE and VMware; in fact, vSphere 6.5 is supposedly the last version that will support the G7s. Knowing this, I assumed upgrading from ESXi 6.0 to 6.5 on a G7 would work, but I quickly found that after the upgrade the hosts would hit a “Purple Screen of Death” (PSOD) right after boot.

The Error: “PF Exception 14 in world 67667:sfcb-smx IP 0x0 addr 0x0”

The Issue: There are incompatible drivers in the customized ISO from HPE. Yes, there is more than one driver with issues.

The Workarounds: There are various workarounds. Some I have personally verified; others are resolutions I read about after I dealt with this and was not able to confirm, but I will list them nevertheless. Upgrading the firmware, BIOS, etc. did not resolve the issue.
Note: All of these workarounds require a fresh install of ESXi. Running an upgrade does not remove the incompatible drivers, and the host doesn't stay alive long enough before crashing to remove them manually via SSH.

Solution 1: Use VMware’s Standard ISO Media
While this goes against many best practices, VMware doesn't include many vendor drivers in its ISO builds, so the offending drivers do not get installed and crash the system. While you can certainly use this method, you will want to follow up and manually install the appropriate driver VIBs from HPE.
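
One way to add the HPE bundle afterward is through PowerCLI's esxcli interface. A rough sketch, assuming the HPE offline bundle has already been copied to a datastore (the host name and bundle path are placeholders):

# Install the HPE offline bundle on a host built from the standard VMware ISO
Connect-VIServer vcenter.example.com
$esxcli = Get-EsxCli -VMHost (Get-VMHost esx01.example.com) -V2
$esxcli.software.vib.install.Invoke(@{ depot = @("/vmfs/volumes/datastore1/HPE-bundle.zip") })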

Solution 2: Build your own Custom ISO
This takes a bit more work, but is probably the most comprehensive path to resolution. You will basically remove the offending driver from the HPE customized 6.5 ISO and inject the version from the 6.0 ISO in its place. The following are instructions on doing this.

Create Custom VMware ESXi Media

Prerequisites:

  • vSphere PowerCLI installed
  • The HPE Custom ESXi 6.5 offline bundle (ZIP), e.g. C:\ESXi\HPE-6_5.zip
  • The HPE Custom ESXi 6.0 offline bundle (ZIP), e.g. C:\ESXi\HPE-6_0.zip

Instructions:

  • Launch vSphere PowerCLI

  • Add the HP ESXi 6.5 image bundle
    Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_5.zip

  • Check the Profile
    Get-EsxImageProfile

  • Copy the Profile
    New-EsxImageProfile -CloneProfile HPE-ESXi-6.5.0-OS-Release-6* -Name "G7-ESXi"

    When prompted, enter "HPE Custom" for the Vendor.

  • Check the Profile
    Get-EsxImageProfile

  • Remove the driver from the image
    Remove-EsxSoftwarePackage G7-ESXi hpe-smx-provider

  • Add the HP ESXi 6.0 image bundle
    Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_0.zip
  • Check the Profile
    Get-EsxImageProfile

  • View both drivers in the two bundles
    Get-EsxSoftwarePackage | findstr smx

  • Add the necessary driver into the custom build
    Add-EsxSoftwarePackage -ImageProfile G7-ESXi -SoftwarePackage "hpe-smx-provider 600.03.11.00.9-2768847"

  • Convert your custom bundle to ISO
    Export-EsxImageProfile -ImageProfile G7-ESXi -ExportToIso -FilePath "C:\ESXi\G7-ESXi.iso"

  • Now take the ISO file that was created and use it to do a FRESH INSTALL. (Remember, an upgrade will not work.)

Find Unknown Wireless Password for Aruba Wireless SSID

If you don't remember what password you or another administrator set for a particular SSID on an Aruba wireless controller (or Instant Access Point), you can find it by connecting to any Access Point via SSH, Telnet, or console and running the following command:

show run no-encrypt

Scroll up until you get to the wlan ssid-profile section; the password will be listed next to wpa-passphrase.

If you had just run a show run without the no-encrypt option, you would only see an encrypted hash in place of the passphrase.


vSphere 6.5 – Transport (VMDB) error -45: Failed to connect to peer process

While upgrading some Cisco UCS B200 M3 Servers from vSphere 6.0 to 6.5, I ran into an error that I could not figure out. After upgrading the first Cisco Blade to 6.5, I could not vMotion any VMs from the older 6.0 host to the newly upgraded 6.5 host. I would get the following error:

Transport (VMDB) error -45: Failed to connect to peer process

I was able to vMotion a powered off VM to the new host, but when I attempted to power on the VM, I got the same error: Transport (VMDB) error -45: Failed to connect to peer process

After poking around for a while, I decided to turn to the VMware community, where I mostly saw this error reported by people using the Workstation and Fusion products, but there wasn't much about ESXi environments. I had made sure to use the ESXi 6.5 Cisco media for the original installs and this upgrade, and I assumed there had to be a driver/component issue. I tried updating by booting into the ISO and running the upgrade from there. After attempting to manually upgrade drivers and firmware, the solution that worked for me was the following:

Reinstall the freaking host from scratch! 

There you have it. Such a simple solution 🙂
Honestly, I have no idea why the reinstall was necessary. I ran into the same issue again when trying to upgrade the second host, and I even tried upgrading it using an alternative method (ESXCLI and Update Manager), but no luck.

I did not call VMware Support on this, but I did submit a bug report. I would love to hear from someone who figured out the root cause and a workaround.

Enterprise Wireless Access Points Benchmarks: Cisco, Aruba, Meraki, Aerohive

As more and more aspects of a business require some type of mobility, the companies that sell you a way to connect it all together are a dime a dozen. I have spent a considerable amount of time in my pursuit of wireless knowledge, and I have spent a LOT of time (just ask my wife) with some of the Access Points I have benchmarked, so I can say I know them fairly well. I've decided to take them head-to-head in various tests and provide my readers with a quick, simplified version of the detailed data I collected during this process, a process that will remain a “work in progress” as I find new testing criteria and new hardware to play with. Two of the tested access points are 802.11ac Wave 2 devices, which can provide over 1 Gbps of throughput using bonded links or multigigabit (MGIG) uplinks, but all APs were tested with a single 1Gb Ethernet uplink (no LAGs).

The Access Points I will be benchmarking are:
Cisco Aironet 1830i (802.11ac Wave 2)
Meraki MR42 (802.11ac Wave 2)
Meraki MR18 (802.11n)
Aruba 225 (802.11ac)
Aruba 205 (802.11ac)

Let me preface this with a disclaimer that I have no official training or degree in benchmarking methodologies. I have tried to take what I believe are some real-world tasks a user will encounter daily and tested them in the best way I know how. I will explain my testing environment and how I chose it, and then move on to the actual benchmarks.

Client OS and Wireless Chipset
2015 Macbook Pro – OS X 10.11 (El Capitan): Broadcom BCM43602
Lenovo T450S – Win 10 Pro: Intel Dual Band AC-7265 (Integrated)
Lenovo T450S – Win 10 Pro: Netgear A6200 (USB 3 Adapter)

Results: I ran a 1GB file upload and download to a local server using each of the above clients. I ran these tests three (3) times on each, averaged the results, and compared them with each other. I found the clients were within roughly a twentieth of a second of each other on upload/download times, and the throughput difference was also negligible. I used the Lenovo with the integrated Intel chipset for the official benchmarks.

Environment
I placed each access point 9’ high and tested each client ~12’ away. I used the exact same placement for each test. Only one AP was powered on during each test, and these tests were done in a very secluded area, with absolutely zero interference from neighboring Wi-Fi or microwave signals. Acrylic Wi-Fi Professional was used to verify this. Each Access Point was connected via PoE. No other devices were connected to the Access Points besides my client machine.

Network Backbone
The bulk of these benchmarks tested local upload/download speeds of files on the local LAN. I tested the Access Points using two switches: the first a Netgear GS728TP and the second a Cisco Meraki MS350. Surprisingly, I was getting lower latency on the Netgear switch (between 1-3ms), so I used the Netgear for the official benchmarks.

Internet Speed Tests
The Internet speed tests were semi-irrelevant, since some of these APs can download/upload much faster than my Internet plan and modem allow. I am using Comcast Xfinity Blast (105 down/10 up), though it looks like Comcast is allowing me to burst above those speeds. I am using a Motorola Surfboard SB6121 DOCSIS 3.0 modem, which tops out at roughly 172 Mbps and would be the weakest link even if I had faster Internet. What is interesting, though, is that all of these Access Points support multiple spatial streams, which should allow Internet speeds on the 2.4 GHz band to exceed the results I am getting in these benchmarks. Am I missing something here?

2.4 Ghz vs 5 Ghz Tests and Features
Each Access Point offers its own array of extended features and configurations, some of which are unique to that access point. Most of these features really only shine in a multi-device scenario, so I think the single-device head-to-head benchmarks are fairly accurate, as these unique features aren't needed. 5 GHz tests were done by shutting off the 2.4 GHz radios and vice versa. Attempts to “tweak” some of the default settings to more “optimized” ones had little effect, and in some cases made things worse. Again, these Access Points are made for the enterprise and are built to handle multiple users with multiple devices. I welcome any feedback on any of these testing mechanisms.

OK, now the good stuff. Here are the results! I ran each aspect of the benchmarks three (3) times and took the average of those results. Some results were surprising or seemed odd and were re-tested, but the outcomes were similar. Here we go!

Test 1: 20 MB File Transfers over 5 Ghz Radios

Test 2: 20 MB File Transfers over 2.4 Ghz Radios

Test 3: 1 GB File Transfers over 5 Ghz Radios

Test 4: 1 GB File Transfers over 2.4 Ghz Radios

More benchmarking to come. This is definitely a work in progress!

Exporting VMware Logs for Analysis

Sometimes there are issues that arise with your VMware environment that require advanced troubleshooting from VMware Technical Support. Sending them your VMware logs preemptively or upon request is a great way to get to the bottom of an issue.
To get those logs, just do the following.

– Open vSphere (vCenter)
– Click File – Export – Export System Logs

– Select all System Logs

– Choose a location to Download Them
– And Watch the Progress of the Download

It may take a while to gather and export all the logs, but once finished, you can FTP the logs to VMware Support for further analysis!
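
If you prefer the command line, the same support bundles can be collected with PowerCLI. A quick sketch (the server name and output path are placeholders):

# Gather diagnostic bundles from vCenter and its hosts via PowerCLI
Connect-VIServer vcenter.example.com
Get-Log -Bundle -DestinationPath "C:\Temp\vm-logs"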

If you found this article to be helpful, please support us by visiting our sponsors’ websites. 

Step by Step Configuration of 2 node Hyper-V Cluster in Windows Server 2012 R2

* Material taken from my own testing as well as http://alexappleton.net.

Although the features presented in Hyper-V replica give you a great setup, there are many reasons to still want a failover cluster.  This won’t be a comparison between the benefits of Hyper-V replica vs failover clustering.  This will be a guide on configuring a Hyper-V cluster in Windows Server 2012.  Part one will cover the initial configuration and setup of the servers and storage appliance.

The scope:
2-node Hyper-V failover cluster with iSCSI shared storage for small scalable highly available network.

Equipment:
2x HP ProLiant DL360p Gen8 Servers, each with:
-64GB RAM
-8x 1Gb Ethernet NICs (4-port 331FLR adapter, 4-port 331T adapter)
-2x 146GB 15K SAS drives

HP StorageWorks P2000 MSA
-1.7TB RAW storage

Background:

When sizing your environment you need to take into consideration how many VMs you are going to need.  This specific environment only required 4 virtual machines to start with, so it didn't make sense to go with Datacenter.  Windows Server 2012 differs from previous releases in that there is no feature difference between editions.  Prior to 2012, if you needed failover clustering you had to go with Enterprise-level licensing or above; Standard didn't give you the option to add the failover clustering feature (even though you could go with the free Hyper-V Server edition, which did support failover clustering).  This has changed in 2012: you no longer have to buy specific editions to get roles or features, as all editions include the same feature set.

However, when purchasing your server license you need to cost out your VM requirements.  Server 2012 Standard includes two virtual use licenses, while Datacenter includes unlimited.  The free Hyper-V Server doesn't include any.  Virtual use licenses are only allowed so long as the host server is not running any role other than Hyper-V.  Because there is no difference in feature set, you can start off with Standard and look to move to Datacenter if you happen to scale out in the future.  Although I see no purpose in changing editions, you can convert a Standard edition installation to Datacenter by entering the following command at the command prompt:

dism /online /set-edition:ServerDatacenter /productkey:48HP8-DN98B-MYWDG-T2DCC-8W83P /AcceptEULA

I have found issues when trying to use a volume license key during the above dism command.  The key above is a well-documented key, which always works for me.  After the upgrade is completed I enter my MAK or KMS key to activate the server since the key above will only give you a trial.

The next thing you need to determine is whether you want to go with the GUI or non-GUI (Core) installation.  Thankfully, Microsoft gives us the option to switch between the two with a PowerShell command, so you don't need to stress over the choice:

To go “core”: Get-WindowsFeature *gui* | Uninstall-WindowsFeature -Restart
To go “GUI”:  Get-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell | Install-WindowsFeature -Restart

Get Started:

Install your Windows Operating system on each of the nodes, but don’t add any features or roles just yet.  We will do that at a later stage.

Each server has a total of 8 NICs, and they will be used for the following:

1 – Dedicated for management of the nodes, and heartbeat
1 – Dedicated for Hyper-V live migration
2 – To connect to the shared storage appliance directly
4 – For virtual machine network connections

We are going to use multipath I/O (MPIO) to connect to the shared storage appliance.  From the NICs dedicated to the VMs we will create a team for redundancy.  Always keep redundancy in mind.  We have two 4-port adapters, so we will use one NIC from each for SAN connectivity, and when creating the team we will use one NIC from each of the adapters as well.

The P2000 MSA has two controller cards, with 4 1Gb Ethernet ports on each controller.  We will connect the Controller as follows:

Two iSCSI host ports will connect to the dedicated NICs on each of the Hyper-V hosts.  Use CAT6 cables for this since they are certified for 1Gbps network traffic.  Try to keep redundancy in mind here, so connect one port from one controller card to a single nic port on the 331FLR, and the second controller card to a single NIC port on the 331T:

On our Hyper-V nodes we are going to have to configure the connecting Ethernet adapters with subnets that correspond to the SAN ports.  I tend to use 172.16.1.1, 172.16.2.1, 172.16.3.1 and 172.16.4.1 to connect.  When configuring your server adapters, be sure to uncheck the option to register the adapter in DNS so you don't end up populating your DNS database with errant entries for your host servers.
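
For reference, the same settings can be applied from an elevated PowerShell prompt. A minimal sketch, assuming an adapter alias of "iSCSI-A" (rename yours to taste):

# Assign the SAN-facing address and stop the adapter from registering in DNS
New-NetIPAddress -InterfaceAlias "iSCSI-A" -IPAddress 172.16.1.1 -PrefixLength 24
Set-DnsClient -InterfaceAlias "iSCSI-A" -RegisterThisConnectionsAddress $false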

From each server ping the host interfaces to ensure connectivity.

HP used to ship a network configuration utility with their Windows servers.  This is not supported yet in Windows Server 2012; however, the NICs I am using are all Broadcom.  A quick look on Broadcom's website led me to their Windows management application, BACS.  This utility allows you to fine-tune the network adapter settings; what we need it for is to hard-set the MTU to 9000 on the adapters connecting to the SAN.  There is a netsh command that will do it as well, but I found it to be unreliable when testing and it rarely stuck.

Download and install the Broadcom Management Applications Installer on each of your Hyper-V nodes.  Once installed, there should be a management application called Broadcom Advanced Control Suite.  This is where we want to set the jumbo frame MTU to 9000.  This management application does run in the non-GUI version of Windows Server, and you can also use it to connect to remote hosts.  You need to make sure you have the right adapter here, and if you are dealing with 8 NICs like I am this can get confusing, so take your time.  Luckily, you can see the configuration of the NIC in the application's window:

Verify connectivity to the SAN after you set the MTU.  Send a large packet size when pinging the associated IP addresses of the SAN ports using a ping command such as:

ping 172.16.1.10 -f -l 6000

If you don’t get a successful reply here then revisit your settings until you get it right.
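
If you would rather not use BACS, jumbo frames can usually be set with PowerShell as well. A sketch, assuming the adapter alias "iSCSI-A"; note that the advanced-property display name and allowed values vary by driver, so inspect them first:

# Inspect the advanced properties exposed by the driver
Get-NetAdapterAdvancedProperty -Name "iSCSI-A"
# Set the jumbo frame size (value strings differ between drivers, e.g. "9000" or "9014 Bytes")
Set-NetAdapterAdvancedProperty -Name "iSCSI-A" -DisplayName "Jumbo Packet" -DisplayValue "9014"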

Network Teaming

You could create a network team in the Broadcom utility as well; however, in testing I ran into issues using it.  The team created fine, but didn't initialize on one server, and removing the errant team proved to be a major hassle.  Windows Server 2012 includes a native NIC teaming function, so I prefer to configure the team on the server directly using the Windows configuration.  Again, since I am dealing with two different network cards, I typically create a team using one NIC port from each card on the server.

The new NIC teaming management interface can be invoked through Server Manager, or by running lbfoadmin.exe from a command prompt or the Run box.  To create a new team, highlight the NICs involved by holding Ctrl down while clicking on each.  Once highlighted, right-click the group and choose the option “Add to New Team”.

This will bring up the new team dialog.  Enter a name that will be used for the team.  Try to stay consistent across your nodes here so remember the name you use.  I typically go with “Hyper-V External#”.

We have three additional options under “Additional properties”

Teaming mode is typically set to switch independent.  Using this mode you don't have to worry about configuring your network switches.  As the name implies, the NICs can be plugged into different switches; as long as they have a link light they will work in the team.  Static teaming requires you to configure the network switch as well.  Finally, LACP is based on link aggregation, which requires a switch that supports this feature.  The benefit of LACP is that you can dynamically reconfigure the team by adding or removing individual NICs without losing network communication on the team.

Load balancing mode should be set to Hyper-V switch port.  Virtual machines in Hyper-V will have their own unique MAC addresses that differ from the physical adapter's.  When load balancing mode is set to Hyper-V switch port, traffic to the VMs will be well balanced across the teamed NICs.

Standby adapter is used when you want to assign a standby adapter to the team.  Selecting the option here will give you a list of all adapters in the team.  You can assign one of the team members as a standby adapter.  The standby adapter is like a hot spare: it is not used by the team unless another member in the team fails.  It's important to note here that standby adapters are only permitted when teaming mode is set to switch independent.
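
The same team can be created with PowerShell instead of the GUI. A sketch, assuming member NICs named "NIC1" and "NIC5" (one port from each physical adapter):

# Create a switch-independent team using Hyper-V port load balancing
New-NetLbfoTeam -Name "Hyper-V External1" -TeamMembers "NIC1","NIC5" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false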

There is a lot to be learned regarding NIC teaming in Server 2012, and it is a very exciting feature.  You can also configure teams inside of virtual machines as well.  To read more, download the teaming documentation provided by Microsoft here: http://www.microsoft.com/en-us/download/details.aspx?id=30160

Once we have the network team in place it will be time to install the necessary roles and features to your nodes.  Another fantastic new feature in Server 2012 is the ability to manage multiple servers by means of server groups.  I won’t go into detail here, but if you are using Server 2012 you should investigate using Server Groups when managing multiple servers with similar roles on them.  In my case, I always create a server group called “Hyper-V Nodes”, assigning the individual servers from the server pool to the server group.

Adding the roles and features:

Invoke the Add Roles and Features wizard by opening Server Manager, choosing the Manage option in the top right, then “Add Roles and Features”.

We want to add the Hyper-V role, and the Failover Clustering and Multipath I/O features, to each of the nodes.  You will be prompted to select the network adapter to be used for Hyper-V.  You don't have to worry about setting this option at the moment; I prefer to do it after installing the role.  You will also be prompted to configure live migration; since we are using a cluster here, this is not required (the live migration option at this stage is for shared-nothing, non-SAN setups).  Finally, you will be prompted to configure your default stores for virtual machine configuration files and VHD files.  Since we will be attaching SAN storage we don't need to be concerned about this step at the moment.  Click Next to get through the wizard and Finish to install the roles and features.  Installation will require a reboot to complete, and will actually take two reboots before the Hyper-V role is completely installed.
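
If you prefer PowerShell over the wizard, the same roles and features can be installed in one shot on each node (a sketch; run from an elevated prompt):

# Install Hyper-V, Failover Clustering and MPIO, then reboot
Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart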

This covers part one of the installation.  At this point we should have everything plugged in, initial configuration of the SAN completed, and initial configuration of the Hyper-V nodes complete as well.  In part two we will be configuring the iSCSI initiator, and bringing up the failover cluster.

—————————————————————————————————————————

I realized that in my prior post for configuration of a 2 node Hyper-V cluster that I did not include the steps necessary for configuring the HP Storage Works P2000.  So here they are:

There are two controllers on this unit.  This is for redundancy.  If one controller fails, the SAN will remain operational on the redundant controller.  My specific unit has 4 iSCSI ports for host connectivity, directly to the nodes.  I am utilizing MPIO here, so I have two links from each server (on separate network adapters) to the SAN.  As follows:

The cables I use to connect the links are standard CAT6 Ethernet cables.

You also want to plug both management ports into the network.  Out of the box, both management ports should obtain an address via DHCP.   Now, there is no need to use a CAT6 cable to plug the management ports in, so go ahead and use a standard CAT5e cable instead.  You can also configure the device via command line using the CLI by interfacing with the USB connection located on each of the management controllers.  I have never had to use this for anything other than when the network port is not responding.  This interface is a USB mini connection located just to the left of the Ethernet management port, and a cable is included with the unit.

Once plugged into your Windows PC, the device comes up as a USB-to-serial adapter and is given a COM port assignment.  You will have to install a driver to get the device to be recognized; drivers are not included with the Windows binaries.

I won’t be covering the CLI interface, all configuration will be conducted via the web based graphic console.

The web based console is accessed via your favourite Internet browser.  I typically use Google Chrome, as I have run into issues logging into the console with later versions of Internet Explorer.  The default username is manage, password !manage.

Once logged in, launch the initial configuration wizard by clicking Configuration – Configuration Wizard at the top:

This will launch the basic settings configuration wizard.  This wizard should hopefully be self-explanatory, so I won't go into many details here.

For this example I will be creating a single VDisk encompassing the entire drive space available.  To do this, click Provisioning – Create Vdisk:

Use your best judgement on what RAID level you want here.  For my example I am going to be building a RAID 5 on 5x450GB drives:

Now I am going to be creating two separate volumes: one for the CSV file storage, and the other for Quorum.  The Quorum volume will be 1GB in size for the disk witness required since we have 2 nodes, and the CSV volume will encompass the remaining space.  To create the volumes click on the VDisk created above, and then click Provisioning – Create Volume.  I don't like to map the volumes initially, rather explicitly mapping them to the nodes after connecting the nodes to the SAN:

In part 1 we added the roles, configured the NICs for both Hyper-V VM access and SAN connectivity, and prepped the servers.  Now we need to connect the nodes to the SAN by means of the iSCSI initiator.

Our targets on the P2000 are 172.16.1.10, 172.16.2.10, 172.16.3.10, and 172.16.4.10 for ports 1 and 2 on each controller.  As you recall from step one, the servers are directly connected without a switch in the middle.

To launch the iSCSI initiator just type “iSCSI” in the start screen:

I typically pin this to the start screen.

When you launch the iSCSI initiator for the first time you will be presented with an option to start the service and make it auto-start.  Choose yes:

I don't typically like using the Quick Connect option on the Targets screen; rather, I configure each connection separately.  Click on the Discovery tab in the iSCSI Initiator Properties screen, then Discover Portal:

Next, we want to input the IP address of the SAN NIC that we are connecting to, then click on the advanced button.

Select the Initiator IP that will be connecting to the target:

Then do this again for the second connection to the SAN.  When finished you should have two entries:

Now, back on the target tab your target should be listed as Inactive.  Click on the connect button, then in the window that opens click on the “Enable Multi-Path” button:

Now it should show connected:

Complete the same tasks on the other node as well.
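
For reference, the same initiator configuration can be scripted with PowerShell. A minimal sketch for one of the two paths on a node (repeat for the second portal/initiator pair and on the other node):

# Make sure the iSCSI service is running and starts automatically
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi
# Add the target portal, binding it to the matching initiator address
New-IscsiTargetPortal -TargetPortalAddress 172.16.1.10 -InitiatorPortalAddress 172.16.1.1
# Connect the discovered target with MPIO enabled and make the connection persistent
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true `
    -TargetPortalAddress 172.16.1.10 -InitiatorPortalAddress 172.16.1.1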

Now, before we can attach a volume from the SAN we are going to have to MAP the LUN explicitly to each of the nodes.  So, we are going to have to open the web management utility for the P2000 again.  Once in, if we expand the Hosts in the left pane we should now see our two nodes listed (I have omitted server names in this screenshot):

We need to map the two volumes created on the SAN to each of the nodes.  Right click on the volume, selecting Provisioning – Explicit Mappings

Then choose the node, click the Map check box, give the LUN a unique number, check the ports assigned to the LUN on the SAN and apply the changes:

Assign the same LUN number to the other node and complete the same explicit mapping to the other node.  Then complete the same procedure for the other volume.  I used LUN number 0 for the Quorum Volume, and LUN number 1 for the CSV Volume.

Jump back to the nodes, back into the iSCSI initiator and click on the Volumes and Devices tab, press the Auto Configure button and our volumes should show up here:

Complete the same procedure on the second node as well.  If you are having difficulty with the volumes showing up, sometimes a disconnect and reconnect is required (don't forget to check the “Enable Multi-Path” option).

Now we want to enable multipath for iSCSI.  Fire up the MPIO utility from the start screen:

Click on the Discover Multi-Paths tab, then check off the box “Add support for iSCSI devices” and finally the Add button:

The server will prompt for a reboot.  So go ahead and let it reboot.  Don’t forget to complete the same tasks on the second node.
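
The equivalent PowerShell, if you want to script it (a sketch; the reboot is still required):

# Add the iSCSI bus type to the Microsoft DSM so MPIO claims iSCSI disks
New-MSDSMSupportedHW -VendorId "MSFT2005" -ProductId "iSCSIBusType_0x9"
# Verify, then reboot the node
Get-MSDSMSupportedHW
Restart-Computer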

After the reboot we are going to want to fire up disk management and configure the two SAN volumes on the node, making sure each node can see and connect to them.  When initializing your CSV volume I would suggest making this a GPT disk rather than an MBR one, since you are likely to go above the 2TB limit imposed with MBR.

I format both volumes with NTFS, and give them a drive letter for now:

After configuring the volumes on the first node, I typically offline the disks, then online them on the second node to be sure everything is connected and working correctly.  Don't worry about the drive letters assigned to the volumes; they don't matter.
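
If you want to prepare the LUNs from PowerShell instead of Disk Management, something like the following works (a sketch; disk numbers will differ, so check Get-Disk first):

# Bring the SAN disk online, initialize it as GPT, and format it with NTFS
Get-Disk                               # identify the SAN LUN (e.g. disk 2)
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1"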

Getting there slowly!

Next, before we create the cluster I always like to assign the Hyper-V external NICs in the Hyper-V configuration.  Fire up Hyper-V Manager, selecting “Virtual Switch Manager” in the action pane.  We are going to create the external virtual switches using the adapters we assigned for the Hyper-V VMs.  I always dedicate the network adapters to the virtual switch by unchecking the option “Allow management operating system to share this network adapter”.
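
The same external switch can be created per node with PowerShell (a sketch; the team interface name defaults to the team name created earlier):

# Create an external virtual switch bound to the team, not shared with the management OS
New-VMSwitch -Name "Hyper-V External1" -NetAdapterName "Hyper-V External1" -AllowManagementOS $false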

At this point we have completed all the prerequisite steps required to fire up the cluster.  Now we will form the cluster.

Fire up Failover Cluster Manager from the start screen:

Once opened, select the option in the action pane to create a cluster.  This will fire up the wizard to form our cluster.  The wizard should be self-explanatory, so walk through the steps required.  Make sure you run the cluster validation tests, selecting the default option to run all tests.  This is the best time to be running them, since they will take the cluster disks offline.  You don't want to discover issues with the cluster once it's in production and have to bring it down to run the validation tests.  If we run into any issues here we can address them now, before the system is in production.

The P2000 on Windows Server 2012 will create a warning about validating storage spaces persistent reservation.  This warning can be safely ignored as noted here.

Hopefully when you run the validation tests you will get all Success (other than the note above).  If not, trace back through the steps and make sure you are not missing anything.  Once you get a successful validation save the report and store it if you need to reference it for future support.

Finish walking through the wizard to create your cluster.  Assign a cluster name and static IP address to your cluster as requested from the wizard.

That should do it.   If you got this far you made it.  Congratulations!

—————————————————————————————————————-

A few asked me to elaborate more on configuring the cluster.  Sorry I didn’t go into too much detail during Part 2.  I’ll explain further here.

When you open up Failover Cluster Manager you have the option in the action pane to create a cluster.  Click on this to fire up the wizard:

The initial configuration screen can be skipped, and the second screen will prompt you to input the server names of the cluster nodes:

When you add the servers it will verify the failover cluster service is running on the node.  If everything is good, the wizard will allow you to add the server.  Once the servers are added, proceed to the next step.

The next step is very important.  Not only is this step required for Microsoft to ever support you if you run into any issues, but it also validates that everything you have done thus far is correct and set up properly for the cluster to operate.  I'm not quite sure why they give you the option to skip the tests, but I would highly recommend against it.  The warning is pretty straightforward as well:

The next portion of the cluster configuration that comes up is the validation wizard.  Like I mentioned above, do not skip this portion.   Run all tests as recommended by the wizard:

The tests will take a few minutes to run, so go grab a coffee while waiting.  Once completed, you shouldn’t have any errors.  However, as I mentioned in part 2 there is a known issue when using the P2000 with the “Validate Storage Spaces Persistent Reservation” test so you will get a warning here relating to this but you shouldn’t have any other warnings if things are setup correctly.

View the report and save it somewhere as a reference that you ran it in case Microsoft support wants to see it.

When you click Finish you will be asked to enter a name for the cluster, as well as the IP address for the cluster.  Enter these parameters and click Next:

Then finish up the wizard and form the cluster.
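
For completeness, the validation and cluster creation can also be done in PowerShell. A sketch, with placeholder node names, cluster name and IP:

# Validate the configuration, then form the cluster
Test-Cluster -Node HV-NODE1, HV-NODE2
New-Cluster -Name HV-CLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.1.50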

Now, there are several things we must do after the cluster is up and running to completely configure it.  I’ll go over each aspect now.

Cluster Shared Volumes:

This should be a given.  I won’t go into much detail here, sparing you the time.  If you need to read up on what a cluster shared volume is please read up on it here:

http://blogs.msdn.com/b/clustering/archive/2013/12/02/10473247.aspx

To enable the cluster shared volume navigate to storage, then disks.  Then select your storage disk, right clicking it and choosing the option “Add to Cluster Shared Volumes”

I like to rename the disks here as well, but this is not a necessary step.

Now that we have enabled Cluster Shared Volumes, we should change the default paths in Hyper-V Manager on both nodes to reflect this.  The path should be C:\ClusterStorage\Volume1 on both nodes.  I like to keep the remainder of the default path as well for simplicity:

Don’t forget to do this on both nodes.
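
The PowerShell equivalent looks roughly like this (the disk resource and node names are placeholders):

# Add the CSV disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"
# Point both nodes' default stores at the CSV path
Set-VMHost -ComputerName HV-NODE1, HV-NODE2 `
    -VirtualMachinePath "C:\ClusterStorage\Volume1" `
    -VirtualHardDiskPath "C:\ClusterStorage\Volume1"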

Live Migration:

I dedicate a NIC for live migration.  I have always done this on the recommendation that if live migration traffic saturates the network link used for managing the server, we could cause a failover situation where the heartbeat is lost.  To dedicate a network adapter for live migration, right-click the Networks option in Failover Cluster Manager and choose Live Migration Settings.  I rename the networks in the list first so that they are more easily understood than the default “Cluster Network X” names.
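
Renaming the cluster networks can also be done quickly from PowerShell (a sketch; match the names to whatever "Cluster Network X" corresponds to in your environment):

# Give the cluster networks friendlier names before setting live migration preferences
(Get-ClusterNetwork "Cluster Network 1").Name = "Management"
(Get-ClusterNetwork "Cluster Network 2").Name = "Live Migration"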

Cluster Aware Updating:

Cluster aware updating is a fantastic feature introduced in 2012 that allows for automatic updating of your cluster nodes without taking down the workloads they are servicing.  What happens with Hyper-V is that the VM roles are live migrated to another node, once all roles are off the node then updating is completed and the node is rebooted.  Then the same process happens on the other node.  There is a little bit of work to set this up, and you should have a WSUS server on your network, but the setup is worth the effort.

To enable Cluster-Aware Updating choose the option on the initial failover cluster manager page

This will launch the management window where you can configure the options for the cluster.  Click on the “Configure cluster self-updating options” in the cluster actions pane.  This will launch the wizard to let you configure this option.

Before you walk through this wizard there is one necessary step you should complete first.  I like to place my Hyper-V nodes and the cluster computer object in their own OU within Active Directory.  I then typically grant the cluster computer object full control over that OU.  I find if you don't complete this step you will sometimes get errors in Failover Cluster Manager, as well as issues with Cluster-Aware Updating.

The Cluster-Aware Updating wizard is pretty straightforward.  The only thing you need to determine is when you want it to run.  There is no need to check off the “I have a pre-staged computer object for the CAU clustered role” option, as this will be created during the setup.  I don't typically change any options from the defaults here; I haven't found any reason to do so yet.  I'll also do a first run to make sure that this is working correctly.
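
If you'd rather script the CAU role than click through the wizard, something like the following should be close (a sketch; the cluster name and schedule are placeholders):

# Add the Cluster-Aware Updating self-updating role and do a first manual run
Add-CauClusterRole -ClusterName HV-CLUSTER -DaysOfWeek Sunday -WeeksOfMonth 2 -EnableFirewallRules -Force
Invoke-CauRun -ClusterName HV-CLUSTER -Force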

Tweaking:

The following are some tweaks and best practices I also do to ensure the best performance and reliability on the cluster configuration:

Disable all networking protocols on the iSCSI NICs used, with the exception of Internet Protocol Version 4/6.  This is to reduce the amount of chatter that occurs on the NICs.  We want to dedicate these network adapters strictly to iSCSI traffic, so there is no need for anything outside of the IP protocols.
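
A quick PowerShell sketch for this (assuming the iSCSI adapters are named "iSCSI-A" and "iSCSI-B"):

# Turn off every binding except IPv4/IPv6 on the iSCSI-facing adapters
Get-NetAdapterBinding -Name "iSCSI-A","iSCSI-B" |
    Where-Object { $_.ComponentID -notin "ms_tcpip","ms_tcpip6" } |
    Disable-NetAdapterBinding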

  1. Change the binding of the NICs, putting the management NIC of the node at the top of the list.
  2. Disable RDP Printer mapping on the hosts to remove any chance of a printer driver causing issues with stability.  You can do this via local policy, group policy, or registry.  Google how to do this.
  3. Configure exclusions in your anti-virus software based on the following article:
    http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx
  4. Review the following article on performance tuning for Hyper-V servers:
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn567657.aspx

If you found this article to be helpful, please support us by visiting our sponsors’ websites.

Microsoft KMS Client Setup Keys Reference

Windows Server 2012 R2 and Windows 8.1 Client Setup Keys

 

Operating system edition KMS Client Setup Key
Windows 8.1 Professional GCRJD-8NW9H-F2CDX-CCM8D-9D6T9
Windows 8.1 Professional N HMCNV-VVBFX-7HMBH-CTY9B-B4FXY
Windows 8.1 Enterprise MHF9N-XY6XB-WVXMC-BTDCT-MKKG7
Windows 8.1 Enterprise N TT4HM-HN7YT-62K67-RGRQJ-JFFXW
Windows Server 2012 R2 Server Standard D2N9P-3P6X9-2R39C-7RTCD-MDVJX
Windows Server 2012 R2 Datacenter W3GGN-FT8W3-Y4M27-J84CP-Q3VJ9
Windows Server 2012 R2 Essentials KNC87-3J2TX-XB4WP-VCPJV-M4FWM

Windows Server 2012 and Windows 8 Client Setup Keys

 

Operating system edition KMS Client Setup Key
Windows 8 Professional NG4HW-VH26C-733KW-K6F98-J8CK4
Windows 8 Professional N XCVCF-2NXM9-723PB-MHCB7-2RYQQ
Windows 8 Enterprise 32JNW-9KQ84-P47T8-D8GGY-CWCK7
Windows 8 Enterprise N JMNMF-RHW7P-DMY6X-RF3DR-X2BQT
Windows Server 2012 BN3D2-R7TKB-3YPBD-8DRP2-27GG4
Windows Server 2012 N 8N2M2-HWPGY-7PGT9-HGDD8-GVGGY
Windows Server 2012 Single Language 2WN2H-YGCQR-KFX6K-CD6TF-84YXQ
Windows Server 2012 Country Specific 4K36P-JN4VD-GDC6V-KDT89-DYFKP
Windows Server 2012 Server Standard XC9B7-NBPP2-83J2H-RHMBY-92BT4
Windows Server 2012 MultiPoint Standard HM7DN-YVMH3-46JC3-XYTG7-CYQJJ
Windows Server 2012 MultiPoint Premium XNH6W-2V9GX-RGJ4K-Y8X6F-QGJ2G
Windows Server 2012 Datacenter 48HP8-DN98B-MYWDG-T2DCC-8W83P

Windows 7 and Windows Server 2008 R2

 

Operating system edition KMS Client Setup Key
Windows 7 Professional FJ82H-XT6CR-J8D7P-XQJJ2-GPDD4
Windows 7 Professional N MRPKT-YTG23-K7D7T-X2JMM-QY7MG
Windows 7 Professional E W82YF-2Q76Y-63HXB-FGJG9-GF7QX
Windows 7 Enterprise 33PXH-7Y6KF-2VJC9-XBBR8-HVTHH
Windows 7 Enterprise N YDRBP-3D83W-TY26F-D46B2-XCKRJ
Windows 7 Enterprise E C29WB-22CC8-VJ326-GHFJW-H9DH4
Windows Server 2008 R2 Web 6TPJF-RBVHG-WBW2R-86QPH-6RTM4
Windows Server 2008 R2 HPC edition TT8MH-CG224-D3D7Q-498W2-9QCTX
Windows Server 2008 R2 Standard YC6KT-GKW9T-YTKYR-T4X34-R7VHC
Windows Server 2008 R2 Enterprise 489J6-VHDMP-X63PK-3K798-CPX3Y
Windows Server 2008 R2 Datacenter 74YFP-3QFB3-KQT8W-PMXWJ-7M648
Windows Server 2008 R2 for Itanium-based Systems GT63C-RJFQ3-4GMB6-BRFB9-CB83V

Windows Vista and Windows Server 2008

 

Operating system edition KMS Client Setup Key
Windows Vista Business YFKBB-PQJJV-G996G-VWGXY-2V3X8
Windows Vista Business N HMBQG-8H2RH-C77VX-27R82-VMQBT
Windows Vista Enterprise VKK3X-68KWM-X2YGT-QR4M6-4BWMV
Windows Vista Enterprise N VTC42-BM838-43QHV-84HX6-XJXKV
Windows Web Server 2008 WYR28-R7TFJ-3X2YQ-YCY4H-M249D
Windows Server 2008 Standard TM24T-X9RMF-VWXK6-X8JC9-BFGM2
Windows Server 2008 Standard without Hyper-V W7VD6-7JFBR-RX26B-YKQ3Y-6FFFJ
Windows Server 2008 Enterprise YQGMW-MPWTJ-34KDK-48M3W-X4Q6V
Windows Server 2008 Enterprise without Hyper-V 39BXF-X8Q23-P2WWT-38T2F-G3FPG
Windows Server 2008 HPC RCTX3-KWVHP-BR6TB-RB6DM-6X7HP
Windows Server 2008 Datacenter 7M67G-PC374-GR742-YH8V4-TCBY3
Windows Server 2008 Datacenter without Hyper-V 22XQ2-VRXRG-P8D42-K34TD-G3QQC
Windows Server 2008 for Itanium-Based Systems 4DWFP-JF3DJ-B7DTH-78FJB-PDRHK
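
To actually apply one of these keys and point the machine at your KMS host, slmgr is the usual tool. A quick sketch from an elevated prompt (the KMS host name is a placeholder, and the key shown is the Server 2012 R2 Standard entry from the table above):

slmgr /ipk D2N9P-3P6X9-2R39C-7RTCD-MDVJX
slmgr /skms kms.example.com:1688
slmgr /ato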

If you found this article to be helpful, please support us by visiting our sponsors’ websites. 

Find HP Server Serial Numbers via iLO

I was trying to find some serial numbers of our HP servers the other day, but was not onsite to view the actual sticker with the Serial info. I searched for a way to find the serial number for warranty purposes, but everyone online said I needed the actual sticker. I found another way!

1. Login to server iLO
2. Click on the “Administration” Tab
3. Click on “Management” on the left navigation pane
4. Click on “View XML Reply”
5. The first part of the XML output is your serial number. Normally the serial starts with USE.
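
You may also be able to pull the same information remotely, since iLO exposes an XML status page. A rough PowerShell sketch (the iLO hostname is a placeholder, the SBSN field is where I have seen the serial reported, and -SkipCertificateCheck requires PowerShell 7; on Windows PowerShell 5 you would need another way around the self-signed iLO certificate):

# Query the iLO XML data page and pull the chassis serial number (PowerShell 7 syntax)
$resp = Invoke-WebRequest -Uri "https://ilo-host.example.com/xmldata?item=All" -SkipCertificateCheck
([xml]$resp.Content).RIMP.HSI.SBSN   # SBSN is assumed to hold the serial number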

If you found this article to be helpful, please support us by visiting our sponsors’ websites.