Fibre Channel vs. iSCSI

In the beginning there was Fibre Channel (FC), and it was good. If you wanted a true SAN — versus shared direct-attached SCSI storage — FC is what you got. But FC was terribly expensive, requiring dedicated switches and host bus adapters, and it was difficult to support in geographically distributed environments. Then, around six or seven years ago, iSCSI hit the SMB market in a big way and slowly began its climb into the enterprise.

The intervening time has seen a lot of ill-informed wrangling about which one is better. Sometimes, the iSCSI-vs.-FC debate has reached the level of a religious war.

This battle has been the result of two main factors: First, the storage market was split between big incumbent storage vendors who had made a heavy investment in FC, marketing against younger vendors with low-cost, iSCSI-only offerings. Second, admins tend to like what they know and distrust what they don’t. If you’ve run FC SANs for years, you are likely to believe that iSCSI is a slow, unreliable architecture and would sooner die than run a critical service on it. If you’ve run iSCSI SANs, you probably think FC SANs are massively expensive and a bear to set up and manage. Neither belief is entirely true.

Now that we’re about a year down the pike after the ratification of the FCoE (FC over Ethernet) standard, things aren’t much better. Many buyers still don’t understand the differences between the iSCSI and Fibre Channel standards. Though the topic could easily fill a book, here’s a quick rundown.

The fundamentals of FC
FC is a dedicated storage networking architecture that was standardized in 1994. Today, it is generally implemented with dedicated HBAs (host bus adapters) and switches — which is the main reason FC is considered more expensive than other storage networking technologies.

As for performance, it’s hard to beat the low latency and high throughput of FC, because FC was built from the ground up to handle storage traffic. The processing cycles required to generate and interpret FCP (Fibre Channel Protocol) frames are offloaded entirely to dedicated low-latency HBAs. This frees the server’s CPU to handle applications rather than talk to storage.

FC is available in 1Gbps, 2Gbps, 4Gbps, 8Gbps, 10Gbps, and 20Gbps speeds. Switches and devices that support 1Gbps, 2Gbps, 4Gbps, and 8Gbps speeds are generally backward compatible with their slower brethren, while the 10Gbps and 20Gbps devices are not, because they use a different frame encoding mechanism (these two speeds are generally used for interswitch links).

FCP itself is also optimized to handle storage traffic. Unlike protocols that run on top of TCP/IP, FCP is a significantly thinner, single-purpose protocol that generally results in lower switching latency. It also includes a built-in flow control mechanism that ensures data isn’t sent to a device (either storage or server) that isn’t ready to accept it. In my experience, you can’t achieve the same low interconnect latency with any other storage protocol in existence today.

Yet FC and FCP have drawbacks — and not just high cost. One is that supporting storage interconnectivity over long distances can be expensive. If you want to configure replication to a secondary array at a remote site, either you’re lucky enough to afford dark fiber (if it’s available) or you’ll need to purchase expensive FCIP distance gateways.

In addition, managing an FC infrastructure requires a specialized skill set, which may make administrator experience an issue. For example, FC zoning makes heavy use of long hexadecimal World Wide Node and Port Names (similar to MAC addresses in Ethernet), which can be a pain to manage if frequent changes are made to the fabric.

The nitty-gritty on iSCSI
iSCSI is a storage networking protocol built on top of the TCP/IP networking protocol. Ratified as a standard in 2004, iSCSI’s greatest claim to fame is that it runs over the same network equipment that runs the rest of the enterprise network. It does not specifically require any extra hardware, which makes it comparatively inexpensive to implement.

From a performance perspective, iSCSI lags behind FC/FCP. But when iSCSI is implemented properly, the difference boils down to a few milliseconds of additional latency due to the overhead required to encapsulate SCSI commands within the general-purpose TCP/IP networking protocol. This can make a huge difference for extremely high transactional I/O loads and is the source of most claims that iSCSI is unfit for use in the enterprise. Such workloads are rare outside of the Fortune 500, however, so in most cases the performance delta is much narrower.

iSCSI also places a larger load on the CPU of the server. Though hardware iSCSI HBAs do exist, most iSCSI implementations use a software initiator — essentially loading the server’s processor with the task of creating, sending, and interpreting storage commands. This also has been used as an effective argument against iSCSI. However, given the fact that servers today often ship with significantly more CPU resources than most applications can hope to use, the cases where this makes any kind of substantive difference are few and far between.
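
To give a feel for how simple the software-initiator side can be, here’s a minimal sketch using the Linux open-iscsi initiator. The portal address and target IQN below are hypothetical placeholders, not values from any particular array:

iscsiadm -m discovery -t sendtargets -p 192.168.100.50
iscsiadm -m node -T iqn.2001-05.com.example:target0 -p 192.168.100.50 --login

The first command asks the array’s discovery portal which targets it offers; the second logs in to one of them, after which the LUN shows up on the host as an ordinary block device.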

iSCSI can hold its own with FC in terms of throughput through the use of multiple 1Gbps Ethernet or 10Gbps Ethernet links. It also benefits from being built on TCP/IP in that it can be used over great distances through existing WAN links. This usage scenario is usually limited to SAN-to-SAN replication, but is significantly easier and less expensive to implement than FC-only alternatives.

Aside from the savings of reduced infrastructure costs, many enterprises find iSCSI much easier to deploy. Much of the skill set required to implement iSCSI overlaps with that of general network operation. This makes iSCSI extremely attractive to smaller enterprises with limited IT staffing and largely explains its popularity in that segment.

This ease of deployment is a double-edged sword. Because iSCSI is easy to implement, it is also easy to implement incorrectly. Failing to use dedicated network interfaces, to ensure support for switching features such as flow control and jumbo frames, or to implement multipath I/O are common mistakes that can result in lackluster performance. Stories abound on Internet forums of unsuccessful iSCSI deployments that could have been avoided had these basics been covered.
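
As one example of the kind of setting that gets missed, jumbo frames on an ESXi host have to be enabled on both the vSwitch and the iSCSI VMkernel port, roughly like this (vSwitch1 and vmk1 are placeholder names for a dedicated iSCSI switch and port; this is a sketch, not a complete checklist):

esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

The physical switch ports in the path need a matching MTU as well, or large frames will quietly fail end to end.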

Fibre Channel over IP
FCIP (Fibre Channel over IP) is a niche protocol that was ratified in 2004. It is a standard for encapsulating FCP frames within TCP/IP packets so that they can be shipped over a TCP/IP network. It is almost exclusively used for bridging FC fabrics at multiple sites to enable SAN-to-SAN replication and backup over long distances.

Due to the inefficiency of fragmenting large FC frames into multiple TCP/IP packets (WAN circuits typically don’t support packets over 1,500 bytes), it is not built to be low latency. Instead, it is built to allow geographically separated Fibre Channel fabrics to be linked when dark fiber isn’t available to do so with native FCP. FCIP is almost always found in FC distance gateways — essentially FC/FCP-to-FCIP bridges — and is rarely if ever used by storage devices as a server-to-storage access method.

Fibre Channel over Ethernet
FCoE (Fibre Channel over Ethernet) is the newest storage networking protocol of the bunch. Ratified as a standard in June of last year, FCoE is the Fibre Channel community’s answer to the benefits of iSCSI. Like iSCSI, FCoE uses standard multipurpose Ethernet networks to connect servers with storage. Unlike iSCSI, it does not run over TCP/IP — it is its own Ethernet protocol occupying a space next to IP in the OSI model.

This difference is important to understand, as it has both good and bad results. The good is that, even though FCoE runs over the same general-purpose switches that iSCSI does, it experiences significantly lower end-to-end latency because no TCP/IP header needs to be created and interpreted. The bad is that it cannot be routed over a TCP/IP WAN. Like FC, FCoE can run only over a local network and requires a bridge to connect to a remote fabric.

On the server side, most FCoE implementations make use of 10Gbps Ethernet FCoE CNAs (Converged Network Adapters), which can act as both network adapters and FCoE HBAs — offloading the work of talking to storage much as FC HBAs do. This is an important point, as the requirement for a separate FC HBA was often a good reason to avoid FC altogether. As time goes on, servers may commonly ship with FCoE-capable CNAs built in, essentially removing this cost factor entirely.

FCoE’s primary benefits can be realized when it is implemented as an extension of a pre-existing Fibre Channel network. Despite having a different physical transport mechanism, which requires a few extra steps to implement, FCoE can use the same management tools as FC, and much of the experience gained in operating an FC fabric can be applied to its configuration and maintenance.

Putting it all together
There’s no doubt that the debate between FC and iSCSI will continue to rage. Both architectures are great for certain tasks. However, saying that FC is good for the enterprise while iSCSI is good for SMBs is no longer an acceptable answer. The availability of FCoE goes a long way toward eating into iSCSI’s cost and convergence argument, while the increasing prevalence of 10Gbps Ethernet and ever-increasing server CPU performance eats into FC’s performance argument.

Whatever technology you decide to implement for your organization, try not to get sucked into the religious war and do your homework before you buy. You may be surprised by what you find.


Understanding Dell DPACK

The Dell DPACK (Dell Performance Analysis Collection Kit) is a unique agentless tool that collects performance statistics from servers (physical and virtual) and displays them in an easy-to-read report. Key metrics in this report include throughput, average I/O size, IOPS, latency, read/write ratio, peak queue depth, total capacity, CPU and memory usage, and much more. Running this tool against your servers adds no overhead and provides a wealth of information.

See this sample report:

Dell DPACK Report

Data collected through this tool is crucial in sizing SAN storage for your organization.
If you would like a free report on what your environment looks like, along with recommendations, please contact Netwize here and request this free service: http://www.netwize.net/contact-us/


How many VMs per DataStore should I have?

Although, thanks to the scalability enhancements of VMFS-5, there are no hard-and-fast rules for how many virtual machines can be placed on a datastore, a good conservative approach is to place between 15 and 25 virtual machines on each.

The reason for keeping a limited number of virtual machines and/or VMDK files per datastore is the potential for I/O contention, queue-depth contention, or legacy SCSI reservation conflicts, any of which can degrade system performance.

This is also why I suggest limiting each datastore to 500GB-700GB: it naturally caps the total number of virtual machines that can be placed on it.
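
To put rough numbers on that: assuming an average VM consumes 25GB to 35GB of datastore capacity (a hypothetical but common ballpark), a 600GB datastore fills up somewhere around 17 to 24 virtual machines, which lands you inside the 15-25 guideline without any manual policing.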


Microsoft KMS Client Setup Keys Reference

Windows Server 2012 R2 and Windows 8.1 Client Setup Keys


Operating system edition KMS Client Setup Key
Windows 8.1 Professional GCRJD-8NW9H-F2CDX-CCM8D-9D6T9
Windows 8.1 Professional N HMCNV-VVBFX-7HMBH-CTY9B-B4FXY
Windows 8.1 Enterprise MHF9N-XY6XB-WVXMC-BTDCT-MKKG7
Windows 8.1 Enterprise N TT4HM-HN7YT-62K67-RGRQJ-JFFXW
Windows Server 2012 R2 Server Standard D2N9P-3P6X9-2R39C-7RTCD-MDVJX
Windows Server 2012 R2 Datacenter W3GGN-FT8W3-Y4M27-J84CP-Q3VJ9
Windows Server 2012 R2 Essentials KNC87-3J2TX-XB4WP-VCPJV-M4FWM

Windows Server 2012 and Windows 8 Client Setup Keys


Operating system edition KMS Client Setup Key
Windows 8 Professional NG4HW-VH26C-733KW-K6F98-J8CK4
Windows 8 Professional N XCVCF-2NXM9-723PB-MHCB7-2RYQQ
Windows 8 Enterprise 32JNW-9KQ84-P47T8-D8GGY-CWCK7
Windows 8 Enterprise N JMNMF-RHW7P-DMY6X-RF3DR-X2BQT
Windows Server 2012 BN3D2-R7TKB-3YPBD-8DRP2-27GG4
Windows Server 2012 N 8N2M2-HWPGY-7PGT9-HGDD8-GVGGY
Windows Server 2012 Single Language 2WN2H-YGCQR-KFX6K-CD6TF-84YXQ
Windows Server 2012 Country Specific 4K36P-JN4VD-GDC6V-KDT89-DYFKP
Windows Server 2012 Server Standard XC9B7-NBPP2-83J2H-RHMBY-92BT4
Windows Server 2012 MultiPoint Standard HM7DN-YVMH3-46JC3-XYTG7-CYQJJ
Windows Server 2012 MultiPoint Premium XNH6W-2V9GX-RGJ4K-Y8X6F-QGJ2G
Windows Server 2012 Datacenter 48HP8-DN98B-MYWDG-T2DCC-8W83P

Windows 7 and Windows Server 2008 R2


Operating system edition KMS Client Setup Key
Windows 7 Professional FJ82H-XT6CR-J8D7P-XQJJ2-GPDD4
Windows 7 Professional N MRPKT-YTG23-K7D7T-X2JMM-QY7MG
Windows 7 Professional E W82YF-2Q76Y-63HXB-FGJG9-GF7QX
Windows 7 Enterprise 33PXH-7Y6KF-2VJC9-XBBR8-HVTHH
Windows 7 Enterprise N YDRBP-3D83W-TY26F-D46B2-XCKRJ
Windows 7 Enterprise E C29WB-22CC8-VJ326-GHFJW-H9DH4
Windows Server 2008 R2 Web 6TPJF-RBVHG-WBW2R-86QPH-6RTM4
Windows Server 2008 R2 HPC edition TT8MH-CG224-D3D7Q-498W2-9QCTX
Windows Server 2008 R2 Standard YC6KT-GKW9T-YTKYR-T4X34-R7VHC
Windows Server 2008 R2 Enterprise 489J6-VHDMP-X63PK-3K798-CPX3Y
Windows Server 2008 R2 Datacenter 74YFP-3QFB3-KQT8W-PMXWJ-7M648
Windows Server 2008 R2 for Itanium-based Systems GT63C-RJFQ3-4GMB6-BRFB9-CB83V

Windows Vista and Windows Server 2008


Operating system edition KMS Client Setup Key
Windows Vista Business YFKBB-PQJJV-G996G-VWGXY-2V3X8
Windows Vista Business N HMBQG-8H2RH-C77VX-27R82-VMQBT
Windows Vista Enterprise VKK3X-68KWM-X2YGT-QR4M6-4BWMV
Windows Vista Enterprise N VTC42-BM838-43QHV-84HX6-XJXKV
Windows Web Server 2008 WYR28-R7TFJ-3X2YQ-YCY4H-M249D
Windows Server 2008 Standard TM24T-X9RMF-VWXK6-X8JC9-BFGM2
Windows Server 2008 Standard without Hyper-V W7VD6-7JFBR-RX26B-YKQ3Y-6FFFJ
Windows Server 2008 Enterprise YQGMW-MPWTJ-34KDK-48M3W-X4Q6V
Windows Server 2008 Enterprise without Hyper-V 39BXF-X8Q23-P2WWT-38T2F-G3FPG
Windows Server 2008 HPC RCTX3-KWVHP-BR6TB-RB6DM-6X7HP
Windows Server 2008 Datacenter 7M67G-PC374-GR742-YH8V4-TCBY3
Windows Server 2008 Datacenter without Hyper-V 22XQ2-VRXRG-P8D42-K34TD-G3QQC
Windows Server 2008 for Itanium-Based Systems 4DWFP-JF3DJ-B7DTH-78FJB-PDRHK
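
To actually use one of these keys, install it with the built-in slmgr script on the client and then activate against your KMS host. For example, for Windows Server 2012 R2 Datacenter, from an elevated command prompt:

slmgr /ipk W3GGN-FT8W3-Y4M27-J84CP-Q3VJ9
slmgr /ato

If the client can’t locate your KMS host through DNS auto-discovery, point it there first with slmgr /skms your-kms-host:1688 (substituting your own host name; 1688 is the default KMS port).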


How to do a Firmware Upgrade on your Exagrid

Need to do a firmware upgrade on your ExaGrid? It’s a pretty straightforward process. The most difficult part is getting the download link for the firmware.
I have a link for version 4.6.4 here: http://supportweb.exagrid.com/downloads/software/4.6.4/4.6.4.P20/install.4.6.4.1038.P20.jar

After you have the firmware downloaded (it will be a .jar file), log in to your ExaGrid through Internet Explorer. Click “Manage,” then “Software Upgrade.”

From here, you can upload your downloaded file and apply it. A firmware update takes around 45 minutes to an hour.

You can view the upgrade in progress by refreshing the web page.


Find HP Server Serial Numbers via iLO

I was trying to find the serial numbers of some of our HP servers the other day but was not onsite to view the actual sticker with the serial info. I searched for a way to find the serial number for warranty purposes, but everyone online said I needed the actual sticker. I found another way!

1. Login to server iLO
2. Click on the “Administration” Tab
3. Click on “Management” on the left navigation pane
4. Click on “View XML Reply”
5. The first part of the XML output is your serial number. Normally the serial starts with USE.
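
If you’d rather script it than click through the GUI, many iLO firmware versions expose that same XML reply over plain HTTP without authentication. A hedged example, with a placeholder address:

curl http://ilo-address.example.com/xmldata?item=All

On the iLO versions I’ve seen, the serial number comes back in the SBSN tag and the product name in SPN, but verify the exact tag names against your own firmware.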


Find Server Service Tag via VMware

If you need to find the service tag of an ESXi server without physically being present at the server, try this.

Enable SSH on the host and use the following command:

/sbin/esxcli hardware platform get
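
On the hosts I’ve run this against, the output looks roughly like the following; the values here are placeholders, and the exact field list varies a bit by ESXi version:

Platform Get:
   UUID: 4c4c4544-0042-4310-8054-b4c04f435731
   Product Name: PowerEdge R720
   Vendor Name: Dell Inc.
   Serial Number: ABC1234
   IPMI Supported: true

The Serial Number field is the Dell Service Tag.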


There you go!


Reclaim “white space” – HP Lefthand SAN

Post from TeleData:

This is always one of the challenges (and limitations) of thin provisioning.

The technology used to provide thin provisioning in SAN/iQ is really more of a “high-water mark.” Once a block has been marked as written, you cannot recover unused space by deleting data from the volume. There is no communication facility through which the OS could tell the SAN, “Hey, that block of data we were using yesterday is now empty, and you can have it back.” Since there is no way to tell the SAN the space is empty, blocks of data, once written to, cannot be reclaimed.

Your only option is to create a NEW volume, and migrate the data to the new volume, and then delete the old volume.

This can be challenging with directly mounted native iSCSI volumes, but if you are using a virtual machine (with virtual disks), you can reclaim storage by creating a new VMFS datastore, using sdelete to zero out the unused space (within the Windows OS), and then performing a storage migration, choosing “thin” provisioning for the virtual disk.
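
As a sketch of the zeroing step, using the Sysinternals SDelete tool inside the Windows guest (D: standing in for whichever volume you’re reclaiming):

sdelete.exe -z d:

The -z switch writes zeros across the free space. Expect a thin-provisioned VMDK to balloon to full size while it runs; the subsequent storage migration to a thin disk is what actually sheds the zeroed blocks.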

While this still requires a new VMFS volume, the virtualized disk can be left intact, avoiding any reconfiguration within the Windows server itself.

The result would NOT be different had you chosen thick instead of thin provisioning. The blocks are still marked as used and a “high-water mark” is still maintained. The only difference is that when you mark a volume “thick,” SAN/iQ reserves the entire space up front, and that space cannot be used to provision other volumes/snapshots.

This is why you can dynamically switch between thin and thick provisioning within the CMC.


How To Reset an EqualLogic PS SAN

1. Connect to the SAN via a serial cable to the active storage processor.

2. Enter the group login and password (grpadmin/grpadmin is default).

3. Enter the command “reset” and read the warning.

4. If you want to reset the SAN, enter “DeleteAllMyDataNow”.

5. The SAN will then reset and reboot, ready for initial configuration.
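
For reference, the whole exchange over the serial console is short. An illustrative session, with prompts and warning text paraphrased from memory rather than verbatim:

login: grpadmin
Password: grpadmin
group1> reset
WARNING: this will delete all group configuration and all volume data.
To proceed, type DeleteAllMyDataNow: DeleteAllMyDataNow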


Dell Compellent Thin Import

This is a step-by-step guide I found at http://workinghardinit.wordpress.com/tag/thin-import/
He did a great job outlining the Compellent Thin Import process.


A Hidden Gem in Compellent

As you might well know, I’m in the process of doing a multisite SAN replacement project to modernize the infrastructure at an undisclosed organization. The purpose is to have a modern, feature-rich, reliable, and affordable storage solution that can provide the Windows Server 2012 rollout with modern features (ODX, SMI-S, …).

One of the nifty things you can do with a Compellent SAN is migrate LUNs from the old SAN to the Compellent SAN with absolutely minimal downtime. For us this has proven a really good way of migrating away from two HP EVA 8000 SANs to our new Dell Compellent environment. We use it to migrate file servers, Exchange 2010 DAG member servers (zero downtime), Hyper-V clusters, SQL Servers, etc. It’s nothing less than a hidden gem not enough people are aware of, and it comes with the SAN. I was told by some that it was hard and not worth the effort… well, clearly they never used it and as such don’t know it. Or they work for competitors and want to keep it hidden.

The Process

You have to set up the zoning on all SANs involved to all fabrics. This needs to be done right, of course, but I won’t be discussing it here. I want to focus on the process of what you can do. This is not a comprehensive how-to; it depends on your environment, and I can’t write you a migration manual without digging into that. And I can’t do that for free anyway; I need to eat and pay bills as well.

Basically, you add your target Compellent SAN as a host to your legacy SAN (in our case HP EVA 8000) with an operating system type of “Unknown”. This will provide us with a path to expose EVA LUNs to our Compellent SAN.


Depending on what server LUNs you are migrating, this is when you might have some short downtime for that LUN. If you have shared-nothing storage, like in an Exchange 2010 or SQL Server 2012 DAG, you can do this without any downtime at all.

Stop any I/O to the LUN if you can (suspend copies, shut down databases and virtual machines) and take CSVs or disks offline. Do what is needed to prevent any application and data issues; this varies.

We then unpresent the server’s LUN on the legacy SAN.


After a rescan of the disks on the server you’ll see that disk/LUN disappear.

We then present this same LUN to the Compellent host we added above.


We then “Scan for Disks” in the Compellent Controller GUI. This will detect the LUN as an unassigned disk. That unassigned disk can be mapped to an “External Device,” which we name after the LUN to keep things clear (“Classify Disk as External Device”).


Then we right-click that External Device and choose “Restore Volume from External Device”.


This kicks off replication from the mapped EVA LUN to the Compellent target LUN. We can now map that replica to the host.


After this, rescan the disks on the server and, voila, the server sees the LUN again. Bring the disk/CSV back online and you’re good to go.


All the downtime you’ll have is at a well-defined moment in time that you choose. You can do this one LUN at a time or multiple LUNs at once. Just don’t overdo it with the number of concurrent migrations; keep an eye on the CPU usage of your controllers.

After the replication has completed, the Compellent SAN will transparently map the destination LUN to the server and remove the mapping for the replica.


Next, the mirror is reversed. That means that while this replica exists, the data written to the Compellent LUN is also mirrored to the old SAN LUN until you break the mirror.


Once you decide you’re done replicating and don’t want to keep both LUNs in sync anymore, you break the mirror.


You delete the remaining replica disk and release the external disk.


Now you unpresent the LUN from the Compellent host on your old SAN.


After a rescan, your disks will show up as down under unassigned disks, and you can delete them there. This completes the cleanup after a LUN migration.


Conclusion

When set up properly, it works very well. Sure, it takes some experimenting to deal with some intricacies, but once you figure all that out, you’re good to go and ready to deal with any hiccups that might occur. The main takeaway is that this provides minimal downtime at a moment you choose. You get this out of the box with your Compellent. That’s a pretty good deal, I say!
