
Brocade Fibre Channel Zoning

So you want to learn how to zone your Fibre Channel switches? This post will describe how to do zoning on any Brocade Fibre Channel switch.

After installing your FC switch and assigning it an IP address, log in to it by browsing to that IP address. The web interface requires a specific version of Java, and I have found it works better in Firefox than in any other browser.

Once logged into the switch, you should be presented with the main Switch Admin page, which will look something like this (each model varies slightly):

Click Configure at the top of the screen and choose “Zone Admin”. A new window will appear and look like this:

Here is where all the magic happens. In FC zoning, the goal is to create “VLAN-like” objects called zones that contain the WWNs of your HBAs and storage.

First off, let’s zone in your SAN. Make sure the only cables plugged into your Fibre Channel switch are those from your SAN. (This will make explaining things easier.)
We need to create an alias for the WWNs of your SAN. To do this, click on the Alias tab and select the “New Alias” button.

Give your alias a descriptive name, like SAN_WWNs_Alias.

Expand the WWNs on the left-hand side. You’ll want to click the + on each WWN to view the second-level object. Add those objects to the new alias you created. (See the image above or below for reference.)
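If you prefer the switch’s command line to the Java GUI, the same alias can be created over an SSH or telnet session with the standard Fabric OS zoning commands. This is just a sketch; the alias name and WWNs below are placeholders, so substitute the second-level WWNs you see on your own switch:

alicreate "SAN_WWNs_Alias", "50:00:00:00:00:00:00:01; 50:00:00:00:00:00:00:02"
alishow "SAN_WWNs_Alias"

The second command simply echoes the alias back so you can verify its members.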

After you have the SAN alias created and have added the WWNs, click on the Zone tab.
Now we will create a new zone and add the alias we just created to it.
Click the “New Zone” button and give it a name like “SAN_WWNs_Zone”.
Expand the aliases on the left and add the alias you created to this zone.
If you have a two-port FC card, there should only be two WWNs per switch. Repeat this process on your other switch.
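The CLI equivalent, again assuming the placeholder names above, is a single zonecreate followed by a zoneshow to verify:

zonecreate "SAN_WWNs_Zone", "SAN_WWNs_Alias"
zoneshow "SAN_WWNs_Zone"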

Now for the servers: we will want to plug the servers in one at a time, zone each server, and then plug in the next one. This makes it easy to identify which WWN belongs to which server.
When you plug in a server into the FC switch, you will see a new WWN.

You need to go to the Alias tab, create a new alias, and name it something like “ServerName”.
Expand the WWN and add the Second-Level WWN object to this Alias.

Next, go to the Zone tab and create a new zone named something like “Servername+SAN_WWNs”.
Add the server alias you created plus the “SAN_WWNs_Alias”, so the zone contains both the server and the SAN.
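From the CLI, this is one alicreate and one zonecreate per server. The server WWN and names below are placeholders; note that Fabric OS object names only allow letters, digits, and underscores, so use an underscore where the GUI name above uses “+”:

alicreate "Server1", "21:00:00:00:00:00:00:0a"
zonecreate "Server1_SAN_WWNs", "Server1; SAN_WWNs_Alias"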

Finally, click on the Zone Config tab and create a new zone config. Add all the zones you created into this zone config. This is basically one big file with all your settings.

Click Save Config at the top and wait about 30 seconds for the changes to be saved. You’ll see a success message in the bottom log screen.
Then select Enable Config. Wait another 30 seconds for the settings to be enabled and take effect.
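The CLI versions of those two buttons are cfgsave and cfgenable. Assuming the placeholder names used so far, creating, saving, and enabling the config looks like this:

cfgcreate "Production_Cfg", "SAN_WWNs_Zone; Server1_SAN_WWNs"
cfgsave
cfgenable "Production_Cfg"

cfgsave writes the defined configuration to the switch, and cfgenable pushes the named config out as the active zoning for the fabric.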


Brocade Fibre Channel Zoning – Dell Compellent

There aren’t many good step-by-step zoning documents out on the internet, so I hope this post will be useful. It will explain how to do Fibre Channel zoning using any type of Brocade Fibre Channel switch. In this case, I am zoning in a Dell Compellent SAN, but these steps apply to basically any type of SAN.

Fibre Channel Zoning for Dell Compellent

After installing your FC switch, log in to it by going to its IP address in a web browser. The web interface requires a specific version of Java, and I have found it works better in Firefox than in any other browser.

Once logged into the switch, you should be presented with the main Switch Admin page, which will look something like this (each model varies slightly):

Click Configure at the top of the screen and choose “Zone Admin”. A new window will appear and look like this:

Here is where all the magic happens. In FC zoning, the goal is to create “VLAN-like” objects called zones that contain the WWNs of your server HBAs and storage.

Since I am configuring this for a Compellent SAN, the first thing I need to do is create an alias for all the physical WWNs. To do this, click on the Alias tab and select the “New Alias” button.

Give your alias a descriptive name, like SAN_Phy_WWNs_Alias.

Expand the WWNs on the left-hand side. Keep this window on the right side of your screen, with the Compellent Storage Center GUI open on the left-hand side and the Fibre Channel IO cards expanded so you can see their WWNs.

Add all the physical WWNs you see in the switch that match up with the physical WWNs on the Compellent SAN. (Physical WWNs on the Compellent are the green objects.)
If you have a two-port card, you will only see two physical WWNs per switch.
After you have added the two physical WWNs to the alias you created, you will need to do the exact same thing on your other switch, only this time using the OTHER Compellent physical WWNs you see in the list.

When finished, create a new alias and call it something like “SAN_Virt_WWNs_Alias”.
This time, follow the same steps as above but add the virtual WWNs of the Compellent to this alias. The virtual WWNs are the ones in blue. Again, if you have a two-port FC card, there should only be two WWNs per switch. Repeat this process on your other switch for the other virtual WWNs.

Next we create two zones: one zone that includes the alias of the physical WWNs, and one zone that contains the alias of the virtual WWNs. To do this, click on the Zone tab and select “New Zone”.

Name the zones something like “SAN_Virt_WWNs” and “SAN_Phys_WWNs”.
In one zone add JUST the “SAN_Virt_WWNs_Alias”, and in the other zone add JUST the “SAN_Phy_WWNs_Alias”.

Now for the servers: when you plug a server into the FC switch, you will see a new WWN.

You need to go to the Alias tab, create a new alias, and name it something like “ServerName”.
Expand the WWN and add the Second-Level WWN object to this Alias.

Next, go to the Zone tab and create a new zone named something like “Servername+SAN_WWNs”.
Add the server alias you created plus the “SAN_Virt_WWNs_Alias”.
Make sure each server you connect to the SAN gets its own zone containing its server alias plus the SAN’s virtual WWNs alias.

Finally, click on the Zone Config tab and create a new zone config. Add all the zones you created into this zone config. This is basically one big file with all your settings.

Click Save Config at the top and wait about 30 seconds for the changes to be saved. You’ll see a success message in the bottom log screen.
Then select Enable Config. Wait another 30 seconds for the settings to be enabled and take effect.


To recap, these are the aliases and zones you will need to create:

SAN_Phy_WWNs_Alias: Alias
SAN_Virt_WWNs_Alias: Alias
ServerName: Alias (one per server)

SAN_Phys_WWNs: Zone
SAN_Virt_WWNs: Zone

ServerName+SAN_WWNs: Zone (one per server)

Add all those to your zone config.
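For reference, here is a sketch of the entire Compellent zoning configuration as it would be entered from the Fabric OS CLI on one switch. All WWNs below are placeholders (read the real green physical and blue virtual WWNs from the Storage Center GUI), and the zone names use underscores because Fabric OS object names do not allow “+”:

alicreate "SAN_Phy_WWNs_Alias", "50:00:d3:10:00:00:00:01; 50:00:d3:10:00:00:00:02"
alicreate "SAN_Virt_WWNs_Alias", "50:00:d3:10:00:00:00:03; 50:00:d3:10:00:00:00:04"
alicreate "Server1", "21:00:00:00:00:00:00:0a"
zonecreate "SAN_Phys_WWNs", "SAN_Phy_WWNs_Alias"
zonecreate "SAN_Virt_WWNs", "SAN_Virt_WWNs_Alias"
zonecreate "Server1_SAN_WWNs", "Server1; SAN_Virt_WWNs_Alias"
cfgcreate "Compellent_Cfg", "SAN_Phys_WWNs; SAN_Virt_WWNs; Server1_SAN_WWNs"
cfgsave
cfgenable "Compellent_Cfg"

Repeat the alicreate/zonecreate pair for each additional server, add the new zone to the config with cfgadd, then run cfgsave and cfgenable again.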


Fibre Channel vs iSCSI

In the beginning there was Fibre Channel (FC), and it was good. If you wanted a true SAN — versus shared direct-attached SCSI storage — FC was what you got. But FC was terribly expensive, requiring dedicated switches and host bus adapters, and it was difficult to support in geographically distributed environments. Then, around six or seven years ago, iSCSI hit the SMB market in a big way and slowly began its climb into the enterprise.

The intervening time has seen a lot of ill-informed wrangling about which one is better. Sometimes, the iSCSI-vs.-FC debate has reached the level of a religious war.

This battle has been a result of two main factors: First, the storage market was split between big incumbent storage vendors who had made a heavy investment in FC marketing against younger vendors with low-cost, iSCSI-only offerings. Second, admins tend to like what they know and distrust what they don’t. If you’ve run FC SANs for years, you are likely to believe that iSCSI is a slow, unreliable architecture and would sooner die than run a critical service on it. If you’ve run iSCSI SANs, you probably think FC SANs are massively expensive and a bear to set up and manage. Neither is entirely true.

Now that we’re about a year down the pike after the ratification of the FCoE (FC over Ethernet) standard, things aren’t much better. Many buyers still don’t understand the differences between the iSCSI and Fibre Channel standards. Though the topic could easily fill a book, here’s a quick rundown.

The fundamentals of FC
FC is a dedicated storage networking architecture that was standardized in 1994. Today, it is generally implemented with dedicated HBAs (host bus adapters) and switches — which is the main reason FC is considered more expensive than other storage networking technologies.

As for performance, it’s hard to beat the low latency and high throughput of FC, because FC was built from the ground up to handle storage traffic. The processing cycles required to generate and interpret FCP (Fibre Channel Protocol) frames are offloaded entirely to dedicated low-latency HBAs. This frees the server’s CPU to handle applications rather than talk to storage.

FC is available in 1Gbps, 2Gbps, 4Gbps, 8Gbps, 10Gbps, and 20Gbps speeds. Switches and devices that support 1Gbps, 2Gbps, 4Gbps, and 8Gbps speeds are generally backward compatible with their slower brethren, while the 10Gbps and 20Gbps devices are not, because they use a different frame encoding mechanism (these two are generally used for interswitch links).

In addition, FCP itself is optimized for storage traffic. Unlike protocols that run on top of TCP/IP, FCP is a significantly thinner, single-purpose protocol that generally results in lower switching latency. It also includes a built-in flow control mechanism that ensures data isn’t sent to a device (either storage or server) that isn’t ready to accept it. In my experience, you can’t achieve the same low interconnect latency with any other storage protocol in existence today.

Yet FC and FCP have drawbacks — and not just high cost. One is that supporting storage interconnectivity over long distances can be expensive. If you want to configure replication to a secondary array at a remote site, either you’re lucky enough to afford dark fiber (if it’s available) or you’ll need to purchase expensive FCIP distance gateways.

In addition, managing a FC infrastructure requires a specialized skill set, which may make administrator experience an issue. For example, FC zoning makes heavy use of long hexadecimal World Wide Node and Port names (similar to MAC addresses in Ethernet), which can be a pain to manage if frequent changes are made to the fabric.

The nitty-gritty on iSCSI
iSCSI is a storage networking protocol built on top of the TCP/IP networking protocol. Ratified as a standard in 2004, iSCSI’s greatest claim to fame is that it runs over the same network equipment as the rest of the enterprise network. It does not specifically require any extra hardware, which makes it comparatively inexpensive to implement.

From a performance perspective, iSCSI lags behind FC/FCP. But when iSCSI is implemented properly, the difference boils down to a few milliseconds of additional latency due to the overhead required to encapsulate SCSI commands within the general-purpose TCP/IP networking protocol. This can make a huge difference for extremely high transactional I/O loads and is the source of most claims that iSCSI is unfit for use in the enterprise. Such workloads are rare outside of the Fortune 500, however, so in most cases the performance delta is much narrower.

iSCSI also places a larger load on the server’s CPU. Though hardware iSCSI HBAs do exist, most iSCSI implementations use a software initiator — essentially tasking the server’s processor with creating, sending, and interpreting storage commands. This has also been used as an effective argument against iSCSI. However, given that servers today often ship with significantly more CPU resources than most applications can hope to use, the cases where this makes any kind of substantive difference are few and far between.
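As a concrete illustration of how lightweight a software initiator is to set up, on a Linux host the open-iscsi tools can discover and log in to a target in two commands. The portal IP and target IQN here are placeholders:

iscsiadm -m discovery -t sendtargets -p 192.168.10.50
iscsiadm -m node -T iqn.2002-03.com.example:target0 -p 192.168.10.50 --login

Every SCSI command on that session is then built and parsed by the host CPU, which is exactly the overhead the hardware-HBA argument is about.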

iSCSI can hold its own with FC in terms of throughput through the use of multiple 1Gbps Ethernet or 10Gbps Ethernet links. It also benefits from running on TCP/IP in that it can be used over great distances through existing WAN links. This usage scenario is usually limited to SAN-to-SAN replication, but it is significantly easier and less expensive to implement than the FC-only alternatives.

Aside from savings through reduced infrastructural costs, many enterprises find iSCSI much easier to deploy. Much of the skill set required to implement iSCSI overlaps with that of general network operation. This makes iSCSI extremely attractive to smaller enterprises with limited IT staffing and largely explains its popularity in that segment.

This ease of deployment is a double-edged sword. Because iSCSI is easy to implement, it is also easy to implement incorrectly. Failing to use dedicated network interfaces, to ensure support for switching features such as flow control and jumbo framing, or to implement multipath I/O are common mistakes that can result in lackluster performance. Internet forums abound with stories of unsuccessful iSCSI deployments that trace back to exactly these factors.
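As a sketch of the kind of sanity check that catches one of these mistakes, here is how you might verify on a Linux initiator that jumbo frames are actually working end to end (the interface name and target IP are placeholders):

ip link set dev eth1 mtu 9000
ping -M do -s 8972 192.168.10.50

The ping sends 8,972-byte payloads with the don’t-fragment flag set (8,972 bytes of data plus 8 bytes of ICMP header plus 20 bytes of IP header equals 9,000 bytes). If the switch or target isn’t really passing jumbo frames, the ping fails instead of silently fragmenting.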

Fibre Channel over IP
FCIP (Fibre Channel over IP) is a niche protocol that was ratified in 2004. It is a standard for encapsulating FCP frames within TCP/IP packets so that they can be shipped over a TCP/IP network. It is almost exclusively used for bridging FC fabrics at multiple sites to enable SAN-to-SAN replication and backup over long distances.

Due to the inefficiency of fragmenting large FC frames into multiple TCP/IP packets (WAN circuits typically don’t support packets over 1,500 bytes), it is not built to be low latency. Instead, it is built to allow geographically separated Fibre Channel fabrics to be linked when dark fiber isn’t available to do so with native FCP. FCIP is almost always found in FC distance gateways — essentially FC/FCP-to-FCIP bridges — and is rarely if ever used natively by storage devices as a server-to-storage access method.

Fibre Channel over Ethernet
FCoE (Fibre Channel over Ethernet) is the newest storage networking protocol of the bunch. Ratified as a standard in June 2009, FCoE is the Fibre Channel community’s answer to the benefits of iSCSI. Like iSCSI, FCoE uses standard multipurpose Ethernet networks to connect servers with storage. Unlike iSCSI, it does not run over TCP/IP — it is its own Ethernet protocol occupying a space next to IP in the OSI model.

This difference is important to understand, as it has both good and bad consequences. The good is that, even though FCoE runs over the same general-purpose switches that iSCSI does, it experiences significantly lower end-to-end latency because the TCP/IP header doesn’t need to be created and interpreted. The bad is that it cannot be routed over a TCP/IP WAN. Like FC, FCoE can only run over a local network and requires a bridge to connect to a remote fabric.

On the server side, most FCoE implementations use 10Gbps Ethernet CNAs (converged network adapters), which act as both network adapters and FCoE HBAs — offloading the work of talking to storage much the way FC HBAs do. This is an important point, as the requirement for a separate FC HBA was often a good reason to avoid FC altogether. As time goes on, servers may commonly ship with FCoE-capable CNAs built in, essentially removing this as a cost factor entirely.

FCoE’s primary benefits can be realized when it is implemented as an extension of a pre-existing Fibre Channel network. Despite having a different physical transport mechanism, which requires a few extra steps to implement, FCoE can use the same management tools as FC, and much of the experience gained in operating an FC fabric can be applied to its configuration and maintenance.

Putting it all together
There’s no doubt that the debate between FC and iSCSI will continue to rage. Both architectures are great for certain tasks. However, saying that FC is good for enterprise while iSCSI is good for SMB is no longer an acceptable answer. The availability of FCoE goes a long way toward eating into iSCSI’s cost and convergence argument while the increasing prevalence of 10Gbps Ethernet and increasing server CPU performance eats into FC’s performance argument.

Whatever technology you decide to implement for your organization, try not to get sucked into the religious war and do your homework before you buy. You may be surprised by what you find.
