
Cisco ISLB Issue

10 February 2015 | Written by Nicolas Michel | Published in Data Center

Usually people blog on a certain topic because they want to share their knowledge of a particular protocol or product.

Today I'll take the opposite approach. I have an issue with ISLB, the feature that load balances my iSCSI sessions, so in this post I will walk through each step needed to make it work. I have failed at this configuration a LOT of times while following the same steps over and over, and I decided to write a blog post about it to keep track of what I should do next time I want to configure it.

I have not played with VRRP yet, but that could be an idea for a follow-up blog post.

The topology is the same as in my previous blog posts related to the MDS.



The difference here is that both MDS switches will have an iSCSI interface bound to their Gigabit Ethernet interface (iscsi 1/1 mapped to gig 1/1).

ISLB on Cisco MDS

I will start from scratch and setup the infrastructure:

The output above proves that the JBOD has registered with the fabric and that VSAN 10 is running on the E port between MDS01 and MDS02. Further proof is that the FCNS database on MDS02 contains the JBOD's pWWN.
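For reference, these checks can be reproduced with show commands along these lines (a sketch; fc1/1 as the E-port interface is an assumption of mine, not from the original output):

```
MDS01# show flogi database vsan 10    ! the JBOD should have a FLOGI entry here
MDS01# show fcns database vsan 10     ! name-server entries for VSAN 10, including the JBOD pWWN
MDS02# show fcns database vsan 10     ! the same pWWN, learned across the E port
MDS02# show interface fc1/1 brief     ! assumed E-port interface; should show VSAN 10 trunking
```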

Now we will set up device-aliases and activate a test zoneset on VSAN 10, because ISLB requires an already active zoneset if you want to use the auto-zone feature. If you do NOT have an active zoneset, you will have to perform the zoning configuration manually.
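A minimal sketch of that step, assuming a device-alias named JBOD and zone/zoneset names of my own choosing (the pWWN is a placeholder):

```
MDS01(config)# device-alias database
MDS01(config-device-alias-db)# device-alias name JBOD pwwn 21:00:00:xx:xx:xx:xx:xx   ! placeholder pWWN
MDS01(config-device-alias-db)# exit
MDS01(config)# device-alias commit

MDS01(config)# zone name Z_TEST vsan 10
MDS01(config-zone)# member device-alias JBOD
MDS01(config-zone)# exit
MDS01(config)# zoneset name ZS_TEST vsan 10
MDS01(config-zoneset)# member Z_TEST
MDS01(config-zoneset)# exit
MDS01(config)# zoneset activate name ZS_TEST vsan 10
```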


Now we can start our ISLB configuration. Again, we will first configure the infrastructure and check that both iSCSI interfaces are reachable from the L2 domain.
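On each MDS this infrastructure piece might look like the following sketch (the IP addressing is an assumption; repeat on MDS02 with its own address):

```
MDS01(config)# iscsi enable                                  ! enable the iSCSI feature
MDS01(config)# interface gigabitethernet 1/1
MDS01(config-if)# ip address 192.168.10.11 255.255.255.0     ! placeholder addressing
MDS01(config-if)# no shutdown
MDS01(config-if)# exit
MDS01(config)# interface iscsi 1/1                           ! the iSCSI interface bound to gig 1/1
MDS01(config-if)# no shutdown
```

A ping from a host in the L2 domain to each Gigabit Ethernet address confirms reachability before going further.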


The ISLB configuration can now start, and you will see it is very brief:

We first need to check the IQN of our servers.
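On Windows the initiator IQN can be read from the iSCSI Initiator control panel (Configuration tab), or, on Windows Server 2012, from PowerShell:

```
PS C:\> (Get-InitiatorPort).NodeAddress    # prints the local initiator IQN
```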

IQN (Win 2008) | IQN (Win 2012)
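With the IQNs in hand, the ISLB configuration itself might look like the following sketch (the IQN and the target pWWN are placeholders; CFS distribution via `islb distribute`/`islb commit` is what pushes the configuration to MDS02):

```
MDS01(config)# islb distribute                                          ! distribute ISLB config over CFS
MDS01(config)# islb initiator name iqn.1991-05.com.microsoft:server01   ! placeholder IQN
MDS01(config-islb-init)# static pWWN system-assign 1                    ! persistent pWWN for the initiator
MDS01(config-islb-init)# vsan 10
MDS01(config-islb-init)# target pwwn 21:00:00:xx:xx:xx:xx:xx            ! placeholder JBOD pWWN
MDS01(config-islb-init)# exit
MDS01(config)# islb commit                                              ! push to all CFS-enabled switches
MDS01(config)# islb zoneset activate                                    ! auto-zone the initiator/target pairs
```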


The configuration has been committed, and MDS02 should now have the ISLB configuration and the zoning configured on it:
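That can be checked on MDS02 with commands along these lines (a sketch):

```
MDS02# show islb status               ! CFS distribution status for ISLB
MDS02# show islb initiator configured ! ISLB initiators learned via CFS
MDS02# show zoneset active vsan 10    ! the auto-generated ISLB zones should appear here
```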

Everything looks right here, and none of the iSCSI initiators have logged in to the fabric yet:

Let's now activate debugs on both switches and try to initiate a Fabric Login from the iSCSI initiators (Server 01 first, then Server 02).


MDS01 has performed a FLOGI onto itself on VSAN 10, and the initiator has been mapped to interface iSCSI 1/1.

We can also see that the initiator has been correctly mapped to the JBOD.

Let's now try with Server 02:


Note that MDS02 will only see one FLOGI, while MDS01 will see both FLOGIs: one from its local FC disk and one from its iSCSI initiator.

Both servers are able to map the drive and everybody is happy 🙂


As I mentioned at the beginning of the post, I deliberately did not play with VRRP, and I will cover it in a follow-up blog post 🙂
