Tuesday, July 9, 2013

iSCSI Differences

So I was recently asked,
"Why do we in storage suggest, as a best practice, that iSCSI connectivity be segregated onto two separate IP ranges on the hosts' dedicated NICs?"

It is a common question in my profession, and while it seems simple to me now, I admit it can be confusing.

iSCSI connectivity differs greatly between vendors. The common arrays I work on are the EqualLogic (EQL) SAN, Dell's PVNAS, EMC, and even NetApp, so I apologize if any of this is confusing.

To explain this: iSCSI has three primary traits or roles.

Discovery – this is the process that runs when we enter the first IP of a SAN device into the iSCSI software initiator. The initiator reaches the SAN device, and that device in turn sends back all of the IPs that are available for use; it will even provide IPs that might not be reachable with a ping (or a vmkping, in the case of ESX).
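As a concrete sketch of discovery on ESXi's software initiator (the adapter name vmhba33 and the discovery address 192.168.10.10 are placeholders, not values from this post; check your adapter name with `esxcli iscsi adapter list`):

```shell
# Add the SAN group/discovery IP as a Send Targets address.
# vmhba33 and 192.168.10.10 are placeholder names -- substitute your own.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.10:3260

# After a rescan, the array will have answered the discovery with every
# target portal IP it exposes -- including ones you may not be able to ping.
esxcli iscsi adapter target portal list --adapter=vmhba33
```

These are configuration commands run on the ESXi host itself, not something to script from a workstation.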

Session – this is the active connection made to the SAN for production data. This is where we actively communicate with the block-level device generating the IOPS.

IP Forwarding – this is where the big difference lies between EQL and our other products. The EQL SAN uses a group lead that controls all the head units in the EQL group. This lead IP monitors all active and passive ports on the EQL SAN and then forwards each session request to the port that best suits it. This requires the switches to be bridged so that the redirect can occur.
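A toy sketch of the idea, with made-up port names and session counts; this only illustrates "forward the request to the port that best suits it" and is not actual EQL group-lead logic:

```shell
# Toy model of the group lead's redirect decision: it tracks how many
# sessions each interface carries and sends a new login to the least
# loaded one. Port names and counts below are invented for illustration.
ports="eth0:12 eth1:7 eth2:9"

best=""
best_count=999999
for entry in $ports; do
  port=${entry%%:*}    # text before the colon: the port name
  count=${entry##*:}   # text after the colon: its current session count
  if [ "$count" -lt "$best_count" ]; then
    best=$port
    best_count=$count
  fi
done

echo "redirect new session to $best (current load: $best_count)"
```

With the numbers above, the new session would be redirected to eth1, the least-loaded port.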

So how does this apply in practice?

In EMC and PVNAS equipment we do not have the option of port forwarding, so we have to bind directly to the ports on the array; hence the need for multiple subnets on multiple NICs on the host side. Because we bind to the ports, the suggestion is to use non-routable IPs such as 192.168.x.x on switches that are not bridged. In my experience, EQL is currently the only SAN I have seen that uses this standards-approved redirect method. It allows the EQL array to increase port availability to the SAN, and thus increases overall performance. Binding directly to the ports on the PVNAS may be a bit more difficult to configure, but it can provide a more stable and durable connection, as there is no management-port forwarding overhead.
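On the ESXi side, binding the software initiator to specific vmkernel ports looks roughly like this (vmhba33, vmk1, and vmk2 are hypothetical names; substitute the adapter and vmkernel ports from your own host):

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter.
# Each vmk sits on its own non-routable subnet (e.g. vmk1 on
# 192.168.10.x and vmk2 on 192.168.20.x), each backed by one NIC.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Confirm the bindings took effect.
esxcli iscsi networkportal list --adapter=vmhba33
```

With the bindings in place, each vmkernel port makes its own session directly to an array port on its subnet, rather than relying on the array to redirect.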

Example:
ESX. As to the number of NICs and vSwitches to use in ESX: the minimum I would use is two. There is only a need for one active and one failover path to the SAN. However, with the ability to use multipathing and Round Robin, you would implement 2 NICs and 2 vSwitches on 2 different subnets on the SAN, which will create 4 connections. For example: NIC1 would connect to RC0 port0 and RC1 port0, and NIC2 would connect to RC0 port1 and RC1 port1.

If you decide to use 4 connections per NIC on the ESX hosts, you will have to implement 4 NICs, 4 vSwitches, and 4 different subnets on the SAN, which will create 8 connections. This can assist multipathing, but it will decrease the host count that the SAN can actively respond to.
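The path math above can be checked quickly (assuming, as in the example, two redundant controllers, RC0 and RC1, each presenting one port per host NIC):

```shell
# Each host NIC gets one session to a port on each controller,
# so total iSCSI connections = NICs x controllers.
controllers=2   # RC0 and RC1, as in the example above

nics=2
echo "connections with $nics NICs: $(( nics * controllers ))"   # prints 4

nics=4
echo "connections with $nics NICs: $(( nics * controllers ))"   # prints 8
```

More connections mean more paths for Round Robin to spread I/O across, but every session consumes a connection slot on the array, which is why the host count the SAN can serve goes down.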

Answer:

The reason we use multiple NICs and subnets is to allow us, in the PVNAS and EMC world, to bind directly to the ports on the storage array.
