SAN vs NAS Part 2: Hardware, Protocols, and Platforms, Oh My!

In this post we are going to explore some of the various options for SAN and NAS.

SAN

There are a few different methods and protocols for accessing SAN storage.  One is Fibre Channel (note: this is not misspelled, the protocol is Fibre, the cables are fiber), where SCSI commands are encapsulated within Fibre Channel frames.  This may be direct Fibre Channel ("FC") over a Fibre Channel fabric, or Fibre Channel over Ethernet ("FCoE"), which further encapsulates Fibre Channel frames inside Ethernet.

With direct Fibre Channel you'll need some FC Host Bus Adapters (HBAs), and probably some FC switches like Cisco MDS or Brocade (unless you plan on direct-attaching a host to an array, which most of the time is a Bad Idea).
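If you're wondering whether a Linux host already has FC HBAs in it, the kernel exposes them under sysfs.  Here's a minimal sketch (the hostN names and values will vary by system) that pulls the WWPNs you'd use for zoning and LUN masking:

# List any FC HBA ports the kernel sees; typically one hostN entry per HBA port.
ls /sys/class/fc_host/

# Grab the WWPN, link state, and speed for each port.
for h in /sys/class/fc_host/host*; do
    echo "== $h =="
    cat "$h/port_name"    # the WWPN you'll zone and mask against
    cat "$h/port_state"   # Online, Linkdown, etc.
    cat "$h/speed"        # negotiated link speed
done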

With FCoE you'll be operating on an Ethernet network, typically using Converged Network Adapters (CNAs).  Depending on the type of fabric you are building, the array side may still be direct FC, or it may be FCoE as well.  Cisco UCS is a good example of this split: generally it goes from host to Fabric Interconnect as FCoE, and then from Fabric Interconnect to the array or FC switch as direct Fibre Channel.

SAN storage can also be accessed via iSCSI, which encapsulates SCSI commands within TCP/IP over a standard network.  And then there are some other, odder transports like InfiniBand, or direct attach via SAS (here we are kind of straying away from SAN and really just directly attaching disks, but I digress).
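For a feel of how simple the iSCSI side is, here's a rough sketch using the standard open-iscsi tools on Linux; the portal IP and target IQN below are just placeholders for illustration:

# Ask the array's iSCSI portal what targets it presents (IP is a placeholder).
iscsiadm -m discovery -t sendtargets -p 172.0.0.10

# Log in to one of the discovered targets (this IQN is made up for the example).
iscsiadm -m node -T iqn.1992-04.com.example:array1.target0 -p 172.0.0.10 --login

# The LUNs behind that target now show up as regular block devices (e.g. /dev/sdb).
lsblk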

What kind of SAN you use depends largely on the scale and type of your infrastructure.  Generally if you already have FC infrastructure, you'll stay FC.  If you don't have anything yet, you may go iSCSI.  Larger, performance-sensitive environments typically trend toward FC, while small shops trend toward iSCSI.  That isn't to say that one is necessarily better than the other – they have their own positives and negatives.  For example, FC has its own learning curve with fabric management like zoning, while iSCSI connections are just point-to-point over existing networks that someone probably already knows.  The one thing I will caution against here is if you are going for iSCSI, watch out for 1Gb configurations – there is not a lot of bandwidth and the network can get choked VERY quickly.  I personally prefer FC because I know it well and trust its stability, but again there are positives and negatives to each.

Back to the subject at hand – in all cases with SAN, the recurring theme is SCSI commands.  In other words, even though the "disk" might be a virtual LUN on an array 10 feet (or 10 miles) away, the computer treats it like a local disk and sends SCSI disk commands to it.
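A quick way to see this on a Linux host: array LUNs land right alongside the local disks (the second command assumes the lsscsi package is installed).

lsblk -S   # lists SCSI devices; the TRAN column shows sata/sas for local disks vs fc/iscsi for array LUNs
lsscsi     # another view of every SCSI device the host sees, local or SAN-attached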

Some array platforms are SAN only, like the EMC VMAX 10K, 20K, 40K series.  EMC XtremIO is another example of a SAN only platform.  And then there are non-EMC platforms like 3PAR, Hitachi, and IBM XIV.  Other platforms are unified, meaning they do both SAN and NAS.  EMC VNX is a good example of a unified array.  NetApp is another competitor in this space.  Just be aware that if you have a SAN only array, you can’t do NAS…and if you have a NAS only array (yes they exist, see below), you can’t do SAN.  Although some “NAS” arrays also support iSCSI…I’d say most of the time this should be avoided unless absolutely necessary.

NAS

NAS, on the other hand, is virtually always accessed over an IP network.  This is going to use standard Ethernet adapters (1Gb or 10Gb), standard Ethernet switches, and IP routers.

As far as protocols go, there is CIFS, which is generally used for Windows, and NFS, which is generally used on the Linux/Unix/vSphere side.  CIFS has a lot of tie-ins with Active Directory, so if you are a Windows shop with an AD infrastructure, it is pretty easy to leverage your existing groups for permissions.  NFS doesn't have these same ties with AD, but does support NIS for some authentication services.

The common theme on this side of the house is "file," which can be interpreted as "file system."  With CIFS, generally you are going to connect to a "share" on the array, like \\MYARRAY1\MYAWESOMESHARE.  This may be just through a file browser for a one-time connection, or it may be mounted as a drive letter via the Map Network Drive feature.  Note that even though it is mounted as a drive letter, it is still not the same as an actual local disk or SAN-attached LUN!
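The same kind of share can also be mounted from a Linux box.  Here's a rough sketch, assuming the cifs-utils package is installed and using the share name from above with made-up credentials:

# Mount the CIFS share into the Linux file system (username/domain are placeholders).
mkdir -p /mnt/myawesomeshare
mount -t cifs //MYARRAY1/MYAWESOMESHARE /mnt/myawesomeshare -o username=myuser,domain=MYDOMAIN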

For NFS, an "export" will be configured on the array and then mounted on your computer.  This actually gets mounted within your file system.  So you may have your home directory at /users/myself, and you create a directory "backups" and mount an export on it with something like mount -t nfs 172.0.0.10:/exports/backups /users/myself/backups.  Then you access any files just as you would any others on your computer.  Again, note that even though the NFS export is mounted within your file system, it is still not the same as an actual local disk or SAN-attached LUN!
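Spelled out end to end, that looks something like this (same made-up IP and paths as above; the fstab line is just one way to make the mount stick across reboots):

# Create the mount point and mount the export.
mkdir -p /users/myself/backups
mount -t nfs 172.0.0.10:/exports/backups /users/myself/backups

# Optional: an /etc/fstab entry to remount it automatically at boot.
# 172.0.0.10:/exports/backups  /users/myself/backups  nfs  defaults  0 0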

Which type of NAS protocol you use is generally determined by the majority of your infrastructure – whether it is Windows or *nix.  Or you may run both at once!  Running and managing both NFS and CIFS is really more of a hurdle with understanding the protocols (and sometimes licensing both of them on your storage array), whereas the choice to run both FC and iSCSI has hardware caveats.

For NAS platforms, we again look to the unified storage like EMC VNX.  There are also NAS gateways that can be attached to a VMAX for NAS services.  EMC also has a NAS only platform called Isilon.

One thing to note: if your array doesn't support NAS (say you have a VMAX or XtremIO), the gateway solution is definitely viable and enables some awesome features, but it is also pretty easy to spin up a Windows/Linux VM (or use a Windows/Linux physical server, but seriously, please virtualize!) that consumes array block storage and then serves up NAS itself.  So you could create a Windows file server on the VMAX, and all your NAS clients would connect to the Windows machine.
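The Linux flavor of that looks roughly like this: take a LUN the array presented, put a file system on it, and export it over NFS.  The device name, paths, and client subnet below are all placeholders, and the NFS server packages need to be installed.

# The SAN LUN shows up as a block device on the VM, e.g. /dev/sdb (placeholder).
mkfs.xfs /dev/sdb
mkdir -p /srv/nas
mount /dev/sdb /srv/nas

# Export the file system over NFS to the client subnet.
echo "/srv/nas 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra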

The reverse is not really true…if your array doesn’t support SAN, it is difficult to wedge SAN into the environment.  You can always do NFS with vSphere, but if you need block storage you should really purchase some infrastructure for it.  iSCSI is a relatively simple thing to insert into an existing environment, just again beware 1Gb bandwidth.

Protection

One final note I wanted to mention is about protection.  There are methods for replicating file and block data, but many times these are different mechanisms, or at least they function in different ways.  For instance, EMC RecoverPoint is a block replication solution.  EMC VNX Replicator is a file replication solution.  RP won’t protect your file data (unless you franken-config it to replicate your file LUNs), and Replicator won’t protect your block data.  NAS supports NDMP while SAN generally does not.  Some solutions, like NetApp snapshots, do function on both file and block volumes, but they are still very different in how they are taken and restored…block snapshots should be initiated from the host the LUN is mounted to (in order to avoid disastrous implications regarding host buffers and file system consistency) while file snapshots can be taken from any old place you please.
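As a rough illustration of the kind of host-side coordination involved for a block snapshot on Linux: the mount path below is a placeholder, and the array snapshot step itself is array-specific, so it's just a comment.

# Flush buffers and freeze the file system on the LUN before the array takes its snapshot.
sync
fsfreeze -f /mnt/my_lun

# ...trigger the array-side snapshot here (array-specific, not shown)...

# Thaw the file system so writes can resume.
fsfreeze -u /mnt/my_lun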

I say all this just to say, be certain you understand how your SAN and NAS data is going to be protected before you lay down the $$$ for a new frame!  It would be a real bummer to find out you can’t protect your file data with RecoverPoint after the fact.  Hopefully your pre-sales folks have you covered here but again be SURE!

And……..

We’ve drawn a lot of clear distinctions between SAN and NAS, which kind of fall back into the “bullet point” message that I talked about in my first post.  All that is well and good, but here is where the confusion starts to set in: in both NAS cases (CIFS and NFS), on your computer the remote array may appear to be a disk.  It may look like a local hard drive, or even appear very similar to a SAN LUN. This leads some people to think that they are the same, or at least are doing the same things.  I mean, after all, they even have the same letters in the acronym!

However, your computer never issues SCSI commands to a NAS.  Instead it issues commands to the remote file server for things like create, delete, read, write, etc.  Then the remote file server issues SCSI (block) commands to its disks in order to make those requests happen.
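A quick way to convince yourself of this on a Linux host: the NAS mounts and the SCSI devices show up in completely different places.

findmnt -t nfs,nfs4,cifs   # NAS mounts: the "device" is a server:/export or //server/share, not a disk
lsblk -S                   # actual SCSI devices: local disks and SAN LUNs only, no NAS mounts here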

In fact, a major point of understanding here is, "who has the file system?"  This will help you understand who can do what with the data.  In the next post we are going to dive into this question head first in a Linux lab environment.
