RAID: Part 5 – RAID5 and RAID6

Now that the parity post is out of the way, we can move into RAID5 and RAID6 configurations.  The good news for anyone who actually plodded through the parity post is that we’ve essentially already covered RAID5!  RAID5 is striping with single parity protection, generated on each row of data, exactly like my example.  Because of that I’ll be writing this post assuming you’ve read the parity post (or at least understand the concepts).

RAID5

Actually, the parity post covered not only RAID5 but also most of our criteria for RAID type analysis.  Sneaky!

Before continuing on, let me make a quick point about RAID5 group size (note: this also applies to RAID6).  In our example we did 4+1 RAID5.  X+1 is standard notation for RAID5, meaning X data disks and 1 parity disk (…kind of – I’ll clarify later regarding distributed parity), but there is no reason it has to be 4+1.  The lower limit for single parity schemes is three disks, or 2+1 (with only two disks you would just do mirroring).  There is no theoretical upper bound on RAID5 group size – I could have a 200+1 RAID5 set – though I’ll discuss the nuance of large groups in the protection factor section.  On an EMC VNX system, a RAID5 group is limited to 16 disks, meaning we can go as high as 15+1.  The more standard sizes for storage pools are 4+1 and the newer 8+1.

That said, let’s talk about usable capacity. RAID5 differs from RAID1/0 in that the usable capacity penalty is directly dependent on how many disks are in the group.  I’ve explained that in RAID5, for every stripe, exactly one strip must be dedicated to parity.  Scale that out to the disk level, and it translates into one whole disk’s worth of parity in the group.  In the 4+1 case our capacity penalty is 20% (1 out of 5 disks is used for parity).  Here are the capacity penalties for the schemes I just listed:

  • 2+1 – 33% (this is the worst case scenario, and still better than the 50% of RAID 1/0)
  • 4+1 – 20%
  • 8+1 – 11%
  • 15+1 – 6.25%

So as we add more data disks to a RAID5 group, the usable capacity penalty goes down – and even in the worst case it is better than RAID1/0’s 50%.
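If you like seeing the arithmetic spelled out, here is a quick sketch (the function name is just something I made up for illustration):

```python
def parity_capacity_penalty(data_disks, parity_disks=1):
    """Fraction of raw capacity lost to parity in an X+P RAID group."""
    return parity_disks / (data_disks + parity_disks)

for data in (2, 4, 8, 15):
    print(f"{data}+1: {parity_capacity_penalty(data):.2%}")
# 2+1: 33.33%, 4+1: 20.00%, 8+1: 11.11%, 15+1: 6.25%
```

The same function with parity_disks=2 covers the RAID6 groups discussed later.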

Protection factor?  After the parity post we know and understand why RAID5 can survive a single drive failure.  Let’s talk about degraded and rebuild.

  • Degraded mode – Degraded on RAID5 isn’t too pretty.  We have lost a single disk but are still running because of our parity bits.  For a read request coming in to the failed disk, the system must rebuild that data in memory – we know that process: every remaining disk must be read in order to generate that data (see the sketch just after this list).  For a write request coming into the failed disk, the system must rebuild the existing data in memory, read and recalculate parity, and write the new parity value to disk.  The one exception to the write condition is if, in a given stripe, we have lost the parity strip instead of a data strip.  In that case we actually get a performance increase, because the data is just written to whatever data strip it is destined for with no regard to parity recalculation.  However, this teensy performance increase is HEAVILY outweighed by the I/O crushing penalty going on all around it.
  • Rebuild mode – Rebuild is also ugly.  The replacement disk must be rebuilt, which means that every bit of data on every remaining drive must be read in order to calculate what the replacement disk looks like.  And all the while, for incoming reads it is still operating in degraded mode.  Depending on controller design, writes can typically be sent to the new disk – but we still have to update parity.

Protection factor aside, the performance hit from degraded mode is why hot spares are tremendously important to RAID5. You want to spend as little time as possible in degraded mode.

Circling back to usable capacity, why do I want smaller RAID groups?  If I have 50 disks, why would I want to do ten 4+1’s instead of one 49+1?  Why waste 10 times the space on parity?  The answer is two-fold.

First, related to the single drive failure issue, the 49+1 presents a much larger fault domain.  In English, a fault domain is a set of things that are tied to each other for functionality.  Think of it like links in a chain: if one link fails, the entire chain fails (well, a chain in an analogy like this one does).  With 49+1, I can lose at most one drive out of 50 at any time and keep running.  With ten 4+1’s, I can lose up to 10 drives as long as they come out of different RAID groups.  It is certainly possible that I lose two disks in one 4+1 group and that group is dead, but the likelihood of that happening within a given set of 5 disks is lower than within a set of 50 disks.  The trade-off here is that as we add more disks to a RAID group, we gain usable capacity but increase our risk of a two drive failure causing data loss.

Second, related to the degraded and rebuild issues, the more drives I have, the more pieces of data I must read in order to reconstruct data during a failure.  If I have 4+1 and lose a disk, for every read that comes in to the failed disk I have to read four disks to generate that data.  But with a 49+1, if I lose a disk I now have to read forty-nine disks in order to generate that data!  As I add more disks to a RAID5 set, degraded and rebuild operations become more taxing on the storage array.

On to write penalty!  In the parity post I explained that any write to existing data causes the original data and parity to be read, some calculations to happen (which are so fast they aren’t relevant), and then the new data and new parity to be written to disk.  So the write penalty in this case is 4:1 – four back-end I/O operations for each write coming into the system.  Interestingly enough, this doesn’t scale with RAID group size.  Whether a 2+1 or a 200+1, the write penalty is always 4:1 for single parity schemes.
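Here is that read-modify-write sequence as a sketch, again assuming the XOR parity from the parity post (the values and names are illustrative only):

```python
def raid5_small_write(old_data, old_parity, new_data):
    # Back-end I/O 1: read the old data strip
    # Back-end I/O 2: read the old parity strip
    new_parity = old_parity ^ old_data ^ new_data   # in-memory math, effectively free
    # Back-end I/O 3: write the new data strip
    # Back-end I/O 4: write the new parity strip
    return new_data, new_parity
```

Notice the group size never shows up: a small write touches exactly one data strip and one parity strip no matter how many disks are in the group, which is why the penalty stays 4:1.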

Full Stripe Writes

RAID1/0 has a 2:1 write penalty, and RAID5 has a 4:1 write penalty.  Does this mean that writes to RAID1/0 are always more efficient than RAID5?  Not necessarily.  There is a special case for writes to parity called Full Stripe Writes (FSWs).  An FSW typically happens with large block sequential writes (like backup operations), where we are writing such a large amount of data that we overwrite one entire stripe.  E.g. in our 4+1 scenario, if the strip size was 64KB and we wrote 256KB of data starting at the first disk, we would end our write exactly at the end of the stripe.  In this case, we have no need to do a parity update, because every bit of data that the parity protects is getting overwritten.  Because of this, we can just calculate parity in memory (since we already have the entire stripe’s data in memory) and write the entire stripe at once.

The payback is enormous here, because we only have one extra write for every four writes coming into the system.  In the 4+1 that we described, this translates into a write penalty of 5:4.  This is actually a big improvement even over RAID1/0!
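A sketch of the same idea for a 4+1 group – nothing is read back, and the parity comes straight from the data already sitting in cache:

```python
from functools import reduce

def full_stripe_write(data_strips):
    """Write an entire stripe at once: parity is computed purely in memory."""
    parity = reduce(lambda a, b: a ^ b, data_strips)
    return data_strips + [parity]       # 5 back-end writes for 4 front-end writes -> 5:4

backend_writes = full_stripe_write([0x11, 0x22, 0x33, 0x44])
```

For an X+1 group the FSW penalty generalizes to (X+1):X, so wider groups benefit even more when the workload can drive full stripes.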

FSWs are not something to hope for when choosing a RAID type.  They are very dependent on the application behavior, file system alignment, and I/O pattern.  Modern storage arrays enable this behavior more often because they hold data in protected cache before flushing to disk, but choosing RAID5 for something that is heavily write oriented and simply hoping that you will get the 5:4 write penalty would be very foolish.  However, if you do your homework you can usually figure out if it is happening or not.  As a simple example, if I was dumping large backups onto a storage array, I would almost always choose RAID5 or RAID6 because this generally will leverage FSWs.

RAID6

RAID6 is striping with dual parity protection.  Essentially most of what we know about RAID5 applies, except that in any given stripe instead of one parity value there are two.  What this allows us to do is to recover in the event that we lose two drives.  RAID6 can survive two drive failures.

There is a catch with this second value: it must actually be different from the first.  If the second parity value were just a copy of the first, that wouldn’t buy us anything for data recovery.  Another catch is that the second parity value can’t use the first parity value in its calculation…otherwise the second parity is dependent on the first, and in a recovery scenario we run into a bit of a storage-array-and-the-egg problem.  Not what we want.

In the parity post I declared my undying love for XOR, and to prove to the rest of you doubters that it is just as amazing as I made it out to be – the 2nd parity value also uses XOR!  It is just too efficient to pass up.  But obviously we must XOR some different data values together.  RAID6’s second parity actually comes from diagonal stripes.

Offhand you might be imagining something like this:

[Image: wrongr6 – a guess at RAID6 where the second parity is built from diagonal strips cutting across multiple stripes]

As the helpful text indicates, not so much.  Why not, though?  We satisfied both of our criteria – the second parity bit is different from the first, and it doesn’t include the first in its calculation.

From a protection standpoint this probably works, but we pay a couple of performance penalties.  First and foremost, we lose the ability to do FSWs.  In order to do a full stripe write with this scheme, I would essentially have to overwrite every single disk at one time.  Not gonna happen.  Second, in recovery scenarios my protection information is tied to more strips than in RAID5: I have a set of horizontal strips for one parity value and then another set of diagonal strips for the second parity value.

Instead, remember that we are working with an ordered set of 1’s and 0’s in every strip, so really the second parity is calculated more like this:

[Image: rightr6 – the second parity calculated within the same stripe, using a different bit position from each strip]

It is a strange, strange thing, but essentially the parity is calculated (or should be calculated) within the same stripe using different bits in each strip.
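To make the “different bits in each strip” idea concrete, here is a toy sketch.  It only illustrates where the second parity’s inputs come from – the real constructions (EVENODD, Row-Diagonal Parity, and the scheme in the EMC paper linked below) add extra constraints so that any two lost strips can actually be rebuilt, which this simplified version does not guarantee.  The strip contents are made up.

```python
from functools import reduce

def xor(values):
    return reduce(lambda a, b: a ^ b, values, 0)

# One stripe of a hypothetical 4+2 group: four data strips, each holding four symbols
# (think of a symbol as a chunk of bits inside the strip).
data = [
    [0b1010, 0b0110, 0b1111, 0b0001],  # strip on disk 0
    [0b0011, 0b1000, 0b0101, 0b1110],  # strip on disk 1
    [0b1100, 0b0010, 0b1001, 0b0111],  # strip on disk 2
    [0b0110, 0b1011, 0b0000, 0b1101],  # strip on disk 3
]
symbols = len(data[0])

# Row parity (the normal RAID5-style parity): XOR the same position on every strip.
row_parity = [xor(strip[j] for strip in data) for j in range(symbols)]

# "Diagonal" parity: XOR a DIFFERENT position from each strip, so it is neither a copy
# of row parity nor calculated from the row parity strip.
diag_parity = [xor(data[d][(j + d) % symbols] for d in range(len(data)))
               for j in range(symbols)]

print(row_parity, diag_parity)
```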

For a more comprehensive and probably clearer look into the hows of RAID6 (including recovery methodology), EMC’s old whitepaper on it is still a great resource.  I really encourage you to check it out if you need some more detail or explanation, or just want to read a different perspective on it.  https://www.emc.com/collateral/hardware/white-papers/h2891-clariion-raid-6.pdf  Their diagrams are much more informative than mine, although they have very few kittens in them from what I’ve seen so far.

On to our other criteria – the degraded and rebuild modes are pretty much the same as RAID5, except that we may have to read one additional parity disk during the operation.  In other words, degraded and rebuild modes are not pleasant with RAID6.  Make sure you have hot spares to get you out of both as fast as possible.

Usable capacity – the penalty is calculated similarly to RAID5, just with X+2 notation. So e.g. a 6+2 RAID6 would have a 2/8 (two out of eight disks used for parity) penalty, or 25%.  Just like RAID5, this value depends on the size of the group itself, with a technical minimum of four drives.  I say technical because RAID6 schemes are usually implemented to protect a large number of disks – instead of two data and two parity disks, why not just do a 2+2 RAID1/0?  Ahh, variety.

Finally, write penalty.  Because every time I write data I have to update two parity values, there is a 6:1 write penalty with RAID6.  The update operation is once again the same as RAID5, except that the second parity value must also be read, recalculated, and written.

RAID6 can utilize FSWs as discussed above, but if it doesn’t, write operations are taxed HEAVILY with the 6:1 write penalty.  RAID6 has its place, but if you are trying to support small block random writes, it is probably advisable to steer clear.  Again there is no such thing as read penalty, so from a read perspective it performs identically to all other RAID types given the same number of disks in the group.

Distributed vs Dedicated Parity

Briefly I wanted to mention something about parity and the RAID notation like 4+1.  We “think” of this as “4 data disks, one parity disk,” which makes sense from a capacity perspective.  Actually dedicating one disk to parity is called dedicated parity…and it’s not such a good idea.

Every write that comes into the system generates 4 back-end I/Os.  Two of those I/Os are slated for the strip that the data is on, and the other two I/Os hit the parity strip.  Were we to stack all the parity strips up on one disk (as we would with a dedicated parity disk), what do you think that would look like under any serious write load?

You could roast marshmallows on the parity disk

The parity disk has a lot of potential to become a bottleneck.  Instead, most RAID5 and RAID6 implementations use what is called distributed parity in order to provide better I/O balancing.

[Image: distributedparity – the parity strip rotating to a different disk in each successive stripe]

In this manner, the parity load for the RAID group is distributed evenly across the disks.  Now, does this guarantee even balance?  Nope.  If I hit the top stripe hard, the top parity strip on Disk1 is still going to cook.  But under normal write load with small enough strip size, this provides a much needed load balance.
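Here is one common rotation pattern as a sketch (a “left-symmetric” style layout – treat the exact placement as an assumption, since vendors implement the rotation differently):

```python
def parity_disk_for_stripe(stripe_index, disks_in_group):
    """Rotate the parity strip so each successive stripe parks it on a different disk."""
    return (disks_in_group - 1 - stripe_index) % disks_in_group

for stripe in range(6):
    print("stripe", stripe, "-> parity on disk", parity_disk_for_stripe(stripe, 5))
# stripe 0 -> disk 4, stripe 1 -> disk 3, ... and stripe 5 wraps back around to disk 4
```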

Not all protection schemes use distributed parity – NetApp’s RAID-DP is a good example of this.  But in cases where parity is not distributed, there must be some other mechanism to alleviate the parity load…otherwise the parity disk is going to be a massive bottleneck.

Uncorrectable Bit Errors

Finally, I wanted to mention Uncorrectable Bit Errors and their impact on RAID5 vs RAID6.  If you check out the whitepaper from EMC above, you’ll see a reference to uncorrectable errors.  You can also google this topic – here is a good paper on it.

An uncorrectable bit error (UBE) is one that happens on a disk and renders the data for that particular sector unrecoverable.  The error rate is measured in errors per bit read.  Many consumer grade drives are rated at 1 error per 10^15 bits (~113TB) read, and enterprise grade drives at 1 per 10^16 (~1.1PB).  Generally the larger capacity drives (NL-SAS) are actually consumer grade from this standpoint.

During normal operations with RAID protection a UBE is OK because we have recovery information built into the RAID scheme.  But in a RAID5 rebuild scenario, a UBE is instant death for the RAID group.  Remember we have to be able to reconstruct that failed disk in its entirety, and in order to do that we have to read every bit of data off of every other disk in the group.

So consider that a 3TB capacity drive is going to exhibit a UBE every ~113TB of data read, giving a run through the entire disk an approximately 2.5% chance of winning the lottery.  Then consider that your RAID5 group is probably going to have at least four or five of these guys in it.
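Here is the back-of-the-envelope math, assuming the 1-in-10^15 rate above (the exact odds depend on the vendor’s spec and on how much of the drive is actually populated):

```python
import math

ube_rate   = 1e-15                 # errors per bit read (consumer / NL-SAS class spec)
drive_bits = 3e12 * 8              # one full pass over a 3TB drive

def p_hit_ube(bits_read):
    """Probability of at least one uncorrectable error while reading this many bits."""
    return -math.expm1(bits_read * math.log1p(-ube_rate))

print(f"one drive, one full read:  {p_hit_ube(drive_bits):.1%}")      # ~2.4%
print(f"4+1 rebuild (4 survivors): {p_hit_ube(4 * drive_bits):.1%}")  # ~9%
```

Those are the odds per rebuild of hitting exactly the situation described above – which is the whole argument for RAID6 on large capacity drives.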

I’ve seen RAID5 used for capacity drives before.  And there are mechanisms built into storage arrays to try to sweep and detect errors before a drive fails.  And to date (knock on wood) I haven’t seen a RAID group die a horrible death during rebuild.  But it is always my emphatic recommendation to protect capacity drives with RAID6.  You will find this best practice repeated ad nauseam throughout the storage world.  It is nearly impossible to justify the additional risk of RAID5 against the cost of a few extra capacity disks, even if it pushes you into an extra disk shelf.  Fighting a battle today for a few more dollars on the purchase is going to be a lot less painful than explaining why a 50TB storage pool is invalid and everything in it must be rolled from backup.  (And you’ve got backups, right?  And they work?)

The Summary Before the Summary

This was a tremendous amount of information and is probably not digestible in one sitting.  Maybe not even two.  My hope is really that by reading this you will learn just a bit about the operations behind the curtain that will help you make an informed decision on when to use RAID5 and RAID6.  If this saves just one person from saying “we need to use RAID1/0 because it is the fast one,” I will be happy.

My next post will be a wrap up of RAID and some comparisons between the types to bring a close to this sometimes bizarre topic of RAID.

RAID: Part 3 – RAID 1/0

So, if you have been following along dear reader, we are now up to speed on several things.  We have discussed mirroring (and RAID1, which leverages it) and striping (and RAID0, which leverages that).  We have also discussed RAID types using some familiar and standard terminology which will allow us to compare and contrast the versions moving forward.

Now, on to the big dog of RAID – RAID 1/0.  This is called “RAID one zero” and “RAID ten,” and sometimes “RAID one plus zero” (and written as RAID 1+0).  I have never heard it called “RAID one slash zero,” but perhaps somebody somewhere does that also.  All of these refer to the same thing, and RAID ten is the most common term for it.

Why do we need RAID1/0?

In this section I wanted to ask a sometimes overlooked question – what are the problems with RAID0 and RAID1 that cause people to need something else?

If you know about RAID0 (or even better, if you read Part 2) you should have an excellent idea of its failings.  Just to reiterate, the problem with RAID0 is that it only leverages striping, and striping only provides a performance enhancement.  It provides nothing in the way of protection, hence any disk that fails in a RAID0 set will invalidate the entire set.  RAID0 is the ticking time bomb of the storage world.

RAID1’s problems aren’t quite as obvious as the “one disk failure = worst day ever” of RAID0, but once again let’s go back to Part 1 and look at the benefits I listed of RAID:

  1. Protection – RAID (except RAID0) provides protection against physical failures.  Does RAID1 provide that?  Absolutely – RAID1 can survive a single disk failure.  Check box checked.
  2. Capacity – RAID also provides a benefit of capacity aggregation.  Does RAID1 provide that?  Not at all.  RAID1 provides no aggregate capacity or aggregate free space benefit because there are always exactly two disks in a RAID1 pair, and the usable capacity penalty is 50%.  Whether I have a RAID1 set using a 600GB drive or a 3TB drive, I get no aggregate capacity benefit with RAID1, beyond the idea of just splitting a disk up into logical partitions…which can be done on a single disk without RAID in the first place.
  3. Performance – RAID provides a performance benefit since it is able to leverage additional physical spindles.  Does RAID1 provide that?  The answer is yes…sort of.  It does provide two spindles instead of one, which fits the established definition.  However there are some caveats.  There isn’t a performance boost on writes because of the write penalty of 2:1 (both of the spindles are being used for every single write).  There is a performance boost on reads because it can effectively round-robin read requests back and forth on the disks.  But, and a BIG BUT, there are only two spindles.  There are only ever going to be two spindles.  Unlike a RAID0 set which can have as many disks as I want to risk my data over, a RAID1 set is performance bound to exactly two spindles.

Essentially the problem with the mirrored pair is just that – there are only ever going to be two physical disks.

By now it may have become obvious, but RAID0 and RAID1 are almost polar opposites.  RAID1’s benefit lies mostly around protection, and RAID0’s benefit is performance and capacity.  RAID1 is the stoic peanut butter, and RAID0 is the delicious jelly.  If only there was a way to leverage them both….

What is RAID1/0?

RAID1/0 is everything you wanted out of RAID0 and RAID1. It is the peanut butter and jelly sandwich.  (Note: please do not attempt to combine your storage array with peanut butter or jelly.  Especially chunky peanut butter.  And even more especiallyer chunky jelly)

Essentially RAID1/0 looks like a combination of RAID1 and RAID0, hence the label.  More accurately, it is a combination of mirroring and striping in that order.  RAID1/0 replaces the individual disks of a RAID0 stripe set with RAID1 mirror pairs.  It is also important to understand what RAID1/0 is and what it is not.  It is true that it leverages the good things out of both RAID types, but it also still maintains the bad things of both RAID types. This will become apparent as we dive into it.

[Image: raid10 – eight disks arranged as four mirrored pairs (green), with data striped across the pairs (orange)]

This is a busy image, but bear with me as I break it down.

  • This is an eight disk RAID1/0 configuration, and on this configuration (similar to the Part 2 examples) we are writing A,B,C,D to it. For simplicity’s sake we ignore write order and just go alphabetically
  • The orange and green help indicate what is happening at their particular parts of the diagram
  • The physical disks themselves (the black boxes) are in mirrored pairs that should hopefully be familiar by now (indicated by the green boxes and plus signs).  This is the same RAID1 config that I’ve covered previously.
  • The weirdness picks up at the orange part.  The orange box indicates that we are striping across every mirrored pair.  This is also identical to the RAID0 configuration, except that the physical disks of the RAID0 config have been replaced with these RAID1 pairs.

This is what is meant by RAID1/0.  First comes RAID1 – we build mirrored pairs.  Then comes RAID0 – we stripe data across the members, which happen to be those mirrored pairs.  It may help to think about RAID1/0 as RAID0 with an added level of protection at the member level (since we know RAID0 provides no protection otherwise).

As the host writes A,B,C,D, the diagram indicates where the data will land, but let’s cover the order of operations (sketched in code just after this list).

  1. The host writes A to the RAID1/0 set
  2. A is intercepted by the RAID controller.  The particular strip it is targeted for is identified.
  3. The strip is recognized to be on a mirrored pair, and due to the mirror configuration the write is split.
  4. A lands on both disks that make up the first member of the RAID0 set.
  5. Once the write is confirmed on both disks, the write is acknowledged back to the host as completed
  6. The host writes B to the RAID1/0 set
  7. B is intercepted by the RAID controller.  The particular strip it is targeted for is identified.  Due to the mirror configuration the write is split.
  8. B lands on both disks that make up the second member of the RAID0 set.
  9. Once the write is confirmed on both disks, the write is acknowledged back to the host as completed
  10. The host writes C to the RAID1/0 set
  11. etc.
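That sequence boils down to a small amount of logic.  Here is a simplified model of it – the strip size and the way a member is selected are assumptions for illustration, and real controllers differ:

```python
STRIP_SIZE = 64 * 1024          # bytes; a common default, assumed here

def raid10_write(byte_offset, mirrored_pairs):
    """Return the physical disks a host write at byte_offset lands on."""
    strip_index = byte_offset // STRIP_SIZE
    member = strip_index % len(mirrored_pairs)     # RAID0 layer: pick the member (pair)
    primary, mirror = mirrored_pairs[member]       # RAID1 layer: split the write
    return [primary, mirror]                       # ack the host once both complete

pairs = [("disk1", "disk2"), ("disk3", "disk4"), ("disk5", "disk6"), ("disk7", "disk8")]
print(raid10_write(0, pairs))             # ['disk1', 'disk2']  -> write "A"
print(raid10_write(64 * 1024, pairs))     # ['disk3', 'disk4']  -> write "B"
```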

Hopefully this gives an accurate, comprehensible picture of the hows of RAID1/0.  Now, let’s look at RAID1/0 using the same terminology we’ve been using.

From a usable capacity perspective, RAID1/0 maintains the same penalty as RAID1.  Because every member is a RAID1 pair, and every RAID1 pair has a 50% capacity penalty, it stands to reason that RAID1/0 also has a 50% capacity penalty as a whole.  No matter how many members are in a RAID1/0 group, the usable capacity penalty is always 50%.

The write penalty is a similar tune.  Because every member is a RAID1 pair, and every RAID1 pair has a 2:1 write penalty, RAID1/0 also has a write penalty of 2:1.  Again no matter how many members are in the set, the write penalty is always 2:1.

RAID1/0 reminds me of the Facts of Life. You know, you take the good, you take the bad?  RAID1/0 is a leap up from RAID0 and RAID1, but it doesn’t mean that we’ve gotten rid of their problems.  It is better to think that we’ve worked around their problems.  The same usable capacity penalty exists, but now I have the ability to aggregate capacity by putting more and more members into a RAID1/0 configuration.  The same write penalty exists, but again I can now add more spindles to the RAID1/0 configuration for a performance boost.

The protection factor is weird, but still a combination of the two.  How many disk failures can a RAID1/0 set survive?  The answer is, it depends.  There is still striping on the outer layer, and by now we have beaten the dead horse enough to know that RAID0 can’t lose any physical disks.  It is a little clearer, especially for this transition, to think of it this way: RAID0 can’t survive any member failures, and in traditional RAID0 the members are physical disks.  In this capacity, RAID1/0 is the same: RAID1/0 can’t survive any member failures.  The difference is that now a member is made up of two physical disks that are protecting each other.  So can a RAID1/0 set lose a disk and continue running?  Absolutely – RAID1/0 can always survive one physical disk failure.

…But, can it survive two?  This is where it gets questionable.  If the second disk failure is the other half of the mirrored pair, the data is toast.  Just as toast as if RAID0 had lost one physical disk since the effect is the same.  But what if it doesn’t lose that specific disk?  What if it loses a disk that is part of another RAID1 pair?  No problem, everything keeps running.  In fact, in our example, we can lose 4 disks like this and keep running:

[Image: raid10_4fails – four failed disks, one in each mirrored pair, with the set still running]

You can lose as many as half of the disks in the RAID1/0 set and continue running, just as long as they are the right disks.  Again, if we lose two disks like this, ’tis a bad day:

[Image: raid10_2fails – both disks of a single mirrored pair failed, taking down the whole set]

So there are a few rules about the protection of RAID1/0 (the sketch after this list illustrates them):

  • RAID1/0 can always survive a single disk failure
  • RAID1/0 can survive multiple disk failures, so long as the disk failures aren’t within the same mirrored pair
  • With RAID1/0, data loss can occur with as few as two disk failures (if they are part of the same mirror pair) and is guaranteed to occur at (n/2)+1 failures, where n is the total disk count in the RAID1/0 set.
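A quick sketch of those rules – the pair names are obviously made up:

```python
def raid10_survives(mirrored_pairs, failed_disks):
    """The set survives as long as no mirrored pair has lost BOTH of its disks."""
    failed = set(failed_disks)
    return all(not (a in failed and b in failed) for a, b in mirrored_pairs)

pairs = [("d1", "d2"), ("d3", "d4"), ("d5", "d6"), ("d7", "d8")]
print(raid10_survives(pairs, ["d1", "d3", "d5", "d7"]))  # True  - four failures, all in different pairs
print(raid10_survives(pairs, ["d1", "d2"]))              # False - both halves of one pair
```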

Degraded and rebuild concepts are identical to RAID1 because the striping portion provides no protection and no rebuild ability.

  • Any mirror pair in degraded mode will see a write performance increase (splitting writes is no longer necessary), and potentially a read performance decrease.  Other mirror pairs continue to operate as normal.
  • Any mirror pair in rebuild mode will see a heavy performance penalty.  Other mirror pairs continue to operate as normal with no performance penalty.

Why not RAID0/1?

This is one of my favorite interview questions, and if you are interviewing with me (or at places I’ve been) this might give you a free pass on at least one technical question.  I picked it up from a colleague of mine and have used it ever since.

Why not RAID0/1?  Or is there even a concept of RAID0/1?  Would it be the same as RAID1/0?

It does exist, and it is extremely similar on the surface.  The only difference is the order of operations: RAID1/0 is mirrored, then striped, and RAID0/1 is striped, then mirrored.  This seemingly minor difference in theory actually manifests as a very large difference in practice.

[Image: raid01 – two RAID0 stripe sets mirrored against each other]

Most things about RAID0/1 are identical to RAID1/0 (like performance and usable capacity), with one notable exception – what happens during disk failure?

I covered the failure process of RAID1/0 above so I won’t rehash that.  For RAID0/1, remember that any failure of a RAID0 member invalidates the entire set.  So, what happens when the top left disk in RAID0/1 fails?  Yep, the entire top RAID0 set fails, and now we are effectively running as RAID0 using only the bottom set.

This has two implications.  The most severe is that while RAID0/1 can survive a single disk failure, after that first failure any failure in the surviving stripe set means total data loss – the remaining set is running with no protection at all, unlike RAID1/0 where the other mirrored pairs are still protected.  The other is that if a disk failed and a hot spare was available (or the bad disk was swapped out with a good disk), the rebuild affects the entire RAID set rather than just a portion of it.

It would be possible to design a RAID controller to get around this.  It could recognize that there is still a valid member available to continue running from in the second stripe set.  But then essentially what it is doing is trying to make RAID0/1 be like RAID1/0.  Why not just use RAID1/0 instead?  That is why RAID1/0 is a common implementation and RAID0/1 is not.

Wrap Up

In Part 4 I’m going to cover parity and hopefully RAID5 and 6, and then I’ll provide some notes to bring the entire discussion together.  However, I wanted to include some thoughts about RAID1/0 in case someone stumbled on this and had some specific questions or issues related to performance, simply because I’ve seen this a lot.

RAID1/0 performs more efficiently than other RAID types from a write perspective only.  A lot of people seem to think that RAID1/0 is “the fastest one,” and hence should always be used for performance applications.  This is demonstrably untrue.  As I’ve stated previously, there is no such thing as a read penalty for any RAID type.  If your application is entirely or mostly read oriented, using RAID1/0 instead of RAID5 or 6 does nothing but cost you money in the form of usable capacity.  And yes, there are workloads with enormous performance requirements that are 100% read.
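A common back-of-the-envelope sizing formula makes the point.  The per-spindle IOPS number and the 10-disk group below are assumptions purely for illustration:

```python
def frontend_iops(disks, iops_per_disk, read_fraction, write_penalty):
    """Rough front-end IOPS a group can deliver for a given read/write mix."""
    backend = disks * iops_per_disk
    return backend / (read_fraction + (1 - read_fraction) * write_penalty)

for name, penalty in [("RAID1/0", 2), ("RAID5", 4), ("RAID6", 6)]:
    reads  = frontend_iops(10, 180, 1.0, penalty)   # 100% read workload
    writes = frontend_iops(10, 180, 0.0, penalty)   # 100% write workload
    print(f"{name}: ~{reads:.0f} IOPS at 100% read, ~{writes:.0f} IOPS at 100% write")
```

At 100% read all three land in the same place; the gap only opens up as the write percentage climbs, which is exactly the point of the paragraph above.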

RAID1/0 has a massive usable capacity penalty.  If you are protecting data with RAID1/0, you need to purchase twice as much storage as the data needs.  If you are replicating that data like-for-like, you need to purchase four times as much.  Additionally, sometimes your jumping off point locks you into a RAID type, so a decision to use RAID1/0 today may impact the future costs of storage as well.  I can’t emphasize this point enough – RAID1/0 is extremely expensive and not always needed.

I like to think of people who always demand RAID1/0 as the person who brings a Ferrari when asked to “bring your best vehicle.”  But it turns out I needed to tow a trailer full of concrete blocks up a mountain.  Different vehicles are the best at different things…just like RAID types.  We need to fully understand the requirements before we bring the sports car.

If you are having performance problems, or more likely someone is telling you they are having performance problems, jumping from RAID5 to RAID1/0 may not do a thing for you.  It is important to do a detailed analysis of the ENTIRE storage environment and figure out what the best fit solution is.  You don’t want to be that guy who advocated a couple-hundred-thousand-dollar storage purchase when it turned out the real problem was a host misconfiguration.

RAID: Part 2 – Mirroring and Striping

In this post I’m going to cover two of the three patterns of RAID, along with RAID0 and RAID1.  Initially I was going to include RAID1/0, but this one got a bit too long, so I’ll break it out into its own post.

Mirroring

Mirroring is one of the more straightforward RAID patterns, and is employed in RAID1 and RAID1/0 schemes.  Mirroring is used for data protection.

Exactly as it sounds, mirroring involves pairs of disks (always pairs – exactly two) that are identical copies of each other.  In RAID1, a virtual disk is made up of two physical disks on the back end.  So as I put files on the virtual disk that my server sees, the array is essentially making two copies of them on the back end.

[Image: mirroring – a single 300GB disk on the left; on the right, a RAID1 pair of 300GB disks holding identical copies]

In this scenario imagine a server is writing the alphabet to disk.  The computer host starts with A and continues through I.  In the case of one physical disk on the left hand side, the writes end up on the single physical disk as expected.

On the right hand side, using the RAID1 pair, the computer host writes the same information, but on the storage array the writes are mirrored to two physical disks.  This pair must have identical contents at all times in order to guarantee the consistency and protection of RAID1.

The two copies are accomplished by write splitting, generally done by the RAID controller (the brains behind RAID protection – sometimes dedicated hardware, sometimes software).  The RAID controller detects a write coming into the system, realizes that the RAID type is 1, and splits the write onto the two physical disks of the pair.

How much capacity is available in both scenarios for the hosts?  300GB.  In the RAID1 configuration, even though there is a total of 600GB of space, 50% of the usable capacity is lost to RAID1.  Usable capacity is one of the key differentiators between the different RAID schemes.  Sometimes this is expressed in terms of usable and raw capacities – in this case there is 600GB of raw capacity, but only 300GB usable.  Again, the 50% factor comes into play.

Another important concept to start to understand is what is called the write penalty.  The write penalty of any RAID protection scheme (except RAID0, discussed below) exists because in order to protect data, the storage system must have the ability to recover or maintain it in the event of a physical failure. And in order to recover data, it must have some type of extra information.  There is no free lunch, and unfortunately no magic sauce (although some BBQ sauce comes close) (note: please do not try to combine your storage array with BBQ sauce).  If the array is going to protect data that hosts are writing, it must write additional information along with it that can be used for recovery.

Write penalties are expressed in a ratio of physical disk writes to host writes (or physical disk writes to virtual disk writes, or most accurately back-end writes to front-end writes).  RAID1 has a write penalty of 2:1.  This means that for every host write that comes in (a write coming in to the front-end of the storage array), the physical disks (on the back-end of the storage array) will see two writes.

You might be wondering, what about read penalties?  Read penalties don’t really exist in normal operations. This may be a good topic for another post but for now just take it on faith that the read penalty for every non-degraded RAID type is 1:1.

The protection factor here is pretty obvious, but let’s again discuss it using common terminology in the storage world.  RAID1 can survive a single disk failure.  By that I mean if one disk in the pair fails, the remaining good disk will continue to provide data service for both reads and writes.  If the second disk in the pair fails before the failed disk is replaced and rebuilt, the data is lost.  If the first disk is replaced and rebuilds without issue, then the pair returns to normal and can once again survive a single disk failure.  So when I say “can survive a single disk failure,” I don’t mean for the life of the RAID group – I mean at any given time assuming the RAID group is healthy.

Another important concept – what does degraded and rebuild mean to RAID1 from a performance perspective?

  • Degraded – In degraded mode, only one of the two disks exists.  So what happens for writes?  If you thought things got better, you are strangely but exactly right.  When only one disk of the pair exists, only one disk is available for writes.  The write penalty is now 1:1, so we see a performance improvement for writes (although an increased risk for data loss, since if the remaining disk dies all data is lost). There is a potential performance reduction for reads since we are only able to read from one physical disk (this can be obscured by cache, but then so can write performance).
  • Rebuild – During a rebuild, performance takes a big hit.  The existing drive must be fully mirrored onto the new drive from start to finish, meaning that the one good drive must be entirely read and the data written to the new partner.  And, because the second disk is in place, writes will typically start being split again so that pesky 2:1 penalty comes back into play.  And the disks must continue to service reads and writes from the host.  So for the duration of the rebuild, you can expect performance to suffer.  This is not unique to RAID1 – rebuild phases always negatively impact potential performance.

Striping

Striping is sometimes confused or combined with parity (which we’ll cover in RAID5 and RAID6) but it is not the same thing.  Striping is a process of writing data across several disks in sequence. RAID0 only uses striping, while the rest of the RAID types except RAID1 use striping in combination with mirroring or parity.  Striping is used to enhance performance.

[Image: striping – a single 300GB disk on the left; on the right, three 100GB disks in a RAID0 stripe, with a column showing the write order]

In this example the computer system is once again writing some alphabetic characters to disk.  It is writing to Logical Block Addresses, or sectors, or blocks, or whatever you like to imagine makes up these imaginary disks.  And these must be enormous letters because apparently I can only fit nine of them on a 300GB disk!  

On the left hand side there is a single 300GB physical disk.  As the host writes these characters, they are hitting the same disk over and over.  Obvious – there is only one disk!

What is the important thing to keep in mind here?  As mentioned in Part 1, generally the physical disk is going to be the slowest thing in the data path because it is a mechanical device.  There is a physical arm inside the disk that must be positioned in order to read or write data from a sector, and there is a physical platter (metal disk) that must rotate to a specific position.  And with just one disk here, that one arm and one platter must position itself for every write. The first write is C, which must fully complete before the next write A can be accomplished.

Brief aside – why the write order column?  The write order is to clarify something about this workload.  Workload is something that is often talked about in the storage world, especially around design and performance analysis.  It describes how things are utilizing the storage, and there are a lot of aspects of it – sequential vs random, read vs write, locality of reference, data skew, and many others. In this case I’m clarifying that the workload is random, because the host is never writing to consecutive slots.  If instead I wrote the data as A, B, C, D, E, F, G, H, I, this would be sequential.  I’ll provide some more information about random vs sequential in the RAID5/6 discussion.

On the right hand side there are three 100GB disks in a RAID0 configuration.  And once again it is writing the same character set.

This time, though, the writes are being striped across three physical disks.  So the first write C hits disk one, the second write A hits disk two, and the third write E hits disk 3.  What is the advantage?   The writes can now execute in parallel as long as they aren’t hitting the same physical disk.  I don’t need for the C write to complete before I start on A.  I just need C to complete before I start on B.

How about reads?  Yep, there is increased performance here as well.  Reads can also execute in parallel, assuming the locations being read are on different physical disks.

Effectively RAID0 has increased our efficiency of processing I/O operations by a factor of three.  I/O operations per second, or IOPS as they are commonly called, is a way to measure disk performance (e.g. faster disks like SAS can process roughly double the IOPS of NL-SAS or SATA disks).  And striping is a good way to bump up the IOPS a system is capable of producing for a given virtual disk set.

This is a good time to define some terminology around striping.  I wouldn’t necessarily say this is incredibly useful, but it can be a good thing to comprehend when comparing systems because these are some of the areas where storage arrays diverge from each other.

The black boxes outlined in green represent strips.  The red dotted line indicates a stripe.

  • Strip – A strip is a piece of one disk.  It is the largest “chunk” that can be written to any disk before the system moves on to the next disk in the group.  In our three disk example, a strip would be the area of one disk holding one letter.
  • Strip Size (also called stripe depth) – This is the size of a strip from a data perspective.  The size of all strips in any RAID group will always be equivalent.  On EMC VNX, this value is 64KB (some folks might balk at this having seen values of 128 – this is actually 128 blocks and a block is 512 bytes).  On VMAX this varies but (I believe) for most configurations the strip size is 256KB, and for some newer ones it is 128KB (will try to update this if/when I verify this).  A strip size of 64KB means that if I were to write 128KB starting at sector 0 of the first disk, the system would write 64KB to the disk before moving on to the next disk in the group.  And if the strip size were 128KB, the system would write the entire 128KB to disk before moving on to the next disk for the next bit of data.
  • Stripe – A stripe is a collection of strips across all disks that are “connected”, or more accurately seen as contiguous.  In our 3 disk example, if our strip size was 64KB, then the first strip on each disk would collectively form the first stripe.  The second strip on each disk would form the second stripe, and would be considered, from a logical disk perspective, to exist after the first stripe.  So the order of consecutive writes would go Stripe1-Strip1, Stripe1-Strip2, Stripe1-Strip3, Stripe2-Strip1, Stripe2-Strip2, etc.
  • Stripe Width – this is how many data disks are in a stripe.  In RAID0 this is all of them because disks only hold data, but for other RAID types this is a bit different.  In our example we have a stripe width of 3.
  • Stripe Size – This is stripe width x strip size.  So in our example if the strip size is 64KB, the stripe size is 64KB x 3 or 192KB

Note: these are what I feel are generally accepted terms.  However, these terms get mixed up A LOT.  If you are involved in a discussion around them or are reading a topic, keep in mind that what someone else is calling stripe size might not be what you are thinking it is.  For example, a 4+1 RAID5 group has five disks, but technically has a stripe width of 4.  Some people would say it has a stripe width of five.  In my writing I will always try to maintain these definitions for these terms.
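To tie the terms together, here is how a byte address on the virtual disk maps onto them, assuming the 3-disk RAID0 example with a 64KB strip size:

```python
STRIP_SIZE   = 64 * 1024      # strip size (stripe depth)
STRIPE_WIDTH = 3              # data disks in the stripe

def locate(byte_offset):
    strip_number = byte_offset // STRIP_SIZE      # which strip overall, counting across the group
    stripe       = strip_number // STRIPE_WIDTH   # which stripe (row)
    disk         = strip_number % STRIPE_WIDTH    # which disk within the group
    offset       = byte_offset % STRIP_SIZE       # where inside that strip
    return stripe, disk, offset

print(locate(0))            # (0, 0, 0)     -> Stripe1-Strip1
print(locate(64 * 1024))    # (0, 1, 0)     -> Stripe1-Strip2
print(locate(200 * 1024))   # (1, 0, 8192)  -> wraps around into Stripe2-Strip1
```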

Since I defined some common terms above in the RAID1 section, let’s look at them again from the perspective of RAID0.

First, usable capacity.  RAID0 is unique because no protection information is written.  Because of this, there is no usable capacity penalty.  If I combine five 1TB disks in a RAID0 group, I have 5TB usable.  In RAID0, raw always equals usable.

How about write penalty?  Once again we have a unique situation on our hands.  Every front end write only hits one physical disk, so there is no write penalty – or the write penalty can be expressed as 1:1.

“Amazing!” you might be thinking.  Actually, probably not, because you have probably realized what the big issue with RAID0 is.  Just to be sure, let’s discuss protection factor.  RAID0 does not write any protection information in any way, hence it provides no protection.  This means that the failure of any member in a RAID0 group immediately and instantly invalidates the entire group (this is an important concept for later, so make sure you understand it).  If you have two disks in a RAID0 configuration and one disk fails, all data on both disks is unusable.  If you have 30 disks in a RAID0 configuration and one disk fails, all data on all 30 disks is unusable.  Any RAID0 configuration can survive zero failed disks.  If you have a physical failure in RAID0, you better have a backup somewhere else to restore from.

How about the degraded and rebuild concepts?  Good news everyone!  No need to worry ourselves with these concepts because neither of these things will ever happen.  A degraded RAID0 group is a dead RAID0 group.  And a rebuild is not possible because RAID0 does not write information that allows for recovery.

So, why do we care about RAID0?  For the most part, we don’t.  If you run RAID0 through the googler, you’ll find it is discussed a lot for home computer performance and benchmarking.  It is used quite infrequently in enterprise contexts because the performance benefit is outweighed by the enormous protection penalty.  The only places I’ve seen it used are for things like local tempdb for SQL server (note: I’m not a DBA and haven’t even played one on TV, but this is still generally a bad idea.  TempDB failure doesn’t affect your data, but I believe it does cause SQL to stop running…). 

We do care about RAID0, and more specifically the striping concept, because it is used in every other RAID type we will discuss.  To say that RAID0 doesn’t protect data isn’t really fair to it.  It is more accurate to say that striping is used to enhance performance, and not to protect data.  And it happens that RAID0 only uses striping.  It’s doing what it is designed to do.  Poor RAID0.

Neither of these RAID types are used very often in the world of enterprise storage, and in the next post I’ll explain why as I cover RAID1/0.

RAID: Part 1 – General Overview

For my first foray into the tech blogging world, I wanted to have a discussion on the simple yet incredibly complex subject of RAID.  Part 1 will not be technical, and instead hopefully provide some good footing on which to build.

For the purposes of this discussion I’m only going to focus on RAID 0, 1, 1/0 (called “RAID one zero” or more commonly “RAID ten”), 5, and 6.  These are generally the most common RAID types in use today, and the ones available for use on an EMC VNX platform.  Newcomers may feel daunted by the many types of RAID…I know I was. I spent some time memorizing a one line definition of what they mean. While this may be handy for a job interview, a far more valuable use of time would be to memorize how they work!  You can always print out a summary and hang it next to your desk.

I’ve found RAID to be one of the more interesting topics in the storage world because it seems to be one of the more misunderstood, or at least not fully understood, concepts – yet it is probably one of the most widely used.  Almost every storage array uses RAID in some form or another.  Often I deal with questions like:

  • Why don’t we just use RAID 1/0 since it is the fastest?
  • Why don’t I just want to throw all my disks into one big storage pool?
  • RAID6 for NLSAS is a good suggestion, but RAID5 isn’t too much different right?
  • RAID6 gives two disk failure protection, why would anyone use RAID5 instead?
  • Isn’t RAID6 too slow for anything other than backups?

Most of these questions really just stem from not understanding the purpose of RAID and how the types work.

In this post we’ll tackle the most basic of questions – what does RAID do, and why would I want to use RAID?

What does RAID do?

RAID is an acronym for Redundant Array of Independent (used to be Inexpensive, but not so much anymore) Disks.  The easiest way to think of RAID is a group of disks that are combined together into one virtual disk.

[Image: raid_example – five 200GB disks combined into one 1000GB virtual disk]

If I had five 200GB disks, and “RAIDed” them together, it would be like I had one 1000GB disk.  I could then allocate capacity from that 1000GB disk.

Why would I want to use RAID?

RAID serves at least three purposes – protection, capacity, and performance.

Protection from Physical Failures

With the exception of RAID 0 (I’ll discuss the types later), the other RAID versions listed will protect you against at least one disk failure in the group.  In other words, if a hard drive suffers a physical failure, not only can you continue running (possibly with a performance impact), but you won’t lose any data.  A RAID group that has suffered a failure but is continuing to run is generally known as degraded.  What this means is a little different for each type so we’ll cover those details later.  When the failed disk is replaced with a functional disk, some type of rebuild operation will commence, and when complete the RAID group will return to normal status without issue.

Most enterprise storage arrays, and many enterprise servers, allow you to implement what is commonly known as a hot spare.  A hot spare is a disk that is running in a system, but not currently in use.  The idea behind a hot spare is to reduce restore time.  If a disk fails and you have to:

  1. Wait for a human to recognize the failure
  2. Open a service request for a replacement
  3. Wait for the replacement to be shipped
  4. Have someone physically replace the disk

That is potentially a long period of time that I am running in degraded mode. Hence the hot spare concept. With a hot spare in the system, when the disk fails, a spare is instantly available and rebuild starts.  Once the rebuild is finished, the RAID group returns to normal.  The failed disk is no longer a part of any active RAID group, and itself can be seen as a spare, unused disk in the system (though obviously not a hot spare because it is failed!).  Eventually it will be replaced, but because it isn’t involved in data service there is less of a critical business need to replace it.

An important and sometimes hazy concept, especially with desktops, is that RAID only protects you against physical failures.  It does not protect you against logical corruption.  As a simple example, if I protect your computer’s hard drives with RAID1 and one of those drives dies, you are protected. If instead you accidentally delete a critical file, RAID will do nothing for you.  In this situation, you need to be able to recover the file through the file system if possible, or restore from a backup.  There are a lot of types of logical corruption, and rest assured that RAID will not protect you from any of them.

Capacity

There are two capacity related benefits to RAID.  Note that there is generally also a capacity penalty that comes along with RAID, but we will discuss that when we get into the types.

Aggregated Usable Capacity

Continuing the example above with the five 200GB disks, if you were to come ask me for storage, without RAID the most I could give you would be a 200GB disk.  I might be able to give you multiple 200GB disks, and you might be able to combine those through a volume manager, but as a storage admin I could only present you one 200GB disk.

What if you need a terabyte of space?  I’d have to give you all five separate disks, and then you’d have to do some volume management on your end to put them together.

With RAID, I can assemble those together on the back end as a virtual device, and present it as one contiguous address space to a host.  As an example, 2TB datastores are fairly common in ESX, and I would venture to say a lot of those datastores run on disk drives much smaller than 2TB.  Maybe it is a 10 or 20 disk 600GB SAS pool, and we have allocated 2TB out of that for the ESX datastore.

Aggregated Free Space

Think about the hard drive in your computer.  It is likely that you’ve got some amount of free capacity on it.  Let’s say you have a 500GB hard drive with 200GB of free space.

Now let’s think about five computers with the same configuration.  500GB hard drives, 200GB free on each.  This means that we are not using 1000GB of space overall, but because it is dedicated to each individual computer, we can’t do anything with it.

[Image: freespace – five computers, each with a 500GB drive and 200GB free, compared with the same drives pooled together]

If instead we took those 500GB hard drives and grouped them, we could then have a sum total of 2500GB to work with and hand out.  Now perhaps it doesn’t make sense to give all users 300GB of capacity, since that is what they are using and they would be out of space…but perhaps we could give them 400GB instead.

Now we’ve allocated (also commonly known as “carving”) five 400GB virtual disks (also commonly known as LUNs) out of our 2500GB pool, leaving us 500GB of free space to work with.  Essentially, by pooling the resources we’ve gained the ability to hand out one additional virtual disk without adding another physical drive.

Performance

Performance of disk based storage is largely based on how many physical spindles are backing it (this changes with EFD and large cache models, but that is for another discussion).  A hard drive is a mechanical device, and is generally the slowest thing in the data path.  Ergo, the more I can spread your data request (and all data requests) out over a bunch of hard drives, the more performance I’m going to be able to leverage.

If you need 200GB of storage and I give you one 200GB physical disk, that is one physical spindle  backing your storage.  You are going to be severely limited on how much performance you can squeeze out of that hard drive.

If instead I allocate your 200GB of space out of a RAID group or pool, now I can give you a little bit of space on multiple disks.  Now your virtual disk storage is backed by many physical spindles, and in turn you will get a lot more performance out of it.

It should be said that this blog is focused on enterprise storage arrays, but some of the benefits listed above apply to any RAID controller, even one in a server or workstation.  The aggregated free space benefit, and in most scenarios the performance benefit, only apply to shared storage arrays.

Hopefully this was a good high level introduction to the why’s of RAID.  In the next post I will cover the how’s of RAID 1 and 0.