RAID: Part 3 – RAID 1/0

So, if you have been following along, dear reader, we are now up to speed on several things.  We have discussed mirroring (and RAID1, which leverages it) and striping (and RAID0, which leverages that).  We have also discussed RAID types using some familiar, standard terminology that will allow us to compare and contrast the versions moving forward.

Now, on to the big dog of RAID – RAID 1/0.  This is called “RAID one zero” and “RAID ten,” and sometimes “RAID one plus zero” (and written as RAID 1+0).  I have never heard it called “RAID one slash zero,” but perhaps somebody somewhere does that too.  All of these names refer to the same thing, and “RAID ten” is the most common of them.

Why do we need RAID1/0?

In this section I wanted to ask a sometimes-overlooked question – what are the problems with RAID0 and RAID1 that cause people to need something else?

If you know about RAID0 (or, even better, if you read Part 2) you should have an excellent idea of its failings.  Just to reiterate, the problem with RAID0 is that it only leverages striping, and striping only provides a performance enhancement.  It provides nothing in the way of protection, hence any disk that fails in a RAID0 set will invalidate the entire set. RAID0 is the ticking time bomb of the storage world.

RAID1’s problems aren’t quite as obvious as the “one disk failure = worst day ever” of RAID0, but once again let’s go back to Part 1 and look at the benefits I listed of RAID:

  1. Protection – RAID (except RAID0) provides protection against physical failures.  Does RAID1 provide that?  Absolutely – RAID1 can survive a single disk failure.  Check box checked.
  2. Capacity – RAID also provides a benefit of capacity aggregation.  Does RAID1 provide that?  Not at all.  RAID1 provides no aggregate capacity or aggregate free space benefit because there are always exactly two disks in a RAID1 pair, and the usable capacity penalty is 50%.  Whether I have a RAID1 set using a 600GB drive or a 3TB drive, I get no aggregate capacity benefit with RAID1, beyond the idea of just splitting a disk up into logical partitions…which can be done on a single disk without RAID in the first place.
  3. Performance – RAID provides a performance benefit since it is able to leverage additional physical spindles.  Does RAID1 provide that?  The answer is yes…sort of.  It does provide two spindles instead of one, which fits the established definition.  However, there are some caveats.  There isn’t a performance boost on writes because of the 2:1 write penalty (both spindles are used for every single write).  There is a performance boost on reads because the controller can effectively round-robin read requests back and forth between the disks.  But, and a BIG BUT, there are only two spindles.  There are only ever going to be two spindles.  Unlike a RAID0 set, which can have as many disks as I want to risk my data over, a RAID1 set is performance bound to exactly two spindles.  (A tiny sketch of the read round-robin follows this list.)
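
To make the round-robin idea concrete, here is a minimal sketch in Python of how reads can be shared between the two members of a mirrored pair while every write still hits both.  The disk names and the simple alternation policy are illustrative assumptions, not any vendor’s actual scheduling logic.

```python
# Illustrative only: a toy RAID1 pair that round-robins reads and splits writes.
from itertools import cycle

class MirroredPair:
    def __init__(self):
        self.disks = ["disk0", "disk1"]      # always exactly two members
        self._next = cycle(range(2))         # alternate reads between them

    def read(self, lba):
        disk = self.disks[next(self._next)]  # either copy can satisfy a read
        return f"read LBA {lba} from {disk}"

    def write(self, lba, data):
        # every write is split to BOTH disks: the 2:1 write penalty
        return [f"wrote LBA {lba} to {d}" for d in self.disks]

pair = MirroredPair()
for lba in range(4):
    print(pair.read(lba))    # reads alternate disk0, disk1, disk0, disk1
print(pair.write(0, "A"))    # a single host write lands on both disks
```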

Essentially the problem with the mirrored pair is just that – there are only ever going to be two physical disks.

By now it may have become obvious, but RAID0 and RAID1 are almost polar opposites.  RAID1’s benefit lies mostly in protection, and RAID0’s benefits are performance and capacity.  RAID1 is the stoic peanut butter, and RAID0 is the delicious jelly.  If only there were a way to leverage them both….

What is RAID1/0?

RAID1/0 is everything you wanted out of RAID0 and RAID1. It is the peanut butter and jelly sandwich.  (Note: please do not attempt to combine your storage array with peanut butter or jelly.  Especially chunky peanut butter.  And even more especiallyer chunky jelly)

Essentially RAID1/0 looks like a combination of RAID1 and RAID0, hence the label.  More accurately, it is a combination of mirroring and striping, in that order.  RAID1/0 replaces the individual disks of a RAID0 stripe set with RAID1 mirror pairs.  It is also important to understand what RAID1/0 is and what it is not.  It is true that it leverages the good things from both RAID types, but it also retains the bad things of both.  This will become apparent as we dive into it.

[Diagram: an eight-disk RAID1/0 configuration]

This is a busy image, but bear with me as I break it down.

  • This is an eight-disk RAID1/0 configuration, and (similar to the Part 2 examples) we are writing A, B, C, D to it.  For simplicity’s sake we ignore write order and just go alphabetically.
  • The orange and green boxes help indicate what is happening at their particular parts of the diagram.
  • The physical disks themselves (the black boxes) are in mirrored pairs that should hopefully be familiar by now (indicated by the green boxes and plus signs).  This is the same RAID1 config that I’ve covered previously.
  • The weirdness picks up at the orange part.  The orange box indicates that we are striping across every mirrored pair.  This is also identical to the RAID0 configuration, except that the physical disks of the RAID0 config have been replaced with these RAID1 pairs.

This is what is meant by RAID1/0.  First comes RAID1 – we build mirrored pairs.  Then comes RAID0 – we stripe data across the members, which happen to be those mirrored pairs.  It may help to think about RAID1/0 as RAID0 with an added level of protection at the member level (since we know RAID0 provides no protection otherwise).
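
As a quick illustration of that ordering, here is a tiny Python sketch of the structure of an eight-disk RAID1/0 group.  The disk names and the pairing order are assumptions made for the example, not anything a real array dictates.

```python
# Illustrative only: the structure of an eight-disk RAID1/0 group.
disks = [f"disk{i}" for i in range(8)]

# First comes RAID1: group the physical disks into mirrored pairs.
mirror_pairs = [tuple(disks[i:i + 2]) for i in range(0, len(disks), 2)]

# Then comes RAID0: the members of the stripe set are those pairs,
# not individual disks.
for member_number, pair in enumerate(mirror_pairs):
    print(f"stripe member {member_number}: {pair[0]} + {pair[1]}")
```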

As the host writes A, B, C, D, the diagram indicates where the data will land, but let’s cover the order of operations (a simplified sketch of the write path follows the list).

  1. The host writes A to the RAID1/0 set.
  2. A is intercepted by the RAID controller.  The particular strip it is targeted for is identified.
  3. The strip is recognized to be on a mirrored pair, and due to the mirror configuration the write is split.
  4. A lands on both disks that make up the first member of the RAID0 set.
  5. Once the write is confirmed on both disks, the write is acknowledged back to the host as completed.
  6. The host writes B to the RAID1/0 set.
  7. B is intercepted by the RAID controller.  The particular strip it is targeted for is identified.  Due to the mirror configuration the write is split.
  8. B lands on both disks that make up the second member of the RAID0 set.
  9. Once the write is confirmed on both disks, the write is acknowledged back to the host as completed.
  10. The host writes C to the RAID1/0 set.
  11. etc.
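
Here is the promised sketch of that write path.  It is a simplified, hypothetical model in Python – pick the member, split the write across both disks of the pair, acknowledge only after both copies are confirmed – and not any vendor’s actual controller logic.

```python
# Hypothetical RAID1/0 write path: pick the stripe member, split the write
# across both disks of that mirrored pair, then acknowledge the host.
mirror_pairs = [("disk0", "disk1"), ("disk2", "disk3"),
                ("disk4", "disk5"), ("disk6", "disk7")]

def write_to_raid10(letter, write_number):
    pair = mirror_pairs[write_number % len(mirror_pairs)]  # target strip/member
    confirmations = [f"{letter} confirmed on {disk}" for disk in pair]  # mirror split
    # only after both back-end writes are confirmed is the host acknowledged
    return confirmations, f"{letter} acknowledged to host"

for n, letter in enumerate("ABCD"):
    backend, ack = write_to_raid10(letter, n)
    print(backend, "->", ack)
```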

Hopefully this gives an accurate, comprehensible version of the hows of RAID1/0.  Now, let’s look at RAID1/0 using the same terminology we’ve been using.

From a usable capacity perspective, RAID1/0 maintains the same penalty as RAID1.  Because every member is a RAID1 pair, and every RAID1 pair has a 50% capacity penalty, it stands to reason that RAID1/0 also has a 50% capacity penalty as a whole.  No matter how many members are in a RAID1/0 group, the usable capacity penalty is always 50%.

The write penalty sings a similar tune.  Because every member is a RAID1 pair, and every RAID1 pair has a 2:1 write penalty, RAID1/0 as a whole also has a write penalty of 2:1.  Again, no matter how many members are in the set, the write penalty is always 2:1.
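
A few lines of arithmetic make the point that neither number moves as the group grows.  The 600GB disk size and the pair counts below are just example figures.

```python
# Rough arithmetic: RAID1/0 capacity penalty and write penalty are constant
# regardless of how many mirrored pairs are in the group.
def raid10_summary(pair_count, disk_size_gb):
    raw_gb = pair_count * 2 * disk_size_gb   # every member is two physical disks
    usable_gb = raw_gb // 2                  # mirroring costs half the raw space
    write_penalty = "2:1"                    # each host write becomes two disk writes
    return raw_gb, usable_gb, write_penalty

for pairs in (2, 4, 8):
    raw, usable, penalty = raid10_summary(pairs, 600)
    print(f"{pairs} pairs: {raw}GB raw, {usable}GB usable, write penalty {penalty}")
```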

RAID1/0 reminds me of the Facts of Life. You know, you take the good, you take the bad?  RAID1/0 is a leap up from RAID0 and RAID1, but it doesn’t mean that we’ve gotten rid of their problems.  It is better to think that we’ve worked around their problems.  The same usable capacity penalty exists, but now I have the ability to aggregate capacity by putting more and more members into a RAID1/0 configuration.  The same write penalty exists, but again I can now add more spindles to the RAID1/0 configuration for a performance boost.

The protection factor is weird, but still a combination of the two.  How many disk failures can a RAID1/0 set survive?  The answer is: it depends.  There is still striping on the outer layer, and by now we have beaten the dead horse enough to know that RAID0 can’t lose any physical disks.  It is a little clearer, especially for this transition, to think of this concept as: RAID0 can’t survive any member failures, and in traditional RAID0 the members are physical disks.  In this respect, RAID1/0 is the same: RAID1/0 can’t survive any member failures.  The difference is that now a member is made up of two physical disks that are protecting each other.  So can a RAID1/0 set lose a disk and continue running?  Absolutely – RAID1/0 can always survive one physical disk failure.

…But can it survive two?  This is where it gets questionable.  If the second disk failure is the other half of the mirrored pair, the data is toast.  Just as toast as if RAID0 had lost one physical disk, since the effect is the same.  But what if it doesn’t lose that specific disk?  What if it loses a disk that is part of another RAID1 pair?  No problem, everything keeps running.  In fact, in our example, we can lose four disks like this and keep running:

[Diagram: the RAID1/0 set surviving four failed disks, one from each mirrored pair]

You can lose as many as half of the disks in the RAID1/0 set and continue running, just as long as they are the right disks.  Again, if we lose two disks like this, ’tis a bad day:

[Diagram: the RAID1/0 set failing after losing both disks of one mirrored pair]

So there are a few rules about the protection of RAID1/0:

  • RAID1/0 can always survive a single disk failure
  • RAID1/0 can survive multiple disk failures, so long as the disk failures aren’t within the same mirrored pair
  • With RAID1/0, data loss can occur with as few as two disk failures (if they are part of the same mirror pair) and is guaranteed to occur at (n/2)+1 failures, where n is the total disk count in the RAID1/0 set.  (A small check of these rules in code follows this list.)
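
Here is the small check mentioned above.  It encodes nothing more than the rule that a RAID1/0 set dies only when both disks of the same mirrored pair are gone; the disk numbering and pairing match the eight-disk example and are otherwise arbitrary.

```python
# Does a RAID1/0 set survive a given set of failed disks?
def raid10_survives(failed_disks, mirror_pairs):
    # survival requires that no mirrored pair has lost BOTH of its disks
    return all(not set(pair).issubset(failed_disks) for pair in mirror_pairs)

pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
print(raid10_survives({0}, pairs))            # True  - any single failure is survivable
print(raid10_survives({0, 2, 4, 6}, pairs))   # True  - four failures, one per pair
print(raid10_survives({0, 1}, pairs))         # False - both halves of one pair = data loss
```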

Degraded and rebuild concepts are identical to RAID1 because the striping portion provides no protection and no rebuild ability.

  • Any mirror pair in degraded mode will see a write performance increase (splitting writes is no longer necessary), and potentially a read performance decrease.  Other mirror pairs continue to operate as normal.
  • Any mirror pair in rebuild mode will see a heavy performance penalty.  Other mirror pairs continue to operate as normal with no performance penalty.

Why not RAID0/1?

This is one of my favorite interview questions, and if you are interviewing with me (or at places I’ve been) this might give you a free pass on at least one technical question.  I picked it up from a colleague of mine and have used it ever since.

Why not RAID0/1?  Or is there even a concept of RAID0/1?  Would it be the same as RAID1/0?

It does exist, and it is extremely similar on the surface.  The only difference is the order of operations: RAID1/0 is mirrored, then striped, and RAID0/1 is striped, then mirrored.  This seemingly minor difference in theory actually manifests as a very large difference in practice.

[Diagram: a RAID0/1 configuration – two stripe sets mirrored against each other]

Most things about RAID0/1 are identical to RAID1/0 (like performance and usable capacity), with one notable exception – what happens during disk failure?

I covered the failure process of RAID1/0 above, so I won’t rehash that.  For RAID0/1, remember that any failure of a RAID0 member invalidates the entire set.  So, what happens when the top-left disk in RAID0/1 fails?  Yep, the entire top RAID0 set fails, and now the array is effectively running as RAID0 using only the bottom set.

This has two implications.  The most severe is that after that single disk failure, the surviving stripe set is running with no protection at all – the failure of any disk in the other set means data loss, so RAID0/1’s tolerance for a second failure is far worse than RAID1/0’s.  The other is that if a disk failed and a hot spare was available (or the bad disk was swapped out with a good disk), the rebuild affects the entire stripe set rather than just one mirrored pair.

It would be possible to design a RAID controller to get around this.  It could recognize that there is still a valid member available in the second stripe set to continue running from.  But then essentially what it would be doing is trying to make RAID0/1 behave like RAID1/0.  Why not just use RAID1/0 instead?  That is why RAID1/0 is a common implementation and RAID0/1 is not.
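
To contrast the two layouts, here is the same kind of survival check for a naive RAID0/1 controller (one without the member-level cleverness described above).  The eight-disk grouping is an assumption mirroring the earlier example.

```python
# Does a naive RAID0/1 set survive a given set of failed disks?
def raid01_survives(failed_disks, stripe_sets):
    # any disk failure invalidates its whole stripe set, so the data survives
    # only while at least one stripe set is completely intact
    return any(not (set(stripe) & failed_disks) for stripe in stripe_sets)

stripe_sets = [(0, 1, 2, 3), (4, 5, 6, 7)]      # top set mirrored to bottom set
print(raid01_survives({0}, stripe_sets))         # True  - bottom set still intact
print(raid01_survives({0, 4}, stripe_sets))      # False - one failure in each set
print(raid01_survives({0, 1, 2}, stripe_sets))   # True  - all failures in the same set
```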

Wrap Up

In Part 4 I’m going to cover parity and hopefully RAID5 and 6, and then I’ll provide some notes to bring the entire discussion together.  However, I wanted to include some thoughts about RAID1/0 in case someone stumbled on this and had some specific questions or issues related to performance, simply because I’ve seen this a lot.

RAID1/0 performs more efficiently than other RAID types from a write perspective only.  A lot of people seem to think that RAID1/0 is “the fastest one,” and hence should always be used for performance applications.  This is demonstrably untrue.  As I’ve stated previously, there is no such thing as a read penalty for any RAID type.  If your application is entirely or mostly read oriented, using RAID1/0 instead of RAID5 or 6 does nothing but cost you money in the form of usable capacity.  And yes, there are workloads with enormous performance requirements that are 100% read.

RAID1/0 has a massive usable capacity penalty.  If you are protecting data with RAID1/0, you need to purchase twice as much storage as the data requires.  If you are replicating that data like-for-like, you need to purchase four times as much.  Additionally, your starting point sometimes locks you into a RAID type, so a decision to use RAID1/0 today may impact the future costs of storage as well.  I can’t emphasize this point enough – RAID1/0 is extremely expensive and not always needed.

I like to think of people who always demand RAID1/0 as being like the person who brings a Ferrari when asked to “bring your best vehicle.”  But it turns out I needed to tow a trailer full of concrete blocks up a mountain.  Different vehicles are best at different things…just like RAID types.  We need to fully understand the requirements before we bring the sports car.

If you are having performance problems, or more likely someone is telling you they are having performance problems, jumping from RAID5 to RAID1/0 may not do a thing for you.  It is important to do a detailed analysis of the ENTIRE storage environment and figure out what the best-fit solution is.  You don’t want to be that guy who advocated a couple hundred thousand dollars’ worth of storage purchases when it turned out the problem was a host misconfiguration.

RAID: Part 2 – Mirroring and Striping

In this post I’m going to cover two of the three patterns of RAID, and cover RAID0 and RAID1.  Initially I was going to include RAID1/0, but this one got a bit too long, so I’ll break that out into its own post.

Mirroring

Mirroring is one of the more straightforward RAID patterns, and is employed in RAID1 and RAID1/0 schemes.  Mirroring is used for data protection.

Exactly as it sounds, mirroring involves pairs of disks (always pairs – exactly two) that are identical copies of each other.  In RAID1, a virtual disk is composed of two physical disks on the back end.  So as I put files on the virtual disk that my server sees, the array is essentially making two copies of them on the back end.

[Diagram: a single 300GB disk vs. a RAID1 mirrored pair of 300GB disks]

In this scenario imagine a server is writing the alphabet to disk.  The computer host starts with A and continues through I.  In the case of the one physical disk on the left-hand side, the writes end up on the single physical disk as expected.

On the right-hand side, using the RAID1 pair, the computer host writes the same information, but on the storage array the writes are mirrored to two physical disks.  This pair must have identical contents at all times in order to guarantee the consistency and protection of RAID1.

The two copies are accomplished by write splitting, generally done by the RAID controller (the brains behind RAID protection – sometimes dedicated hardware, sometimes software).  The RAID controller detects a write coming into the system, realizes that the RAID type is 1, and splits the write onto the two physical disks of the pair.

How much capacity is available to the hosts in both scenarios?  300GB.  In the RAID1 configuration, even though there is a total of 600GB of space, 50% of the raw capacity is lost to RAID1.  Usable capacity is one of the key differentiators among the different RAID schemes.  Sometimes this is expressed in terms of usable and raw capacities – in this case there is 600GB of raw capacity, but only 300GB usable.  Again, the 50% factor comes into play.
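
Here is that arithmetic spelled out; the 300GB figure is just the example size used in the diagram.

```python
# Raw vs. usable capacity for the RAID1 pair in the example above.
disk_size_gb = 300
raw_gb = 2 * disk_size_gb        # two physical 300GB disks behind the virtual disk
usable_gb = raw_gb // 2          # mirroring keeps only one copy's worth of space
print(raw_gb, usable_gb)         # 600 raw, 300 usable -> the 50% penalty
```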

Another important concept to start to understand is what is called the write penalty.  The write penalty of any RAID protection scheme (except RAID0, discussed below) exists because in order to protect data, the storage system must have the ability to recover or maintain it in the event of a physical failure. And in order to recover data, it must have some type of extra information.  There is no free lunch, and unfortunately no magic sauce (although some BBQ sauce comes close) (note: please do not try to combine your storage array with BBQ sauce).  If the array is going to protect data that hosts are writing, it must write additional information along with it that can be used for recovery.

Write penalties are expressed in a ratio of physical disk writes to host writes (or physical disk writes to virtual disk writes, or most accurately back-end writes to front-end writes).  RAID1 has a write penalty of 2:1.  This means that for every host write that comes in (a write coming in to the front-end of the storage array), the physical disks (on the back-end of the storage array) will see two writes.
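
A toy counter makes the ratio obvious.  The disk names and the alphabet workload are illustrative only.

```python
# Count front-end (host) writes vs. back-end (physical disk) writes for RAID1.
mirror_pair = ["disk0", "disk1"]
front_end_writes = 0
back_end_writes = 0

for letter in "ABCDEFGHI":           # the host writes the alphabet
    front_end_writes += 1            # one write arrives at the front end
    for disk in mirror_pair:         # the controller splits it to both disks
        back_end_writes += 1

print(front_end_writes, back_end_writes)   # 9 host writes -> 18 disk writes, i.e. 2:1
```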

You might be wondering, what about read penalties?  Read penalties don’t really exist in normal operations. This may be a good topic for another post but for now just take it on faith that the read penalty for every non-degraded RAID type is 1:1.

The protection factor here is pretty obvious, but let’s again discuss it using common terminology in the storage world.  RAID1 can survive a single disk failure.  By that I mean if one disk in the pair fails, the remaining good disk will continue to provide data service for both reads and writes.  If the second disk in the pair fails before the failed disk is replaced and rebuilt, the data is lost.  If the first disk is replaced and rebuilds without issue, then the pair returns to normal and can once again survive a single disk failure.  So when I say “can survive a single disk failure,” I don’t mean for the life of the RAID group – I mean at any given time assuming the RAID group is healthy.

Another important concept – what do degraded and rebuild mean to RAID1 from a performance perspective?

  • Degraded – In degraded mode, only one of the two disks exists.  So what happens for writes?  If you thought things got better, you are strangely but exactly right.  When only one disk of the pair exists, only one disk is available for writes.  The write penalty is now 1:1, so we see a performance improvement for writes (although an increased risk for data loss, since if the remaining disk dies all data is lost). There is a potential performance reduction for reads since we are only able to read from one physical disk (this can be obscured by cache, but then so can write performance).
  • Rebuild – During a rebuild, performance takes a big hit.  The existing drive must be fully mirrored onto the new drive from start to finish, meaning that the one good drive must be entirely read and the data written to the new partner.  And, because the second disk is in place, writes will typically start being split again, so that pesky 2:1 penalty comes back into play.  And the disks must continue to service reads and writes from the host.  So for the duration of the rebuild, you can expect performance to suffer.  This is not unique to RAID1 – rebuild phases always negatively impact potential performance.  (A rough back-of-envelope estimate follows this list.)
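
Here is the rough estimate mentioned above.  The copy rate is an assumed, made-up number – real rebuild rates depend on the disk, the controller, and how much competing host I/O there is – so treat this as back-of-envelope only.

```python
# Back-of-envelope RAID1 rebuild time: the whole surviving disk must be read
# and written to its new partner. The throughput figure is an assumption.
disk_size_gb = 600
assumed_copy_rate_mb_per_s = 100            # hypothetical sustained rebuild rate

rebuild_seconds = (disk_size_gb * 1024) / assumed_copy_rate_mb_per_s
print(f"~{rebuild_seconds / 3600:.1f} hours to rebuild, before host I/O slows it down")
```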

Striping

Striping is sometimes confused or combined with parity (which we’ll cover in RAID5 and RAID6) but it is not the same thing.  Striping is a process of writing data across several disks in sequence. RAID0 only uses striping, while the rest of the RAID types except RAID1 use striping in combination with mirroring or parity.  Striping is used to enhance performance.

[Diagram: a single 300GB disk vs. three 100GB disks in a RAID0 configuration]

In this example the computer system is once again writing some alphabetic characters to disk.  It is writing to Logical Block Addresses, or sectors, or blocks, or whatever you like to imagine makes up these imaginary disks.  And these must be enormous letters because apparently I can only fit nine of them on a 300GB disk!  

On the left-hand side there is a single 300GB physical disk.  As the host writes these characters, they are hitting the same disk over and over.  Obvious – there is only one disk!

What is the important thing to keep in mind here?  As mentioned in Part 1, generally the physical disk is going to be the slowest thing in the data path because it is a mechanical device.  There is a physical arm inside the disk that must be positioned in order to read or write data from a sector, and there is a physical platter (metal disk) that must rotate to a specific position.  And with just one disk here, that one arm and one platter must position themselves for every write.  The first write is C, which must fully complete before the next write A can be accomplished.

Brief aside – why the write order column?  The write order is to clarify something about this workload.  Workload is something that is often talked about in the storage world, especially around design and performance analysis.  It describes how things are utilizing the storage, and there are a lot of aspects of it – sequential vs random, read vs write, locality of reference, data skew, and many others. In this case I’m clarifying that the workload is random, because the host is never writing to consecutive slots.  If instead I wrote the data as A, B, C, D, E, F, G, H, I, this would be sequential.  I’ll provide some more information about random vs sequential in the RAID5/6 discussion.

On the right-hand side there are three 100GB disks in a RAID0 configuration.  And once again the host is writing the same character set.

This time, though, the writes are being striped across three physical disks.  So the first write C hits disk one, the second write A hits disk two, and the third write E hits disk three.  What is the advantage?  The writes can now execute in parallel as long as they aren’t hitting the same physical disk.  I don’t need the C write to complete before I start on A.  I just need C to complete before I start on B.
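
As a tiny sketch of that distribution, here are the first three writes from the example being dealt out round-robin across the stripe members (the round-robin assignment is the striping at work).

```python
# Striping in the three-disk RAID0 example: consecutive writes land on
# consecutive disks, so they can be serviced in parallel.
disks = ["disk one", "disk two", "disk three"]
writes = ["C", "A", "E"]                  # the first three writes from the example

for n, letter in enumerate(writes):
    target = disks[n % len(disks)]        # round-robin across the stripe members
    print(f"write {n + 1}: {letter} -> {target}")
```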

How about reads?  Yep, there is increased performance here as well.  Reads can also execute in parallel, assuming the locations being read are on different physical disks.

Effectively RAID0 has increased our efficiency of processing I/O operations by a factor of three.  I/O operations per second, or IOPS as they are commonly called, are a way to measure disk performance (e.g. faster disks like SAS can process roughly double the IOPS of NL-SAS or SATA disks).  And striping is a good way to bump up the IOPS a system is capable of producing for a given virtual disk set.

This is a good time to define some terminology around striping.  I wouldn’t necessarily say this is incredibly useful, but it can be a good thing to comprehend when comparing systems because these are some of the areas where storage arrays diverge from each other.

[Diagram: the black boxes outlined in green represent strips; the red dotted line indicates a stripe]

  • Strip – A strip is a piece of one disk.  It is the largest “chunk” that can be written to any disk before the system moves on to the next disk in the group.  In our three disk example, a strip would be the area of one disk holding one letter.
  • Strip Size (also called stripe depth) – This is the size of a strip from a data perspective.  The size of all strips in any RAID group will always be equivalent.  On EMC VNX, this value is 64KB (some folks might balk at this having seen values of 128 – that figure is actually 128 blocks, and at 512 bytes per block, 128 blocks is 64KB).  On VMAX this varies, but (I believe) for most configurations the strip size is 256KB, and for some newer ones it is 128KB (I will try to update this if/when I verify it).  A strip size of 64KB means that if I were to write 128KB starting at sector 0 of the first disk, the system would write 64KB to the disk before moving on to the next disk in the group.  And if the strip size were 128KB, the system would write the entire 128KB to one disk before moving on to the next disk for the next bit of data.
  • Stripe – A stripe is a collection of strips across all disks that are “connected”, or more accurately seen as contiguous.  In our 3 disk example, if our strip size was 64KB, then the first strip on each disk would collectively form the first stripe.  The second strip on each disk would form the second stripe, and would be considered, from a logical disk perspective, to exist after the first stripe.  So the order of consecutive writes would go Stripe1-Strip1, Stripe1-Strip2, Stripe1-Strip3, Stripe2-Strip1, Stripe2-Strip2, etc.
  • Stripe Width – this is how many data disks are in a stripe.  In RAID0 this is all of them because disks only hold data, but for other RAID types this is a bit different.  In our example we have a stripe width of 3.
  • Stripe Size – This is stripe width x strip size.  So in our example, if the strip size is 64KB, the stripe size is 64KB x 3, or 192KB.  (A small address-mapping sketch follows this list.)
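
Here is the address-mapping sketch promised above.  The 64KB strip size and three-disk stripe width come from the running example; the function itself is a simplified illustration of the terminology, not any array’s actual mapping code.

```python
# Map a logical byte offset to (stripe, disk, offset within strip).
STRIP_SIZE = 64 * 1024        # bytes per strip (stripe depth)
STRIPE_WIDTH = 3              # data disks per stripe

def locate(logical_offset):
    strip_number = logical_offset // STRIP_SIZE
    stripe_number = strip_number // STRIPE_WIDTH    # which "row" across the disks
    disk_number = strip_number % STRIPE_WIDTH       # which disk in that row
    offset_in_strip = logical_offset % STRIP_SIZE
    return stripe_number, disk_number, offset_in_strip

print(locate(0))              # (0, 0, 0) - start of the first strip on the first disk
print(locate(64 * 1024))      # (0, 1, 0) - same stripe, next disk
print(locate(192 * 1024))     # (1, 0, 0) - one full stripe (192KB) in, wrap to stripe two
```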

Note: these are what I feel are generally accepted terms.  However, these terms get mixed up A LOT.  If you are involved in a discussion around them or are reading a topic, keep in mind that what someone else is calling stripe size might not be what you are thinking it is.  For example, a 4+1 RAID5 group has five disks, but technically has a stripe width of four.  Some people would say it has a stripe width of five.  In my writing I will always try to maintain these definitions for these terms.

Since I defined some common terms above in the RAID1 section, let’s look at them again from the perspective of RAID0.

First, usable capacity.  RAID0 is unique because no protection information is written.  Because of this, there is no usable capacity penalty.  If I combine five 1TB disks in a RAID0 group, I have 5TB usable.  In RAID0, raw always equals usable.

How about write penalty?  Once again we have a unique situation on our hands.  Every front end write only hits one physical disk, so there is no write penalty – or the write penalty can be expressed as 1:1.

“Amazing!” you might be thinking.  Actually, probably not, because you have probably realized what the big issue with RAID0 is.  Just to be sure, let’s discuss protection factor.  RAID0 does not write any protection information in any way, hence it provides no protection.  This means that the failure of any member in a RAID0 group immediately invalidates the entire group (this is an important concept for later, so make sure you understand it).  If you have two disks in a RAID0 configuration and one disk fails, all data on both disks is unusable.  If you have 30 disks in a RAID0 configuration and one disk fails, all data on all 30 disks is unusable.  Any RAID0 configuration can survive zero failed disks.  If you have a physical failure in RAID0, you had better have a backup somewhere else to restore from.

How about the degraded and rebuild concepts?  Good news everyone!  No need to worry ourselves with these concepts because neither of these things will ever happen.  A degraded RAID0 group is a dead RAID0 group.  And a rebuild is not possible because RAID0 does not write information that allows for recovery.

So, why do we care about RAID0?  For the most part, we don’t.  If you run RAID0 through the googler, you’ll find it is discussed a lot for home computer performance and benchmarking.  It is used quite infrequently in enterprise contexts because the performance benefit is outweighed by the enormous protection penalty.  The only places I’ve seen it used are for things like local tempdb for SQL server (note: I’m not a DBA and haven’t even played one on TV, but this is still generally a bad idea.  TempDB failure doesn’t affect your data, but I believe it does cause SQL to stop running…). 

We do care about RAID0, and more specifically the striping concept, because it is used in every other RAID type we will discuss.  To say that RAID0 doesn’t protect data isn’t really fair to it.  It is more accurate to say that striping is used to enhance performance, and not to protect data.  And it happens that RAID0 only uses striping.  It’s doing what it is designed to do.  Poor RAID0.

Neither of these RAID types are used very often in the world of enterprise storage, and in the next post I’ll explain why as I cover RAID1/0.