VNX, Dedupe, and You

Block deduplication was introduced in Flare 33 (VNX2).  Yes, you can save a lot of space.  Yes, dedupe is cool.  But before you go checkin’ that check box, you should make sure you understand a few things about it.

As always, nothing can replace reading the instructions before diving in:

EMC white paper: h12209-vnx-deduplication-compression-wp.pdf

Lots of great information in that paper, but I wanted to hit the high points briefly before I go over the catches.  Some of these are relatively standard for dedupe schemes, some aren’t:

  • 8KB granularity
  • Pointer based
  • Hash comparison, followed by a bit-level check to avoid hash collisions
  • Post-process operation on a storage pool level
  • Each pass starts 12 hours after the last one completed for a particular pool
  • Only 3 processes are allowed to run at the same time; any new ones are queued
  • If a process runs for 4 hours straight, it is paused and put at the end of the queue.  If nothing else is in the queue, it resumes.
  • Before a pass starts, if the amount of new/changed data in a pool is less than 64GB, the pass is skipped and the 12-hour timer is reset (these scheduling rules are sketched in code just after this list)
  • Enabling and disabling dedupe are online operations
  • FAST Cache and FAST VP are dedupe aware << Very cool!
  • Deduped and non-deduped LUNs can coexist in the same pool
  • Space will be returned to the pool when one entire 256MB slice has been freed up
  • Dedupe can be paused, though this does not disable it
  • When dedupe is running, if you see “0GB remaining” for a while, that is the actual removal of duplicate blocks taking place
  • Deduped LUNs within a pool are considered a single unit from FAST VP’s perspective.  You can only set a FAST tiering policy for ALL deduped LUNs in a pool, not for individual deduped LUNs in a pool.
  • There is an option to set dedupe rate – this adjusts the amount of resources dedicated to the process (i.e. how fast it will run), not the amount of data it will dedupe
  • There are two Dedupe statistics – Deduplicated LUN Shared Capacity is the total amount of space used by dedupe, and Deduplication and Snapshot Savings is the total amount of space saved by dedupe
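
To make those scheduling rules a little more concrete, here is a toy Python sketch of how a single scheduling tick might decide which pools get a pass. Everything here (names and structure) is my own illustration, not EMC code, and it only models the 12-hour wait, the 64GB skip threshold, and the 3-process limit; the 4-hour requeue rule is left out for brevity.

    from collections import deque
    from dataclasses import dataclass

    MAX_CONCURRENT_PASSES = 3    # only 3 dedupe processes run at once
    WAIT_AFTER_PASS_HOURS = 12   # a pass starts 12 hours after the previous one completed
    MIN_NEW_DATA_GB = 64         # below this, the pass is skipped and the timer resets

    @dataclass
    class Pool:
        name: str
        new_or_changed_gb: float
        hours_since_last_pass: float

    def schedule_tick(pools):
        """Decide which pools start a dedupe pass right now (illustrative only)."""
        eligible, skipped = [], []
        for p in pools:
            if p.hours_since_last_pass < WAIT_AFTER_PASS_HOURS:
                continue                        # still inside the 12-hour wait
            if p.new_or_changed_gb < MIN_NEW_DATA_GB:
                p.hours_since_last_pass = 0.0   # pass skipped; 12-hour timer resets
                skipped.append(p.name)
            else:
                eligible.append(p.name)
        running = eligible[:MAX_CONCURRENT_PASSES]
        queued = deque(eligible[MAX_CONCURRENT_PASSES:])   # anything extra waits in line
        return running, queued, skipped

    pools = [Pool("Pool_A", 500, 13), Pool("Pool_B", 10, 14),
             Pool("Pool_C", 120, 15), Pool("Pool_D", 90, 16),
             Pool("Pool_E", 300, 12)]
    print(schedule_tick(pools))
    # (['Pool_A', 'Pool_C', 'Pool_D'], deque(['Pool_E']), ['Pool_B'])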

Performance Implications

Nothing is free, and this check box is no different.  Browse through the aforementioned PDF and you’ll see things like:

Block Deduplication is a data service that requires additional overhead to the normal code path.

Leaving Block Deduplication disabled on response time sensitive applications may also be desirable

Best suited for workloads of < 30% writes….with a large write workload, the overhead could be substantial

Sequential and large block random (IOs 32 KB and larger) workloads should also be avoided

But the best line of all is this:

it is suggested to test Block Deduplication before enabling it in production

Seriously, please test it before enabling it on your mission critical application. There are space saving benefits, but they come with a performance hit.  Nobody can tell you without analysis whether that performance hit will be noticeable or detrimental.  Some workloads may even get a performance boost out of dedupe if they are very read-oriented and highly duplicated – it is possible to fit “more” data into cache…but don’t enable it and hope that happens. Testing and validation are important!

Along with testing for performance, test for stability.  If you are using deduplication with ESX or Windows 2012, specific features (the XCOPY directive for VAAI, ODX for 2012) can cause deduped LUNs to go offline with certain Flare revisions.  Upgrade to .052 if you plan on using it with these specific OSes.  And again, validate, do your homework, and test test test!

The Dedupe Diet – Thin LUNs

Another thing to remember about deduplication is that every LUN you dedupe becomes thin.

When you enable dedupe, a LUN migration happens in the background to a thin LUN inside the invisible dedupe container.  If your LUN is already thin, you won’t notice a difference here.  However, if the LUN is thick, it will become thin when the migration completes.  This totally makes sense – how could you dedupe a fully allocated LUN?

When you enable dedupe, the status for the LUN will be “enabling.”  This means the LUN migration is in progress – you can’t see it in the normal migration status area.

Thin LUNs have slightly lower performance characteristics than thick LUNs. Verify that your workload is happy on a thin LUN before enabling dedupe.

Also keep in mind that this LUN migration requires free space equal to 110% of the LUN’s consumed capacity…so if you are hoping to dedupe your way out of a nearly full pool, you may be out of luck.
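
As a back-of-the-envelope check: roughly 110% of the LUN’s consumed capacity has to be free in the pool before the hidden migration can run. A tiny sketch of that math (my own helper, not an EMC tool):

    MIGRATION_OVERHEAD = 1.10   # the migration needs ~110% of consumed capacity

    def free_gb_needed_to_enable(consumed_gb):
        """Rough free-pool-space requirement to enable dedupe on one LUN."""
        return consumed_gb * MIGRATION_OVERHEAD

    # A LUN with 200GB of real data needs roughly 220GB free in its pool
    print(round(free_gb_needed_to_enable(200)))   # 220
    print(free_gb_needed_to_enable(200) <= 150)   # False: nearly full pool, out of luck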

One SP to Rule Them All

Lastly, but perhaps most importantly – the dedupe container is owned by one SP.  This means that when you enable dedupe on the first LUN in a pool, that LUN’s owner becomes the Lord of Deduplication for that pool.  Henceforth, any LUN that has dedupe enabled will be migrated into the dedupe container and will become owned by that SP.

This has potentially enormous performance implications with respect to array balance.  You need to be very aware of who the dedupe owner is for a particular pool.  In no particular order:

  • If you are enabling dedupe in multiple pools, the first LUN in each pool should be owned by different SPs (see the sketch after this list).  E.g. if you are deduping 4 different pools, choose an SPA LUN for the first one in two pools, and an SPB LUN for the first one in the remaining two pools.  If you choose an SPA LUN as the first LUN in all four pools, every deduped LUN in all four pools will end up on SPA
  • If you are purchasing an array and planning on using dedupe in a very large single pool, depending on the amount of data you’ll be deduping you may want to divide it into two pools and alternate the dedupe container owner.  Remember that you can keep non-deduplicated LUNs in the pools and they can be owned by any SP you feel like
  • Similar to a normal LUN migration across SPs, after you enable dedupe on a LUN that is not owned by the dedupe container owner, you need to fix the default owner and trespass after the migration completes.  For example – the dedupe container in Pool_X is owned by SPA.  I enable dedupe on a LUN in Pool_X owned by SPB.  When the dedupe finishes enabling, I need to go to LUN properties and change the default owner to SPA.  Then I need to trespass that LUN to SPA.
  • After you disable dedupe on a LUN, it returns to the state it was pre-dedupe.  If you needed to “fix” the default owner on enabling it, you will need to “fix” the default owner on disabling.
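
Before you start clicking, it can help to write down the owner of the first LUN you intend to dedupe in each pool, since that SP becomes the dedupe container owner for the whole pool, and check that the result is balanced. The pool layout below is made up purely for illustration:

    from collections import Counter

    # Pool -> SP that owns the first LUN you plan to dedupe in that pool.
    # That SP becomes the dedupe container owner for the entire pool.
    first_dedupe_lun_owner = {
        "Pool_1": "SPA",
        "Pool_2": "SPB",
        "Pool_3": "SPA",
        "Pool_4": "SPB",
    }

    owners = Counter(first_dedupe_lun_owner.values())
    print(owners)   # Counter({'SPA': 2, 'SPB': 2}) is balanced

    if abs(owners["SPA"] - owners["SPB"]) > 1:
        print("Warning: deduped workload will pile up on one SP; "
              "pick a different LUN to enable first in some of these pools.")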

What If You Whoopsed?

What if you checked that box without doing your homework?  What if you are seeing a performance degradation from dedupe?  Or maybe you accidentally have everything on your array now owned by one SP?

The good news is that dedupe is entirely reversible (big kudos to EMC for this one).  You can uncheck the box for any given LUN and it will migrate back to its undeduplicated state.  If it was thick before, it becomes thick again.  If it was owned by a different SP before, it is owned by that SP again.

If you disable dedupe on all LUNs in a given pool, the dedupe container is destroyed and can be recreated by re-enabling dedupe on something.  So if you unbalanced an array on SPA, you can remove all deduplication in a given pool, and then enable it again starting with an SPB LUN.

Major catch here – you must have the capacity for this operation.  A LUN requires 110% of the consumed capacity to migrate, so you need free space in order to undo this.

Deduplication is a great feature and can save you a lot of money on capacity, but make sure you understand it before implementing!

5 thoughts on “VNX, Dedupe, and You”

  1. Hi Jcason,

    I need your advice please. Since I need to disable deduplication on a LUN, may I know the impact for that particular LUN? And about the major catch (“you must have the capacity for this operation. A LUN requires 110% of the consumed capacity to migrate, so you need free space in order to undo this”): does that mean 110% of the consumed storage pool capacity, or 110% of the consumed LUN capacity?

    • Hi Oscar,

      You’ll see a very slight performance degradation while the background migration out of the dedupe container is accomplished – not particularly different from any LUN migration – but it is likely that you won’t notice it. Assuming you are reversing deduplication for performance reasons, you won’t notice the performance benefit until the migration is complete and dedupe is completely undone.

      As far as the capacity, yes this is the consumed capacity of the LUN, not the pool. So if you were to have started with a 1TB thin LUN that currently has 200GB used on it (actual data, not deduped size) then you’d need ~220GB free in the pool in order to disable dedupe. If you were to have started with a 1TB thick LUN that currently has 200GB used on it, then you would need ~1100GB free in the pool in order to disable dedupe, as the undo operation will restore the thickness.
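
      If it helps, here is that arithmetic as a tiny sketch (my own helper with made-up numbers, not an EMC tool):

          def free_gb_needed_to_disable(consumed_gb, was_thick, lun_size_gb):
              """Free pool space needed to migrate a LUN back out of the dedupe container."""
              restored_gb = lun_size_gb if was_thick else consumed_gb   # thick LUNs re-inflate fully
              return restored_gb * 1.10                                 # plus ~10% migration overhead

          print(free_gb_needed_to_disable(200, was_thick=False, lun_size_gb=1000))  # ~220GB
          print(free_gb_needed_to_disable(200, was_thick=True,  lun_size_gb=1000))  # ~1100GB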

      Hope that helps, and good luck!

      • Hi JCASON,
        Thank you for this great post.
        I have a customer who unfortunately enabled dedupe at the entire system level (every pool, all the LUNs) without considering any of the EMC best practices… You can imagine the result…
        The system’s performance became horrible…
        After tedious work, we identified the LUNs eligible for dedupe. However, support (for several reasons) advised the customer to first disable dedupe on all LUNs and then re-enable it on a specific set of LUNs.
        For this operation, the customer decided to create a script that disables deduplication on LUNs in sets of 50 at a time. The following was observed: when the script starts, all 50 LUNs go into the disabling state, HOWEVER only 8 are migrating at a time. So here are the questions:
        1/ Why only 8? Is this normal behavior in the code when disabling dedupe?
        2/ Considering that available capacity is not an issue, what is the best practice for disabling dedupe on all LUNs in the system (we are talking about several hundred LUNs)?

        Thanks in advance for your answer

      • Hi Elys,

        Thanks for commenting and sorry for the performance problems for your customer. I wish I had better news for you but don’t really. 😦 Your tale unfortunately underscores what gets missed a lot and why I wrote the post, which is that people need to fully understand dedupe and its impact before they enable it. Sadly it isn’t much help after the fact!

        Your first issue is not a system problem but a stated limit with the dedupe migrations. See https://elabnavigator.emc.com/vault/pdf/EMC_VNX5xxx_7600_8000_ESSM.pdf?key=1465225696694 Regardless of model on the VNX2, there is a limit of 8 at a time unfortunately. This is different from the simultaneous dedupe operations on LUNs.

        As far as your second question, there is no shortcut I’m aware of that wouldn’t be more painful than simply waiting the process out. E.g. you could roll a LUN from backup into a new non-deduped LUN, but that would be very harsh. I would certainly engage EMC support in case they know of some secret squirrel trick to unlock more simultaneous operations or speed them up. You can alter the speed of normal LUN migrations but not these hidden ones…but again, there is always the possibility that they know how to do that.

        Best of luck with it and thanks for reading!

        Joel

  2. Pingback: EMC – Next-Generation VNX – Block Deduplication Caveats | penguinpunk.net
