DataDirect Networks – the “other XIV”?

A year or two ago, I got a call from one of our sales guys, who was looking for a solution to compete with a technology I’d never heard of, for a pretty unusual requirement.  The customer was doing (very) high-performance video effects work, and the sales guy was looking at potential replacements for an existing solution: the customer had mentioned in passing that while the equipment worked extremely well, she had some concerns around support for what was a non-mainstream product.

Sales guy, having jumped on this opening like a starving weasel, wanted to understand which of the vendors would be the best solution to compete with the incumbent, a company called DataDirect Networks.

Having sat down, looked at the set of requirements provided and done a bit of research into DDN, I responded that nothing currently on the market would do what the customer wanted at the price they were prepared to pay.  Sure, a DMX or USP-V could be configured to provide what were, quite frankly, outlandish performance requirements, but even preliminary configurations came in at around double what the customer claimed to have paid for the DDN system they already had.

“Are you sure they’re actually getting this level of performance?”

The above became my catchphrase for the next few days, until the sales guy – disheartened by conversations around solid state disks, huge cache requirements and the need to mortgage a kidney to pay for it all – went off to look for some slightly lower-hanging fruit, and I was left with the feeling that I should find out more about this DDN bunch.


So who are DDN?

DataDirect Networks (DDN) can be characterised as the most successful unknown niche storage vendor on the market.  A privately held company founded in 1998 and based in Chatsworth, California, DDN are the manufacturers of an expanding product range based around a parallel processing architecture.

Basically, imagine if we took the XIV capabilities and, rather than using SATA disk, attached higher-speed SAS disks.  At that point, we get something which can power “40 of the 100 fastest computing environments in the world”.  At the time I looked at DDN, their architecture was capable of a sustained 2.8GB/s (bytes, not bits) – this has since been expanded to 6GB/s.

That’s.  21.  Terabytes.  Per.  Hour.

Ouch
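
For the sceptics, that conversion is easy enough to check – my own sum, in decimal terabytes:

```python
# 6 GB/s sustained, converted to an hourly figure (decimal units)
gb_per_second = 6
tb_per_hour = gb_per_second * 3600 / 1000
print(f"{tb_per_hour} TB/hour")   # 21.6 TB/hour
```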

Put simply, this is about what I’d expect to get practically out of an enterprise system costing twice as much and smothered in solid state technology.  DDN achieve this with spinning disks and a Clariion-style footprint. And, as I keep coming back to, a lower price-tag.  With the implementation of new architectures and solid state disks, DDN are claiming that 10GB/s is now possible.


So what’s the secret?

The DDN product range is based around the Silicon Storage Appliance (S2A) and Storage Fusion architectures.  As with the IBM XIV, these arrays use a massively parallel processing architecture to ensure that sequential data is processed at effectively wire speed.  Unlike mid-range arrays such as the EMC Clariion or Hitachi AMS, data is not funnelled through a single cache to speed up disk access – all error checking is done by the individual S2A processing complexes, and data is passed directly to disk.

So what can DDN do for me?

Basically, DDN have opted for more flexibility than IBM have with XIV.  Rather than a single disk type, DDN provide a range of disk types and shelf configurations.  The performance option is similar to XIV’s front-loaded, 12-disks-per-shelf design, providing a small capacity footprint but achieving the promised high performance.  This is the configuration used in the supercomputing environments which make up DDN’s more interesting case studies.

The high-capacity solution, on the other hand, uses a top-loading, densely packed shelf allowing 60 disks per enclosure – with 2TB SATA drives, this would be capable of around 2 petabytes in a single rack.  This is marketed as a high-performance VTL and archiving product, rather than as a compute platform.  More recently, the company has moved from selling pure block-storage arrays to also providing scalable file-system storage, based on their own NAS designs.

So where can I buy one?

In the UK at least, DDN arrays tend to turn up as part of larger vendors’ solutions.  IBM have long sold the S2A arrays as part of their supercomputing portfolio and have integrated the capacity versions into their SONAS product. 

Update – I’m told on good authority that the DDN products are also resold outside of the supercomputing area as the DCS9900.  I’d be interested to hear from any IBM customers (or anyone else) who are using DDN, but particularly in this area, as I’ve not seen a case-study or use-case for non-BlueGene implementations of DDN.

HP have recently announced that they will sell DDN as their top-end NAS product.  It appears from the announcements that, rather than bolt the storage array onto their own NAS offerings as IBM have done, HP will sell DDN’s own NAS offering.


Conclusion – so is this a competitor to XIV?

Strangely no, not really.  At least not yet.

At the present time, DDN tend to sell into pretty specific use-cases.  While XIV is now sold as a general computing platform, the DDN arrays tend to be sold for big number-crunching and media-serving environments.  The head of the company has been quoted as saying that, in non-media-serving environments, the S2A architecture would be “horrible” compared to competitors such as the EMC Clariion.

But then, as IBM showed by marketing the XIV in the early days as a web serving platform only, things can change (P45 for the head of marketing, hey?).  It may be that DDN forms the upper and lower tiers of a storage environment, with the middle taken up by a general computing platform such as Clariion or XIV. 

[Diagram: a possible tiered solution of this type, with DDN arrays at the performance and archive tiers and a general-purpose platform such as Clariion or XIV in the middle.]

My own feeling is that the only thing stopping DataDirect Networks from becoming the next generation of EMC or HP storage is the fact that it is still a private company.  If DDN ever take the IPO route, I predict they will last about 5 minutes before the first takeover attempts (hostile or not) begin. 

This might not be a bad thing.  XIV is currently winning market share for IBM based on a combination of usability, cost and performance which I’ve not seen traditional storage architectures begin to touch.  DDN, for their part, would get the same kind of boost that XIV received from IBM – a properly global reach, greater R&D spend and far greater brand awareness, at the cost of identity.

Depressed by 10 years of reselling LSI seconds, IBM went for a complete makeover, and stole a march on the competition by purchasing the technology most likely to shake up the market.  Other storage vendors will have difficulty developing an answer to the XIV architecture in a reasonable time – purchasing the DDN architectures would give an immediate route to replace existing dual-processor architectures (and an existing user-base of weird and wonderful customers). 

The attempts may already have begun.  Commentators have pointed out that the HP reseller deal fills no gaps in the HP product range – they already have large-scale NAS products, so DDN is effectively redundant within their portfolio.  I’d suggest that this is an attempt by HP to get an understanding of the practical reality of the DDN products, with a view to deciding whether it’s a credible product set for HP to buy up – possibly as a replacement for the rebadged HDS arrays currently sold in the enterprise space.

Either way, I’m keeping an eye on DDN – if nothing else, reading their customer case studies is always going to be entertaining.  A selection below gives a sample of customers:

  • TimeWarner Cable – Video on Demand
  • Pacific Title & Art – the post-production effects house for Batman: The Dark Knight
  • Microsoft Studios – Xbox Live media & data serving
  • CCTV – Chinese state television; media serving for coverage of the Beijing Olympics
  • Shutterfly – photo-sharing site serving up to 1 billion photos at present (growth from 100 million in 2 years)
  • Slide.com
  • Kodak EasyShare
  • National Center for Data Mining (UIC)
  • Northwestern University & Johns Hopkins University – winner, 7th Annual Bandwidth Challenge (2006)
  • Caltech, CERN, University of Florida and the University of Michigan – 2nd place, 7th Annual Bandwidth Challenge
  • Indiana University – 3rd place, Bandwidth Challenge at Supercomputing 2006
  • Lawrence Livermore National Laboratory
  • Sandia National Laboratories
  • NASA and NASA Ames
  • Argonne National Laboratory

7 Reasons why IBM’s XIV isn’t Perfect

Let me just start by saying that I’m not biased – I’m really not.

No, really, I’m not.

I promise I’m not biased, cross my heart.

Honestly I get no more out of recommending IBM than I do from anyone else.

Really, I’m working with NetApp this week, EMC next week, and HDS the week after that.

I’m not biased at all.

If I seem to be labouring the point, it’s because over the last year and a half, I’ve found that every time I talk about how good IBM’s XIV storage array is, after a few minutes people start giving me funny looks (funnier than usual) and asking “what’s in it for you?”. 

Given that I spent the first years of my IT career designing solutions around EMC Symmetrix, and in the years since have spent my time designing storage solutions for every major player in the market (and a good number of minor ones), I really don’t see myself as having any one favourite.  Without exception, the organisations I’ve worked for have been vendor-neutral, and I’ve never really got the hang of the cordial hatred that vendors seem to have for each other’s products.

Lately, I’ve found that many people are primed with the idea that anyone who even mentions XIV as a possible solution must be in the pocket of the IBM sales mafia.  No sooner have I begun talking about the benefits of the architecture than people begin to question my impartiality.

I’ve come to the conclusion that the problem is, people are used to storage technologies (and technology in general) letting them down.  No storage technology is ever perfect – there are always hidden flaws and gotchas which surface only after the array has your organisation’s most precious data stored in its belly.

So anyone who comes along talking enthusiastically about an array which “just works” is automatically suspect.  It’s big, it’s expensive – so it must have problems.  In this article, then, I’m going to look briefly at the benefits of the array, but concentrate mainly on the issues I’ve experienced in the last year and a half of working with XIV – thus finally demonstrating my sceptical side.

The benefits

Anyone who’s read a marketing slide from IBM knows the benefits of XIV – it’s easy to manage, stores up to 79TB in one rack space, is highly resilient and performs well at a low price.  With an increasing number of my customers running happily on XIV, I have no reason to disagree – in the large, well publicised (important) areas covered by the marketing brochures, the XIV really does “just work”.

To me then, the XIV has earned its place at the top table.  At 79TB of capacity and 50–70,000 IOPS each, it’s never going to compete on a 1:1 basis with the largest Symmetrix or Tagmastore arrays (200,000 IOPS and 600TB of Tier 1 storage, anyone?), but then it costs around a tenth as much, and I’ve found that several XIV arrays will do the work of one large array (a Tier 1 array with the capabilities above takes up 9–10 rack spaces in the datacentre, compared to 4 for the equivalent XIVs).
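
That “4 racks” figure is just arithmetic on the round numbers in this paragraph – my own sums, not vendor sizing data:

```python
import math

# Round numbers from the paragraph above - not vendor sizing data
tier1_iops, tier1_tb = 200_000, 600
xiv_iops, xiv_tb = 60_000, 79      # midpoint of the 50-70,000 IOPS range

racks = math.ceil(tier1_iops / xiv_iops)
print(f"{racks} XIV racks to match the Tier 1 IOPS figure,")
print(f"giving {racks * xiv_tb} TB against the Tier 1 array's {tier1_tb} TB")
# -> 4 racks and 316 TB: the IOPS match in under half the floor space,
#    though with only around half the Tier 1 capacity
```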

The old chestnut

 “Double drive failure on an XIV will lose data!” scream competing vendors, somehow managing to imply that in a similar situation, their own systems would operate untouched.

Really?

Maybe one day this will happen to one of my XIV customers and I’ll know for sure – in the meantime I have to go off “interpretations of the architecture” and “assumptions of how the array will work”.

I’ll use my own interpretation, thanks 🙂

My reading of the system is that data loss is at least statistically possible in the case of simultaneous double drive failure.  Data entering XIV is split into 1MB chunks (which XIV confusingly calls “partitions”).  Each partition is mirrored, and the two copies are spread semi-evenly across the array.  Distribution is not random: much work goes into keeping the two copies on separate drives, in separate modules, at opposite ends of the array.  But at some point the two copies have to sit on two physical disks – if those particular two disks are lost, that data is gone.  And once a million of those partitions are floating around on any given disk, some of their counterparts will inevitably sit on every other disk in the array.
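
You can put rough numbers on that reading.  A back-of-envelope sketch – the layout details (mirrors spread evenly outside the failed disk’s module, ~900GB used per drive) are my assumptions, not IBM’s published algorithm:

```python
# Expected data loss when two drives in an XIV-style mirrored grid
# fail at the same instant. All layout details below are assumptions.

disks = 180             # 15 modules x 12 drives
module_size = 12
used_gb_per_disk = 900  # assumed used data on each 1TB drive

# If one disk dies, its partitions' mirror copies are spread evenly
# across every disk outside its own module (assumption). A second,
# simultaneous failure therefore takes out roughly this much data:
mirror_targets = disks - module_size
expected_loss_gb = used_gb_per_disk / mirror_targets
print(f"~{expected_loss_gb:.1f} GB")   # ~5.4 GB - same order as the 9GB worst case quoted below
```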

But how likely is the situation to occur?  In 10 years I experienced exactly two incidents of double disk failure – in both cases within the same RAID Group, a good number of hours apart.  Fortunately, in the first incident the second casualty was the hot-spare drive itself, which failed during the rebuild – embarrassing and time-consuming to fix, but no data loss.  In the second incident the RAID Group itself was lost and had to be recovered from backups.  In both cases, the explanation given for the second failure was age, coupled with the stress the rebuild puts on the disks (72+ hours of sustained writes while trying to keep normal operation going cannot be good for a disk).

                                        RAID 5 146GB (4+1)   RAID 6 146GB (12+2)   XIV 1TB (RAID-X)
  Number of copies of data              2                    3                     2
  Rebuild time after disk loss          72+ hours            72+ hours             30 minutes
  How many drives can be lost?          1                    2                     1
  Disks involved in the rebuild         4                    12                    160
  Increase in full load during rebuild  20%                  7%                    0.63%
  Estimated lost data if 2 disks lost   526 GB               0 GB                  9 GB
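
As far as I can tell, the rebuild-load row is simple arithmetic: one disk’s worth of rebuild work is shared across every disk taking part, so the per-disk increase is roughly the reciprocal of the group size.  My reading of the table, not a vendor formula:

```python
# Rebuild load ~ 1 / (disks sharing the rebuild work) - my interpretation
for label, group_size in [("RAID 5 (4+1)", 5), ("RAID 6 (12+2)", 14), ("XIV grid", 160)]:
    print(f"{label}: ~{100 / group_size:.2f}% extra load per disk")
# -> 20.00%, 7.14%, 0.63% - matching the table within rounding
```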


The point for me is that the examples above don’t make me stop discussing RAID 5 solutions with customers – if a customer wants to survive double drive failure, they put in RAID 6, accept that they need many more drives to get performance (and start worrying about triple drive failure).  Implementing RAID 5 carries the risk that two drives go at some point and data is lost – and the same may be true of XIV.  If our data is this important, isn’t this what we have backup systems, snapshots and cross-site replication for?

A little history

To me, it’s one of those issues that gets blown out of proportion – back in the early 2000s, I kept hearing tales of EMC salespeople who went into meetings with customers, only to be asked probing questions (rather obviously planted by competitors) of the type “why is Symmetrix global cache a single point of failure?”

This was a problem for EMC, as the answer is “it may look like that’s the case, but actually it’s not an issue – here’s why… [continued for the next 3 weeks]”.  At the time, what the customer saw was EMC sales teams descending into jargon and complex technobabble rather than giving a simple (to them) yes or no answer.  To me this is the same issue – the explanation of why there is no real issue is so involved that customers lose patience.

EMC veteran blogger Chuck Hollis says it better than I could in his discussion of reasons why EMC delayed implementing RAID 6.  The full text can be found here:

http://chucksblog.emc.com/chucks_blog/2007/01/to_raid_6_or_no.html

Comments that I find particularly relevant to this discussion are:

“And way, way, way down the list – almost statistically insignificant – was dual disk failure in a single LUN group.”

“There’s a certain part of the storage market that is obsessed with specific marketing features, rather than results claimed”

“As a result of our decision [delaying RAID 6 implementation], I’m sure that every day someone somewhere is being pounded for the fact that EMC doesn’t offer RAID 6 like some of the other guys”

The issues

So, having gone through all of that, the IBM XIV is perfect?

No chance.

Over the last year and a half, a number of issues have become obvious in the operation of the XIV.  None are fundamental to the technology itself, but they have formed a barrier to customer take-up of the array.

1. High capacity entry point: 

XIV can be sold with a minimum of 6 modules, or 27TB.  This is way down on the “at launch” configuration of 79TB only, but is still too high an entry point for many customers.  This high start point pretty much assures the survival, in some form, of XIV’s internal IBM competitors, the DS3000 and DS5000 – these arrays can scale from much smaller volumes of storage, so will need to be kept alive to provide for the low end of the market, at least until XIV can be sold in single-module configurations.

2. Upgrade Step: 

Once customers reach 27TB, the next step up from 6 modules is 9 modules – 43TB.  Again, this is a tremendous jump, and it has put off some customers who prefer a smooth upgrade path.

3. Lack of iSCSI in the low-end configurations:  

So your small customer has spent more than he needed to on a very performant, very easy to manage storage array – but at least he can save money by using iSCSI, avoiding the cost of dedicated fibre switches and HBAs?  Not a chance – in the 27TB configuration, XIV has no iSCSI capability.  Until you upgrade to 43TB you don’t even get the physical iSCSI ports.  So the segment of the market that could make best use of iSCSI doesn’t get to use it.

4. Rigid linkage of performance to capacity: 

Traditional storage tends to have a central processor, with capacity added by adding disk trays.  For increased performance, the central processor is upgraded and faster disks (or solid state disks) can be added.  With XIV, the growth model is fixed – each module adds both disk capacity and performance, as cache and processors are built into the module itself.  This is an immensely simple way of doing things and ensures that customers always know how much performance headroom is available, but it’s a double-edged sword.

I have found there are times when a customer has a tiny storage volume, but massive performance requirements (in a recent case, 5TB of volume, average 50,000 IOPS performance requirement).  At this point, the customer has two choices:

A)     Provision the largest storage processor going on a traditional-model storage array and pack it with a number of SSD drives, or

B)      Provision a full 79TB XIV to get the benefits of the entire 15 modules of cache and processor.

Both of these end up at roughly the same price, but with option B the purchaser has to explain to his superiors why he has 74TB of capacity that no one else can use (he then has to explain again next week when finance decide they want some of his unused space, and the week after to purchasing…).  A back-of-envelope sketch of the sizing arithmetic follows at the end of this section.

Some might say that this is the situation that capacity-on-demand (CoD) models were made for but, rare as the case above may be, CoD will only stretch so far.  If a customer is demanding that 90% of the technology delivered will never be used (or paid for), it may not be a commercially advantageous deal to make.
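
Here’s that sketch.  The per-module figures are simply the quoted full-rack numbers divided by 15 – assumed linear scaling on my part, not IBM sizing data:

```python
import math

# Why a small-but-hot workload forces a (nearly) full XIV rack
full_rack_iops = 60_000   # midpoint of the quoted 50-70,000 IOPS
full_rack_tb = 79
modules_full = 15

iops_per_module = full_rack_iops / modules_full   # ~4,000 IOPS (assumed linear)
tb_per_module = full_rack_tb / modules_full       # ~5.3 TB

need_iops, need_tb = 50_000, 5   # the customer case above

modules_for_iops = math.ceil(need_iops / iops_per_module)   # 13
modules_for_capacity = math.ceil(need_tb / tb_per_module)   # 1

modules = max(modules_for_iops, modules_for_capacity)
stranded_tb = modules * tb_per_module - need_tb
print(f"{modules} modules for performance vs {modules_for_capacity} for capacity")
print(f"~{stranded_tb:.0f} TB bought purely for the processors and cache behind it")
# With the coarse upgrade steps described above, 13 modules rounds up
# to the full 15-module, 79TB rack in practice.
```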

5. Lack of Control:

This is one which has only become apparent as the first IBM-badged XIV arrays have come to the end of their original support contracts.

IBM restricts access to a number of key technical functions.  Manually phasing out (removing from service) and phasing back in modules and disks can only be performed by an IBM technician – access to these functions is locked, and guarded by wolves.

So you replace a disk – and until IBM support remotely phase your disk back in, that disk will just sit there, glowing a friendly yellow colour in your display and not taking on data.

The upside of this is that IBM support will probably call you immediately to tell you that the disk is awaiting phase in, and ask would you like them to do it (this has happened to me on a number of occasions).

But now hey – you’re out of support!  IBM will no longer phone you, when you call you’ve no support contract to draw on to get them to do the work for you, and to top it all you don’t have access to do it yourself.  A number of the underlying controls are locked away and IBM appears to have no plans to give out access, precluding any form of “break/fix” maintenance option.

This sort of IBM control-freakery is what keeps me awake at night.  A decision by an IBM suit on the other side of the world may make perfect sense at the time, but 3am in a cold datacentre, with IBM on the phone telling me my “issue can’t be resolved” because of that policy, is really not when I want to find out I have an “insurmountable opportunity” on my hands.

6. Scheduling of Snapshots:

XIV makes copying data incredibly easy.  Snapshots are created at the click of a button, can be made read/write and mounted as a development volume.  Up to 16,000 can be created and maintained at any one time (eat that, “8 snaps maximum and 20% drop in performance” traditional storage!).

So why, IBM, couldn’t you have included a built-in scheduler in the XIV interface, to let me take a new copy of a snapshot at regular intervals?

Oh, you did?  You included built-in scheduling for the snapshots used by the asynchronous replication process?  So new replication snapshots can be created and overwritten on a regular basis to ensure that the replication stays on target?  But for my own application snapshots, I still have to buy an external replication manager application and set it up outside of XIV?  Thanks a bunch, IBM.
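
Until then, the workaround is scheduling outside the box.  A minimal sketch of the idea – run from cron, shelling out to the XCLI – where the exact command name and arguments are my assumption, to be checked against your XCLI version:

```python
#!/usr/bin/env python3
# External snapshot scheduler sketch: XIV has no built-in scheduler for
# application snapshots, so cron drives this script instead.
import subprocess
from datetime import datetime

VOLUMES = ["app_vol_01", "app_vol_02"]   # hypothetical volume names

def take_snapshot(volume: str) -> None:
    snap_name = f"{volume}_snap_{datetime.now():%Y%m%d_%H%M}"
    # Assumed XCLI syntax - verify before relying on it
    subprocess.run(
        ["xcli", "snapshot_create", f"vol={volume}", f"name={snap_name}"],
        check=True,
    )

if __name__ == "__main__":
    for vol in VOLUMES:
        take_snapshot(vol)
```

Dropped into cron at whatever interval the application needs, it does (crudely) what the GUI won’t.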

7. Support for older AIX versions:

Of all the operating systems supported by XIV, none has given as much trouble as AIX.  From direct fibre connection support (there isn’t any, end of story) to load balancing (there is, but only on newer releases, and it has to be set manually), the AIX/XIV combination has taken some time to get into a usable state.  Recently though, as long as you’re on 5.3.10 or 6.1, you’re sorted.

If you happen to have applications which need to remain on a version older than 5.3.10, you may find you have issues – starting with no automated load balancing and low queue depths.  This in turn leads to low performance, complaints, heart-burn, indigestion and generally bad stuff.

I’m not an AIX expert, but from the work-arounds described in various places on the web, the cure is worse than the disease – manual scripting and complex processes which take all the fun out of managing an XIV environment.

IBM’s response so far has been:

“All you Luddites join the 21st century and upgrade to 6.1 – and can we sell you Power 7 while we’re at it….”

In Conclusion

In the last year and a half, I’ve gone from disbelief in the claims made by IBM, to grudging acceptance, to a genuine liking for the product.  It is simple, powerful and cost-effective, and I can see why there is a concerted effort by competing vendors to remove it from the field by throwing up a FUD screen around it.

The point for me is that most of the negative comments I’ve seen tend to be of the “there is no possible way that XIV can do what it claims!!!” school of thought.  With a growing customer base running everything from Tier 1 Oracle, to MS Exchange, to disk backup systems on the arrays, I beg to differ.

It does no single thing massively better than the competition, but just does everything very well:

–          It’s easy to manage (but so is an HP EVA or Sun 7000)

–          It’s very fast for its size and cost (but an EMC Symmetrix is faster)

–          It contains a large volume in a small area (but an HDS USP holds more and can virtualise)

But notice that no single competitor covers the board – XIV appears in all three categories, while each of the others leads in only one or two.

As a solutions designer, I see the acceptance of XIV’s simple way of working leading to an end of complex LUN and RAID Group maps, the end of pre-allocation of storage months in advance, the end of pen and paper resizing exercises, and so on.

That said, it can be seen above that the array does have some nagging problems, most of them soluble if the will exists.  In many of these decisions (e.g. no iSCSI in low-end arrays) I see the corporate hand of IBM – “let’s not make the box too convenient for small customers, or they’ll never buy upgrades”.

I see the IBM connection as a two-edged sword.  On the one hand, without IBM’s name, support, and R&D spend, XIV would still be languishing down in the challengers’ space with the likes of Compellent and Pillar Data.  Having XIV harnessed to the IBM machine leap-frogged it into a mainstream position, bypassing potentially years or decades of effort.

But on the other hand, fitting XIV into IBM’s corporate strategy causes decisions that are hard to stomach.  The XIV modules added as part of the 6-to-9-module upgrade have only one difference – the addition of a dedicated card for iSCSI connection.  There is no reason I can see why these cards could not have been added to the first set of modules to allow iSCSI at 27TB.

In the last year, I’ve probably spent as much time designing solutions around other vendors’ products as I have IBM’s.  This is because, in the real world, a buying decision takes in many more factors than just “which disk array is the fastest/best/cheapest”.

EMC, for example, have a completeness of offering in the storage area which IBM can only aspire to.  Having spent the last 10 years developing an ecosystem of complementary products, EMC are fixing problems that IBM hasn’t even really started to address, beyond a spate of acquisitions of the type EMC began before 2000 and has continued ever since.

I’m pretty sure that while the other vendors are attacking the XIV way of doing things, in the background they’ll also be coming up with their own ways to match it – IBM has a head-start, but that’s all it is – a temporary advantage in one field of storage.  To capitalise, they need to stop thinking that the IBM way is the only way and look at some of the engineering decisions that are not working for customers.

All of the issues discussed above I have experienced myself, during either design or implementation.  Whether they are an impediment to a customer considering XIV will very much depend on the situation.  Personally, I find few occasions when XIV is not at least worth a look, even if it’s not the be-all and end-all.

But maybe I’m just biased.

Update – a few days after I posted my article, Tony Pearson at IBM posted an article making some similar points regarding double drive failure, but providing one additional and very key piece of information.

https://www.ibm.com/developerworks/mydeveloperworks/blogs/InsideSystemStorage/entry/ddf-debunked-xiv-two-years-later?lang=en#comments

Tony is careful to point out that no customer has ever experienced double drive failure, but his calculations for worst-case data loss match mine at 9GB (always nice when the professionals agree with you) 😉

The additional info has to do with the “Union List” – this is something we’ve known must exist, but until now it’s not something I’ve seen published confirmation of (secretive bunch, IBM).  Basically, the Union List will tell you which 9GB of data has been lost, in the form of a logical block address list, allowing targeted recovery of the lost data.

I’ve not seen this in action, so I’ve no idea how well it works in practice, but I’m going to have much fun pursuing it with IBM over the next few months…

Update (2) – A couple of other articles have linked to this one.  Thanks to both Simon Sharwood at Techtarget ANZ  and Ianhf at Grumpy Storage for their favourable reviews and the redirect – I wondered where all the traffic was coming from!  Will try to return the favour sometime 🙂