On Change, Hysteria, and Continuity

Change is scary. Whether it's a good or bad change, it's ALWAYS a learning opportunity, whether it happened to you directly or to people you know or are merely aware of.

Personally, I find myself at a point of very positive change, joining the esteemed firm Red8, LLC, after more than 21 years of running the IT infrastructure VAR practice of another company. The scary part of this change is that I need to quickly improve my team-building and team-belonging skills, and re-learn how to utilize the resources available to me without abusing or misusing them (and, of course, become a valuable resource to others!). My success in this new venture will directly correlate with my success in improving those skills. I'm excited beyond words.

Other changes are afoot at a technology partner I work with. Whether these changes are necessary, overdue, or unavoidable is a question I'm going to think about for a while. This technology company (NetApp) has, in my opinion, the best technology on the market for solving the biggest problems on the minds of CIOs and CTOs today. They're not going to come to customers and ask what they want to buy; they're going to ask what the problems are and come back with a way to utilize their technology to solve those problems. That aligns perfectly with how I approach customers, so it should be no surprise that I work with NetApp…a LOT.

This week there was a layoff at NetApp, and recently they've made some public-facing missteps that have nothing (really) to do with how they solve customer problems. There are market dynamics at work across the whole enterprise storage market – namely, the move to cloud and the emergence of many second- and third-tier storage vendors – and those dynamics are going to impact the biggest storage vendors most markedly. Again, this should be no surprise. These particular changes appear to have caught NetApp more flat-footed than we've come to expect from them in the past.

NetApp, as it has historically done, has leaned right into the cloud dynamic, working to create solutions that work with its existing technology while allowing data to move in and out of the public cloud providers. They are correct (IMO) in their guess that enterprises will NOT choose to put ALL of their data in the cloud, and will be forced to maintain at least some infrastructure to support their data management and production needs.

They are also guessing that customers will want their data in the cloud to look like, and be managed like, their data residing on-premises. This remains to be seen. Unstructured data has been moving to SaaS platforms like Box.com and Dropbox (among others) at accelerating rates. Further, making cloud storage look like it's on a NetApp reminds me a bit of VTL technology, which forced a tape construct onto large-capacity disks. There will be compelling use cases for this, but it's not the most efficient way to use the cloud (though it may be more functional in some cases).

What has confused many CTOs (and technology providers) is the simultaneous rise of cloud…and FLASH. So we've got one dynamic that is operationalizing unstructured (and some structured) data and getting it out of our administrative hands, and another dynamic geared toward our on-premises structured data, pushing the in-house application bottlenecks back up to the CPU and RAM where they belong. I can imagine a single conversation with a CIO in which I convince him on the one hand to migrate his data and apps to the cloud to optimize budget and elasticity, and before it's over perhaps discuss the benefits of moving his most demanding internal workloads to flash-based arrays, which eliminates elasticity altogether. (Should it stay, or should it go??)

Now imagine you are the CTO at a technology company like NetApp, one that solves many problems within the enterprise problem set for a wide array (no pun intended) of customers. How are you supposed to set a solid technology direction when the arrows of the industry's future all point different ways, and the arrows constantly move? As a market leader, you're expected to be right on EVERY guess, and if you're not, the trolls have a field day and the market kicks you in the tail.

The "direction thing" (think GHWB's "vision thing") has contributed to the current NetApp malaise; the outside world sees multiple flash arrays, an object storage platform, Clustered OnTAP slowly (perhaps too slowly) replacing 7-mode systems, E-Series storage, a cloud-connected backup target product acquired from Riverbed…it's hard to see a DIRECTION here. Internally, there have been re-orgs upon re-orgs. But what you DO see is the multiple moving arrows pointing multiple ways, and that's the result of NetApp trying to solve the entire enterprise problem set with regard to data storage and management. If NetApp can be accused of a fault, it is that it perhaps tries to solve too MANY problems at once; you're then faced with keeping up with the changing nature of each problem, and sometimes the multiple problems force you into conflicting or overlapping solution sets.

If you are a technology provider that has only one product – say, an all-flash array – you don't have this direction problem, because your car doesn't even have a steering wheel. It's a train on a track, and you'd better hope that the track leads you to the promised land (yes, I'm aware I mixed metaphors there; moving on). Given the history of IT invention and innovation, I wouldn't make that bet. If you're not leading the market to where you're going, you're only hoping it will keep heading in your current direction. So the tier-2 and tier-3 vendors do not worry me much long-term, as they're not solving problems that others haven't solved already; sure, they'll grab some market share in the mid-market, but the cloud dynamic is going to hit them just as hard as the big storage vendors, and they'll be hard-pressed to innovate into that dynamic. Combine that with the impact of newer, radically different non-volatile media already reaching early adopters, and I believe we'll see a thinning of that herd over the next few years.

So, sure, this week has been a tough one for NetApp, and I feel horrible for those who lost their jobs. Given the rate at which other companies had been poaching NetApp's talent prior to the layoff, I suspect any pain for these individuals will be short-lived. (I do find it amazing that the trolls who bash NetApp celebrate when someone from NetApp leaves for another firm; if they liked those people so much, why were they bashing them before? Shouldn't they now bash the new firm?)

I do want to caution those who say that NetApp has stopped innovating or is in decline. That could not be further from the truth. Clustered OnTAP is an evolution of OnTAP, not simply a new release. It's not totally different, but it is…MORE. It's uniquely positioned for the cloud-based, multi-tenant, CYA, do-everything-in-every-way workload that large enterprises and service providers are demanding of storage. Everyone else is still focused on solving yesterday's problems faster ('cause it isn't cheaper, folks). The technology portfolio has never been stronger. NetApp has historically been a year or more ahead of new problem sets – which certainly has negative revenue impacts while the market catches up (think iSCSI, primary dedupe, multi-protocol NAS, multi-tenancy, vault-to-disk) – but count NetApp out at your own peril. People have been doing it since 1992, and NetApp has made fools of them all.