Commvault Go 2017

I’m currently at the Commvault Go keynote, and have just heard from the only man who has trekked to both the north AND south poles. Tough task for sure.  The best slide he put up was from the end of his one-year journey, when he returned to base only to discover that his ride home was 7/8 submerged.  Talk about needing a backup plan!

After that inspiring speech (which connected to the event by talking about respecting DATA and FACTS in the context of climate change), CEO Bob Hammer took to the stage to discuss the major themes of the event.

First, and most immediately impactful to the industry, is the release of Commvault’s HyperScale platform, which runs on commodity hardware and signals the beginning of the end of legacy three-tier enterprise backup architecture.  Built on Red Hat’s GlusterFS, HyperScale gives Commvault a scale-out back-end storage platform on which they can layer a tuned version of their media agent/GRIDStore technology (which creates scalable, load-balanced, and fault-tolerant pools of data movers), all to provide a linearly scalable home for secondary data copies.

Notable is that CV has chosen to give customers a choice: use CV’s own hardware (offered as a subscription of HW and SW!) or run the software on their own hardware from a number of verified vendors that span all the usual suspects (HPE, Cisco, Dell, etc.).

More notable is that Cisco has aggressively gotten behind this product with their ScaleProtect offering, which is CV HyperScale on the Cisco UCS platform, sold 100% through the Cisco channel.  I’ve spoken with three different Cisco sales reps in different regions, and all of them are planning to hit their territories hard with this offering.

Hammer also talked about the pending release of new analytics offerings, which will use AI and deep learning to glean actionable information from secondary data sets, for the purposes of properly classifying, retaining, and/or deleting data, as well as helping to achieve the ever-more-difficult objective of compliance.

More to come from this event- but I certainly look forward to seeing Commvault’s flag flying on the South Pole!


Storage companies, persistent losses, and architectural decisions

Today I saw Tintri’s stock price take a 17% hit.  Consensus among my various independent storage pals is that they’ve got two quarters of cash left, and prospects for their ability to continue forward are NOT good.  Their IPO was a huge disappointment, but even if it had raised the desired amount of capital, the revenue and forward sales outlook were still both pointing to a bleak future for these guys. It’s too bad; they have some cool technology around VM performance insight in their all-flash platform.

Also, the news this week from Barron’s is that Pure Storage is shopping themselves (this comes via Summit Redstone’s Srini Nandury); IF TRUE, that’s a clear indication that they see no independent future that protects their shareholders’ value. Their revenues continue to grow, but they also have yet to produce a single dollar in profit, and it’s doubtful they will in the near future.

HPE’s acquisition of Nimble is an example of how deploying platforms made by persistently negative-income technology firms can still work out OK; HPE is continuing development on the platform, and providing existing users with the comfort that their investment and their time-consuming integrations are safe. So Nimble customers can thank their lucky rabbit’s feet that the right acquirer came along.

But what if there is no HPE equivalent to rescue these technology companies? Certainly Pure has built a great customer base, but will anyone want to put up the $3B-$4B that would satisfy the investors? Would Cisco risk alienating its existing partners and go it alone in the converged infrastructure space after the Whiptail fiasco?  And who in their right mind would touch Tintri at this point from an acquisition perspective?

If you’ve deployed these platforms in your environment, you have some thinking to do.

Consider this net income graph for NetApp, from their IPO in 1996 onward:


On an annual basis, NetApp has generally run at a profit from the beginning of its public life.  Certainly, for the four years prior, they ran at a net loss, but their product at the time was used for specific application workloads (development/engineering), NOT as a foundational IT component that touched every piece of the enterprise, as it does today quite well.  It would have been unwise for an IT architect in 1994 to consider using NetApp to house all of their firm’s data, as the future of the technology wasn’t yet certain.  But NetApp used their IPO in ’96 as a statement that “we’re here, we’re profitable, and we’re ready to make our lasting mark on the industry.”  Which they did, and continue to do.

For comparison, let’s look at Pure Storage’s net income since its IPO:


It’s hard to call the right side of that chart a “turnaround”. It’s more of an equilibrium.

Now Pure Storage has some really good technology. The stuff works, it works well, and it’s relatively easy to implement and manage.  However, Pure does not differentiate enough from the other established (and PROFITABLE) competitors in its space for that differentiation to create a new market it can dominate (they’re not alone in this; NONE of the smaller storage vendors can make that claim, and that’s the problem).  As is normal in today’s Angel-to-VC-to-IPO culture, Pure used its IPO as an exit strategy for its VCs and to raise more desperately needed cash for a money-losing growth strategy (the net income chart speaks for itself). That strategy is failing.  With the news that they’re shopping themselves, they realize this too.  When prospective clients realize it, things are really going to get difficult.

So while the tech geek in all of us LOVES new and cool technology, if you’re going to make a decision on a platform that is foundational in nature (meaning it will be the basis of your IT infrastructure and touch everybody in your firm), you’d be well advised to dig deep into the income statement, balance sheet, and cash flow of the firm making that technology, and put those stats right up there with the features/benefits and speeds-and-feeds.  Otherwise, you may have some explaining to do later.

Bottom line: if your selected storage manufacturer is losing money, has never actually made any money, and doesn’t look like they’re going to make money anytime soon, there’s a relatively good chance you’re going to be forced into an unwelcome decision point in the near-to-medium future. Caveat emptor.




NetApp gets the OpEx model right

Ever since the Dot-Com Boom, enterprise storage vendors have had “Capacity on Demand” programs that promised a pay-as-you-use consumption model for storage. Most of these programs met with very limited success, as the realities of the back-end financial models meant that the customers didn’t get the financial and operational flexibility to match the marketing terms.

The main cause of the strain was the requirement for some sort of leasing instrument to implement the program, meaning there was always some baseline minimum consumption commitment, as well as a late-stage penalty payment if the customer failed to use as much storage as was estimated at the beginning of the agreement. This wasn’t “pay-as-you-use” so much as “just-pay-us-no-matter-what”.

NetApp has recently taken a novel approach to this problem by eliminating the need for equipment title to change hands from NetApp to the financial entity backing the agreement. With the new NetApp OnDemand program, NetApp retains title to the equipment, and simply delivers what’s needed.

An even more interesting feature of the program is that the customer pays NOT for storage, but for capacity within three distinct performance service levels, each defined by a guaranteed number of IOPS/TB, and each with a $/GB/month rate associated with it.

To determine how much of each service level a given customer needs, NetApp will perform a free “Service Design Workshop” that uses the NetApp OnCommand Insight (OCI) tool to examine each workload and show its IO density (IOPS/TB). From there, NetApp simply delivers storage that is designed to meet those workloads (along with consideration for growth, after consulting with the customer). They include the necessary software tools to monitor the service levels (Workflow Automation, OnCommand Unified Manager, and OCI), as well as Premium support and all of the ONTAP features that are available in their Flash and Premium bundles.
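To make the IO-density idea concrete, here’s a minimal sketch of the kind of classification a Service Design Workshop performs. The tier names, IOPS/TB guarantees, and workload figures below are invented for illustration; NetApp’s actual service-level definitions differ.

```python
# Hypothetical sketch of IO-density-based service-level classification.
# All thresholds and workload numbers are made up for illustration.

def io_density(iops: float, capacity_tb: float) -> float:
    """IO density = IOPS per TB of consumed capacity."""
    return iops / capacity_tb

# Assumed service levels: (name, guaranteed IOPS/TB), hottest first
SERVICE_LEVELS = [("Extreme", 2048), ("Premium", 512), ("Value", 128)]

def classify(iops: float, capacity_tb: float) -> str:
    """Pick the least expensive service level whose IOPS/TB guarantee
    still covers the workload's measured IO density."""
    density = io_density(iops, capacity_tb)
    for name, guarantee in reversed(SERVICE_LEVELS):  # cheapest first
        if density <= guarantee:
            return name
    return SERVICE_LEVELS[0][0]  # hotter than the top tier: place there anyway

# Example workloads: (name, measured IOPS, consumed TB)
workloads = [("home-dirs", 1_000, 50), ("oltp-db", 40_000, 20)]
for name, iops, tb in workloads:
    print(name, classify(iops, tb))  # home-dirs → Value, oltp-db → Extreme
```

The point of the exercise is that a low-IOPS file share (20 IOPS/TB here) lands in a cheap tier while a hot database (2,000 IOPS/TB) lands in the guaranteed-performance tier, and each is billed at its own rate.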

Customers can start as low as $2k/month, and go up AND DOWN with their usage, paying only for what they use from a storage perspective AFTER efficiencies such as dedupe, compression, and compaction are taken into account. More importantly, the agreement can be month-to-month or annual; the shorter the agreement duration, of course, the higher the rate. This is America, after all.
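A rough sketch of how a bill under this model might be computed: usage is measured after efficiencies, a shorter term carries a higher rate, and there is a program floor (the ~$2k/month entry point mentioned above). The specific rates and the 2.5:1 efficiency ratio are hypothetical, not NetApp’s published pricing.

```python
# Hypothetical monthly-bill calculation for a pay-per-use storage program.
# Rates, efficiency ratios, and the term discount are invented for illustration.

RATE_PER_GB_MONTH = {"monthly": 0.08, "annual": 0.05}  # assumed $/GB/month

def monthly_bill(written_gb: float, efficiency_ratio: float,
                 term: str = "monthly", minimum: float = 2_000.0) -> float:
    """Bill on capacity consumed AFTER dedupe/compression/compaction,
    with month-to-month terms priced higher than annual, subject to
    a program minimum (the entry-point floor)."""
    effective_gb = written_gb / efficiency_ratio
    return max(minimum, effective_gb * RATE_PER_GB_MONTH[term])

# 200 TB written at 2.5:1 efficiency -> only 80 TB is billable
print(monthly_bill(200_000, 2.5, "annual"))   # → 4000.0
print(monthly_bill(200_000, 2.5, "monthly"))  # → 6400.0
```

Because billing happens after efficiencies, the customer directly pockets the benefit of dedupe and compression, which is the opposite of the old capacity-on-demand leases that billed on raw committed capacity.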

The equipment can sit on the customer premises or in a co-location facility, even in a near-cloud facility such as Equinix, making the NetApp Private Storage economics a true match for the cloud compute that will attach to it.

A great use case for NetApp OnDemand is alongside enterprise data management software, such as Commvault, which can be sold on subscription as well as priced by capacity. Since the software is then entirely OpEx, the target storage can be sold under the same financial model, allowing the customer to have a full enterprise data management solution with the economics of SaaS. Further, there would be no need to over-buy storage for large target environments; it would grow automatically as a function of use. The same would hold for any software sold on subscription, making an integrated solution easier to budget for, as there is no need to cross the CapEx/OpEx boundary within the project.

This new consumption methodology creates all sorts of new project options. The cloud revolution is forcing companies such as NetApp to rethink how traditional offerings can be re-spun to fit the new ways of thinking in the front offices of enterprises. In my opinion, NetApp has gotten something very right here.


NetApp + SolidFire…or SolidFire + NetApp?

So what just happened?

First- we just saw AMAZING execution of an acquisition.  No BS.  No wavering.  NetApp just GOT IT DONE, months ahead of schedule.  This is right in-line with George Kurian’s reputation of excellent execution.  This mitigated any doubt, any haziness, and gets everyone moving towards their strategic goals.  When viewed against other tech mergers currently in motion, it gives customers and partners comfort to know that they’re not in limbo and can make decisions with confidence.  (Of course, it’s a relatively small, all-cash deal- not a merger of behemoths).

Second – NetApp just got YOUNGER.  Not younger in age, but younger in technical thought.  SolidFire’s foundational architecture is based on scalable, commodity-hardware cloud storage, with extreme competency in OpenStack.  The technology is completely different from ONTAP, and provides a platform for service providers that is extremely hard to match.   ONTAP’s foundational architecture is based on purpose-built appliances that perform scalable enterprise data services, which now extend to hybrid cloud deployments.  Two different markets.  SolidFire’s platform went to market in 2010, 19 years after ONTAP was invented, and both were built to solve the problems of their day in the most efficient, scalable, and manageable way.

Third – NetApp either just made themselves more attractive to buyers, or LESS attractive, depending on how you look at it.

One could claim they’re more attractive now as their stock price is still relatively depressed, and they’re set up to attack the only storage markets that will exist in 5-10 years, those being the Enterprise/Hybrid Cloud market and the Service Provider/SaaS market.  Anyone still focusing on SMB/MSE storage in 5-10 years will find nothing but the remnants of a market that has moved all of its data and applications to the cloud.

Alternatively, one could suggest a wait-and-see approach to the SolidFire acquisition, as well as to the other major changes NetApp has made to its portfolio over the last year (AFF, AltaVault, cloud integration endeavors, as well as all the things it STOPPED doing). [Side note: with 16TB SSDs coming, look for AFF to give competitors like Pure and XtremIO some trouble.]

So let’s discuss what ISN’T going to happen.

There is NO WAY that NetApp is going to shove SolidFire into the ONTAP platform.  Anyone who is putting that out there hasn’t done their homework to understand the foundational architectures of these two VERY DIFFERENT technologies.  And what would possibly be gained by doing so?   Spinnaker, in contrast, had technology that could let ONTAP escape its two-controller, bifurcated storage boundaries.  The plan from the beginning was to use the SpinFS goodness to create a non-disruptive, no-boundaries platform for scalable and holistic enterprise storage, with all the data services that entailed.

What could (and should) happen is that NetApp adds some Data Fabric goodness to the SolidFire product; perhaps this concept is what is confusing the self-described technorati in the web rags.  NetApp re-wrote and opened up the LRSE (SnapMirror) technology so that it could move data among multiple platforms, so this wouldn’t be a deep integration but rather an “edge” integration; the same is being worked into the AltaVault and StorageGRID platforms to create a holistic and flexible data ecosystem that can meet any need conceivable.

While SolidFire could absolutely be used for enterprise storage, its natural market is the service provider who needs to simply plug and grow (or pull and shrink).  Perhaps there are a feature or two that the NetApp and SolidFire development teams could share over coffee (I’ve heard that the FAS and FlashRay teams had such an event that resulted in a major improvement for AFF), and that can only be a good thing.  But the integration of the two platforms isn’t in anyone’s interest, and everyone I’ve spoken to at NetApp, both on and off the record, is adamant that NetApp isn’t going to “ONTAP” the SolidFire platform.

SolidFire will likely continue to operate as a separate entity for quite a while, as the sales groups covering service providers are already distinct from the enterprise/commercial sales groups at NetApp.  Since ONTAP knowledge won’t transfer to SolidFire deals, I would expect that existing NetApp channel partners won’t be encouraged to start pushing the SolidFire platform until they’ve demonstrated both SolidFire and OpenStack chops.  I would also expect the reverse to be true; while many of SolidFire’s partners are already NetApp partners, it’s unknown how many have Clustered ONTAP knowledge.

I don’t see this acquisition as a monumental event with immediately demonstrable external impact on the industry, or on either company.  The benefits will become evident 12-18 months out and position NetApp for long-term success, vis-à-vis “flash in the pan” storage companies that will find their runway much shorter than expected in the 3-4 year timeframe.  As usual, NetApp took the long view.  Those who see this as a Hail Mary to rescue NetApp from a “failed” flash play aren’t understanding the market dynamics at work.  We won’t be able to measure the success of the SolidFire acquisition for a good 3-4 years; not because of any integration that’s required (as with the Spinnaker deal), but because the bet is on how the market is changing and where it will be at that point.  With this acquisition, NetApp is betting it will be the best-positioned to meet those needs.

