Netapp SolidFire: FlashForward notes

I had the privilege of attending the Netapp SolidFire Analyst Day last Thursday, and rather than go through what the company told us (which was all great stuff), I'll focus on what I heard from SolidFire's customers while there, which was probably more relevant and important.

I won't repeat what's been detailed elsewhere about the new capacity licensing model from Netapp SolidFire, which breaks the storage appliance model by decoupling the software license from the hardware it runs on.  This new model, called "FlashForward", allows for flexibility and a return on investment previously unavailable in the enterprise storage market.

The service provider customers that participated in the breakouts unanimously agreed that this new licensing model was going to be a huge win for them.  The most striking point came from one service provider who has different depreciation schedules for software and hardware – something that couldn't be taken advantage of under the appliance model.  Since the FlashForward program allows licenses to be moved between hardware instances, the software can now be depreciated over a longer timeframe (in this SP's case, 7 years).  Hardware refreshes then come at a much lower cost in years 3-4.  This all results in the ability to provision resources to tenants at a lower incremental cost per resource.

This could also have a major impact on the ability of companies to finance or lease SolidFire solutions: if you're financing a given set of software over 6-7 years, the monthly bill will obviously be less than if you're amortizing it over 3 years.  Hardware remains on a 36-month lease with its typical residual.  In a given year, that has the potential to reduce cash out considerably.
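
To make the math concrete, here's a rough back-of-the-envelope sketch in Python. The dollar figures and terms are made up for illustration (they are not SolidFire pricing), and interest and residuals are ignored; the point is simply how decoupling the software term from the hardware term lowers the monthly outlay.

```python
# Back-of-the-envelope comparison: appliance model (everything amortized over
# 3 years) vs. FlashForward-style decoupling (software over 7 years, hardware
# over 3).  All dollar figures are hypothetical placeholders, not SolidFire
# pricing, and interest/residuals are ignored.

SW_COST = 400_000   # hypothetical Element OS capacity license
HW_COST = 600_000   # hypothetical node hardware


def monthly(cost, years):
    """Straight-line monthly amortization."""
    return cost / (years * 12)


appliance_model = monthly(SW_COST + HW_COST, 3)               # all on a 3-year schedule
decoupled_model = monthly(SW_COST, 7) + monthly(HW_COST, 3)   # SW on 7 years, HW on 3

print(f"Appliance model:   ${appliance_model:,.0f}/month")
print(f"Decoupled model:   ${decoupled_model:,.0f}/month")
print(f"Monthly reduction: ${appliance_model - decoupled_model:,.0f}")
```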

Enterprise customers weren't quite as giddy about FlashForward as they were about the SolidFire technology itself; however, the folks there were all IT, not finance.  Service Provider folks tend to be more focused on the economics of resource delivery.  The Enterprise IT folks were all about the "set-it-and-forget-it" benefits of their SolidFire implementations, with one customer stating that they only had to call support once in two years, for an event that wasn't even SolidFire-related.  Certainly, we had happy customers at the Analyst event, but their stories were all about the challenges of choosing a smaller storage vendor (at the time), against major industry headwinds, and having to justify that decision with full proof-of-concepts.  Impressive stuff.

Of course SolidFire has made its name in the Service Provider market; their embrace of automation technology and the devops philosophy is recognized as leading the market.  This is precisely why we should be keeping a very close eye on these folks, as Enterprise IT looks to become its own Service Provider, and automate its on-premises resources in the same manner in which it automates its cloud resources.  Given this automation advantage and the acquisitional flexibility now offered,  SolidFire is going to align very well with enterprises that have historically implemented the straight SAN storage appliance model but are looking to transform and modernize.


Solidfire Analyst Day – 6/2/16 – Live Blog

  • (7:52AM MDT) Exciting day in Boulder, CO, where a bunch of folks from Solidfire (and Netapp) are going to be going through the state of their business, and announce all sorts of stuff.  So, instead of trying to fit everything into tweets, I figured I’d do my best live blog impersonation! Stay here for updates, keep refreshing!


8:07am:  Here we go!  John Rollason, SolidFire Marketing Director, takes (and sets) the stage.

8:12am: #BringontheFuture is the hashtag of the day.  Coming up is the keynote from Solidfire founder Dave Wright.  This will be simulcast.  I already see tweets about the new purchasing model, yet nobody’s ANNOUNCED anything.  Oooh, Jeremiah Dooley’s going to do FIVE demos later.  AND, George Kurian hits the stage at 5:15.


8:24am:  John Rollason having a heck of a time trying to pronounce “Chautauqua”, which is where Solidfire will be taking folks for hiking tomorrow. Also- phones on “stun” please.


10:31am: Dave Wright is on stage.

“One storage product cannot cover all the use cases for Flash in the datacenter”.

  • EF- SPEED Demon. (One person sitting with me called it the "Ricky Bobby" flash storage.)
  • AFF- DATA SERVICES.
  • SOLIDFIRE- SCALE.

This portfolio is now a $700M/year run rate.  If you think Netapp is a laggard, you haven’t been paying attention!!  The Analyst view of the AFA Market WAY over-estimated the impact of hybrid vs all-flash.

Per Dave: "Netapp KILLING IT with all-flash adoption.  Way Way ahead of analyst projections."  33% of bookings are now all-flash.  The adoption curve is so far ahead of what analysts projected, one wonders what they were thinking.

HA! Graphic: a headstone – RIP HYBRID STORAGE.

ANNOUNCEMENT : One platform, Multiple Consumption Models!

ANNOUNCEMENT: Element OS 9: FLUORINE
- VVols done right with per-VM QoS, new UI, VRF multi-tenant networking, 3x FC performance, and increased scale with 4 FC nodes.
Functional AND sexy.

ANNOUNCEMENT: FULL FLEXPOD INTEGRATION!  Converged Infrastructure – VMWare, OpenStack, one of the most compelling converged infrastructure offerings in the market.

ANNOUNCEMENT: SF19210 Appliance- 2x perf, 2x capacity, 30% lower $/GB, >1PB Effective Capacity & 1.5M IOPS in 15U.

“Appliance licensing is too rigid for advanced enterprises, agile datacenters”.
Costs are all up front, licenses can't be transferred, and data reduction imposes uncertainty/risk around the actual cost of storage.

ANNOUNCEMENT: SF FlashForward Storage

License Element OS SW on an enterprise-wide basis, based on maximum provisioned capacity.
Purchase SF-certified HW nodes on a pass-through cost basis.

  • Flexible – can purchase SW/HW separately – more efficient spend
  • Efficient – No need to re-purchase when replacing, upgrading, consolidating – no stranded capacity
  • Predictable – no sw cost penalty from low-efficiency workloads.
  • Scalable – usage-based discount scheme, pass-through HW pricing – no unnecessary SW or support cost increases as the footprint expands.

9:30am: Brendan Howe takes the stage to give a business update.

  • Runs Emerging Products group @ Netapp.
  • Reaching “extended customer communities”
    • Run
    • Grow
    • Transform
  • Scale  (SF) vs. Data Services (AFF) vs. Speed (EF) (same story as before)

Opinion – perhaps this is TOO discrete a model for positioning. Certainly SF is at times perfectly (perhaps MORE) appropriate than AFF for certain classic datacenter storage use cases.  Anyone who sticks to this religiously is missing the point IMO.

  • Operational Integration –
    • Structured SF Business Unit
    • GTM Specialists Team under James Whitmore
    • Broad scale & pathway enablement
      • global scale , “all-in”
    • Fully integrated shared services
    • SolidFire office of the CTO (headed by CTO Val Bercovici)
    • Must protect this investment

9:47am: James Whitmore takes the stage.

 

  • VP of Sales & Marketing
  • 46% YoY bookings growth
  • deal size >$240k avg
  • 57% Net New Account acquisition YoY
  • 218 Node largest single customer purchase (!!!!)
  • >75% bookings through channel partners
  • >30 countries
  • 77% in the Americas
  • Comcast, Walmart, Target, AT&T, TWC, Century Link
  • 2 segments
    • Transforming Digitals
      • Nike, BNP Paribas
    • Native Digitals
      • Ebay
      • Kinx
    • Cloud/Hosting Providers
      • SaaS
      • Cloud/Hosting
      • Cable/Telco
  • >80% increase in Enterprise Net-new account
    • wins across every major vertical/use case
      • finance, healthcare, SaaS, Energy, VMW/HyperV, Ora/SQL, VDI, Openstack
    • customers truly committed to transforming their datacenters
    • dominance in service provider market
    • 20% YoY increase in # of SP’s, but a 3x+ increase in capacity!
    • Very strong repeat business
  • Channel Integration
    • merge programs
    • enable new-to-SF partners
    • Leverage global distribution network
  • SF Specialist Team
    • Invest to accelerate growth
    • Transfer product capability across org
    • Direct specialists to high-growth segments and products

10:16am – Dave Cahill takes the stage.

  • Talking about FlashForward program.
  • Senior Director, Product & Strategy, Netapp SolidFire
  • “Flexible and Fair”
  • How it works
    • Buy Element OS Cap License
    • Choose a certified SF HW platform
    • Order through SF or preferred channel partner
  • Flexible, Efficient, Predictable, Scalable
  • Tiered pricing model
  • Top left (blue) of the graph is about $1/GB with 20% support; towards the right side of the graph, support is at 25%.
  • OPINION: Even for customers that will purchase capacity licenses and hardware at equal paces, the cumulative tiered discount model ensures that customers who grow are rewarded with a lower incremental cost per TB.  This is similar to Commvault's recent licensing model change.  (A rough sketch of the math follows below this list.)
  • The analysts are really grilling Dave on the model…question from the field about overlapping licenses during migration…question on ratio of HW/SW costs…
  • If you over-estimate the efficiency ratio, you're only buying more hardware – NOT more appliances.
  • Snapshots are NOT included in the provisioned capacity license (good) – but if you want to hold extended retention, you just buy more hardware.
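
To illustrate the opinion above, here's a rough Python sketch of how a cumulative tiered capacity license plays out as a footprint grows. The tier breakpoints and rates are invented for illustration – the only figure shown on the slide was roughly $1/GB at the low end of the curve – and the license is sized on maximum provisioned capacity, per the FlashForward description earlier.

```python
# Illustrative cumulative tiered capacity-license model.  The tier breakpoints
# and $/GB rates below are invented for illustration; the only figure shown on
# the slide was roughly $1/GB at the low end of the curve.  The license is
# sized on maximum provisioned capacity, per the FlashForward description.

TIERS = [(100, 1.00), (500, 0.80), (2000, 0.60), (float("inf"), 0.45)]  # (TB up to, $/GB)


def license_cost(provisioned_tb):
    """Cumulative tiered cost: each slice of capacity is priced at its tier's rate."""
    cost, prev_cap = 0.0, 0
    for cap, per_gb in TIERS:
        slice_tb = min(provisioned_tb, cap) - prev_cap
        if slice_tb <= 0:
            break
        cost += slice_tb * 1024 * per_gb
        prev_cap = cap
    return cost


for tb in (100, 500, 1000, 3000):
    total = license_cost(tb)
    print(f"{tb:>5} TB provisioned -> ${total:,.0f}  (avg ${total / (tb * 1024):.2f}/GB)")
```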

IMPORTANT: THIN-PROVISIONING WILL CONSUME CAPACITY LICENSE at the provisioned amount, whether actually consumed or not.

QUESTION: Is the capacity license consumed PRE-efficiency or POST-efficiency?

10:50am- Dan Berg & Jeremiah Dooley are going to do geek stuff!


  • Solidfire Products
    • ElementOS
      • Fluorine
        • VVols, VASA2 built into the ElementOS SW – on a fully HA aware cluster!
        • New UI
        • Greater FC perf and scale
        • Tagged default networks support
        • VRF VLAN Support
    • Platforms
      • SF19210 – Highest density & performance
        • Sub-millisecond latency – 100k IOPS
        • 40-80TB effective cap
      • Active IQ
        • New- alerting/mgmt framework
        • predictive cluster growth/capacity forecasting
        • enhanced data granularity
        • significant growth in hist data retention and ingest scale
        • >80% leverage Active IQ
    • Ecosystem Products
      • Programmatic- powershell & sdks
      • open source – docker/openstack/cloudstack
      • Data Protection – VSS, Backup Apps, Snapshot offloading
      • VMW- VCP, vRealize Suite

11:00am – Jeremiah Dooley – Principal Flash Architect

VMWare Virtual Volumes and Why Architecture Matters

VMFS – the best and worst thing to happen to storage

Nice slide, RIP VMFS.

VVol adoption is "slow" – it takes time for partners/vendors to get their "goodies" in there

Since customers are virtualizing critical apps/DBs now, new feature/release adoption typically runs a generation behind

Getting customers into VVols:

  • Build policies through VASA2, like QoS MIN/MAX/BURST (see the sketch below)
  • Move a VM from VMFS into VVols – an 8-line PowerShell script: ID/migrate/apply

Increase size of VVOLs- big difference from VMFS

GOAL: Do all the provisioning/remediation/growth without talking to the storage team.
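
That per-VM policy plumbing ultimately maps to the same min/max/burst QoS settings the Element API exposes on each volume. Below is a rough Python sketch of setting those limits directly over the Element JSON-RPC API; the method name (ModifyVolume), endpoint path, and QoS field names are recalled from the Element API docs rather than taken from the session, and the cluster address and credentials are placeholders, so verify against your Element OS release before relying on it.

```python
# Rough sketch: set min/max/burst QoS on a SolidFire volume via the Element
# JSON-RPC API.  Endpoint path, method name, and field names are from memory
# of the Element API reference -- verify against your Element OS version.
import requests

MVIP = "https://sf-cluster.example.com"  # placeholder cluster management VIP
AUTH = ("admin", "password")             # placeholder cluster admin credentials


def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed floor
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-lived, credit-based burst
            },
        },
        "id": 1,
    }
    resp = requests.post(f"{MVIP}/json-rpc/9.0", json=payload,
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # e.g. pin a database volume to a 1k floor / 10k ceiling / 15k burst
    set_volume_qos(volume_id=42, min_iops=1000, max_iops=10000, burst_iops=15000)
```

With VVols and VASA2, the same min/max/burst values get expressed as vCenter storage policies and applied per VM rather than per datastore.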


Why Openstack?

  • Agility – elasticity, instant provisioning, meet customer demand
  • capex to opex model
  • cluster deployment within an hour, given HW is ready
  • Prod Development- self provisioning/ self service- dev can create/destroy environments
  • Why not? Lack of business cases (per another presenter) – so VMWare is a good fit for those
  • Docker – OpenShift – Docker/Kubernetes/Docker Swarm

Imagine moving an 8-node cluster from one rack to another with NO DOWNTIME


12:00 – LUNCH, back at 1pm.


1:00 – Joel Reich takes the stage.


Joel is walking through the ONTAP 9 feature set, including easier application-specific deployments and high-capacity flash.

ONTAP Cloud – many of Netapp's larger customers are using this for test/dev, some for prod, and they fail back on-prem.

ONTAP Select – Flexible capacity-based license, on white box equipment

Question: When the heck did White Box ever save anybody ANYthing in the long run?

ANNOUNCEMENT: ONTAP Select with SolidFire Backend coming!


5:03pm- George Kurian – Closing Keynote

Enabling the Data Powered Digital Future

– Market Outlook- Enterprise IT spending has SLOWED –

ZING: EMC/Dell is a “Tweener” – not as vertically integrated as Oracle, they don’t do the whole stack

  • Kurian- Netapp doesn’t need to be “first to market”.
  • Q1 Market Share in AFA- Netapp 23%!! (IDC)
  • New Netapp: Broad portfolio that addresses broad customer requirements.
  • Netapp – “We protect and manage the world’s data”
  • FY16 $5.5B, Cash $5.3B, total assets $10.0B
  • 85% of shipments are clustered systems
  • $1B free cash flow PER YEAR.  Talk about stability.
  • Pre-IPO, IPO, and post-IPO stories will be written in red ink
  • 1) Pivot to Accelerate growth 2) Improve Productivity
  • Work effectively to deliver results – Priorities, Integrated execution, Follow through
  • Renew the Organization – Leadership, High Perf Culture, Talent
  • 75% of CEOs will say biggest concern is not strategy but execution
  • Incubate – pre-market (i.e. cloud), complete freedom of thinking, fail fast
    • “prodOps”
  • Grow – focus on biggest markets with biggest opp, inspect at district level, is the channel ready
  • Globalize – mainstream GTM through every pathway possible
  • Harvest – prepare for end of life, use revs to fund new businesses
  • $700M+ AFA Annual Run Rate
  • 80% YoY unit growth cDOT, 30% e-series YoY Unit growth
  • 185% AFA YoY rev growth
  • Transforming Netapp
    • remove clutter from 20 years of unmanaged growth
    • focus on the best ideas
    • “velocity offerings” – preconfigured offerings
    • fine tune GTM model
    • Shared services
  • Close
    • Fundamental change
    • Good progress and accelerating momentum – more work needed
    • Strong foundation
    • Uniquely positioned to help clients navigate the changing landscape.

That's it from Boulder, CO!  Off to one of the three dinners that Netapp SolidFire has set up for the crowd tonight!


X.AI – Amy Ingram is now one of my favorite “people”

So. Who the heck is Amy Ingram?   

She’s awesome. She’s organized. She really knows me (mostly because I told her all about me).

Most of all, she’s….not REAL. Don’t hold that against her, it certainly doesn’t inhibit her productivity.

Here's the background. I was given the opportunity to beta test x.ai's offering last week. x.ai leverages artificial intelligence (thus, the "ai") to help you organize your calendar in a much more efficient way. Like many others, I spend WAY too much time trying to coordinate meetings with people. We all have so many meetings and calls on our calendars that finding common free time to conduct business (or whatever) usually involves an indeterminate volley of emails, phone calls, and texts – and sometimes even the unfortunate gaze at the calendar on the phone while driving. Not good.

X.ai presents Amy Ingram (initials…ai) as what READS like a real person via email. She’s actually an intelligent agent that has access to my calendar (google for now, others to come), so she can tell when I’m busy. Since I run on Office365, the challenge was to get Amy the required visibility to my calendar, which I achieved by one-way syncing my O365 calendar with a newly-created Google Calendar (using a separate tool I will NOT endorse, it gets the job done but yikes). More importantly, the agent knows what buffers I need around calls and meetings, where I live, where my offices are, and where I prefer to meet for coffee or lunch.

When I want to get a meeting or call set up, I simply email Amy and the person I want to set the meeting with, and ask for it. In plain english. I don’t give a time, I just say “Amy, please find 30 minutes for Dennis and I to talk via phone”. I can say this in many different ways, Amy has understood every permutation I’ve thrown at her. And that’s IT. Amy will email Dennis and give him three different possible times, and he can respond in plain english as well, Amy understands the context. For instance, if Amy provided three different days to Dennis, he can just reply with “I’ll take Tuesday.” Amy will know WHICH Tuesday and the times she said I could have the call. After that, we both get the invite from Amy. If Amy were human, the emails, responses, and results would be the same.

What’s even better is when I need to set up a meeting with 3 or more people- Amy does ALL of the work for me. No frustrating discussions of when he’s free, when she’s free, and “oh someone booked me so now I’m not free”… Amy handles ALL of the back-forth-back-forth again. I can ask for a status update any time I like, and she’ll respond with what she’s working on, who she’s waiting for responses from, and what she’s gotten done recently.

If I cancel a meeting, she’ll send an email to the invitees telling them that she “wanted to let them know that the meeting needed to be cancelled”. That’s more than I usually get when someone cancels on me.

I’ve demonstrated this to a few of my colleagues, with one universal response: Envy. So, that’s sort of a mission accomplished, right?

I’ve noticed that sometimes it does take some time for Amy to get things done, but I attribute that to the “beta” tag on the solution. So far, Amy has been 100% accurate and all of the people that have received emails from Amy working out schedules have responded in ways Amy understands (which is basically English!).

This technology has LEGS, even beyond the individual executive looking to optimize their time. I can see this being expanded for use by project managers who have to cat-herd 10 people onto a project con call- just IMAGINE the cost savings across the board for a PMO. PMs in my company spend hours coordinating calls on a daily basis. Telemarketers who set up meetings can make more calls and reach more people if they no longer have to coordinate the dates and times.

Further, as the Intelligent Agent evolves, it’s not hard to imagine Amy making the lunch reservations for your meeting as well, buying the tickets for the sports event, booking your flight for that conference, or even adding information into your expense management application since she has all the meeting data already. The sky is the limit with this. AND, it already WORKS.

Kudos to the x.ai team. I’m truly looking forward to further advancements with this technology. In the meantime, I’m going to enjoy this beta as long as I can!

Keep an eye on these guys, they’ve got something very right here.


NPS-as-a-Service: The perfect middle ground for the DataFabric vision?

Everyone pondering the use of hyperscaler cloud for devops deals with one major issue: how do you get copies of the appropriate datasets in a place to take advantage of the rapid, automated provisioning available in cloud environments? For datasets of any size, copy-on-demand methodologies are too slow for the expectations set when speaking of devops, which imply a more "point-click-and-provision" capability.

NetApp has previously provided two answers to this problem: Cloud ONTAP and NetApp Private Storage.

Cloud ONTAP is an on-demand Clustered ONTAP instance within Amazon EC2/EBS (and now Azure) that you can spin up and spin down. This is really handy, but it can get expensive if you want the data to be constantly updated with data from on-prem storage, since the cloud instance must always be up and consuming resources. Further, some datasets could have custody requirements that prevent them from physically residing on public cloud infrastructures.

NetApp Private Storage addresses both of these problems. It takes customer purchased NetApp storage that is located at a datacenter physically close to the hyperscaler, and connects that storage at the lowest possible latency to the hyperscaler’s compute layer. The datasets remain on customer-owned equipment, and the benefits of elastic compute can be enjoyed. Of course, the obvious downside to this is the requirement of capital expenditure- the customer must purchase the storage (or at least lease it). Also, the customer must maintain contracts with the co-location site housing the storage, and do all the work to maintain the various connections from the AWS VPC through the Direct Connect, and from the customer datacenter to the co-location site. It’s a lot of moving parts to manage. Further- there’s no way to get started “small”; it’s all or nothing.

But wait! There’s a NEW OFFERING that solves ALL of these problems, and it’s called “NPSaaS”, or NetApp Private Storage as-a-Service.


This offering, currently only available via Faction partners (but keep your eyes on this space), will provide all of the goodness of NetApp Private Storage, without most of the work and NONE of the capital expenditures. It's not elastic per se, but it is easily orderable and consumable in 1TB chunks, and in either yearly or month-to-month terms. Each customer gets their own Storage Virtual Machine (SVM), providing completely secure tenancy. It can provide a SnapMirror/SnapVault landing spot for datasets in your on-prem storage, ready to be cloned and connected to your EC2/Azure compute resources at a moment's notice. You can of course simply provide empty storage at will to your cloud resources for production apps as well.

When you consume storage, you’ll be consuming chunks of 100mb/s throughput from storage to compute as well. You can purchase more throughput to compute if you want- not IOPS. You’ll get 50mb/s internet throughput as well. All network options are on the table of course, you can purchase as much as you’d like, both from storage->compute, and from storage->internet (or MPLS/Point-to-Point drop).

How is this achieved?

Faction has teamed up with NetApp to provide NetApp physical storage resources at datacenters close to the hyperscalers, ready for use usually within 3-5 days of completing the paperwork, and it will be very simple to order. This means no additional co-lo contracts and no storage to buy; all you need to do is order your Amazon Direct Connect and VPC, provide that information to Faction, and you'll be set in a few days to use your first 1TB of storage.

Once set up, you’ll be able to use NFS and iSCSI protocols at will. There are currently some limitations that prevent use of CIFS, but future offerings will provide that functionality as well.

From a storage performance perspective, we're currently looking at FlashPool-accelerated SATA. Future offerings may provide other options, as well as dedicated hardware should the requirements dictate. But not yet. This level of storage performance provides the best $/GB/IOPS bang for the buck for the majority of storage IO needs for this use case; but if you're looking for <1ms, multi-100k IOPS, "standard" NPS is what you're looking for.

Faction also provides a SnapMirror/SnapVault-based backup into their own NetApp-based cloud environment, at additional charge. You could also purchase storage in multiple datacenters, SnapMirroring between them for regional redundancy to match the compute redundancy you enjoy from either AWS or Azure.

Note to remember: This offering is NOT a virtualized DR platform. You can’t take your VMFS or NFS datastores and replicate them into this storage with the idea of bringing them up in Amazon or Azure- that won’t work. So from a replication perspective, this would be more for devops capabilities to provide cloned datasets to your cloud VM’s.

Also, the management of the storage is almost completely performed by Faction on a service-ticket basis. This means (for now) that you won’t be messing with SnapMirror schedules, SnapShots, etc, which does put a little damper on the devops automation for cloning and attaching datasets, for instance. I’m sure this will be temporary as they iron out the wrinkles.

One other thing from a storage/volume provisioning perspective. Since we're dealing with an on-demand Clustered OnTAP SVM instance here, Faction needs to carefully manage the ratio of FlexVols to capacity, as the number of available FlexVols in a given cluster is not infinite. So you will originally get one volume, and you can't get a second one until you consume/purchase 5TB for the first one, and so on. So if you want two volumes of 2TB each, you'll need to purchase 7TB (5TB for #1, and 2TB for #2). However, there's no reason you can't have two 2TB LUNs in one volume – so this isn't as big a constraint as it may at first seem; you just need to know it up front and design for it.
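
Here's a small Python sketch of that purchase rule as I understood it from the briefing. The interpretation (5TB must be purchased against each volume before the next one unlocks) is mine, so confirm the exact thresholds with Faction before planning around them.

```python
# Sketch of the volume/capacity purchase rule as I understood it: each
# additional FlexVol only becomes available once 5TB has been purchased against
# the volumes before it.  The interpretation is mine -- confirm the exact
# thresholds with Faction before planning around them.

VOLUME_THRESHOLD_TB = 5  # purchased TB required before the next volume unlocks


def required_purchase_tb(volume_sizes_tb):
    """Minimum TB to purchase for the requested volumes, in order."""
    *earlier, last = volume_sizes_tb
    # every volume except the last must be 'covered' up to the 5TB threshold
    return sum(max(size, VOLUME_THRESHOLD_TB) for size in earlier) + last


print(required_purchase_tb([2, 2]))     # 7  -> matches the two-2TB-volume example
print(required_purchase_tb([6, 2]))     # 8  -> first volume is already past 5TB
print(required_purchase_tb([2, 2, 2]))  # 12
```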

Drawbacks aside, this consumption model addresses the costs of a 100% utilized/persistent Cloud OnTAP instance, as well as the capital and contractual requirements of Netapp Private Storage. It’s certainly worth a look.


NetApp + SolidFire…or SolidFire + NetApp?

So what just happened?

First- we just saw AMAZING execution of an acquisition.  No BS.  No wavering.  NetApp just GOT IT DONE, months ahead of schedule.  This is right in-line with George Kurian’s reputation of excellent execution.  This mitigated any doubt, any haziness, and gets everyone moving towards their strategic goals.  When viewed against other tech mergers currently in motion, it gives customers and partners comfort to know that they’re not in limbo and can make decisions with confidence.  (Of course, it’s a relatively small, all-cash deal- not a merger of behemoths).

Second – NetApp just got YOUNGER.  Not younger in age, but younger in technical thought.  SolidFire's foundational architecture is based on scalable, commodity-hardware cloud storage, with extreme competency in OpenStack.  The technology is completely different from OnTAP, and provides a platform for service providers that is extremely hard to match.  OnTAP's foundational architecture is based on purpose-built appliances that perform scalable enterprise data services, which now extend to hybrid cloud deployments.  Two different markets.  SolidFire's platform went to market in 2010, 19 years after OnTAP was invented – and both were built to solve the problems of their day in the most efficient, scalable, and manageable way.

Third – NetApp either just made themselves more attractive to buyers, or LESS attractive, depending on how you look at it.

One could claim they’re more attractive now as their stock price is still relatively depressed, and they’re set up to attack the only storage markets that will exist in 5-10 years, those being the Enterprise/Hybrid Cloud market and the Service Provider/SaaS market.  Anyone still focusing on SMB/MSE storage in 5-10 years will find nothing but the remnants of a market that has moved all of its data and applications to the cloud.

Alternatively, one could suggest a wait-and-see approach to the SolidFire acquisition, as well as the other major changes NetApp has made to its portfolio over the last year (AFF, AltaVault, cloud integration endeavors, as well as all the things it STOPPED doing). [Side note: with 16TB SSD drives coming, look for AFF to give competitors like Pure and xTremeIO some troubles.]

So let’s discuss what ISN’T going to happen.

There is NO WAY that NetApp is going to shove SolidFire into the OnTAP platform.  Anyone who is putting that out there hasn't done their homework to understand the foundational architectures of the two VERY DIFFERENT technologies.  Also, what would possibly be gained by doing so?  In contrast, Spinnaker had technology that could let OnTAP escape from its two-controller bifurcated storage boundaries.  The plan from the beginning was to use the SpinFS goodness to create a non-disruptive, no-boundaries platform for scalable and holistic enterprise storage, with all the data services that entailed.

What could (and should) happen is that NetApp adds some Data Fabric goodness into the SF product – perhaps this concept is what is confusing the self-described technorati in the web rags.  NetApp re-wrote and opened up the LRSE (SnapMirror) technology so that it could move data among multiple platforms, so this wouldn't be a deep integration, but rather an "edge" integration, and the same is being worked into the AltaVault and StorageGRID platforms to create a holistic and flexible data ecosystem that can meet any need conceivable.

While SolidFire could absolutely be used for enterprise storage, its natural market is the service provider who needs to simply plug and grow (or pull and shrink).  Perhaps there could be a feature or two that the NetApp and SF development teams could share over coffee (I've heard that the FAS and FlashRay teams had such an event that resulted in a major improvement for AFF), but that can only be a good thing.  However, integration of the two platforms isn't in anyone's interest, and everyone I've spoken to at NetApp, both on and off the record, is adamant that Netapp isn't going to "OnTAP" the SolidFire platform.

SolidFire will likely continue to operate as a separate entity for quite a while, as sales groups to service providers are already distinct from the enterprise/commercial sales groups at NetApp.  Since OnTAP knowledge won’t be able to be leveraged when dealing with SolidFire, I would expect that existing NetApp channel partners won’t be encouraged to start pushing the SF platform until they’ve demonstrated both SF and OpenStack chops.  I would also expect the reverse to be true; while many of SolidFire’s partners are already NetApp partners, it’s unknown how many have Clustered OnTAP knowledge.

I don't see this acquisition as a monumental event that has immediately demonstrable external impact on the industry, or on either company.  The benefits will become evident 12-18 months out and position NetApp for long-term success, vis-à-vis "flash in the pan" storage companies that will find their runway much shorter than expected in the 3-4 year timeframe.  As usual, NetApp took the long view.  Those who see this as a "hail-mary" to rescue NetApp from a "failed" flash play aren't understanding the market dynamics at work.  We won't be able to measure the success of the SolidFire acquisition for a good 3-4 years; not because of any integration that's required (like the Spinnaker deal), but because the bet is on how the market is changing and where it will be at that point – with this acquisition, NetApp is betting it will be the best-positioned to meet those needs.

 


Parse.com – R.I.P. 2016 (technically 2017)

Today we witnessed a major event in the evolution of cloud services. 

In 2013, Facebook purchased a cloud API and data management service provider, Parse.com. This popular service served as the data repository and authentication/persistence management backend for over 600,000 applications. Parse.com provided a robust and predictably affordable set of functionalities that allowed the developers of these mobile and web applications to create sustainable business models without needing to invest in robust datacenter infrastructures. These developers built Parse’s API calls directly into their application source code, and this allowed for extremely rapid development and deployment of complex apps to a hungry mobile user base.

Today, less than three short years later, Facebook announced that Parse.com would be shuttered and gave their customers less than a year to move out. 

From the outside, it's hard to understand this decision. Facebook recently announced that it had crossed the $1B quarterly profit mark for the first time, so it's not reasonable to assume that the Parse group was bleeding the company dry. Certainly the internal economics of the service aren't well known, so it's possible that the service wasn't making Facebook any money, or enough of it. No change in pricing was attempted, and the announcement was rather sudden.

No matter the internal (and hidden) reason, this development provides active evidence of an extreme threat to those enterprises that choose to utilize cloud services not just for hosting of generic application workloads and data storage, but for specific offerings such as database services, analytics, authentication or messaging – things that can't be easily moved or ported once internal applications reference these services' specific APIs.

Why is this threat extreme? 

Note that Facebook is making LOTS of money and STILL chose to shutter this service. Now, point your gaze at Amazon or Microsoft and see the litany of cloud services they are offering. Amazon isn't just EC2 and S3 anymore – you've got Redshift and RDS among dozens of other API-based offerings that customers can simply tap into at will. It's a given that EACH of these individual services will require groups of people to continue development and provide customer support, and so each carries an ongoing and expensive overhead.

However, it’s NOT a given that each (or any) of these other individual services will provide the requisite profits to Amazon (or Microsoft, IBM, etc) that would prevent the service provider from simply changing their minds and focusing their efforts on more profitable services, leaving the users of the unprofitable service in the lurch. There’s also the very real dynamic of M&A, where the service provider can purchase a technology that would render the existing service (and its expensive overhead) redundant. 

While it's relatively simple to migrate OS-based server instances and disk/object-based data from one cloud provider to another (there are several tools and cloud offerings that can automate this), it's another thing entirely to re-write internal applications that directly reference the APIs of these cloud-based data services, and to replicate the data services' functionality. Certainly there are well-documented design patterns that can abstract the API calls themselves; however, migrating to a similar service given a pending service shutdown (as Parse.com customers face today) requires the customer to hunt down another service that provides almost identical functionality, and if that's not possible, the customer will have to get (perhaps back) into the infrastructure game.

Regardless of how the situation is resolved, it forces the developer (and CIO) to re-think the entire business model of the application, as a service shuttering such as this can easily turn the economics of a business endeavor upside-down. This event should serve as a wake-up call for developers thinking of using such services, and force them to architect their apps up-front, utilizing multiple cloud data services simultaneously through API abstraction. Of course, this changes the economics up-front as well. 
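
For what that abstraction might look like in practice, here's a minimal Python sketch of the adapter-style pattern the post is describing: the application codes against a thin interface, and concrete backends – a hosted service like Parse, a self-hosted database, another provider – plug in behind it. The class and method names are purely illustrative, not any particular vendor's SDK.

```python
# Minimal sketch of the kind of API abstraction argued for above: the app
# depends on a thin interface, and concrete backends (a hosted
# backend-as-a-service, a self-hosted store, another provider) plug in behind
# it.  All names here are illustrative, not any vendor's actual SDK.

from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """What the application actually depends on."""
    @abstractmethod
    def save(self, collection: str, obj: dict) -> str: ...
    @abstractmethod
    def get(self, collection: str, obj_id: str) -> dict: ...


class HostedBackendStore(ObjectStore):
    """Adapter for a hosted backend-as-a-service (hypothetical REST API)."""
    def __init__(self, base_url: str, api_key: str):
        self.base_url, self.api_key = base_url, api_key
    def save(self, collection, obj):
        raise NotImplementedError("wire up the provider's REST/SDK calls here")
    def get(self, collection, obj_id):
        raise NotImplementedError


class InMemoryStore(ObjectStore):
    """Self-hosted / fallback implementation; also handy for tests."""
    def __init__(self):
        self._data, self._next_id = {}, 0
    def save(self, collection, obj):
        self._next_id += 1
        self._data[(collection, str(self._next_id))] = dict(obj)
        return str(self._next_id)
    def get(self, collection, obj_id):
        return self._data[(collection, obj_id)]


# Application code never imports a vendor SDK directly; swapping providers
# means writing one new adapter, not rewriting the app.
store: ObjectStore = InMemoryStore()
user_id = store.save("users", {"name": "Ada"})
print(store.get("users", user_id))
```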

So for all you enterprise developers building your company’s apps and thinking about not just using services and storage in the cloud, but possibly porting your internal SQL and other databases to the service-based data services provided by the likes of Amazon, buyer beware. You’ve just been given a very recent, real-world example of what can happen when you not only outsource your IT infrastructure, but your very business MODEL, to the cloud. Perhaps there are some things better left to internal resources. 
