Live Blog (a little late) – NetApp Insight Keynote Day 2

Sorry I’m late, I was….ermm….detained. ;) Going to live blog this from the green room backstage!

8:44a

Did I just hear “NetApp Kubernetes Services”???? Who the hell is this NetApp??

With all this automation, Anthony Lye’s group is answering the question “This Data Fabric thing sounds all wonderful, but HOW do I utilize it without a Ph.D. in NetApp and the various cloud providers??”

So the room I’m in has some cool people in it at the moment: Henri Richard, Dave Hitz, Joel Reich, Kim Weller, and a bunch of other really smart folks!

Wait- FREEMIUM MODEL??? Somebody actually talked to the AppDev folks.

OK time for Cloud Insights. James Holden on stage.

8:52a

Cloud Insights is GA! Again, free trial!

Lots of focus on using performance and capacity data to save money

“All that power and nothing to install.” – Anthony Lye

8:56a

Time to talk about Hybrid Cloud

Brad Anderson, SVP and GM, Cloud Infrastructure Group

Hybrid Cloud Infrastructure – If it talks like a cloud, and walks like a cloud…(then it’s not a cloud because they neither walk nor talk.)

Seamless access to all the clouds and pay-as-you-grow.

“Last year it was just a promise, today hundreds of customers are enjoying the benefit of hybrid cloud computing.”

Consultel Cloud – from Australia! Why are so many of the cool NetApp customers from down under? Dave Hitz tells me that companies in Australia are very forward-leaning when it comes to technology.

These guys are leveraging NetApp HCI to provide agile cloud services to their base, with great success. They “shatter customer expectations”.

100% NetApp HCI across the globe. Got common tasks done 68% faster. Using VMware. They looked at other solutions, but they already had SolidFire experience, so that probably helped.

50% cost savings over their former storage platform (but…weren’t they SolidFire before this? Maybe something else too?)

So NetApp has made cloud apps a TON easier – and lets them run wherever you want. This has been the dream that the marketing folks have been talking about for years, made real.

9:30a – Joel Reich and my friend Kim Weller up there to talk about the future of Hybrid Cloud.

In the future most data will be generated at the edge, processed in the cloud.

Data Pipeline – Joel Reich, a self-proclaimed “experienced manager,” will use Kim’s checklist.

SnapMirror from NetApp HCI to the cloud.

Octavian looking like DOC OCK! He has a “mobile data center” on his BACK. Running ONTAP Select! MQTT protocol to ONTAP Select (for connected devices).

NetApp is automating the administration for setting up a FabricPool. You don’t have to be an NCIE to do this. Nice.

FlexCache is back and it’s better! Solves a major problem for distributed read access of datasets.

NetApp Data Availability Services – now this is something a TON of users will find valuable.

9:51 – Here’s what I was waiting for – MAX DATA.

“It makes everything faster”.

Collab with Intel – Optane persistent memory.

Will change the way your datacenter looks.

11X – MongoDB accelerated 11X vs same system without it.

NO application rewrites! In the future they will make your legacy hardware faster.

In the future will work in the cloud.

Looking forward to more specifics here. Wanted to see a demo. But we’ll see it soon enough.


Live Blog: NetApp Insight Keynote Day 1

OK, starting late but this is important stuff. Going to talk about the cool stuff as it happens.

8:41a – George Kurian makes a bold statement that Istio is part of the de facto IT architecture going forward. That’s service mesh, folks: each containerized microservice talks to its peers through a sidecar proxy deployed alongside it (Envoy, in Istio’s case) rather than through redundant centralized load balancers, with automatic service discovery. That’s a big recognition.
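
For the app developer, the payoff is that application code stays blissfully ignorant of topology. A minimal sketch of the idea (hypothetical service name and port, assuming a pod with Istio sidecar injection enabled):

```python
import requests

# Hypothetical service name/port, assuming Istio sidecar injection is enabled.
# The app simply calls its peer by cluster DNS name; the sidecar proxy injected
# into the pod intercepts the request, discovers healthy endpoints, and
# load-balances -- no centralized load balancer sits in the path.
resp = requests.get(
    "http://reviews.default.svc.cluster.local:9080/reviews/1",
    timeout=2,
)
print(resp.status_code)
```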

8:47a – Dreamworks- “NetApp is the data authority”.

8:50a – Preview of “How to Train Your Dragon.” Never ceases to impress, and this tech is going to get exponentially better over the next 3-4 years.

8:52 – Got a note from an A-Teamer that there are new selectable offerings on cloud.netapp.com! Go check it out. New node types and software offerings…

8:53 Next speaker – Some future perspective from one of the leading media futurists in the world – Gerd Leonhard.

8:56 Humanity will change more in the next 20 years than in the previous 300 years. (My note- expect resistance, and more resistance as the pace quickens).

8:57 “MS Hololens may be free one day” – We all know that NOTHING is free. The form of payment just changes.

9:00 IoT/AI creates a 62 TRILLION DOLLAR economic shift. “Data is the new oil, AI is the new electricity, IoT is the new nervous system.”

9:02 2020 starts the Cognitive Systems Era. Just because Watson reads all the philosophy books doesn’t make it a philosopher. It won’t know what love feels like. “Knowledge is not the same as understanding!”

9:05 “They could buy France…they would think twice about that…” ZING.

9:06 Megashifts – a new Meta-Intelligence – for better or worse. When we cognify, we will change the world. (How?) Disintermediation, cognification, personalization, augmentation, virtualization.

9:09 Tech is morally neutral until we USE it. #HellVen

AI will enable 38% profit gains by 2035. But inequality increases.

#DigitalEthics is a primary discussion now, covered in the news more than ever. Gartner’s #1 topic for 2019.

9:13 – China’s Sesame Credit – instant credit, everyone gets a number.

9:14 Tech and data have no ethics – but societies without ethics are doomed. Can’t argue with this – but purist capitalist societies do not incentivize ethics.

Who will be “Mission Control for Humanity”? “Facebook gunning at Democracy.” Facebook wasn’t hacked, and they’re not criminals; the platform was used as designed, but it was used unethically. And FB doesn’t lack money.

Data Mining – Data MYning.

“Technology is exponential, Humans are NOT”.

“Don’t let your kids learn low level coding, or anything routine.”

Einstein: Imagination is more important than knowledge.

9:25 Summary from Gerd Leonhard – The future is driven by Data – defined by Humanity. Embrace technology – but don’t become it. WOW.

9:28 George back on the stage. I really love that NetApp, a company that has defined Data Fabric and is at the core of so many data-driven companies, is talking about the ethical use of that data and of technology in general. We need this industry leadership to extend this message into our policy-making processes at both the state and federal levels. Otherwise, we cede our policy making to the industry, which will act as (hopefully) benevolent feudal lords.

9:38 – Demo time coming….. :). Anthony Lye is on the stage.

Cloud.netapp.com – NetApp Cloud Central – not something to install, maintain, or configure – just something that exists, 7×24, optimized by workload, accessible by callable routines.

1) Discover endpoints – Active IQ. The fabric will tell YOU if there are opportunities to tier and save money, capacity, etc.

NetApp “My Data Fabric”

Create Cloud Volumes on all clouds from one page and one API. WOW. And BACK UP those cloud volumes with snaps and a new backup service for archival. Full solution.
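
Purely as a hypothetical sketch of what “one page and one API” could look like from a script – the endpoint, payload fields, and token below are my own illustration, not the actual Cloud Central API:

```python
import requests

# Hypothetical endpoint and fields -- illustrative only, not NetApp's real API.
API = "https://cloudvolumes.example.netapp.com/v1/volumes"
payload = {
    "name": "analytics-vol01",
    "provider": "aws",          # the same call shape would target azure or gcp
    "region": "us-east-1",
    "sizeGiB": 1024,
    "serviceLevel": "premium",
    "snapshotPolicy": "daily",  # back the volume with scheduled snapshots
}
resp = requests.post(API, json=payload,
                     headers={"Authorization": "Bearer <api-token>"})
resp.raise_for_status()
print(resp.json())
```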

How does the storage get to compute? They combined Greenqloud and StackPoint into a control plane to deploy K8s and Istio, + Trident. WOW WOW.

“CREATE FABRIC APP” – WuuuuuuuuT?

Install trusted config to your PRIVATE cloud. Create a “cloud volume” on your PRIVATE INFRASTRUCTURE…..

CLOUD INSIGHTS – SaaS, no SW to install. Access to on-prem and all public clouds, real-time performance on both. Pay for what you consume. Scales from small to extremely large businesses.

OK, what I just saw was earth-shattering. There is a LOT of learning to do!!!

9:50 Now a customer who deals with Genome Sequencing.

“WuXi NextCODE” – the internet of DNA

This guy just EXUDES scientist.

Single human genome sequencing in hours. 3 Billion Letters. Understand the millions of differences from one human to another. 40 Exabytes/yr

Expected to be the biggest datasets in the world

Marry this data with clinical and other patient and relative data.

GORdb Platform overview

Not sure they could’ve gotten this done without NetApp Cloud Volumes functionality. By the way, what other storage company is doing what NetApp is doing with on-prem, in-cloud instances, and in-cloud services? NONE. In fact, none are even CLOSE, and that is astounding – there will be one storage, er, data services, company standing in 10 years.

10:02a

George: NetApp is #1 in US public services, Germany, and Japan, and in ALL the biggest cloud providers. That’s a bold statement.

That’s a wrap until tomorrow!


#SFD15: Datrium impresses

If I had to choose the SFD15 presenter with the most impressive total solution, it would be Datrium. We saw some really cool tech all around this week, but Datrium showed me something that I have not seen from too many vendors lately: a deep and true focus on the end-user experience and on value extraction from technology.

Datrium is a hyperconverged offering that fits the “newer” definition of HCI, in that the compute nodes and storage nodes scale separately. There’s been an appropriate loosening of the HCI term of late, with folks applying it based on the user experience rather than a definition specified by a single vendor in the space. Datrium takes this further in my opinion by reaching for a “whole solution” approach that attempts to provide an entire IT lifecycle experience – primary workloads, local data protection, and cloud data protection – on top of the same HCI approach most solutions only offer in the on-premises gear.

From the physical perspective, Datrium’s compute nodes are stateless and have local media that acts very much like a cache (they call it “primary storage,” but this media doesn’t accept writes directly from the compute layer). They are able to perform some very advanced management of this cache layer, including global dedupe and rapid location-aware data movement across nodes (i.e., when you move a workload), so I’ll compromise and call it a “super-cache.” Its main purpose is to keep required data on local flash media, so yeah, it’s a cache. A Datrium cluster can scale to 128 nodes, which is plenty for its market space, since a system that size tested out at 12.3M 4K IOPS with 10 data nodes underneath.

The storage layer is scale-out, uses erasure coding, and internally leverages the Log-Structured File System approach that came out of UC Berkeley in the early ’90s. That does mean that as it fills up to 80%+, writes will cost more. While some other new storage solutions can boast extremely high capacity utilization rates, this is something we’ve had to work with for a long time with most enterprise storage solutions. In other words, not thrilled about that, but used to it.
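
The reason writes get pricier as the log fills is the classic LFS cleaning argument: to reclaim a segment that is still mostly live, the cleaner has to copy that live data forward before the space can be reused. A rough, illustrative model (my numbers, not Datrium’s internals):

```python
# Rough illustration of the classic LFS cleaning intuition: reclaiming a
# segment whose blocks are still `u` fraction live means copying those live
# blocks forward first, so effective write cost climbs steeply with
# utilization. Illustrative model only, not Datrium-specific behavior.
def relative_write_cost(u: float) -> float:
    """Approximate write amplification when cleaning segments at utilization u."""
    return 1.0 / (1.0 - u)

for u in (0.50, 0.70, 0.80, 0.90):
    print(f"{u:.0%} full -> ~{relative_write_cost(u):.1f}x write cost")
# 80% full already costs ~5x; 90% costs ~10x -- hence the headroom guidance.
```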

Some techies I talk to care about the data plane architecture in a hyperconverged solution. There are solutions that place a purpose-built VM in the hypervisor that exposes the scale-out storage cluster and performs all data management functions, so the data plane runs through that VM. Datrium (for one) does NOT do that. There is a VIB that sits below the hypervisor, which should appease those who don’t like the VM-in-data-plane model. There is global deduplication, encryption, cloning, lots of no-penalty snapshots – basically all the features that are table stakes at this point. Datrium performs these functions up on the compute nodes in the VIBs. There is also a global search across all nodes, for restore and other admin functionality. Today, the restore level is at the virtual disk/VM level. More on that later.

The user, of course, doesn’t really see or care about any of this. There is a robust GUI with a ton of telemetry available about workload and system performance. It’s super-easy from a provisioning and ongoing management perspective.

What really caught my attention was their cloud integration. Currently they are AWS-only, and for good reason. Their approach is to create a tight coupling to the cloud being used, applying that cloud’s specific best practices to manage the implementation. So the devs at Datrium leverage Lambda and CloudWatch to create, modify, monitor, and self-heal the cloud instance of Datrium (which of course runs in EC2 against EBS and S3). It even applies the security roles to the EC2 nodes for you so that you’re not creating a specific user in AWS, which is best practice as this method auto-rotates the tokens required to allow access. It creates all the networking required for the on-prem instances to replicate/communicate with the VPC. It also creates the service endpoints for the VPC to talk to S3. They REALLY thought it through. Once up, a Lambda function runs periodically to make sure things are where they are supposed to be, and to fix them if they’re not. They don’t use CloudFormation, and when asked they had really good answers why. The average mid-size enterprise user would NEVER (well, hardly ever) have the expertise to do much more than fire up some instances from AMIs in a marketplace, and they’d still be responsible for all the networking, etc.
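
To give a flavor of the plumbing Datrium automates on your behalf, here is my own illustration (hypothetical IDs and role names, not Datrium’s code) of just two of those setup steps in raw boto3 – the S3 gateway endpoint and the role-based access for the EC2 nodes:

```python
import json
import boto3

# Illustrative sketch only -- hypothetical IDs/names, not Datrium's automation.
ec2 = boto3.client("ec2", region_name="us-east-1")
iam = boto3.client("iam")

VPC_ID = "vpc-0abc1234"           # hypothetical
ROUTE_TABLE_ID = "rtb-0def5678"   # hypothetical

# 1) Gateway endpoint so the VPC (and the EC2 nodes in it) reach S3 privately,
#    without traversing the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[ROUTE_TABLE_ID],
)

# 2) Role + instance profile instead of a static IAM user: the EC2 nodes get
#    automatically rotated credentials, so no long-lived access keys exist.
trust = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "ec2.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
})
iam.create_role(RoleName="dvx-cloud-node", AssumeRolePolicyDocument=trust)
iam.create_instance_profile(InstanceProfileName="dvx-cloud-node")
iam.add_role_to_instance_profile(InstanceProfileName="dvx-cloud-node",
                                 RoleName="dvx-cloud-node")
```

Multiply that by the networking, monitoring, and periodic self-healing pieces and you get a sense of how much undifferentiated heavy lifting they take off the table.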

So I believe that Datrium has thought through not just the technology, but HOW it’s used in practice, and gives users (and partners) a deliverable best practice in HCI up front. This is the promise of HCI; the optimal combination of the leading technologies with the ease of use that allows the sub-large enterprise market to extract maximum value from them.

Datrium does have some work ahead of it; they still need to add the ability to restore single files from within virtual guest disks, and after they can do that, they need to extract that data for single-record management later, perhaps archiving those records (and being able to search on them) in S3/Glacier, etc. Once they provide that, they no longer need another technology partner for that functionality. Also, the solution doesn’t yet handle unstructured data (outside of a VM) natively on the storage.

Some folks won’t like that they are AWS only at the moment; I understand this choice as they’re looking to provide the “whole solution” approach and leave no administrative functions to the user. Hopefully they get to Azure soon, and perhaps GCP, but the robust AWS functionality Datrium manages may overcome any AWS objections.

In sum, Datrium has approached the HCI problem from a user experience approach, rather than creating/integrating some technology, polishing a front end to make it look good, and automating important admin features. Someone there got the message that outcomes are what matters, not just the technology, and made sure that message was woven into the fundamental architecture design of the product. Kudos.


Commvault Go 2017

I’m currently at the Commvault Go keynote, and have just heard from the only man who has trekked to both the North AND South Poles. Tough task for sure. The best slide he put up was from the end of his one-year journey, when he returned to base only to discover that his ride home was 7/8 submerged. Talk about needing a backup plan!

After that inspiring speech (which connected to the event by talking about respecting DATA and FACTS in the context of climate change), CEO Bob Hammer took to the stage to discuss the major themes of the event.

First and most immediately impactful to the industry is the release of Commvault’s HyperScale platform, which runs on commodity hardware and signals the beginning of the end of legacy 3-tier enterprise backup architecture. Backed by Red Hat’s GlusterFS, Commvault has created a back-end storage platform upon which they can layer a tuned version of their media agent/GRIDStore technology (which creates scalable, balanced, and fault-tolerant pools of data movers), all towards the purpose of providing a linearly scalable home for secondary data copies.

Notable is that CV has chosen to give customers a choice to use CV’s own hardware (offered as a subscription of HW and SW!) or run it on their own hardware from a number of verified hardware companies that span all the usual suspects (HPE, Cisco, Dell, etc).

More notable is that Cisco has aggressively gotten behind this product with their ScaleProtect offering, which is the CV HyperScale on their UCS platform, sold 100% through their channel.  I’ve spoken with 3 different Cisco sales reps in different regions and they are all planning on hitting their territories hard with this offering.

Hammer also talked about the pending release of new analytics offerings, which will use AI and deep learning to glean actionable information out of secondary data sets for the purposes of properly classifying, retaining, and/or deleting data, as well as helping to achieve the ever-more-difficult objective of compliance.

More to come from this event – but I certainly look forward to seeing Commvault’s flag flying at the South Pole!


Storage companies, persistent losses, and architectural decisions

Today I saw Tintri’s stock price take a 17% hit.  Consensus among my various independent storage pals is that they’ve got two quarters of cash left, and prospects are NOT good for their ability to continue forward.  Their IPO was a huge disappointment, but even if it had raised the amount of desired capital, the revenue and forward sales outlook were still both pointing to a bleak future for these guys. It’s too bad; they have some cool technology around VM performance insight in their all-flash platform.

Also, the news this week from Barron’s is that Pure Storage is shopping themselves (this comes via Summit Redstone’s Srini Nandury); IF TRUE, that’s a clear indication that they see no independent future that protects their shareholders’ value. Their revenues continue to grow, but they also have yet to produce a single dollar in profit, and it’s doubtful they will in the near future.

HPE’s acquisition of Nimble is an example of how those who deploy platforms made by persistently negative-income technology firms can still work out OK; HPE is continuing development on the platform, and providing existing users with the comfort that their investment and their time-consuming integrations are safe. So Nimble customers can thank their lucky rabbit’s feet that the right acquirer came along.

But what if there is no HPE equivalent to rescue these technology companies? Certainly Pure has built a great customer base, but will anyone want to put out the $3B-$4B that would satisfy the investors? Would Cisco risk alienating its existing partners and go it alone in the converged infrastructure space after the Whiptail fiasco? Who in their right mind would touch Tintri at this point from an acquisition perspective?

If you’ve deployed these platforms in your environment, you have some thinking to do.

Consider this net income graph for NetApp from their IPO in 1996:

[Chart: NetApp (NTAP) annual net income, 1996–2017]

On an annual basis, NetApp has generally run at a profit from the beginning of its public life. Certainly, for the four years prior, they ran at a net loss – but their product at the time was used for specific application workloads (development/engineering), NOT as a foundational IT component that touched every piece of the enterprise, as it does today quite well. It would have been unwise for an IT architect to consider using NetApp in 1994 to house all of their firm’s data, as the future wasn’t as certain for the technology at the time. But NetApp used their IPO in ’96 as a statement that “we’re here, we’re profitable, and we’re ready to make our lasting mark on the industry.” Which they did and continue to do.

For comparison, let’s look at Pure Storage’s net income since its IPO:

[Chart: Pure Storage (PSTG) net income since its IPO]

It’s hard to call the right side of that chart a “turnaround”. It’s more of an equilibrium.

Now Pure Storage has some really good technology. The stuff works, it works well, and it’s relatively easy to implement and manage. However, Pure does not differentiate from the other established (and PROFITABLE) competitors in their space enough for that differentiation to create a new market that they can dominate (they’re not alone in this; NONE of the smaller storage vendors can claim that they do, and that’s the problem). As is normal for today’s Angel-to-VC-to-IPO culture, Pure used their IPO as an exit strategy for their VCs and to raise more desperately needed cash for their money-losing growth strategy (the net income chart speaks for itself). That strategy is failing. With the news that they’re shopping, they realize this too. When prospective clients realize this, it’s really going to get difficult.

So while the tech geek in all of us LOVES new and cool technology, if you’re going to make a decision on a platform that is foundational in nature (meaning it will be the basis of your IT infrastructure and touch everybody in your firm), you’d be well advised to dig deep into the income statement, balance sheet, and cash flow of the firm making that technology, and put those stats right up there with the features/benefits, speeds and feeds.  Otherwise, you may have some explaining to do later.

Bottom line: if your selected storage manufacturer is losing money, has never actually made any money, and doesn’t look like they’re going to make money anytime soon, there’s a relatively good chance you’re going to be forced into an unwelcome decision point in the near-to-medium future. Caveat emptor.


NetApp gets the OpEx model right

Ever since the Dot-Com Boom, enterprise storage vendors have had “Capacity on Demand” programs that promised a pay-as-you-use consumption model for storage. Most of these programs met with very limited success, as the realities of the back-end financial models meant that the customers didn’t get the financial and operational flexibility to match the marketing terms.

The main cause of the strain was the requirement for some sort of leasing instrument to implement the program, meaning that there was always some baseline minimum consumption commitment, as well as some late-stage penalty payment if the customer failed to use as much storage as was estimated at the beginning of the agreement. This wasn’t “pay-as-you-use” as much as it was “just-pay-us-no-matter-what”.

NetApp has recently taken a novel approach to this problem by eliminating the need for equipment title to change from NetApp to the financial entity backing the agreement. With the new NetApp OnDemand program, NetApp retains title to the equipment and simply delivers what’s needed.

An even more interesting feature of this program is that the customer pays NOT for storage hardware, but for capacity within three distinct performance service levels, each defined by a guaranteed amount of IOPS/TB, and each with a $/GB/month rate associated with it.

To determine how much of each service level is needed by a given customer, NetApp will perform a free “Service Design Workshop” that uses the NetApp OnCommand Insight (OCI) tool to examine each workload and show what the IO Density (IOPS/TB) is for each. From there, NetApp simply delivers storage that is designed to meet those workloads (along with consideration for growth, after consulting with the customer). They include the necessary software tools to monitor the service levels (Workflow Automation, OnCommand Unified Manager, and OCI), as well as Premium support and all of the ONTAP features available in their Flash and Premium bundles.
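
As a back-of-the-envelope illustration of how that IO Density number drives placement and monthly cost – the tier names, IOPS/TB guarantees, and $/GB/month rates below are invented for the example, not NetApp’s price book:

```python
# Hypothetical tiers and rates -- illustration only, not NetApp's price list.
# Idea: measure each workload's IO density (IOPS per TB), then place it in the
# cheapest service level that still guarantees that density.
TIERS = [  # (name, guaranteed IOPS/TB, $ per GB per month) -- assumed numbers
    ("Value",        512, 0.025),
    ("Performance", 2048, 0.060),
    ("Extreme",     8192, 0.150),
]

def place(workload_iops: float, capacity_tb: float):
    density = workload_iops / capacity_tb          # IOPS per TB
    for name, guarantee, rate in TIERS:
        if density <= guarantee:
            monthly = capacity_tb * 1024 * rate    # capacity billed per GB/mo
            return name, density, monthly
    raise ValueError("workload exceeds the highest service level")

tier, density, cost = place(workload_iops=30_000, capacity_tb=20)
print(f"{density:.0f} IOPS/TB -> {tier} tier, ~${cost:,.0f}/month")
```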

Customers can start as low as $2k/month and go up AND DOWN with their usage, paying only for what they use from a storage perspective AFTER efficiencies such as dedupe, compression, and compaction are taken into account. More importantly, the agreement can be month-to-month or annual; the shorter the agreement duration, of course, the higher the rate. This is America, after all.

The equipment can sit on the customer’s premises or in a co-location facility – even a near-cloud facility such as Equinix – making the NetApp Private Storage economics a true match for the cloud compute that will attach to it.

A great use case for NetApp OnDemand is with enterprise data management software, such as Commvault, which can be sold as a subscription as well as a function of capacity. Since the software is now completely an OpEx, the target storage can be sold with the same financial model – allowing the customer to have a full enterprise data management solution with the economics of SaaS. Further, there would be no need to over-buy storage for large target environments; it would grow automatically as a function of use. This would be the case with any software sold on subscription, making an integrated solution easier to budget for, as there is no need to cross the CapEx/OpEx boundary within the project.

This new consumption methodology creates all sorts of new project options. The cloud revolution is forcing companies such as NetApp to rethink how traditional offerings can be re-spun to fit the new ways of thinking in the front offices of enterprises. In my opinion, NetApp has gotten something very right here.


Intel Storage – Storage Field Day 12

On the last day of Storage Field Day 12, we got to visit Intel Storage, who gave us some insight into what they were seeing and how they are driving the future of high performance storage.  Just prior to the Intel Storage session, we heard from SNIA’s Mark Carlson about trends and requirements mainly driven by the needs of the Hyperscalers, and Intel’s session detailed some very interesting contributions that line up very neatly with those needs.

One of the big takeaways from the SNIA session was that the industry is looking to deal with the issue of “tail latency events,” or as Jonathan Stern (a most engaging presenter) put it, “P99s.” Tail latency occurs when a device returns data 2x-10x slower than normal for a given I/O request. Surprisingly, SSDs are 3 times more likely to have a tail latency event for a given I/O than spinning media. Working the math out, that means that a RAID stripe of SSDs has a 2.2% chance of experiencing tail latency – and the upper layers of the stack have to deal with that event by either waiting for that data or repairing/calculating the late data via parity.
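
The stripe-level figure follows from basic probability: if each of the N devices in a stripe independently hits a tail event with probability p on a given I/O, the stripe sees one with probability 1 − (1 − p)^N. A quick sketch with an assumed per-drive rate (I don’t know the exact inputs behind the 2.2%):

```python
# If each of N drives in a stripe independently hits a tail-latency event with
# probability p per I/O, the stripe stalls whenever ANY member does.
def stripe_tail_probability(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

per_drive = 0.002   # assumed 0.2% per-drive tail rate -- illustrative only
for width in (4, 8, 12):
    print(f"{width}-wide stripe: {stripe_tail_probability(per_drive, width):.2%}")
# A full-stripe operation inherits the slowest device's latency, so the upper
# layers must either wait it out or rebuild the slow chunk from parity.
```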

Now, one would think that when you’re dealing with NVM latencies of 90-150 microseconds, even going to 5x keeps you within 1 ms or so. But what the industry (read: hyperscalers, who purchase HALF of all shipped storage bytes) is looking for is CONSISTENCY of latency – they want to provide service levels and be sure that their architectures can deliver rock-solid, stable performance characteristics.

Intel gave us a great deep dive into the Storage Performance Development Kit (SPDK), which is an answer to getting much closer to that lower standard deviation of latency. The main difference in their approach, and the most interesting development (one that could drive OTHER efficiencies in non-storage areas, IMO), is that they found that isolating a CPU core for storage I/O in NVMe environments provides MUCH better performance consistency, primarily because it eliminates the costs of context switching and interrupt handling (the driver polls instead).

The results they showed with this approach were staggering. By using 100% of ONE CPU core with their user-space driver, they were able to get 3.6 million IOPS with 277 ns of overhead per I/O from the transport protocol. Of course that’s added to the latency of the media, but it is a small fraction of what’s seen when using the regular Linux kernel-mode drivers that run across multiple CPUs. We’re talking nearly linear scalability when you add additional NVMe SSDs on that same single core.
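
Those two figures are self-consistent, which is a nice sanity check: 277 ns of per-I/O software overhead caps a single polling core at roughly 1 / 277 ns ≈ 3.6 million I/Os per second.

```python
# Sanity check on the quoted numbers: per-I/O software overhead bounds the
# single-core submission rate at roughly 1 / overhead.
overhead_ns = 277
max_iops_one_core = 1e9 / overhead_ns
print(f"{max_iops_one_core / 1e6:.2f} M IOPS")   # ~3.61 M IOPS, matching the demo
```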

This is still relatively young, but the single-core, user-space driver approach Intel is taking is already being seen in the marketplace (E8 Storage comes to mind; it’s unknown whether they are using Intel’s SPDK or their own stack).

Intel’s approach of stealing a dedicated core may sound somewhat backwards; however, as Intel CPUs get packed with more and more cores, the cores start to become the cheap commodity, and the cost of stealing a core will drop below the performance cost of context switching and interrupts (it may have already), as the media we’re working with has become so responsive and performant that the storage doesn’t want to wait for the CPU anymore!

This is also consistent with a trend seen across more than a few of the new storage vendors out there, which is to bring the client into the equation with either software (like the SPDK) or a combination of hardware (like RNICs) and software to help achieve both the high performance AND the latency consistency desired by the most demanding storage consumers.

We may see this trend of dedicating cores become more popular across domains, as CPU speeds aren’t improving but core counts are – and the hardware around the CPU becomes more performant and dense. If you play THAT out long term, virtualized platform architectures such as VMware that (usually) run hypervisors across all cores may get challenged by architectures that simply isolate and manage workloads on dedicated cores. It’s an interesting possibility.

By giving the SPDK away, Intel is sparking (another) storage revolution that storage startups are going to take advantage of quickly, and change the concept of what we consider “high performance” storage.


**NOTE: Article edited to correct Mark Carlson’s name. My apologies.
