Getting to Know Cisco HyperFlex

After three years with the Cisco Insieme BU, focusing mostly on ACI, I’ve made a bit of a job change. I’m still sort of in Technical Marketing, because it’s been an amazing path for me, but I’m now actually a co-host on Cisco’s TechwiseTV! Robb Boyd has been running TechwiseTV for some time now, so I’m very happy to be learning from someone who’s such a professional.

On top of joining the TechwiseTV team, we’re doing our first-ever LIVE episode (in front of a live studio audience) on March 22nd, 2017, for the launch announcement of the new versions of Cisco HyperFlex. I thought I’d take this opportunity to write a blog about HyperFlex in general, along with some of the new features and why they’re useful.

And one last note, before I finally dive in…if it wasn’t already clear, I work for Cisco.

HyperConvergence in General

HyperFlex is built on the Cisco UCS platform, with which I’ve actually had a bit of real-world experience. UCS originally started with C-Series rack-mount servers and B-Series blade servers, all of which can be centrally managed by UCS Manager. UCS Manager resides on the Fabric Interconnects, which connect to all the servers you’d like to manage in your UCS domain. I’m not going to give a long primer on UCS; there’s a lot of info out there, including a blog I wrote a few years ago.

The UCS platform really gives us a converged compute and networking system, all built on the foundation of automation and stateless computing. That’s a lot of buzzwords for describing Service Profiles. Service profiles are essentially personalities we can assign to hardware (regardless of whether the hardware is present at the time). We can specify things like how many NICs or HBAs will be on the server, or which WWNs we should assign to connect to storage: all the things that come along with provisioning a server and making it work within your data center. If the server hardware fails, no problem; we just replace it and assign the old service profile to the new hardware. If we want to add a new server with the same personality, also no problem: we just copy the service profile and assign it to the new server.
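Conceptually, a service profile is just a bundle of identity and configuration that can be moved between physical servers. Here’s a toy Python sketch of that stateless-computing idea; the class names and fields are purely illustrative, not the real UCS Manager object model:

```python
from dataclasses import dataclass, field, replace
from typing import Optional

@dataclass
class ServiceProfile:
    """A server 'personality': identity and config, independent of hardware."""
    name: str
    vnic_count: int                           # NICs presented to the OS
    vhba_count: int                           # Fibre Channel HBAs
    wwns: list = field(default_factory=list)  # storage identities
    boot_order: tuple = ("san", "local-disk")

@dataclass
class Blade:
    serial: str
    profile: Optional[ServiceProfile] = None

def associate(blade, profile):
    """Apply a personality to a piece of hardware."""
    blade.profile = profile
    return blade

web01 = ServiceProfile("web01", vnic_count=2, vhba_count=2,
                       wwns=["20:00:00:25:b5:00:00:01"])

# Hardware fails: re-associate the same profile with replacement hardware.
old_blade = associate(Blade("FCH1234"), web01)
new_blade = associate(Blade("FCH9999"), old_blade.profile)

# New server with the same personality: clone the profile, swap identities.
web02 = replace(web01, name="web02", wwns=["20:00:00:25:b5:00:00:02"])
```

The point of the sketch is that the “server” the data center cares about is the profile, not the sheet metal it happens to be running on.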

Then along came converged infrastructures that combine storage, networking, and compute, such as Vblock and FlexPod. These are generally entire racks of equipment from various vendors, but with one support agreement from one company, and they generally offer some sort of turnkey deployment. Converged infrastructures are still very relevant, especially when talking about business-critical applications and previously validated architectures.

Finally, a few years ago the industry came up with the term hyperconverged. This is a lot like a converged architecture, except the storage, compute, and network resources all live within a small appliance, generally 2U to 4U. Processors and storage have only gotten faster, and networking now gives us 40, 50, and even 100G of bandwidth, so these smaller appliances can certainly pack a punch. But let’s talk about HyperFlex specifically.

Better Performance

With the new 2.x versions there will be 40G networking coming from the Fabric Interconnects (FIs), so you can be pretty sure bandwidth capacity won’t be an issue. There are also all-flash node options, now referred to as HyperFlex All Flash (or HXAF, which I think the millennials will find to be a funny acronym), so the performance is getting better and better. I know VDI is a particularly good use case for hyperconverged, and I also know from my experience with VMware View (now part of Horizon) that whenever people experienced performance issues, storage was a good place to look. Having the ability to add flash to the cluster will definitely help in these cases, not to mention there are some automated optimization “buttons” for deploying VDI specifically in a HyperFlex cluster. Since we can have several different HX clusters, we can dedicate some clusters to an application like VDI while other clusters are dedicated to other applications in our data center, each with different optimizations.

What’s in a Node?

So, we have three things in every HX node:

  1. Hypervisor
  2. Storage Controller Virtual Machine
  3. IO Visor

Each HX node will contain a hypervisor, currently VMware ESXi. There will also be a storage controller VM (SCVM) on each node. These SCVMs support the distributed nature of the file system: they communicate with each other to constantly and consistently balance out the system. And lastly, the IO Visor is a VIB (vSphere Installation Bundle) that gets installed on top of the ESXi kernel. This, along with VAAI (vSphere APIs for Array Integration), is what allows the hypervisor to actually communicate with the storage. All of this together lets us do the things we’ve come to expect from virtualization platforms, such as cloning and snapshotting.
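As a rough mental model of the IO Visor’s role, imagine each VM I/O being intercepted in the hypervisor and steered to the storage controller VMs that own that block. Here’s a hypothetical sketch; the hashing scheme, node names, and replication factor are my own illustration, not HX’s actual placement algorithm:

```python
import hashlib

NODES = ["hx-node-1", "hx-node-2", "hx-node-3", "hx-node-4"]
RF = 3  # copies of each block; replication factor chosen for illustration

def owners(block_id, nodes=NODES, rf=RF):
    """Deterministically pick rf distinct nodes to hold a block.

    This stands in for what the IO Visor enables: every node's hypervisor
    can resolve where a block lives and talk to the right SCVMs directly,
    which is what makes the datastore look local from every node.
    """
    start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

placement = owners("vm1:disk0:block42")
```

Because placement is deterministic, any node asking about the same block arrives at the same answer, with no central lookup server in the data path.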

File System Built with HCI in Mind

All of this comes together to allow for the distributed file system, which Cisco calls the HyperFlex Data Platform (HXDP). The HXDP is distributed throughout the HX cluster, meaning we can treat it as one file system no matter which node we’re looking at. And because Cisco built this log-structured file system from the ground up, it utilizes flash to its fullest potential, gaining even further performance benefits. Because of the distributed nature of the HXDP, we can also automatically avoid “hotspots,” or bottlenecks, within the system.
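The core idea of a log-structured design is that writes never modify data in place: they append to a log, and an index tracks the newest version of each block. Here’s a minimal toy sketch of why that pattern suits flash (sequential appends instead of in-place overwrites); it’s a teaching model, not HXDP’s actual on-disk format:

```python
class LogStructuredStore:
    """Toy log-structured key/value store.

    Writes append to the tail of the log, which is sequential I/O (the
    access pattern flash handles best); reads go through an index that
    always points at the newest version of a key.
    """
    def __init__(self):
        self.log = []    # append-only list of (key, value) records
        self.index = {}  # key -> offset of its latest record

    def write(self, key, value):
        self.index[key] = len(self.log)  # newest version wins
        self.log.append((key, value))

    def read(self, key):
        return self.log[self.index[key]][1]

store = LogStructuredStore()
store.write("block-7", b"v1")
store.write("block-7", b"v2")  # old record stays behind in the log
```

A real implementation would also run a cleaner to reclaim the superseded records; the sketch leaves them in place to keep the append-only idea visible.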

Flexibility in Scale

While I’ve seen several unique benefits to HyperFlex, one that will surely stand out is the flexibility with which we can scale resources. One complaint I’ve often heard from customers considering HCI is that when they run out of one particular resource, like storage, they need to buy a whole new node to add to the cluster, which adds to their compute and networking resources as well…even if that’s not necessary. This wouldn’t be a big deal, except there’s a real cost associated with these nodes, and they’re not inexpensive. Because the HXDP was built from the ground up, and because we’re working with an already converged networking and compute system like UCS, we can be a lot more flexible with HyperFlex.

For example, if we want to add more flash, we could simply add more SSDs to a node. We can use the installer VM, which we used to deploy the cluster in the first place, to add these resources in an automated fashion. We can do the same with memory. And if we need to add a lot of compute resources, we can actually take a UCS server, install a hypervisor, an SCVM, and the IO Visor, and even though this particular server has no storage, it increases the compute performance of our cluster, all managed from the same HX GUI. I believe this is currently limited to the B200, C220, and C240 UCS servers. There are a few business implications here:

  1. We can now more easily migrate from our current UCS infrastructure to a HyperFlex HCI
  2. We can utilize current equipment and avoid any forklift upgrades
  3. If a certain application is only certified on UCS, for example, we can still run that application in our HyperFlex environment
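To make the scaling model concrete, here’s a small Python sketch of a cluster’s resource accounting when a compute-only node joins (modeled as a node with zero local storage). The node sizes are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class HXNode:
    name: str
    cpu_cores: int
    ram_gb: int
    storage_tb: float = 0.0  # 0.0 marks a compute-only node

class HXCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, node):
        self.nodes.append(node)

    def totals(self):
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

cluster = HXCluster([HXNode(f"hx-{i}", 28, 384, 7.6) for i in range(1, 4)])
before = cluster.totals()

# Add a compute-only node (say, a repurposed B200): it runs the hypervisor,
# an SCVM, and the IO Visor, but contributes no local storage.
cluster.add_node(HXNode("b200-1", 36, 512, 0.0))
after = cluster.totals()
```

The compute and memory totals grow while storage stays flat, which is exactly the “scale only the resource you’re short on” argument.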

The possibilities don’t end there, especially when we consider that we can do the same thing with actual storage arrays. If we’re already running a Pure array in our data center, for example, we can add that storage to our HX clusters as well!


A Fresh HTML5 GUI

Now, if you’ve ever worked with Cisco, you’ll know that traditionally we haven’t been known for our beautiful GUIs, but this is different. First of all, no Java, so no keeping VMs around with specific versions of Java just to work on this one GUI; it uses HTML5. Within this GUI we get a nice-looking dashboard that gives us information on alerts and events, performance, compression, in-line deduplication, VMs (and actions like cloning), and datastores (and actions like creating and editing datastores). Did you read that right? We can edit and resize datastores from this simple GUI? Yup. We can create a datastore by giving it a name and a size, and then, if we decide it needs to be larger or smaller, we can simply change the size…even while VMs are residing within that datastore.

Most of this functionality has already existed in a vCenter Plugin, and will continue to exist there, but the HTML5 GUI is a little nicer looking and opens it up a bit.

Cloud-Like On-Premises Infrastructure

We’re actually able to deploy most HyperFlex clusters in around half an hour. We need only answer a few questions in the GUI of a deployment VM, things like naming conventions and IP addresses, and from there, service profiles with VLAN information, IP information, and everything else needed will be automatically assigned within the cluster. Then we can deploy VMs like we would in any virtualized environment, using thin provisioning to add to the storage savings. We can take this cloud-like infrastructure a little further, though.
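The installer’s job is essentially to expand a handful of answers into per-node configuration. Here’s a hypothetical sketch of that expansion; the field names, and the assumption that the subnet’s first host is the gateway, are mine, not the actual installer’s:

```python
import ipaddress

def expand_inputs(cluster_name, mgmt_subnet, node_count, vlan_mgmt, vlan_data):
    """Derive per-node settings from a few installer-style answers,
    mimicking the kind of expansion the deployment VM automates."""
    hosts = ipaddress.ip_network(mgmt_subnet).hosts()
    next(hosts)  # skip the first host, assumed to be the gateway
    return [
        {
            "name": f"{cluster_name}-node-{i + 1}",
            "mgmt_ip": str(next(hosts)),
            "vlans": {"management": vlan_mgmt, "storage-data": vlan_data},
        }
        for i in range(node_count)
    ]

nodes = expand_inputs("hx-pod1", "10.0.0.0/24", 4, vlan_mgmt=100, vlan_data=200)
```

Four questions in, four fully addressed node definitions out; the real installer does the same kind of fan-out into UCS service profiles.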

By adding orchestration products like Cisco CloudCenter (née CliQr) and UCS Director, we can completely automate the deployment of applications and Layer 4-7 services. With CloudCenter we get hooks into public clouds as well as private clouds, so the dev and ops teams can work together to create one application profile no matter where we want to deploy (on or off premises). As companies move more toward the hybrid model, this can reduce many of the complexities that come with bringing applications to market, as well as upgrading them in automated ways.

If we add ACI (Application Centric Infrastructure) to the mix, we get software-defined networking as well. ACI will automate our policy deployment using application profiles, and it offers a more secure environment for our applications because of its default white-list model and built-in microsegmentation.

The TL;DR is that we have the possibility of right-clicking an application in an application catalog, selecting deploy, and having that application deployed, upgraded, or even end-of-lifed within minutes, whether to a public cloud like AWS or Azure or to our private clouds and on-prem data centers.
