HCI is hardware

The key difference here is the hardware. Data centers today run a multitude of workload deployment and orchestration systems. You're familiar with Kubernetes. You may also be familiar with the most prominent branded versions of Kubernetes today, such as VMware's Tanzu, HPE's Ezmeral, and Red Hat's OpenShift. All of these systems enable new classes of containerized workloads to be developed, tested, deployed, and managed in fully orchestrated systems, using substantial amounts of automation. And Kubernetes is promoted by champions who have publicly argued that the orchestrator fulfills the fundamental objectives of hyperconvergence, thus rendering HCI support in hardware unnecessary and even obsolete.

But Kubernetes is not baked into hardware – at least, not yet. By everyone's definition, HCI is hard-wired into servers. If HCI is any one thing, it is this: the hard-wiring of servers' control planes in hardware. The crux of the HCI value proposition at present is that having control in hardware expedites processes and accelerates productivity. Typically, relocating control over the network, storage, management, and security from software to hardware should result in faster processes, lower latency, and broader access to system resources.

Also: Microsoft releases the next generation of its Azure Stack HCI service

So what’s all this talk about ‘HCI software’?

However, there are plenty of major-market software components available today, particularly for networking, that advertise themselves as hyperconverged, as part of HCI, or as a unit of their maker's HCI platforms. The confusion begins with the introduction of the concept of the software-defined data center (SDDC). HCI, vendors say, enables SDDC. They are correct about that part, if you accept the broader definition of "software" as encompassing anything digital rather than physical – more specifically, configuration code.

SDDC enables operators to specify the configurations of systems using source code. In other words, they can program the assembly of components and the provisioning of resources. In the sense that any program is software, this is an accurate explanation of "HCI software." HCI platforms actually produce the code that SDDC uses to configure data centers on these operators' behalf. They determine the requirements of servers, and the placement and availability of resources, and make the necessary adjustments. In that respect, HCI borrows the purpose and some of the functionality of SDDC, while reassigning much of the burden of control to automation. Yet in the end, what this configuration accomplishes is the delivery of instructions to the HCI hardware. At a fundamental level, HCI is hardware. (A sketch of the configuration-as-code idea appears at the end of this section.)

There are plenty of arguments going on among well-meaning engineers, who will wager their remaining teeth and hair to advance the premise that HCI is not software. And yet you will still find so-called "HCI software." Far more important than whether one side or the other is right is whether that software offers genuine benefits to your data center (there's a good chance that it can) and, ironically but just as importantly, whether one vendor's HCI software can co-exist on the same platform as another vendor's HCI hardware (there's an above-zero chance that it won't).

For the sake of this article, data center infrastructure is indeed composed of software and hardware. Yet HCI is rooted in hardware. Each manufacturer of HCI components has engineered them to be answerable to a centralized management system. That system can place workloads where they can be run most effectively, make storage and memory accessible to those workloads, and – if the system is clever enough – alter workloads' view of the network so that distributed resources are not only easier to address, but more secure.

That's if everything works as advertised. Since its inception, hyperconvergence has been something of an ideal state. That ideal has always been at least one step ahead of its actual implementation, often more. In earlier incarnations of this article, we spent several paragraphs explaining what exactly got "converged" in hyperconvergence. In HCI's present form, the issue is now largely irrelevant. In fact, from here on out, we're only going to refer to it as "HCI," placing it in the company of abbreviations like "ITT," "NCR," and "AT&T" that no longer stand for their original designations.

Some folks perceive HCI today as a means of artificially distinguishing one manufacturer's line of enterprise servers from those of another. This is an argument you will continue to see from the makers of workload orchestration systems, many of whom are part of the open-source movement. This article will avoid evaluating the virtues of both sides' arguments. Instead, it will present an examination of HCI in its present state, and leave any qualitative judgments to historians.
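To make that configuration-as-code idea concrete, here is a minimal, hypothetical sketch in Python of how a declarative, SDDC-style specification might be reconciled against a cluster's observed state. The field names and actions are illustrative inventions, not any vendor's actual API; real HCI platforms generate and apply this sort of configuration automatically, on the operator's behalf.

```python
# Minimal sketch of SDDC-style "configuration as code" (illustrative only;
# the spec fields and functions are hypothetical, not a real vendor API).

desired_state = {
    "cluster": "edge-east-1",
    "nodes": 4,                      # compute/storage nodes to provision
    "storage_pool_tb": 32,           # capacity the pool should expose
    "network": {"vlan": 210, "microsegmentation": True},
}

def reconcile(current: dict, desired: dict) -> list[str]:
    """Compare observed state to desired state and emit the actions an
    HCI management plane would translate into instructions for hardware."""
    actions = []
    if current.get("nodes", 0) < desired["nodes"]:
        actions.append(f"provision {desired['nodes'] - current.get('nodes', 0)} node(s)")
    if current.get("storage_pool_tb", 0) < desired["storage_pool_tb"]:
        actions.append("expand storage pool")
    if current.get("network") != desired["network"]:
        actions.append("reprogram network overlay")
    return actions

# Example: a cluster observed with 3 nodes and an out-of-date network config
print(reconcile({"nodes": 3, "storage_pool_tb": 32}, desired_state))
# -> ['provision 1 node(s)', 'reprogram network overlay']
```

The point of the sketch is the division of labor: the operator (or the HCI platform itself) states the desired outcome, and the reconciliation step decides which instructions must ultimately be delivered to the hardware.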
Also: Microsoft adds more devices and services to its Azure Stack hybrid line-up

The various layers of HCI

Because vendors in the HCI market space pursue the end goals of the technology to varying degrees, it's important to define HCI beginning at a foundational level, then proceed into deeper levels where some systems may not choose to wade. Each deeper level may pertain less and less to certain market participants whose intellectual investments in HCI are not as deep as others'.

See also: Dell builds tighter VMware integrations for cloud, app modernization

What does the “d” in “dHCI” stand for?

There are three possibilities, depending upon whom you ask and when. Whatever you decide the "d" may stand for in your data center, it's hard to ignore that these choices all share a common theme: compartmentalization, separation, isolation, autonomy. Indeed, "d" may actually stand for the "unraveling" of hyperconvergence.

Also: How system disaggregation would reorganize IT, and how Arm may benefit

Who produces HCI hardware?

My ZDNet colleague Chris Preimesberger produced a detailed examination of the six leading vendors in the global HCI market, weighing the pros and cons of their respective products. Here is what you need to know about the architectural and implementation choices currently being made by the leading vendors in the HCI space:

Nutanix

The Nutanix model, presently called AOS, is based on two components: a distributed storage fabric with a managing hypervisor, both called Acropolis; and a distributed management plane called Prism. At the center of the Acropolis fabric is a single class of appliance, simply called the node, which assumes the role conventionally played by the server. Although ideally a node would provide a multitude of commodities, Nutanix itself admits, in an online publication it has dubbed the Nutanix Bible, that its model natively combines just the two main ones: compute and storage.

One principal point of contention among HCI vendors is whether a truly converged infrastructure should incorporate a data center's existing storage array, or replace it altogether. Nutanix argues in favor of the more liberal approach: instituting what it calls a distributed storage fabric (DSF). With this approach, one of the VMs running within each of its HCI nodes is a controller VM dedicated to storage management. Collectively, these controllers oversee a virtualized storage pool that incorporates existing, conventional hard drives alongside flash memory arrays. Within this pool, Nutanix implements its own fault resistance, reliability checks, and so-called tunable redundancies that ensure at least two valid copies of data are always accessible (a sketch of that idea follows below).

Prism is the company's HCI management system. In recent months, it has become a three-tier cluster of services, licensable on a per-node basis. The basic tier performs oversight and hypervisor management services; the "Pro" level adds capacity planning, adjustment, and automated remediation (arguably what hyperconvergence was originally about); and the "Ultimate" tier adds machine learning-oriented performance tuning. Nutanix partners with server maker Lenovo to produce jointly branded HCI platforms.

Also: Lenovo updates HCI platforms, adds new advisory services
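Here is a minimal Python sketch of the replication idea behind tunable redundancy: with a replication factor of 2, every block of data lands on two distinct nodes, so the loss of any one node still leaves a valid copy. This illustrates the general technique only; it is not Nutanix's actual placement algorithm.

```python
# Illustrative sketch (not Nutanix's actual algorithm): placing two copies
# of each data block on distinct nodes, the basic idea behind a replication
# factor (RF) of 2 in a distributed storage fabric.

from itertools import cycle

def place_replicas(blocks: list[str], nodes: list[str], rf: int = 2) -> dict[str, list[str]]:
    """Assign each block to `rf` distinct nodes, round-robin style."""
    if rf > len(nodes):
        raise ValueError("replication factor cannot exceed node count")
    ring = cycle(nodes)
    placement = {}
    for block in blocks:
        chosen = []
        while len(chosen) < rf:
            node = next(ring)
            if node not in chosen:        # replicas must land on distinct nodes
                chosen.append(node)
        placement[block] = chosen
    return placement

print(place_replicas(["blk-1", "blk-2", "blk-3"], ["node-a", "node-b", "node-c"]))
# Losing any single node still leaves one valid copy of every block.
```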

Dell EMC / VMware / VxRail

In May 2021, Dell EMC, in collaboration with sister company VMware, began the latest round of what this group described as "re-imagining HCI." Tossing much of the team's old models into the annals of history, Dell EMC brought forth what its VP of product management for HCI and other categories, Shannon Champion, called "a series of integrated, value-added components built on top of the VMware software, as well as PowerEdge [servers], that enables automation, orchestration, and lifecycle management."

VxRail is the brand for the group's HCI servers, which are Dell PowerEdge server models modified with HCI hardware. For these servers, VxRail has introduced a concept the group calls dynamic nodes. It's perhaps a more exciting name than "storage-less server." Think of a processor bus designed to connect to storage only through network fabric, rather than an expansion interface (a brief sketch of the concept follows below).

The VxRail group's previous iteration of HCI relied heavily upon a built-in layer of abstraction that utilized software-defined storage (SDS) to connect to the Dell EMC storage arrays that have been the pillar of the EMC brand since its inception. With the current iteration, astonishingly, the reliance upon this SDS was part of what was tossed, replaced essentially with the customer's choice. One option is a contribution from VMware called HCI Mesh, which is its method of bringing existing SAN arrays into the HCI scheme by bridging virtual SAN (vSAN) clusters across networks.

Also: Dell EMC launches new VxRail systems, dynamic nodes, automation tools
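As a rough illustration of the dynamic-node concept, the hypothetical Python sketch below models a compute-only node that attaches storage exclusively over the network fabric. All class and target names are invented for illustration and do not reflect Dell's actual interfaces.

```python
# Hypothetical sketch of the "dynamic node" (storage-less server) concept:
# compute nodes hold no local datastore and attach volumes only through the
# network fabric. Names are illustrative, not Dell's API.

class FabricTarget:
    """A storage pool exposed over the network fabric (e.g., iSCSI or NVMe-oF)."""
    def __init__(self, name: str, capacity_gb: int):
        self.name, self.capacity_gb = name, capacity_gb

class DynamicNode:
    """A compute-only node: all of its storage is remote, attached over fabric."""
    def __init__(self, name: str):
        self.name = name
        self.attached: list[FabricTarget] = []   # no local disks, by design

    def attach(self, target: FabricTarget) -> None:
        self.attached.append(target)
        print(f"{self.name}: attached {target.name} ({target.capacity_gb} GB) over fabric")

node = DynamicNode("vxrail-dyn-01")
node.attach(FabricTarget("storage-pool-a", 8192))
```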

Cisco HyperFlex

Cisco's HCI model, called HyperFlex, deploys a controller VM on each node, but in such a way that it maintains a persistent connection with VMware's ESXi hypervisor on the physical layer. Here, Cisco emphasizes not only the gains that come from strategic networking, but also the opportunities for inserting layers of abstraction that eliminate the dependencies binding components together and restricting how they interoperate.

What hyperconverged infrastructure vendors discovered to their astonishment early on was that customers were unwilling to discard their existing investments in storage networks just to enable a new form of scalability from the ground up. Cisco has been accommodating these customers more and more with each iteration of HyperFlex. With version 4.5, the company began integrating support for iSCSI – perhaps the most common networked storage interface. This way, Cisco accommodates containerized workloads, VM-based workloads, and bare-metal operations, in a simultaneity that may as well pass for convergence. Connections between the storage array and the workloads are provided not through some proprietary interface, but through Kubernetes' own Container Storage Interface (CSI); a sketch of how developers consume such storage follows below.

"The CSI integration also enables orchestration of the entire Persistent Volume object lifecycle to be offloaded and managed by HyperFlex," a Cisco spokesperson told ZDNet, "while being driven through standard Kubernetes Persistent Volume Claim objects. Developers and users get the benefit of leveraging HyperFlex for their Kubernetes persistent storage needs, with zero additional administration overhead from their perspective."

Although HyperFlex does utilize VMware's hypervisor, Cisco continues to tie its HCI control together with its Intersight management platform, which it now stages in a Kubernetes environment hosted by a KVM hypervisor. (Intersight also manages the company's non-converged UCS server components.) According to Cisco, the scale of the distribution of a Kubernetes application managed with Intersight extends to the furthest reach of the customer's HyperFlex network, which includes nodes in edge locations. This has been a problem for hypervisor-managed Kubernetes platforms in the past, which were limited to the scope of a single hypervisor – usually one virtual server.

Also: Cisco publishes solutions to SD-WAN and HyperFlex software security vulnerabilities
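From a developer's perspective, consuming CSI-backed storage looks the same regardless of the platform underneath. The sketch below uses the official Kubernetes Python client to request a Persistent Volume Claim; the storage class name is a placeholder, since the actual class registered by the HyperFlex CSI driver depends on the installation, and running it requires the kubernetes package plus a reachable cluster.

```python
# Minimal sketch of the developer-facing side of CSI-backed storage:
# requesting a Persistent Volume Claim through the official Kubernetes
# Python client. The storage class name below is a placeholder; consult
# your cluster for the class the HyperFlex CSI driver actually registers.

from kubernetes import client, config

config.load_kube_config()  # assumes a valid kubeconfig for a reachable cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hyperflex-csi",  # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
# The CSI driver, not the developer, provisions the volume on the storage
# platform and binds it to the claim.
```

This is the "zero additional administration overhead" Cisco's spokesperson describes: the developer expresses a capacity request in standard Kubernetes terms, and the platform does the rest.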

HPE dHCI

In 2017, HPE acquired an HCI equipment provider called SimpliVity, which at the time could integrate not only with HPE/HP servers but also with Dell, Cisco, Huawei, and Lenovo models. SimpliVity concentrated mainly on scaling data storage in accordance with workload requirements, and gave HPE more of a concrete strategy for battling Nutanix and Dell.

During the 2021 HPE Discover virtual conference, it was clear that the company had decided to begin downplaying SimpliVity in favor of what its engineers, at this show, at that time, were calling "disaggregated HCI." By this, they were referring to a storage array that scales independently of the compute array (although at the time, compute capacity was still being provided by ProLiant servers, rather than converged compute boxes). This array is provided by way of a class of pool-ready data storage boxes called Nimble Storage.

With dHCI, gone is the need for an HPE-branded management system. In its place are VMware vSphere and vCenter, where the configurability of dHCI servers appears as a plug-in. Since HPE's Aruba Networks unit had already integrated its network automation functionality into VMware vSphere, this enables HPE dHCI to bring network automation into its present-day HCI portfolio without having to reinvent the wheel a fourth or fifth time.

The one tie that binds the dHCI package together is HPE's InfoSight predictive analytics and monitoring platform. This provides performance tuning and anomaly detection, which was also a component of SimpliVity (a generic sketch of the anomaly-detection idea follows below). Here we see a probable preview of coming attractions for HCI as a product category: reduced emphasis on the "hyper-" part; a clear compartmentalization of networking from storage from computing; a greater reliance upon the underlying virtualization platform (as well as its producer); and a more minimal wrapper around the components to provide some semblance of unity.

Also: HPE aims GreenLake VDI services at use case, worker roles
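InfoSight's models are proprietary and far more sophisticated than anything shown here; as a generic illustration of the kind of anomaly detection such platforms perform, here is a minimal rolling-statistics sketch in Python that flags any metric sample straying more than three standard deviations from the mean of its recent history.

```python
# Generic illustration of metric anomaly detection of the sort predictive
# analytics platforms perform (not InfoSight's actual method). Flags
# samples more than k standard deviations from the rolling mean.

from statistics import mean, stdev

def find_anomalies(samples: list[float], window: int = 10, k: float = 3.0) -> list[int]:
    """Return indices of samples deviating more than k sigmas from the
    mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies

# Storage latency readings (ms) with one spike injected at the end
latencies = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1, 2.0, 2.2, 9.7]
print(find_anomalies(latencies))  # -> [10]
```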

Didn’t there use to be someone else?

NetApp did enter the HCI hardware market in 2017, with the intent of producing entry-level appliances that could appeal to small and medium-sized enterprises. However, last March the company decided to exit that market, giving its existing customers one year to transition to its Kubernetes-based Astra platform. In a spectacularly candid admission, NetApp engineers publicly expressed their opinion that HCI was the wrong direction for infrastructure evolution. They cited the multiplicity of options in HCI, which they characterized as arbitrary, as opposed to a single, clearer channel for Kubernetes orchestration. In so doing, they implied that by introducing more and more architectural distinctions between HCI platforms, the remaining vendors were essentially stratifying the market just so they could claim small competitive advantages over each other.

Best hyperconverged infrastructure systems vendors 2021 by Chris Preimesberger
Nutanix extends hyperconverged umbrella to cloud storage by Tony Baer, Big on Data
Lenovo launches edge, hyperconverged systems integrated with Microsoft Azure by Larry Dignan, Between the Lines
The evolution of hyperconverged storage to composable systems by Jeffrey Burt, The Next Platform
A Hyperconvergence Progress Report: Has Kubernetes Stolen the Show? by Scott M. Fulton, III, Data Center Knowledge
How to Enable Cloud Native DevOps with Kubernetes and Hyper-Converged Infrastructure by Arvind Gupta, The New Stack