HCI – Do you need hyper-converged infrastructure for SDS?

by | Jul 7, 2017 | Member News

EMC and NetApp have both been heavily promoting their new hyper-converged infrastructure (HCI) offerings – hard to miss given the recent industry hype. The close integration of storage, networking and compute is hugely attractive to CTOs, particularly as these technologies promise to overcome the challenges and limitations of traditional storage area networks.

Although HCI units from OEMs are attractive (they should simply drop into your existing infrastructure), they also go against the current industry appetite for hardware commoditisation and a reduction in vendor control of the data centre. And if the future of storage is software-defined, how important is HCI?

The truth about HCI

Despite the hype surrounding hyper-convergence, the truth is that the technology is nothing more than virtualization and storage technologies united in a single unit. More specifically, HCI is software-defined storage (SDS) in a box.

Obviously HCI units from Dell EMC, NetApp or IBM are designed to bolt together seamlessly as the need for capacity increases. But if these systems are “just” branded SDS, are they even necessary?

Unleashing your data centre with SDS

Software-defined storage has two goals – to bring the power and flexibility of the cloud to on-site enterprise computing, and to commoditize storage so that users are not tied into expensive OEM maintenance contracts. And for businesses keen to extract maximum value, ditching OEM HCI may be the way forward.

Choosing vanilla SDS – like OpenStack – allows you to build storage infrastructure using arrays from any vendor. This not only helps to reduce capital spend, but also opens the door for reuse of your existing assets, including those which are post-warranty.
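As an illustration of what mixed-vendor storage can look like in practice, here is a minimal sketch of an OpenStack Cinder multi-backend configuration. The section names and the vendor driver path are hypothetical placeholders – the exact `volume_driver` value depends on the array you are pooling – but the `enabled_backends` mechanism and the LVM driver shown are standard Cinder features:

```ini
# /etc/cinder/cinder.conf — illustrative sketch only; section names are
# made up, and vendor driver paths vary by array model and release.
[DEFAULT]
# Cinder can serve volumes from several backends at once,
# allowing OEM arrays and white-box hardware to coexist.
enabled_backends = vendor_array,whitebox_lvm

[vendor_array]
# Placeholder: substitute the driver class documented by your array vendor.
volume_driver = cinder.volume.drivers.<vendor>.<Driver>
volume_backend_name = vendor_tier

[whitebox_lvm]
# Standard in-tree LVM driver — suitable for reused, post-warranty disks.
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = whitebox_tier
```

Volume types mapped to each `volume_backend_name` then let you steer hot workloads to the vendor array and archive or cold data to the cheaper white-box tier, which is the reuse scenario described above.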

In many ways, this white box approach to SDS is actually superior. HCI typically works on the principle of adding more identical units to increase capacity as and when required. But this also assumes that all storage and compute needs are equal and identical – they’re not.

Spending top dollar on HCI units makes no sense when expanding archive or cold storage capacity, for instance. Upgrading existing assets and redeploying the retired hardware makes much more sense in terms of both operations and cost.

Do you really need HCI?

HCI is the latest attempt by OEMs to control the SDS marketplace and to ensure existing clients remain tied to their platforms. Yet despite the attraction of drop-in systems, the reality is that many businesses may realize greater control and cost savings by deploying their own capacity using an independent SDS platform.

To learn more about SDS, and how you can reuse your existing assets to create an even greater return on investment, please get in touch.
