Hyperconvergence and PRA/PCA (disaster recovery / business continuity): ever more attractive features

In the first phase of deploying hyperconverged platforms, IT managers quickly acknowledge that the ease of set-up and deployment is real: a single, homogeneous interface for provisioning VMs and virtualized storage, plus self-discovery functions that automatically fill in more than half of the parameter fields, including on the network side.
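As an illustration, here is a minimal sketch of what such single-interface provisioning can look like once scripted. The endpoint, field names and credentials below are hypothetical placeholders, not any specific vendor's API; the point is simply that most parameters can be left to the platform's self-discovery.

```python
import requests

# Hypothetical HCI manager endpoint and payload: the names are placeholders,
# not a specific vendor's API. Self-discovery means most fields can be
# omitted and are filled in by the platform (network, datastore, policy).
HCI_MANAGER = "https://hci-manager.example.local/api/v1"

vm_request = {
    "name": "app-server-01",
    "cpu": 4,
    "memory_gb": 16,
    "disk_gb": 200,
    # Network, storage policy and placement are typically auto-discovered;
    # only override them when the defaults are not what you want.
}

resp = requests.post(
    f"{HCI_MANAGER}/vms",
    json=vm_request,
    auth=("admin", "secret"),   # illustrative credentials
    verify=False,               # lab setting only; keep TLS checks in production
    timeout=30,
)
resp.raise_for_status()
print("Provisioned VM:", resp.json())
```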

vSphere + vSAN 'stretched cluster'. Source: VMware.

Then comes the discovery of the replication mechanisms, which are highly automated. Beyond snapshot options, it is not just about replicating data but about cloning whole VMs. Next come the learning steps of replication, backup and restore, all from the same interface. To these capabilities are added automatic deduplication and compression, implemented natively.
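For readers who script against vSphere, here is a minimal sketch of taking a VM snapshot with the open-source pyVmomi SDK; the vCenter address, credentials and VM name are placeholders, and clone or replication operations follow the same task-based pattern.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: replace with your vCenter address and credentials.
ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

# Find the VM to protect and take a quiesced snapshot of it.
vm = next(v for v in view.view if v.name == "app-server-01")
vm.CreateSnapshot_Task(name="pre-upgrade",
                       description="Snapshot before maintenance",
                       memory=False, quiesce=True)

Disconnect(si)
```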
Goodbye long backup windows
This is a long way from traditional architectures coupling x86 servers from one manufacturer, storage arrays from another vendor over a storage area network (SAN) of yet another brand, and backup software from a specialized vendor (which required scheduling long backup "windows", sometimes longer than a night), all spread across several racks, or even several rooms.
With hyperconvergence, the built-in backup mechanism is known to work well, in minutes if not seconds. The speed of restores is always surprising: a few minutes to recover a whole week's backups, with files analyzed VM by VM. So much so that many CIOs, after a trial period, decide to make it their primary backup tool. This does not stop them from keeping a traditional backup tool for pre-archival copies.
A first level of intrinsic resilience
In summary, a hyperconverged infrastructure (HCI) makes it possible to configure a resilience solution on a single site through a clustered architecture comprising multiple nodes. If a node fails, its VMs are automatically restarted on the other active nodes thanks to replication between nodes, because on each node at least one VM holds the intelligence and embeds the equivalent of a storage controller (except with VMware EVO:RAIL, where this intelligence is embedded in the hypervisor kernel). A hyperconvergence solution therefore natively provides a first level of service continuity on a single site. It is recommended to build a stretched cluster (VMware), where VMs are backed up for a warm restart, which is just as easily done from the same console, often built around VMware vCenter.
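A rough way to reason about this per-node replication: with a "failures to tolerate" (FTT) setting of N, each object is kept in N+1 copies, and the cluster needs at least 2N+1 nodes. The sketch below encodes that generic rule of thumb for mirrored HCI storage policies, not a particular vendor's sizing tool.

```python
def copies_stored(failures_to_tolerate: int) -> int:
    """Number of replicas of each object kept across nodes."""
    return failures_to_tolerate + 1

def nodes_required(failures_to_tolerate: int) -> int:
    """Generic rule of thumb for mirrored HCI storage policies:
    tolerating N node failures needs N+1 data copies plus a witness,
    i.e. at least 2N + 1 nodes in the cluster."""
    return 2 * failures_to_tolerate + 1

for ftt in (1, 2, 3):
    print(f"FTT={ftt}: {copies_stored(ftt)} copies, "
          f"minimum {nodes_required(ftt)} nodes")
# FTT=1: 2 copies, minimum 3 nodes
# FTT=2: 3 copies, minimum 5 nodes
# FTT=3: 4 copies, minimum 7 nodes
```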
Continuity on a third site
Better still, this stretched cluster can be backed up to a third site at a sufficient distance (several kilometers) to guarantee more formal continuity. The result is high availability on site, replication between at least two sites, and a remote backup to restart from a distant site, all managed from a single, homogeneous, integrated interface.
vSAN for remote sites (ROBO, 'remote office / branch office'). Source: VMware.
Icing on the cake: such a PRA/PCA architecture, once configured and tested, no longer requires day-to-day operations; it is automated. It just needs to be tested from time to time, at least twice a year.
"The global plan – replication, backups, disaster recovery – remain to be defined, even if the task is greatly facilitated with hyperconverged solutions, including back up backups of remote sites on a central site," notes Fabrice Ferrante, responsible for architects Cloud Practice at Capgemini France.
New features and improvements
Every provider of hyperconvergence solutions is working to improve and further accelerate these many capabilities. Most recently, HPE announced a new version of RapidDR (disaster recovery) for PRA architectures: it makes it possible to activate a recovery plan (PRA) with a single click after an outage.

At NetApp, on the new HCI infrastructure, each storage node can replicate to a remote SolidFire cluster or to another hyperconverged platform to build a backup (PRA) infrastructure. Data from the hyperconverged cluster can also be replicated to a NetApp ONTAP cluster with SnapMirror, opening the door to other vendors' solutions and offering an alternative to the all-HCI approach imposed by some.
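As an illustration of scripting such replication, here is a minimal sketch against the ONTAP REST API (available in recent ONTAP releases). The endpoint, payload shape, volume paths, policy name and credentials are assumptions to verify against NetApp's documentation for your release.

```python
import requests

# Assumed ONTAP REST endpoint for SnapMirror relationships; verify the exact
# path and payload against the ONTAP API documentation for your release.
ONTAP = "https://ontap-cluster.example.local"

relationship = {
    "source": {"path": "hci_svm:vol_apps"},              # hypothetical source volume
    "destination": {"path": "backup_svm:vol_apps_dr"},   # hypothetical DR volume
    "policy": {"name": "MirrorAllSnapshots"},            # assumed mirror policy name
}

resp = requests.post(
    f"{ONTAP}/api/snapmirror/relationships",
    json=relationship,
    auth=("admin", "secret"),   # illustrative credentials
    verify=False,               # lab setting; keep certificate checks in production
    timeout=60,
)
resp.raise_for_status()
print("SnapMirror relationship requested:", resp.status_code)
```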
Cloud Backup and Restore
Another trend is being confirmed: since there is no longer any need to worry about storage or the network, only about applications, it is tempting to extend continuity to the cloud. This openness to the cloud for replication is of interest to everyone.
NetApp Cloud Backup (formerly 'AltaVault') opens backup to all public and private clouds.
Nutanix, among others, offers an S3 interface to Amazon AWS. NetApp also allows this opening to the cloud: thanks to NetApp Cloud Backup (formerly AltaVault) and its Data Fabric, some fifteen public and private clouds (including Orange and Scality) can be called upon, with data deduplication and compression provided along the way.
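For the S3 side, here is a minimal sketch of pushing a backup archive to an S3 bucket with the standard boto3 SDK; the archive path and bucket name are placeholders, and real HCI backup products handle deduplication and compression for you before this step.

```python
import gzip
import shutil
import boto3

# Placeholders: substitute your own archive path and bucket name.
ARCHIVE = "vm-backup-2018-06-01.img"
BUCKET = "my-dr-backups"

# Compress the archive locally before upload (HCI backup products do
# deduplication and compression natively; this just mirrors the idea).
with open(ARCHIVE, "rb") as src, gzip.open(ARCHIVE + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

s3 = boto3.client("s3")
s3.upload_file(ARCHIVE + ".gz", BUCKET, f"backups/{ARCHIVE}.gz")
print("Uploaded", ARCHIVE + ".gz", "to s3://" + BUCKET)
```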
NB: the next part of this dossier details the many cloud possibilities, private cloud first, that hyperconvergence brings.