Service-level agreements (SLAs) for backup and recovery define the guardrails for both the efficacy and the resiliency of your infrastructure. While file size influences how quickly you can back up or restore data, it isn't the governing principle. You care about uptime and shrinking backup windows. Your strategy dictates the goals but doesn't distinguish between small files and large files. Your infrastructure should mirror that.
Cohesity provides high performance, regardless of file size. MegaFile, an architectural component of Cohesity, enables high performance for large, multi-terabyte files. A differentiated approach to data streaming across multiple nodes, MegaFile allows fast backups and restores for large files, helping reduce backup times as well as downtime when recovering from a disaster.
This type of performance cannot be bolted on. Speed cannot be woven into a product at the fourth or fifth iteration. Efficient data ingest and restore can only be achieved at the architectural level—from the very beginning.
A Differentiated Approach to Parallel Streaming
MegaFile is a patented approach to intelligently distributing files across all nodes in a cluster. An aspect of Cohesity’s architecture, MegaFile breaks large files into smaller chunks for parallel backup and recovery across nodes, with a chunk size tuned to maximize performance.
Since MegaFile has been a part of Cohesity’s architecture—its filesystem—from day zero, it applies to a broad set of the data sources you rely on: VMware, Microsoft Hyper-V, Pure Storage, leading databases such as SQL Server, and physical servers.
Cohesity supports both file-based and block-based objects, including VMDKs and VHDs, as well as physical and database objects.
Dramatically Faster Backups
With MegaFile, backups are dramatically faster. This is particularly evident when backing up multi-terabyte files. When backing up a 2TB file to an 8-node cluster, for example, MegaFile creates eight segments and ingests each of these data chunks in parallel across the Cohesity cluster. For this file, MegaFile decreases backup times by up to 8x.
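The segmentation described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general parallel-chunking pattern, not Cohesity's actual implementation: the file's byte range is divided into one segment per node, and the segments are streamed concurrently. The function and parameter names are invented for the example.

```python
# Hypothetical sketch of parallel chunked ingest: divide a large file
# into one contiguous byte range per node and stream them concurrently.
from concurrent.futures import ThreadPoolExecutor

def split_ranges(file_size, num_nodes):
    """Divide [0, file_size) into num_nodes contiguous (offset, length) ranges."""
    base, extra = divmod(file_size, num_nodes)
    ranges, offset = [], 0
    for i in range(num_nodes):
        length = base + (1 if i < extra else 0)
        ranges.append((offset, length))
        offset += length
    return ranges

def ingest_segment(node_id, offset, length):
    # Placeholder: in a real system this would stream bytes
    # [offset, offset + length) to the given node.
    return (node_id, offset, length)

def parallel_backup(file_size, num_nodes):
    """Ingest all segments in parallel, one worker per node."""
    ranges = split_ranges(file_size, num_nodes)
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        futures = [pool.submit(ingest_segment, i, off, ln)
                   for i, (off, ln) in enumerate(ranges)]
        return [f.result() for f in futures]

# A 2 TB file on an 8-node cluster yields eight 256 GB segments.
segments = parallel_backup(2 * 1024**4, 8)
```

Because each of the eight segments is ingested at the same time, the wall-clock backup time approaches one-eighth of a single-stream backup, which is where the "up to 8x" figure comes from.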
For many customers, particularly large organizations with significant amounts of data, MegaFile delivers tangible results. A Fortune Global 500 leader in power management, Schneider Electric has more than 300TB of data under management, growing at more than 12 percent per year. With its previous backup product, backups took more than 40 hours. Built on unique architectural components such as MegaFile, Cohesity reduced backup times for Schneider Electric significantly: from 40 hours to less than 1 hour.
Quicker Restores—Higher Uptime
The same MegaFile benefits apply to restores. With MegaFile, restores are dramatically faster, even for large, multi-terabyte files, reducing downtime and helping you meet SLAs such as higher uptime for your teams.
For many organizations, strictly adhering to SLAs is critical. Consider Riverside Healthcare, a healthcare system serving patients in five counties in Illinois. Riverside Healthcare backs up more than 3PB of data, including Epic and PACS systems, 400+ SQL Server and other databases, 300+ VMs, and 18 physical servers. With MegaFile, Cohesity has enabled Riverside Healthcare to restore 3TB of their Epic EHR data in under four hours, beating the recommended SLA of four hours for 1TB—a performance boost of 3x.
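The 3x figure follows from comparing restore throughput against the SLA's implied throughput, using the numbers given above:

```python
# Throughput check using the figures from the Riverside Healthcare example.
sla_throughput = 1 / 4       # SLA: 1 TB in 4 hours -> 0.25 TB/hour
achieved = 3 / 4             # Restore: 3 TB in under 4 hours -> 0.75 TB/hour or better
speedup = achieved / sla_throughput  # 3.0x
```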
High Performance from the Beginning
Speed is not an add-on. It’s the byproduct of an architecture that has been designed to scale and perform. A key component of Cohesity’s architecture, MegaFile is a differentiated approach to data streaming across multiple nodes. MegaFile—and other aspects of Cohesity—enable some of the world’s largest organizations to reduce their backup windows, increase uptime, and outperform their SLAs.