Most advancements in storage have focused on primary storage and the applications and workloads it serves, while secondary storage has lagged behind. Part of the problem is that primary storage is well defined and well understood, while secondary storage, and the vendor solutions that address it, lacks similar clarity.
Primary storage directly interacts with applications when reading and writing data, and primary workloads are applications that directly interact with primary storage. Examples of primary applications include customer relationship management, enterprise resource planning, Microsoft Exchange, and other Tier 1 and 2 applications.
In a secondary storage environment, secondary applications work with primary application data, which is then stored in secondary storage. This typically involves data protection, backup, archival, replication, de-duplication, compression, analytics, and testing and development. Because the industry has just begun to view these workloads holistically under the umbrella of secondary storage, most secondary storage solutions don’t offer all these capabilities.
Advances in primary storage have come in response to siloed environments that add cost and affect performance. This has contributed to the development of hyper-convergence in which compute, storage, network and server virtualization are tightly integrated into a single infrastructure stack. Because hyper-convergence is pretested and preconfigured and has a modular architecture, it’s easy to buy, deploy, manage, provision and scale. This helps organizations overcome decades-old challenges associated with primary storage and workloads.
This raises a simple question: if hyper-convergence works so well for primary storage, why not secondary storage? To that end, Taneja Group created a concept called “hyper-converged secondary storage,” which applies hyper-convergence principles exclusively to secondary storage workloads.
A hyper-converged secondary storage solution must satisfy a number of fundamental requirements:

- A scale-out, self-healing nodal architecture
- Software-defined, policy-based operation that tightly integrates with public or private clouds
- Central management through a web-based console, with built-in quality of service (QoS)
- The ability to handle multiple workloads, both physical and virtual, without manual tuning, and to support all current and future secondary workloads
- Support for multiple block, file and object protocols
- Metadata and content indexing with custom analytics
- Built-in data virtualization principles, enterprise-grade security, recovery point objectives (RPOs) measured in minutes or less, and instantaneous recovery time objectives (RTOs)
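To make the "policy-based" and "RPOs measured in minutes" requirements concrete, here is a minimal Python sketch of what a protection policy might look like. All of the names and fields below are hypothetical illustrations, not any vendor's actual API: a minute-level RPO simply translates into how frequently the platform must capture a recoverable copy.

```python
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    # Hypothetical policy object; the field names are illustrative
    # assumptions, not taken from Cohesity's actual API.
    name: str
    rpo_minutes: int        # maximum tolerated data loss, in minutes
    retention_days: int     # how long recovery points are kept
    replicate_to_cloud: bool

def snapshots_per_day(policy: ProtectionPolicy) -> int:
    # An RPO of N minutes implies capturing a recovery point at least
    # every N minutes: (24 * 60) / N recovery points per day.
    return (24 * 60) // policy.rpo_minutes

vm_policy = ProtectionPolicy("tier1-vms", rpo_minutes=15,
                             retention_days=30, replicate_to_cloud=True)
print(snapshots_per_day(vm_policy))   # 96 recovery points per day at a 15-minute RPO
```

The point of declaring intent (an RPO) rather than a schedule is that the platform, not the administrator, derives and enforces the snapshot cadence as workloads change.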
Cohesity DataPlatform is a hyper-converged, software-defined storage solution capable of efficiently consolidating all secondary storage and data services from the edge to the cloud, allowing organizations to leverage the economics and flexibility of cloud infrastructure on one seamless platform. Secondary data, including backup data, files and objects on distributed storage, and data services such as data protection, search and analytics, and test/dev copy provisioning, can be consolidated on Cohesity.
Cohesity DataPlatform provides highly efficient, pay-as-you-grow data protection that empowers organizations to move beyond legacy backup solutions that don’t scale and eventually require expensive forklift upgrades and manual data migration. Cohesity’s patented web-scale distributed file system, SpanFS, was built from the ground up to handle any secondary storage workload, offering multi-protocol access, strict consistency for guaranteed data resiliency and storage efficiency with global deduplication.
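Global deduplication, mentioned above, means that identical chunks of data are stored only once across the whole cluster, no matter how many backups or files contain them. The toy sketch below illustrates the general content-addressing technique with a SHA-256 fingerprint per chunk; it is an illustration of the concept only, not a description of SpanFS's proprietary implementation.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept once,
    and files are lists of chunk fingerprints. Illustrative only."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # fingerprint -> chunk bytes (stored once)
        self.files = {}    # filename -> ordered list of fingerprints

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)  # skip if already stored
            refs.append(fp)
        self.files[name] = refs

    def read(self, name):
        return b"".join(self.chunks[fp] for fp in self.files[name])

store = DedupStore()
store.write("backup-mon", b"A" * 8192)
store.write("backup-tue", b"A" * 8192)  # identical data: no new chunks stored
assert store.read("backup-tue") == b"A" * 8192
print(len(store.chunks))  # 1 unique chunk stored for two 8 KB backups
```

Because fingerprints are computed on content rather than location, the savings are "global": two backups of the same VM taken on different days, or the same file landing via NFS and S3 protocols, share the same underlying chunks.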
With hyper-converged secondary storage, organizations now have a solution designed to overcome the complexity and confusion surrounding secondary storage. Let us show you how Cohesity DataPlatform can bring the simplicity and scalability of hyper-convergence to your secondary storage environment.