SpanFS exposes industry-standard, globally distributed NFS, SMB, and S3 interfaces. The IO Engine manages IO operations for all the data written to or read from the system. It detects random vs. sequential IO profiles, splits the data into chunks, performs deduplication, and directs the data to the most appropriate storage tier (SSD, HDD, cloud storage).
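The write path described above (classify the IO profile, split the data into chunks, pick a tier) can be sketched in a few lines. This is purely illustrative: the names `Tier`, `classify_io`, and `route_write`, the 64 KB chunk size, and the tier-selection rule are assumptions for the sketch, not Cohesity's actual IO Engine logic.

```python
from enum import Enum

CHUNK_SIZE = 64 * 1024  # illustrative fixed chunk size


class Tier(Enum):
    SSD = "ssd"      # hot tier for random IO
    HDD = "hdd"      # warm tier for sequential IO
    CLOUD = "cloud"  # cold data


def classify_io(offsets: list[int]) -> str:
    """Call a write stream sequential if successive offsets are contiguous."""
    sequential = all(b - a == CHUNK_SIZE for a, b in zip(offsets, offsets[1:]))
    return "sequential" if sequential else "random"


def route_write(data: bytes, profile: str) -> list[tuple[bytes, Tier]]:
    """Split data into chunks and pick a tier based on the IO profile."""
    tier = Tier.HDD if profile == "sequential" else Tier.SSD
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(chunk, tier) for chunk in chunks]
```

A real engine would track per-stream state and deduplicate before placement; the sketch only shows how profile detection can drive tier selection.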
To keep track of the data, SpanFS also includes a completely new Metadata Store, built on a consistent, distributed NoSQL store for fast IO operations at scale. Within it, SnapTree® provides a distributed metadata structure based on B+ tree concepts. SnapTree is unique in its ability to support unlimited, frequent snapshots with no performance degradation.
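The copy-on-write idea behind B+-tree-style snapshots can be shown with a toy persistent tree: a snapshot is just a reference to the current root, so taking one is O(1), and an update copies only the path from root to leaf while older versions stay intact. This is a minimal sketch using a persistent binary search tree, not SnapTree's actual structure.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Node:
    key: str
    value: bytes
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def insert(root: Optional[Node], key: str, value: bytes) -> Node:
    """Return a new root; only nodes on the search path are copied."""
    if root is None:
        return Node(key, value)
    if key < root.key:
        return Node(root.key, root.value, insert(root.left, key, value), root.right)
    if key > root.key:
        return Node(root.key, root.value, root.left, insert(root.right, key, value))
    return Node(key, value, root.left, root.right)  # replace value via a copied node


def lookup(root: Optional[Node], key: str) -> Optional[bytes]:
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None


v1 = insert(insert(None, "/a", b"1"), "/b", b"2")
snapshot = v1                 # taking a snapshot is just saving the root pointer
v2 = insert(v1, "/a", b"99")  # a later write copies only the root-to-leaf path
```

Because nodes are immutable, the snapshot continues to see the old value of `/a` even after the update, which is why frequent snapshots carry no extra write cost in this model.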
Start with as few as three nodes and grow limitlessly on-premises or in the cloud with a pay-as-you-grow model.
Choose automated global indexing powering Google-like search. This enables instant wildcard searches for any VM, file, or object ingested into the system.
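A wildcard search over a global name index can be illustrated with Python's `fnmatch` glob matching. The index contents and the `search` helper are hypothetical; a real system would shard and distribute the index rather than scan a list.

```python
from fnmatch import fnmatch

# Illustrative global index of ingested VM, file, and object names.
index = [
    "vm/prod-web-01.vmdk",
    "vm/prod-db-01.vmdk",
    "files/reports/q3-summary.pdf",
    "objects/logs/2024/app.log",
]


def search(pattern: str) -> list[str]:
    """Return every indexed name matching a glob-style wildcard pattern."""
    return [name for name in index if fnmatch(name, pattern)]
```

For example, `search("vm/prod-*")` returns both VM entries, and `search("*q3*")` finds the report regardless of where it sits in the namespace.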
Avoid “file-not-found” errors with Cohesity DataPlatform. Powered by SpanFS, it delivers guaranteed data resiliency with strict consistency at scale.
Global variable-length dedupe across workloads and protocols throughout the cluster reduces the data footprint.
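Variable-length dedupe rests on content-defined chunking: a hash over a sliding window decides where chunks end, so inserting bytes shifts only nearby boundaries and identical content keeps producing identical chunks. The window size, boundary mask, and hash choice below are illustrative assumptions, not Cohesity's parameters.

```python
import hashlib

WINDOW = 16     # sliding-window size in bytes
MASK = 0x3F     # declare a boundary when hash & MASK == 0 (~64-byte average chunks)
MIN_CHUNK = 32  # avoid degenerate tiny chunks


def chunk(data: bytes) -> list[bytes]:
    """Split data at content-defined boundaries."""
    chunks, start = [], 0
    for i in range(WINDOW, len(data)):
        window_hash = int.from_bytes(
            hashlib.blake2b(data[i - WINDOW:i], digest_size=4).digest(), "big")
        if window_hash & MASK == 0 and i - start >= MIN_CHUNK:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks


def dedupe(chunks: list[bytes]) -> dict[str, bytes]:
    """Store each unique chunk exactly once, keyed by its content hash."""
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}
```

Because boundaries follow content rather than fixed offsets, the same data arriving via NFS, SMB, or S3 chunks identically, which is what makes dedupe work across protocols.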
Designed with the cloud in mind, Cohesity DataPlatform eliminates dependency on bolt-on cloud gateways.
Seamlessly read from and write to the same data volume with simultaneous multi-protocol access across NFS, SMB, and S3.
Enterprise data volumes are growing fast, at about 40% per year. To take control, you need a solution that scales, and that’s just what SpanFS is designed to do. Everything on the platform is fully distributed; there is no single choke point. Start with as few as three nodes, add nodes to the cluster as you expand, and let SpanFS dynamically rebalance your data.
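The reason adding nodes does not force a full reshuffle can be sketched with consistent hashing: placing a new node on a hash ring reassigns only the keys that fall on its arc, while everything else stays where it was. The ring layout below is an illustrative stand-in, not SpanFS's actual placement algorithm.

```python
import bisect
import hashlib


def ring_pos(label: str) -> int:
    """Map a node name or data key to a position on the hash ring."""
    return int.from_bytes(hashlib.sha256(label.encode()).digest()[:8], "big")


class Ring:
    def __init__(self, nodes: list[str]):
        self.points = sorted((ring_pos(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        """A key belongs to the first node clockwise from its ring position."""
        i = bisect.bisect(self.points, (ring_pos(key), ""))
        return self.points[i % len(self.points)][1]


keys = [f"chunk-{i}" for i in range(1000)]
before = Ring(["node-1", "node-2", "node-3"])
after = Ring(["node-1", "node-2", "node-3", "node-4"])
# Only keys landing on node-4's arc change owner; the rest stay put.
moved = sum(before.owner(k) != after.owner(k) for k in keys)
```

Every key that moves in this model moves to the new node, which is the property that makes growth incremental instead of a cluster-wide rewrite.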
We’d love to tell you how far SpanFS scales. But the truth is, we haven’t hit the limit yet. Based on test data, we know that it scales linearly to a massive 256-node cluster without breaking a sweat.