The Cohesity filesystem, known as OASIS, is focused on delivering the best possible read and write throughput and response times while maintaining strict data integrity at all times. To keep the system performing at this level, it relies on several background services to chip in. These background tasks also keep OASIS from getting blocked or slipping into unpredictable maintenance-mode scenarios.
Let’s look at some of these tasks in detail below. At the end, I will also cover how the filesystem smartly handles background operations while serving user requests.
Data on a filesystem can become corrupted or unavailable for a variety of reasons. Some of those scenarios include:
OASIS has been designed to detect these events and heal the data quickly. First of all, data is replicated across multiple disks that belong to different nodes. It is unlikely for multiple disks on different machines to fail around the same time. OASIS makes sure that all the replicas of the data are present, hold valid data, and are consistent with each other. If an anomaly is detected, data from the other replicas is used to quickly heal the state. Disks can sometimes become partially or fully corrupt; the background tasks monitor the failure rate to determine whether a disk is still usable or needs to be replaced.
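The replica-verification idea can be sketched as a checksum vote across copies. This is only an illustration of the concept, not Cohesity's actual repair code; the function names and the "majority wins" rule are assumptions for the example.

```python
import hashlib
from collections import Counter

def checksum(data: bytes) -> str:
    # Content fingerprint used to compare replicas.
    return hashlib.sha256(data).hexdigest()

def heal_replicas(replicas: dict[str, bytes]) -> dict[str, bytes]:
    """Hypothetical repair flow: find the majority checksum across copies,
    then overwrite any divergent replica with data from a healthy one."""
    votes = Counter(checksum(d) for d in replicas.values())
    good_sum, _ = votes.most_common(1)[0]
    good_data = next(d for d in replicas.values() if checksum(d) == good_sum)
    return {disk: (d if checksum(d) == good_sum else good_data)
            for disk, d in replicas.items()}

# One of three replicas has diverged; healing restores it from the others.
replicas = {"disk-a": b"chunk-42", "disk-b": b"chunk-42", "disk-c": b"chunk-4X"}
healed = heal_replicas(replicas)
```

After healing, all three replicas agree again, which is the invariant the background scrubber maintains.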
Similar to the data, the metadata is also replicated on multiple disks to increase fault tolerance. Any issue with the replicas is quickly repaired.
To guard against software bugs, the metadata itself is validated and cross-checked for correctness. For example, if the storage policy was set with Encryption Enabled, we should not find any unencrypted data; and if a directory lists files, those files must actually exist. Numerous such checks have been added across all the metadata to help catch software bugs early on.
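The two invariants mentioned above can be expressed as a small validation pass. This is a minimal sketch with invented field names (`policy`, `chunks`, `entries`), not OASIS's real metadata schema.

```python
def validate_metadata(view: dict) -> list[str]:
    """Cross-check illustrative metadata invariants; return any violations."""
    problems = []
    # Invariant 1: an encrypted policy implies no unencrypted chunks.
    if view["policy"]["encryption"] and any(
            not chunk["encrypted"] for chunk in view["chunks"]):
        problems.append("unencrypted chunk under an encrypted policy")
    # Invariant 2: every directory entry must point at an existing file.
    files = {f["name"] for f in view["files"]}
    for d in view["directories"]:
        for entry in d["entries"]:
            if entry not in files:
                problems.append(f"directory entry '{entry}' has no backing file")
    return problems

view = {
    "policy": {"encryption": True},
    "chunks": [{"encrypted": True}, {"encrypted": False}],
    "files": [{"name": "a.txt"}],
    "directories": [{"entries": ["a.txt", "b.txt"]}],
}
issues = validate_metadata(view)
```

Running many such cheap assertions continuously is what turns latent software bugs into early, actionable alerts.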
In addition, the general health and usage of the cluster is always monitored by these tasks. If issues are found that need user intervention, they are notified as alerts on the dashboard.
OASIS background tasks try to make sure that the cluster is performing at its full potential. For example, for the cluster's I/O throughput to be at its peak, all disks should participate equally, which means the data on each disk should be roughly balanced. Hence, the background tasks ensure that if some disks have higher utilization, data is moved to lower-usage disks. Also, if new disks or nodes are added, they get a share of the data as well, so that they can help with faster reads.
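A simple way to picture the rebalancing task is a greedy loop that moves fixed-size chunks from the fullest disk to the emptiest one. This is a toy model, not Cohesity's placement algorithm; real systems also weigh replica placement and fault domains.

```python
def rebalance(disk_usage: dict[str, int], chunk: int = 10) -> dict[str, int]:
    """Greedy sketch: repeatedly move one chunk of data from the fullest
    disk to the emptiest until usage is within one chunk everywhere."""
    usage = dict(disk_usage)
    while True:
        hi = max(usage, key=usage.get)   # most-utilized disk
        lo = min(usage, key=usage.get)   # least-utilized disk
        if usage[hi] - usage[lo] <= chunk:
            return usage
        usage[hi] -= chunk
        usage[lo] += chunk

# A newly added disk ("d3") starts empty and ends up with its fair share.
usage = rebalance({"d1": 80, "d2": 70, "d3": 0})
```

Note that the total amount of stored data is unchanged; only its placement moves, which is why the task can run entirely in the background.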
Efficiency also comes in the form of tiering data across different media types within the same platform, usually based on age. The Cohesity Data Platform includes PCIe-based SSDs in every node as well as spinning disks. This gives us the ability to dynamically down-tier and destage cold data from the SSDs onto the larger hard drives. One way we extend this even further is by adding External Targets to our system. We refer to this as CloudTier, and much the same way as we move data between SSD and disk, we can now extend your cluster and move extremely cold, aged data to the cloud.
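Age-based tiering amounts to classifying each extent against a pair of thresholds. The sketch below is illustrative only; the thresholds and names are invented, and real tiering policies typically also consider access frequency.

```python
def pick_downtier_candidates(extents, ssd_age_limit_days, cloud_age_limit_days):
    """Classify extents by age into their target tier (thresholds invented).
    `extents` is a list of (name, age_in_days) pairs."""
    moves = {"hdd": [], "cloud": []}
    for name, age_days in extents:
        if age_days >= cloud_age_limit_days:
            moves["cloud"].append(name)   # extremely cold: External Target
        elif age_days >= ssd_age_limit_days:
            moves["hdd"].append(name)     # cold: destage from SSD to disk
    return moves

extents = [("e1", 2), ("e2", 45), ("e3", 400)]
moves = pick_downtier_candidates(extents, ssd_age_limit_days=30,
                                 cloud_age_limit_days=365)
```

Here the recent extent stays on SSD, the month-old one moves to spinning disk, and the year-old one is a CloudTier candidate.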
Similarly, compute-intensive operations like compression and deduplication can be run on colder data as lower-priority processes when the cluster is less busy ingesting data.
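Post-process deduplication on cold data boils down to fingerprinting chunks and storing each unique chunk once. A minimal sketch, assuming content-hash fingerprints (the function and structures are hypothetical, not the platform's actual dedup pipeline):

```python
import hashlib

def dedup_cold_chunks(chunks: list[bytes]):
    """Post-process dedup sketch: keep one copy of each unique chunk and
    replace the raw data with a list of fingerprint references."""
    store: dict[str, bytes] = {}   # fingerprint -> unique chunk payload
    refs: list[str] = []           # per-chunk references into the store
    for c in chunks:
        fp = hashlib.sha256(c).hexdigest()
        store.setdefault(fp, c)    # store the payload only on first sight
        refs.append(fp)
    return store, refs

# Four logical chunks, but only two distinct payloads get stored.
store, refs = dedup_cold_chunks([b"aaaa", b"bbbb", b"aaaa", b"aaaa"])
```

Because this pass only reads existing data and rewrites references, it is a natural fit for idle-time background scheduling.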
The most common task here is garbage collection of data. When a user deletes a directory, the operation is acknowledged immediately. Behind the scenes, however, special background tasks find all the files and directories that were inside the deleted directory and delete them recursively.
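The recursive cleanup can be pictured as an iterative walk over the deleted directory's subtree. This is a conceptual sketch with an invented namespace representation, not the filesystem's internal structures.

```python
def collect_garbage(namespace: dict, deleted_root: str) -> list[str]:
    """Walk the deleted directory's subtree and return every path the
    background task would reclaim. `namespace` maps each directory path
    to its child entries (subdirectories also appear as keys)."""
    to_delete, queue = [], [deleted_root]
    while queue:
        path = queue.pop()
        to_delete.append(path)
        queue.extend(namespace.get(path, []))  # descend into subdirectories
    return to_delete

# The user only deleted "/proj"; the GC task reclaims everything under it.
ns = {
    "/proj": ["/proj/a.txt", "/proj/sub"],
    "/proj/sub": ["/proj/sub/b.txt"],
}
reclaimed = collect_garbage(ns, "/proj")
```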
Note that the cluster needs to be able to delete as much data each day as it ingests. For example, suppose a virtual machine is backed up every day with a 30-day retention period. Once those 30 days have elapsed, each new daily backup causes one day's worth of change data to expire, and the cluster must be able to reclaim that data within a day. The mechanisms used for garbage collection have been designed to be fast and to impose minimal overhead on client-facing operations.
OASIS handles user requests and background work items efficiently. For this, it uses QoS queues that prioritize the execution of user requests. When the system is under heavy user load, the background tasks get lower priority and thus have minimal impact on user performance. When the system is under low or no user load, the background work items execute at full speed and finish their tasks quickly. This means the background tasks can execute their work transparently in the background, without the need for explicit maintenance windows or downtime.
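The QoS idea can be sketched with a priority queue: user requests carry a higher priority class than background work, and items within a class run in arrival order. The two-level scheme below is a simplification for illustration; a real scheduler would throttle rather than fully starve background work.

```python
import heapq

USER, BACKGROUND = 0, 1  # lower value = higher priority class

def drain(queue):
    """Pop work items in QoS order: user requests first, FIFO within a class."""
    order = []
    while queue:
        _, _, name = heapq.heappop(queue)
        order.append(name)
    return order

q, seq = [], 0
for prio, name in [(BACKGROUND, "rebalance"), (USER, "read-1"),
                   (BACKGROUND, "gc"), (USER, "write-1")]:
    heapq.heappush(q, (prio, seq, name))  # seq keeps FIFO order within a class
    seq += 1

executed = drain(q)
```

Even though the background items arrived first, the user-facing reads and writes are serviced ahead of them, which is exactly the behavior that lets maintenance work stay invisible.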