Last week I wrote about how Cohesity helps organizations protect their SQL databases against accidental data loss. Today we look at how Cohesity’s copy data management capabilities enable efficient test/dev datasets.
One of the biggest trends in application development over the last few years has been the quest for agility. Like other popular IT buzzwords, the term can mean different things depending on its context. When it comes to application development, however, agility commonly refers to an organization’s ability to rapidly create or modify applications in response to changing business conditions.
Given the way that agile application development is covered by various technology publications, it would almost seem that agile development is the norm, and that every organization is now able to churn out new applications almost instantly. Despite the media portrayal, however, truly agile application development remains elusive for most organizations. Although there are undeniable benefits to being able to rapidly create new applications, ironing out all of the inefficiencies in the development environment can be a tall order.
If an organization wants to shorten its development cycle, one of the best places to start is to choose a faster and more efficient method for providing test data to the lab environment.
In order for an organization to develop an application that utilizes existing data, the developers are going to need a sample of that data that they can use throughout the development process. Some organizations address this need by extracting a representative sampling of the data and loading it onto a development server. The problem with this method, however, is that because the developers are not using a full dataset to test their application, they may find that the application behaves differently once it is exposed to production data.
Another common solution is to create a full copy of the production data and load it onto a lab server in the test/dev environment. Although this method gives the developers the benefit of working with a full dataset, it does have its disadvantages. For starters, depending on the size of the dataset, the copy process could take a significant amount of time to complete. If the organization’s goal is to expedite its development efforts, then a time-consuming copy operation will be counterproductive.
A secondary disadvantage is the potential cost of maintaining a full data copy for test/dev purposes. Depending on the size of the dataset, creating a secondary copy of the data could require a substantial investment in storage hardware. Not to mention that keeping multiple copies of the same dataset leads to datacenter fragmentation.
If an organization is to streamline its development efforts, it must find a way to allow developers to test against a full dataset, but without incurring excessive storage costs or wasting time copying large databases from the production servers to the lab servers.
One way of overcoming these challenges is through the use of copy data management.
The term copy data management refers to a technique that allows an organization’s data to be used for a variety of purposes. A database, for example, could be simultaneously used for both production workloads and for application development, without the risk of the development environment accidentally modifying the data.
Copy data management can be implemented in a few different ways, but one of the most effective approaches is to base it around an organization’s backups. When the development team needs to test against a database copy, a virtual copy of the data can be created rather than wasting time and storage space on a physical data copy.
The virtual copy allows the development team to read the data from the backup copy, but write operations are redirected to a snapshot, thus ensuring that the backup copy of the data remains unmodified. Furthermore, because the test/dev environment is working from the backup server, there is no need to be concerned about the development team impacting the production environment’s performance.
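To make the redirect-on-write idea concrete, here is a minimal conceptual sketch in Python. It is an illustration of the general technique only, not Cohesity's actual implementation: reads fall through to the immutable backup image, while writes land in a private per-clone overlay, so the backup data is never modified.

```python
# Conceptual sketch of a redirect-on-write virtual clone (illustrative only).
# The backup image is treated as read-only; each clone's writes are
# redirected to its own overlay, leaving the backup untouched.

class VirtualClone:
    def __init__(self, backup_image):
        self._base = backup_image   # read-only backup copy (never modified)
        self._overlay = {}          # snapshot holding this clone's writes

    def read(self, block_id):
        # Prefer the clone's own writes; otherwise read from the backup.
        if block_id in self._overlay:
            return self._overlay[block_id]
        return self._base[block_id]

    def write(self, block_id, data):
        # Redirect the write to the overlay; the backup stays unchanged.
        self._overlay[block_id] = data


# A dict of block IDs stands in for the backup image in this sketch.
backup = {0: b"customers", 1: b"orders"}
clone = VirtualClone(backup)

clone.write(1, b"orders-modified-by-dev")
# The developer sees the modified data, but the backup is untouched.
print(clone.read(1))   # clone's view reflects the write
print(backup[1])       # backup block is still the original
```

Because the overlay holds only the blocks a developer actually changes, each clone consumes a tiny fraction of the storage a full physical copy would require, which is why this approach is both fast to provision and cheap to keep around.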
Obviously, not every backup vendor offers a copy data management feature, but Cohesity has offered copy data management capabilities for some time now. Cohesity provides developers with a self-service environment that they can use to instantly provision test/dev datasets and clone virtual machines on an as-needed basis, without incurring performance penalties.
Best of all, this approach helps organizations see a greater return on their investment in backup and disaster recovery resources. Rather than the backup target acting as a passive resource that will only be used in times of crisis, it is able to act as a fully functional test and development platform.
This concludes my series on Cohesity and SQL Server, but below you will find links to previous installments.