You may need to beef up your back-up system if you want to recover quickly after a disaster. Andy Holpin explains
Businesses often think about how they back up their data when considering disaster recovery and business continuity, but this is only half the story. The most common mistake businesses make is failing to consider how long it will actually take to retrieve backed-up data and applications. The risks of slow retrieval are real: legislation may demand that data be produced within strict time limits, and lengthy restores mean lengthy downtime, which hits employee productivity. Data may be backed up, but it is the time taken to access it that will make or break a disaster recovery plan, so the issue needs to be addressed at the outset.
To reduce retrieval times, first look at how you are managing data. Plan and put in place an enterprise data storage management and reporting strategy, so that you understand exactly where and how all the organisation's data is stored and used across the business.
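Even a simple script can give a first view of where data sits and how recently it has been touched. The sketch below is a minimal illustration in Python, assuming a Unix-like file server; the share paths are hypothetical examples, not a recommendation.

```python
import os
import time

# Hypothetical share paths -- replace with your own mount points.
SHARES = ["/srv/shares/finance", "/srv/shares/sales", "/srv/shares/archive"]

def report(share):
    """Summarise total size and days since last access for one share."""
    total_bytes, newest_access = 0, 0.0
    for root, _dirs, files in os.walk(share):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish or are unreadable
            total_bytes += st.st_size
            newest_access = max(newest_access, st.st_atime)
    idle_days = (time.time() - newest_access) / 86400 if newest_access else float("inf")
    print(f"{share}: {total_bytes / 2**30:.1f} GiB, last accessed {idle_days:.0f} days ago")

for s in SHARES:
    report(s)
```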
IT departments can then consolidate existing storage into one central area, eliminating silos and making it clearer what is stored where, which in turn shortens retrieval times.
Virtualisation, in which IT resources are pooled and shared for greater utilisation and flexibility, is another important step. It gives IT managers quick, easy and accurate access to enterprise data, providing a single view of all available resources on the network wherever the data resides and avoiding duplication.
There are two main options for backing up data: disk, including virtual tape libraries (VTLs), and tape. Disk back-up can be achieved through snapshots or by using disks as a back-up target storage pool, and can accelerate retrieval times by up to 80%. Tape, however, is significantly cheaper, making it ideal for data that needs to be stored for long periods but does not need to be accessed frequently.
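To make the disk-as-back-up-target idea concrete, here is a minimal snapshot-style sketch in Python; the paths are assumptions, and a production tool would add verification, retention and error handling. Each run creates a dated snapshot directory, hard-linking files unchanged since the previous snapshot and copying only those that have changed, so a restore is a straightforward directory copy.

```python
import os
import shutil
import time

SOURCE = "/srv/data"    # hypothetical data to protect
POOL = "/backup/pool"   # hypothetical disk back-up target pool

def snapshot():
    """Write a dated snapshot; hard-link files unchanged since the last one."""
    os.makedirs(POOL, exist_ok=True)
    existing = sorted(os.listdir(POOL))
    prev = os.path.join(POOL, existing[-1]) if existing else None
    snap = os.path.join(POOL, time.strftime("%Y-%m-%d_%H%M%S"))
    for root, _dirs, files in os.walk(SOURCE):
        rel = os.path.relpath(root, SOURCE)
        os.makedirs(os.path.join(snap, rel), exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(snap, rel, name)
            old = os.path.join(prev, rel, name) if prev else None
            if old and os.path.exists(old) and \
               os.path.getmtime(old) == os.path.getmtime(src) and \
               os.path.getsize(old) == os.path.getsize(src):
                os.link(old, dst)       # unchanged: hard link, no extra space
            else:
                shutil.copy2(src, dst)  # new or changed: full copy
    return snap

print("snapshot written to", snapshot())
```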
Central or local?
Your IT department should prioritise data according to how often it needs to be accessed and whether it may need to be retrieved at short notice, creating tiers of data recovery service levels across the organisation, with the top tiers of data backed up on disk and the lowest on tape. Searching for lost data on disk is much quicker than on tape, making it ideal for slashing retrieval times, and it lightens the load on busy IT managers because the data is easier to manage. Disk is also a more reliable back-up, because it removes the unnecessary risk of leaving a non-technical member of staff responsible for the process.
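As an illustration of that tiering exercise, the Python sketch below assigns a recovery tier from how quickly data must be restored and how often it is used. The dataset names and thresholds are hypothetical examples, not recommendations.

```python
# Illustrative data sets: (max tolerable restore time in hours, accesses per month).
DATASETS = {
    "order-processing-db": (1, 10000),
    "email-archive":       (24, 200),
    "finance-year-end":    (72, 5),
}

def tier(max_restore_hours, accesses_per_month):
    """Map retrieval requirements to a back-up medium."""
    if max_restore_hours <= 4 or accesses_per_month > 1000:
        return "Tier 1: disk (snapshots, fastest retrieval)"
    if max_restore_hours <= 48:
        return "Tier 2: virtual tape library"
    return "Tier 3: tape (cheap, slow, long-term)"

for name, (hours, hits) in DATASETS.items():
    print(f"{name}: {tier(hours, hits)}")
```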
The next decision is whether to back up data centrally or locally at branch offices. Backing up locally has become common practice because of the impact of moving large amounts of data over wide area networks (WANs), but it means local support staff at the remote sites have to manage the back-up process and the back-up tapes.
Businesses that back up locally typically have workgroup servers with tape drives at branch offices, which means someone must remove the back-up tape every day and send it to the central office. This creates problems: non-IT staff end up performing IT duties, the tapes must be transported quickly and cost-efficiently, and it is unclear who is responsible if a tape is lost. Backing up locally also affects data recovery. Restoring a single file may be merely a hassle, but restoring a large data set means sending an IT person to the branch office with the back-up tape, which is time-intensive and costly.
The other option is to back up centrally to disk, either through data replication, by using back-up tools that reduce the amount of data transferred over the WAN, or by removing data from the remote office altogether through ‘wide-area file services’ and ‘server-based computing’. Most vendors already have remote mirroring facilities built into their products, and this is prompting many IT managers to consider a move to backing up centrally.
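One common way such tools cut WAN traffic is deduplication: only blocks the central site has never seen are transferred. The Python sketch below is a toy illustration of the idea, not any particular vendor's protocol; the file path is hypothetical.

```python
import hashlib

CHUNK = 64 * 1024   # 64 KiB chunks; real products tune or vary the chunk size

central_store = {}  # hash -> chunk, standing in for storage at the central site

def backup_over_wan(path):
    """Send only chunks the central site does not already hold."""
    sent = total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in central_store:    # only the hash crosses the WAN
                central_store[digest] = chunk  # stand-in for the actual transfer
                sent += len(chunk)
    print(f"{path}: {sent}/{total} bytes actually transferred")

backup_over_wan("/srv/data/orders.db")  # hypothetical file at the branch office
```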
For businesses with branch offices, the best back-up and retrieval solution is often a mixture of local and central back-up. Centralising the back-up of business-critical application data dramatically reduces the risk of permanent loss: lost data can be quickly retrieved from the copy at the local site, while the copy at the central data centre protects against a local site disaster. If the business is also backing up locally, then should there be a disaster at the data centre itself, employees at the branch will be able to keep working while the data centre back-up is restored. IT departments should also examine where they are hosting their applications and how they are backing them up, to ensure that if disaster strikes, the business can be back in action as soon as possible.
When planning for disaster recovery, many factors influence how long it takes to get a business up and running again, and it is easy to overlook something simple that has a significant impact. Organisations need to establish how quickly different types of business data must be retrievable. Only then can they decide which medium to use, whether data should be backed up centrally or locally, and where the organisation's applications should reside.
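A back-of-the-envelope check is useful at this stage: estimated retrieval time is roughly data size divided by sustained restore throughput. The Python sketch below compares that estimate against a required recovery time; the throughput figures and data set size are assumptions for illustration only, so measure your own.

```python
# Illustrative sustained restore rates in MB/s; real figures vary widely.
RATES = {"disk": 200, "vtl": 80, "tape": 30}

def restore_hours(size_gb, medium):
    """Rough restore time: size divided by sustained throughput."""
    return size_gb * 1024 / RATES[medium] / 3600

# Hypothetical example: can a 2 TB data set meet an 8-hour recovery target?
size_gb, target_h = 2048, 8
for medium in RATES:
    est = restore_hours(size_gb, medium)
    verdict = "OK" if est <= target_h else "misses target"
    print(f"{medium}: about {est:.1f} h ({verdict})")
```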
Postscript
Andy Holpin is a consultant with Morse, www.morse.com