
A reliable data backup plan becomes more important as corporate processes become more digitalized. Without the right safeguards in place, your trade secrets and customer database could be taken and sold off, whether the cause is ransomware, hackers, lost data, or a natural disaster.

Businesses accumulate vital data over many years, so the last thing they need is to lose those priceless digital assets to internal or external attacks.

Even with the increased resilience and dependability of production storage, it is more crucial than ever to regularly create and maintain a high-quality, independent duplicate of production data. Today’s customers and application owners demand no data loss, even in the case of a systems or facilities failure, especially with cloud backup in the mix. Furthermore, they expect recovery times measured in minutes rather than hours. As a result, IT and storage administrators are under tremendous pressure to develop a reliable backup plan.

Here are three best practices to help ease the process of developing a backup strategy.

1. Increase backup frequency

Because of ransomware, data centers must increase the frequency of backups; once a night is no longer enough. All data sets should be protected multiple times per day. Technologies such as block-level incremental (BLI) backups enable rapid backups of almost any data set in a matter of minutes, because only the changed blocks, rather than entire files, are copied to backup storage. Organizations should consider some form of intelligent backup that enables rapid and frequent backups, as the sketch below illustrates.
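To make the idea concrete, here is a minimal sketch of BLI-style logic in Python, assuming a simple file-based repository and a 4 MiB block size; real products track changed blocks through filesystem or hypervisor APIs rather than re-hashing the whole file.

```python
# A minimal sketch of a block-level incremental (BLI) backup, assuming a
# simple file-based repository. Real products track changed blocks via
# filesystem or hypervisor change tracking instead of re-hashing.
import hashlib
import json
from pathlib import Path

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks (hypothetical choice)

def backup_file(source: Path, repo: Path) -> None:
    """Copy only the blocks of `source` that changed since the last run."""
    manifest_path = repo / (source.name + ".manifest.json")
    old_hashes = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    new_hashes = {}
    with source.open("rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            new_hashes[str(index)] = digest
            if old_hashes.get(str(index)) != digest:
                # Only changed blocks are written to backup storage.
                (repo / f"{source.name}.block{index}").write_bytes(block)
            index += 1
    manifest_path.write_text(json.dumps(new_hashes))
```

Because unchanged blocks are skipped entirely, each run after the first completes in minutes even for large data sets.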

A close companion to block-level incremental backups is in-place recovery, sometimes called “instant recovery” by vendors. Although not truly instant, in-place recovery is rapid. It instantiates a virtual machine’s data stored on protected storage, enabling an application to be back online in a matter of minutes instead of waiting for data to be copied across the network to production storage. A key requirement of a successful in-place recovery technology is a higher-performing disk backup storage area since it serves as temporary storage.

An alternative to in-place recovery is streaming recovery. With streaming recovery, the virtual machine’s volume is instantiated almost instantly as well, but on production storage instead of backup storage. Data is streamed to the production storage system, with priority given to the data being accessed. The advantage of a streaming recovery over in-place recovery is that data is automatically sent to production storage, making the performance of the backup storage less of a concern.
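The prioritization at the heart of streaming recovery can be illustrated with a toy sketch. The block IDs, access pattern, and restore_block callback below are all hypothetical; a real product interleaves on-demand and background restores far more carefully.

```python
# A toy illustration of streaming recovery's ordering: blocks the
# application touches are restored first, and a background loop then
# streams the rest to production storage.
from collections import deque

def streaming_restore(all_blocks, access_requests, restore_block):
    pending = deque(all_blocks)           # background restore order
    restored = set()
    for requested in access_requests:     # application reads jump the queue
        if requested not in restored:
            restore_block(requested)      # prioritized: copy on demand
            restored.add(requested)
    while pending:                        # then stream the remainder
        block = pending.popleft()
        if block not in restored:
            restore_block(block)
            restored.add(block)

# Example: the application touches blocks 7 and 2 before the bulk stream.
streaming_restore(range(10), [7, 2], lambda b: print("restored block", b))
```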

2. Align backup strategy to service-level demands

Since the beginning of the data center, a best practice was to set priorities for each application in the environment. This best practice made sense when an organization might have two or three critical applications and maybe four to five “important” applications. Today, however, even small organizations have more than a dozen applications, and larger organizations can have more than 50.

The time required to audit these applications and determine backup priorities simply doesn’t exist. Also, the reality is that most application owners will insist on the fastest recovery times possible. Chargeback and showback techniques can help application owners accept more practical recovery times.

The capabilities provided by rapid recovery and BLI backup ease some of the pressure on IT to prioritize data and applications. They can quite literally put all data and applications within a 30-minute to one-hour recovery window, and then prioritize certain applications based on user response and demand. Settling on a default but aggressive recovery window for all applications is, thanks again to modern technology, affordable and more practical than performing a detailed audit of the environment. This is especially true in data centers where the number of applications requiring data protection is growing as rapidly as the data itself.

The recovery service level, though, means that the organization needs to back up as frequently as the service level demands. If the service level is 15 minutes, then backups must be done at least every 15 minutes. Again, for BLI backups, a 15-minute window is reasonable, as the simple check sketched below shows.
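One way to keep honest about this is to verify each application’s backup interval against its recovery point objective (RPO). The application list and policy values below are hypothetical, for illustration only.

```python
# A minimal check that each application's backup interval meets its
# recovery point objective (RPO). The application inventory is a
# hypothetical placeholder.
from datetime import timedelta

apps = {
    "orders-db":  {"rpo": timedelta(minutes=15), "backup_interval": timedelta(minutes=15)},
    "file-share": {"rpo": timedelta(hours=1),    "backup_interval": timedelta(hours=4)},
}

for name, policy in apps.items():
    ok = policy["backup_interval"] <= policy["rpo"]
    print(f"{name}: interval {policy['backup_interval']} vs RPO {policy['rpo']} -> "
          f"{'OK' if ok else 'BACKUP TOO INFREQUENT'}")
```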

The only negative to a high number of BLI backups is that most backup applications limit how many BLI backups can exist before they start to impact backup and recovery performance. The organization might have to initiate twice-a-day consolidation jobs to reduce the number of incrementals in the chain. Because the consolidation jobs occur off-production, they won’t impact production performance.
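Conceptually, a consolidation job merges a chain of incrementals into one synthetic increment, with the newest version of each block winning. A minimal sketch, assuming each increment is modeled as a map from block index to block data:

```python
# A sketch of consolidating a long chain of block-level incrementals into
# a single synthetic increment, so the chain stays short.
def consolidate(increments):
    """Merge increments oldest-to-newest; later writes win per block."""
    merged = {}
    for increment in increments:   # oldest first
        merged.update(increment)   # newer blocks overwrite older ones
    return merged

chain = [{0: b"a0", 1: b"b0"}, {1: b"b1"}, {2: b"c2", 1: b"b2"}]
print(consolidate(chain))          # {0: b'a0', 1: b'b2', 2: b'c2'}
```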

The cost of BLI backups and in-place recovery is well within the reach of most IT budgets today. Many vendors offer free or community versions, which work well for very small organizations. The combination of BLI and rapid recovery, both of which are typically included in the base price of the backup application, is far less expensive than a typical high availability system while providing nearly comparable recovery times.

3. Continue to follow the 3-2-1 backup rule

The 3-2-1 rule of backup states that organizations should keep three complete copies of their data, two of which are local but on different types of media, with at least one copy stored off-site. An organization using the techniques described above should back up to a local on-premises backup storage system, copy that data to another on-premises backup storage system, and then replicate that data to another location.

In the modern data center, it is acceptable to count a set of storage snapshots as one of those three copies, even though it is on the primary storage system and dependent on the primary storage system’s health. Alternatively, if the organization is replicating to a second location, it could replicate it once again to another location to meet the three copies requirement.
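A quick sanity check against the 3-2-1 rule can even be scripted. The copy inventory format below is an assumption made for illustration:

```python
# A small sketch that audits a copy inventory against the 3-2-1 rule:
# three copies, two media types, at least one off-site.
copies = [
    {"location": "on-prem", "media": "disk",  "offsite": False},
    {"location": "on-prem", "media": "disk",  "offsite": False},  # snapshot or 2nd system
    {"location": "cloud",   "media": "cloud", "offsite": True},
]

def check_321(copies):
    three = len(copies) >= 3
    two_media = len({c["media"] for c in copies}) >= 2
    one_offsite = any(c["offsite"] for c in copies)
    return three and two_media and one_offsite

print("3-2-1 compliant:", check_321(copies))
```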


The requirement of two copies on two separate media types is more difficult for the modern data center to meet. In its purest form, the rule means two genuinely dissimilar media types, for example a copy of data on disk and a copy on tape. That purest form remains the ideal, but it is acceptable for organizations to consider a copy of data on cloud storage to be the second media type, even though admittedly both copies are fundamentally on hard disk drives.

The case for counting the cloud as a different media type is stronger if the cloud copy is immutable and can be erased only after its retention policy has passed. In other words, it can’t be erased by a malicious attack.
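As one concrete example, Amazon S3 offers Object Lock for exactly this purpose. The sketch below uses boto3 with a hypothetical bucket name, file, and 90-day retention period; note that Object Lock must be enabled when the bucket is created.

```python
# One way to get an immutable cloud copy: Amazon S3 Object Lock via boto3.
# The bucket, key, and retention period are hypothetical placeholders, and
# the bucket must have been created with Object Lock enabled.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-backup-bucket",            # hypothetical bucket
    Key="backups/orders-db/2024-01-01.bak",
    Body=open("orders-db.bak", "rb"),
    ObjectLockMode="COMPLIANCE",               # retention cannot be shortened
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```

In COMPLIANCE mode, not even an account administrator can delete the object before the retention date, which is what defeats a malicious attacker who gains credentials.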

Other Backup Strategies

  • Use cloud backup with intelligence

IT professionals should continue to exercise caution when moving data to the cloud. Caution is especially warranted for backup data, because the organization is essentially renting idle storage. Although cloud backup offers an attractive upfront price point, long-term cloud costs can add up. Repeatedly paying for the same 100 TB of data eventually becomes more expensive than owning 100 TB of storage.

In addition, most cloud providers charge an egress fee for data moved from their cloud back to on-premises, which happens whenever a recovery occurs. These are just a few reasons why taking a strategic approach to choosing a cloud backup provider is so important. The quick arithmetic below illustrates both cost effects.
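Some back-of-the-envelope arithmetic makes the point. All prices below are hypothetical placeholders, not quotes from any provider:

```python
# Back-of-the-envelope rent-vs-own arithmetic; every price here is an
# assumed placeholder for illustration, not a real provider's rate.
capacity_tb = 100
cloud_per_tb_month = 20.0        # assumed $/TB/month for cloud backup storage
owned_system_cost = 30_000.0     # assumed upfront cost of 100 TB on-prem
egress_per_tb = 90.0             # assumed $/TB to pull data back out

months_to_break_even = owned_system_cost / (capacity_tb * cloud_per_tb_month)
one_full_restore = capacity_tb * egress_per_tb

print(f"Cloud overtakes ownership after ~{months_to_break_even:.0f} months")
print(f"One full 100 TB restore adds ~${one_full_restore:,.0f} in egress fees")
```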

Smaller organizations rarely have the capacity demands that would make on-premises storage ownership less expensive than cloud backup, so storing all their data in the cloud is probably their best course of action. Medium to large organizations might find that owning their storage is more cost-effective, but those organizations should also use the cloud to store the most recent copies of data and use cloud computing services for tasks such as disaster recovery, reporting, and testing and development.

Cloud backup is also a key consideration for organizations looking to revamp their data protection and backup strategy. IT planners, though, should be careful not to assume that all backup vendors support the cloud equally. Many legacy on-premises backup systems treat the cloud as a tape replacement, essentially copying 100% of the on-premises data to the cloud. Using the cloud for tape replacement does potentially reduce on-premises infrastructure costs, but it also effectively doubles the storage capacity that IT needs to manage.

Some vendors now support cloud storage as a tier, where old backup data is archived in the cloud, while more recent backups are stored on-premises. Using the cloud in this way enables the organization to both meet rapid recovery requirements and lower on-premises infrastructure costs.
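A tiering policy can be as simple as moving backups older than a cutoff to cloud storage while keeping recent ones on-premises. The sketch below assumes a local repository of .bak files, a hypothetical upload function, and a 30-day cutoff:

```python
# A sketch of a simple tiering policy: backups older than a cutoff move
# to cloud storage; recent ones stay on-premises for rapid recovery.
# The upload callback and 30-day cutoff are hypothetical assumptions.
from datetime import datetime, timedelta
from pathlib import Path

CUTOFF = timedelta(days=30)

def tier_old_backups(local_repo: Path, upload_to_cloud) -> None:
    now = datetime.now()
    for backup in local_repo.glob("*.bak"):
        age = now - datetime.fromtimestamp(backup.stat().st_mtime)
        if age > CUTOFF:
            upload_to_cloud(backup)   # archive the old backup in the cloud
            backup.unlink()           # then free the on-prem capacity
```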

Vendors are also using the cloud to provide disaster recovery capabilities, often referred to as disaster recovery as a service (DRaaS). This technique not only uses cloud storage but also cloud computing to host virtual images of recovered applications. DRaaS can potentially save the organization a significant amount of IT budget compared to having to manage and equip a secondary site on its own.

DRaaS also facilitates easier and therefore more frequent testing of disaster recovery plans. It is without question one of the most practical uses of the cloud and an excellent way for organizations to start their cloud journey.

DRaaS is not magic, however. IT planners must ask vendors tough questions, such as the exact time from DR declaration to the point that the application is usable. Many vendors claim “push-button” DR, but that does not mean “instant” DR. Vendors that store backups in their proprietary format on cloud storage must still extract the data from that format.

They also must, in most cases, convert their VM image from the format used by the on-premises hypervisor (typically VMware) to the format used by the cloud provider (typically a Linux-based hypervisor). All of these steps are manageable and IT or the vendor can automate them to a degree, but they do take time.

  • Automate disaster recovery runbooks

The most common recoveries are not disaster recoveries; they are recoveries of a single file or single application. Occasionally IT needs to recover from a failed storage system, but it is extremely rare that IT needs to recover from a full disaster where the entire data center is lost. Organizations, of course, still must plan for the possibility of this type of recovery.

In a disaster, IT needs to recover dozens of applications and those applications can be dependent on other processes running on other servers. In many cases the other servers must become available in a very specific order, so timing of when each recovery can start is critical to success.

The combination of the infrequency of an actual disaster with the dependent order of server start-up means that the disaster recovery process should be carefully documented and executed. The problem is that in today’s stretched-too-thin data center, these processes are seldom documented. They are updated even less frequently. Some backup vendors now offer runbook automation capabilities.

These features enable the organization to preset the recovery order and execute the appropriate recovery process with a single click. Any organization with multi-tier applications with interdependent servers should seriously consider these capabilities to help ensure recovery when it is needed most.
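The core of runbook automation is deriving a safe start order from declared dependencies. The sketch below uses Python’s standard graphlib for the ordering; the application graph is hypothetical, and real products layer health checks and per-step recovery scripts on top.

```python
# A sketch of runbook automation's core idea: derive a safe start order
# from declared dependencies, then recover applications in that order.
from graphlib import TopologicalSorter   # standard library, Python 3.9+

# Hypothetical application graph: each app maps to the apps it needs.
depends_on = {
    "web-frontend": {"app-server"},
    "app-server":   {"database", "auth-service"},
    "auth-service": {"database"},
    "database":     set(),
}

def run_recovery(recover_app):
    # static_order() yields each app only after all of its dependencies.
    for app in TopologicalSorter(depends_on).static_order():
        recover_app(app)

run_recovery(lambda app: print("recovering", app))
# -> database, auth-service, app-server, web-frontend
```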

  • Protect endpoints and SaaS applications

Endpoints (laptops, desktops, tablets, and smartphones) all contain valuable data that might be stored nowhere else. Data created on these devices may never reach data center storage unless the devices are specifically backed up, and that data will be lost if the endpoint fails, is lost, or is stolen. The good news is that endpoint protection is more practical than ever thanks to the cloud. Modern endpoint backup systems enable endpoints to back up to a cloud repository managed by core IT.

SaaS applications such as Office 365, Google G-Suite, and Salesforce.com are overlooked even more often. A common and incorrect assumption is that data on these platforms is automatically protected. In reality, the user agreements for all of them make it clear that data protection is the organization’s responsibility. IT planners should look for a data protection application that can also protect the SaaS offerings they use. Ideally, these offerings are integrated into the existing backup system, but IT could also consider SaaS-specific systems if they offer greater capabilities or value.

The backup process is under more pressure than ever. Expectations are for no downtime and no data loss. Fortunately, backup software can provide capabilities such as BLI backups, in-place recovery, cloud tiering, DRaaS, and disaster recovery automation. These systems enable the organization to offer rapid recovery to a high number of applications without breaking the IT budget.

Finally

Although backups are essential for protecting sensitive data, there is still the question of what to do if servers crash or your backup cloud storage fails. For this reason, you should combine your data backup plan with a disaster recovery plan. Doing so helps ensure that data remains accessible from another system when a problem occurs.

Having a backup plan in place can dramatically reduce downtime caused by system problems, malicious actors, or natural events. Downtime can result in a large loss of earnings.
