Friday, March 29, 2013

Scrutinizing Disaster Recovery After Sandy


In the wake of Hurricane Sandy, heads are nodding up and down the C-suite as executives ask their IT colleagues, "How do we stay in business when our infrastructure is down?"

In years past, this question was slightly different. It was a matter of the resilience and safety of corporate data. But we no longer have the luxury of telling clients we will be back in a few days due to a storm-related outage. It's a given that our data will come back when the power returns, but what about the continuity of services to the end users, the real clients in our economy?

The new business economy runs 24/7. Clients, strategic partners, and decision makers want information -- and when disaster strikes, they want data even more and even faster. Let's take a look at primary business functions and how we can keep them up and running when everything else is down.

No single application is more vital to business momentum than email. Calendars, task lists, and basic business communications all rely on a layer of email infrastructure to function with minimal user interaction. A disaster-proof email infrastructure is essential to businesses of all sizes. If you're migrating to a hosted Exchange backup, a failover configuration is the minimum requirement; in my view, only the largest companies should still manage email in-house. High-availability servers with bicoastal heartbeat replication of data are no longer high-ticket items. A bevy of vendors offer them at $6 per box per month, and in-house IT simply cannot compete with that pricing model.
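To make the failover idea concrete, here is a minimal sketch in Python. It probes a primary mail host with an SMTP heartbeat and falls back to a standby when the primary stops answering. The hostnames are placeholders, not any particular vendor's endpoints, and in a real deployment the decision would more likely be handled by DNS or the hosting provider than by a script.

    import smtplib

    # Hypothetical hostnames -- substitute your own primary and standby mail hosts.
    PRIMARY_MX = "mail-east.example.com"
    STANDBY_MX = "mail-west.example.com"


    def smtp_heartbeat(host, port=25, timeout=10):
        """Return True if the mail host answers a simple SMTP NOOP."""
        try:
            server = smtplib.SMTP(host, port, timeout=timeout)
            code, _ = server.noop()
            server.quit()
            return code == 250
        except (smtplib.SMTPException, OSError):
            return False


    def pick_mail_host():
        """Prefer the primary host; fail over to the standby when it is down."""
        if smtp_heartbeat(PRIMARY_MX):
            return PRIMARY_MX
        return STANDBY_MX


    if __name__ == "__main__":
        print("Sending mail via", pick_mail_host())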

Shared files and cloud-based file storage have also become very cost-effective. Even an office with 20 or fewer seats can leverage this model. At least three vendors come to mind, and all are easily supported by internal IT to meet the need for bicoastal file replication. Files are available if you can get to the Web -- end of story. This model also supports local file storage (in replicated folders) and offline file access. You can leverage a mix of private and public file-sharing systems provided by all the main players in this space.
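As a rough illustration of the replicated-folder model, the sketch below mirrors a local working folder into a folder that a cloud sync client would replicate offsite, keeping files available offline. The paths are assumptions, not any vendor's actual layout; any of the mainstream file-sync services will watch a folder like this.

    import filecmp
    import os
    import shutil

    # Hypothetical paths -- the replicated folder is whatever your sync
    # vendor watches (a Dropbox-style folder, a mapped drive, etc.).
    LOCAL_DIR = "/data/office-files"
    REPLICATED_DIR = "/replicated/office-files"


    def mirror(src, dst):
        """Copy new or changed files from src into the replicated folder."""
        for root, _dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            target_dir = os.path.join(dst, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                src_file = os.path.join(root, name)
                dst_file = os.path.join(target_dir, name)
                # Skip files that are already identical in the replica.
                if os.path.exists(dst_file) and filecmp.cmp(src_file, dst_file, shallow=True):
                    continue
                shutil.copy2(src_file, dst_file)


    if __name__ == "__main__":
        mirror(LOCAL_DIR, REPLICATED_DIR)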

Specific internal applications that track clients, inventory, or some other vital internal process may well be the lifeblood of any business. However, this does not mean the app must live solely on in-house servers. Devising a system where the app and its corresponding data live in standby on a cloud-based server, to be accessed only when the primary architecture is unavailable, is not a difficult task. The advent of VMware and its ability to virtualize most common server configurations makes this standby server a reality. And it's priced competitively.
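A rough sketch of how clients might reach that standby: check the in-house endpoint first and fall back to the cloud-hosted copy only when it is unreachable. The URLs are hypothetical, and in practice the cutover is more often handled by DNS or a load balancer than by application code.

    import urllib.error
    import urllib.request

    # Hypothetical endpoints -- the primary lives in-house, the standby is a
    # cloud-hosted VM kept in sync and used only when the primary is down.
    PRIMARY_APP = "https://crm.internal.example.com/api/health"
    STANDBY_APP = "https://crm-standby.cloud.example.com/api/health"


    def reachable(url, timeout=5):
        """Return True if the endpoint answers an HTTP request."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 300
        except (urllib.error.URLError, OSError):
            return False


    def active_endpoint():
        """Point clients at the in-house app, or the cloud standby if it is down."""
        if reachable(PRIMARY_APP):
            return PRIMARY_APP
        return STANDBY_APP


    if __name__ == "__main__":
        print("Routing traffic to", active_endpoint())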

In my business, I deal with a number of cloud-based providers: asset-tracking systems, accounting services, document management systems, and others. These services are essential to my client base. Over the last few weeks, I have been shocked at how many of these players could not execute even the simplest of tasks when gale winds blew through a huge swath of the East Coast.

Leveraging cloud-based technology, even if only on an as-needed basis, is truly the next best step for companies of any size. Any employee empowered with a smartphone should be able to fulfill the minimum requirements of their job function, regardless of the mayhem around them. Hoarding components and restraining the tools of new economic commerce within an aged LAN/WAN architecture is bad for business, bad for IT, and bad for the economy.
