In the current era of multi-cloud computing, organisations have a great opportunity to leverage different cloud services for different applications. By staying flexible across clouds, they can deliver better services to their end users.
However, downtime – whether the result of a planned upgrade or an unplanned disaster – is the bane of many IT departments, especially when moving between different cloud environments. Business resilience is fundamental in 2018. That time is money is not just a cliché in the business world; it is a daunting reality, which means data and applications need to be available and protected continuously – and downtime is doubly costly. Gartner estimates that a single hour of downtime costs a business in the region of $300,000. That is a significant chunk of revenue for any business, and finding a strategy to minimise the impact will be critical: creating an “always-on” business and the best customer experience requires significant IT resilience.
In an era where organisations are constantly striving to innovate, adding new internal and customer-facing services to their existing cloud network, the challenge becomes even more acute. Embracing digital transformation adds layers of complexity to IT infrastructure which need to be protected and made resilient. More data in more places means more systems to keep online at any given time, and for some, the risk of this going wrong has slowed the rate of adoption. At the same time, many organisations still use different IT resilience tools and approaches for different environments – an inefficient and clunky approach, with separate point solutions for backup, data protection and cloud mobility. Indeed, many organisations simply back up the data stored with each cloud provider, rather than replicating the entire environment to create a quick restore point.
Foundations for success
When it comes to circumventing the challenge of downtime, the stakes have never been higher. However, for a business using third-party services from the likes of AWS or IBM Cloud, or from other cloud service providers (CSPs), how can it ensure that service availability is maintained during setup and not impacted when a provider goes down?
The answer is not as simple as “choose reliable providers”, although that is a worthwhile first step. Even AWS, Microsoft Azure and Google Cloud, which have access to the best software and hardware available and copious redundancy, can be susceptible to downtime – all three have experienced high-profile outages since 2015. If these behemoths of the cloud world are still vulnerable, the solution is clearly not a simple one. That is why it is not just a case of backup and recovery, but of building an IT resilience approach with near-real-time protection that can help, no matter what happens.
Monitoring and mitigating
Ultimately, there is no panacea for all the various causes of outages – systems and services go down. Instead, it is about planning your infrastructure and digital transformation in such a way that you become IT and business resilient, which means being able to restore your systems and bring them back online as quickly as possible. The old adage “don’t put all your eggs in one basket” holds as true here as anywhere else: build up your IT resilience by making the most of a multi-cloud approach and continuous protection. In other words, embrace multi-cloud not just in your service offering, but also in your resilience planning. Store backups of data not only in different physical locations, but potentially with other providers too, to ensure you will always have access to it, no matter what. That way, if workloads fail to failover automatically, you have built additional redundancy into your IT systems. Seamless integration between different providers is critical here, allowing businesses not only to move data in the event of a problem, but also to add and scale new services on the go.
This tactic works well for circumventing an unplanned outage, but what about when a CSP needs to perform updates or implement patches on your services? While such maintenance is necessary for the ongoing security and resilience of the cloud, provider downtime sits awkwardly with the consumer expectation of an always-on, always-accessible business.
Temporarily moving to services from another provider is certainly an option, but there may be a simpler way. Some multi-cloud resilience platforms enable CSPs to remotely manage and upgrade customer services while minimising downtime or preventing it entirely. For businesses looking to ensure that any maintenance takes place not just at a convenient time but also with the smallest possible interruption, IT managers should ask their providers whether this is possible and, if not, what tools are needed to make it so.
Delivering an always-on business
A resilient solution that works across a variety of cloud environments and service providers will be essential for supporting organisations as they adopt hybrid and multi-cloud, while still reducing the risk of downtime.
Meeting customer demand for an innovative, always-on business is complicated. Offering new services, and innovating and developing to meet and exceed customer needs, often requires a range of cloud services and providers; it also means proving reliability, and IT resilience is a key part of that. Ensuring that your services are always available, and can be restored easily to a point just seconds before a disruption, no matter what happens, is critical. When your environment is spread across a whole range of clouds, it has never been more important to build a multi-cloud resilience strategy to keep your business online. After all, time really is money.
Peter Godden, VP EMEA, Zerto