The era of the “always-on” business has emerged, thanks to improved connectivity and devices that make internet access easy and affordable, writes Rick Vanover, senior product strategy manager at Veeam.
Our agile mobile workforce and demanding consumer expectations have created a world where constant access to products and services across time zones is a norm that we take for granted.
Consequently, organisations now face the daunting task of recovering any IT service or application within minutes – creating what is collectively known as “availability”. To achieve greater speed and efficient use of existing resources, many organisations have switched to modern data centres that are built on virtualisation, modern storage solutions and cloud-based services.
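What “recovery within minutes” implies becomes concrete when availability targets are translated into permitted downtime. The sketch below is illustrative and not drawn from the article; the percentage targets are common industry shorthand (“two nines”, “three nines”), not Veeam figures.

```python
# Illustrative only: convert an availability target into the maximum
# downtime it allows per year. Percentages are generic industry targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% availability -> {allowed_downtime_minutes(target):.1f} min/year")
```

Even a seemingly strong 99% target still permits roughly 5,256 minutes (about 3.7 days) of downtime a year, which is why always-on organisations chase the extra nines.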
Yet, according to Veeam’s fourth annual Data Centre Availability Report 2014, organisations still experience unplanned application downtime. Such downtime occurs more than once a month on average, costing organisations between $1.4-million and $2.3-million annually in lost revenue – not to mention decreased productivity and missed opportunities.
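A back-of-the-envelope model shows how incident frequency and per-hour revenue loss compound into figures of that magnitude. The incident count, duration and hourly rate below are assumptions chosen for illustration; only the resulting range echoes the report’s cited annual cost.

```python
# Back-of-the-envelope downtime cost model. All inputs are assumptions,
# not figures from the Veeam report (beyond the annual range it cites).
def annual_downtime_cost(incidents_per_year: int,
                         hours_per_incident: float,
                         revenue_loss_per_hour: float) -> float:
    """Estimated yearly revenue lost to unplanned downtime."""
    return incidents_per_year * hours_per_incident * revenue_loss_per_hour

# Example: 13 incidents a year (just over once a month), ~2 hours each,
# at an assumed $70,000/hour of lost revenue.
cost = annual_downtime_cost(13, 2.0, 70_000)
print(f"${cost:,.0f}")  # $1,820,000 -- inside the $1.4m-$2.3m range cited
```

The point of the model is sensitivity: halving either incident frequency or recovery time halves the annual loss, which is the economic case for availability tooling.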
Regardless of whether it’s a consumer-facing organisation, a mobile service provider or a FTSE 100 company, the days when organisations could experience downtime while customers remained patient are over. Organisations now risk losing revenue, customer confidence and, worse, their reputation.
With availability now mission-critical for smooth business operations, how do organisations ensure that they can remain always-on while adopting the latest developments in cloud computing, mobility and the Internet of Things, as well as managing big data and software-defined networking?
Cloud computing and mobility
In many countries across the world, the explosion in mobility has become the main driver of cloud’s evolution. Along with the pervasiveness of mobile devices comes the question of how to manage that data profile and provide data availability.
To begin with, mobile’s data growth is explosive. How do we decide on what is valuable information, and what is noise? Should we keep everything? Should we categorise everything? Everything that your mobile device sends to the service provider – do they keep that forever? Do they need to? If yes, is it because of regulations, or because of compliance? The fact is that organisations are having to make those decisions now because the volume of data is challenging the underlying structures of cloud on a scale that’s never been seen before, and it’s only set to escalate.
Increasing mobility and rising consumer demand have driven cloud adoption, since cloud now underpins all the data that organisations mine and utilise. As app and cloud providers struggle to keep pace, cloud has already started to evolve beyond availability and into the Internet of Things (IoT), with all the ensuing data points that can be captured. The next phase of data management will stem from the data profile of IoT wearables and other smart devices, while IoT itself simply becomes ‘things’ against the backdrop of the cloud.
How, then, do we provide availability for all that data, given that any downtime is now unacceptable? Mobility doesn’t just affect consumers – it affects organisations and entire industries, such as healthcare, that rely on mobile services and the data available through them. The business cycle now has a global scope, and it’s no longer five days a week, eight hours a day.
Internet of Things and big data
Together, IoT and big data represent a massive data lifecycle that, over time, becomes a self-running engine. The huge amount of data created by IoT is fed into big data platforms to be analysed and warehoused, and the resulting insights are eventually fed back into IoT.
To sustain this data lifecycle, a modern data centre needs to be built. As the modern data centre typically leverages core technologies including virtualisation, storage and cloud, the next crucial consideration is guaranteeing the availability of these data centres to deliver always-on services.
The applications are still important
While all of these shifts in the data centre are happening at the same time, one thing hasn’t changed: the application is still what matters most. In fact, the data centre is nothing without the applications it provides. Today’s availability requirements extend to the applications themselves, partly because of mobility and constant access, but also because of how businesses truly run today.
Gone are the days when key business decision-makers didn’t need to consult their key systems to make strategic decisions. These decisions are all powered by applications in the data centre. But what happens when something goes awry with those applications? The challenge facing data centre professionals today is to ensure that the applications are available, not just the infrastructure. Businesses want more still: they want to avoid issues before they happen. That’s a tall order, but it is one of the benefits that data centre availability can bring.
With technology outages now making front-page news, minimising downtime and data loss is critical to the overall health of organisations. Data and services will evolve both on premises and in the cloud, and organisations have to think about how to better protect their data on both fronts. Tool selection will become critical as organisations attempt to bridge the availability gap. This is the gap between being always-on and the cost and complexity required to be so.