In the wake of all that has happened this year, the shift of application workloads to the public cloud has accelerated significantly.
By Greg McDonald, director: systems engineering at Dell Technologies South Africa
According to Flexera’s 2020 State of the Cloud survey, over half of organisations polled expect to significantly increase their public cloud usage because of successes during the Covid-19 crisis.
But getting the full benefit requires more than just shifting workloads. It’s about putting the right workloads in the right environment for optimal performance and efficiency. At a time when many IT teams have not been able to access local data centres, shifting the deployment of application workloads to the public cloud has often been the only option. Less clear going forward, however, is how many of those workloads will stay in the public cloud.
Many IT teams coped with the changing realities of 2020 by simply switching to software-as-a-service (SaaS) platforms, which make it possible to outsource application management. While this may have worked for some workloads, many applications require extensive re-platforming to take advantage of public cloud capabilities, and that takes time to accomplish.
In addition, delivering secure access and managing service levels for a newly remote workforce has been a challenge for IT staff. It’s important to note that the laws of physics and, more importantly, each employee’s home internet connectivity have created new obstacles to overcome.
The world around us works in a certain way, and the laws of physics describe that working: distance and last-mile bandwidth place hard limits on latency. Compounded by the limited visibility IT teams have into SaaS applications, it is often difficult to determine whether a service-level issue lies with the vendor, the network or the user.
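As a rough illustration of that diagnostic gap, the minimal sketch below (the endpoint status.example-saas.com is hypothetical) separates the time spent establishing a connection, which is dominated by network distance and the employee’s last mile, from the time spent waiting for the service to respond:

```python
# Minimal sketch: separate network connect time from full response time.
# The host below is hypothetical; point it at a real SaaS endpoint to use it.
import http.client
import socket
import ssl
import time

HOST = "status.example-saas.com"

# Time the TCP + TLS handshake: dominated by network latency (the "physics").
start = time.perf_counter()
raw_sock = socket.create_connection((HOST, 443), timeout=5)
tls_sock = ssl.create_default_context().wrap_socket(raw_sock, server_hostname=HOST)
connect_ms = (time.perf_counter() - start) * 1000
tls_sock.close()

# Time a full HTTPS request: adds the vendor's server-side processing.
start = time.perf_counter()
conn = http.client.HTTPSConnection(HOST, timeout=10)
conn.request("GET", "/")
conn.getresponse().read()
request_ms = (time.perf_counter() - start) * 1000
conn.close()

print(f"Connect time (network-bound): {connect_ms:.0f} ms")
print(f"Full request (network + vendor): {request_ms:.0f} ms")
# A high connect time from home but not from the office points to the user's
# last mile; a slow request despite a fast connect points to the vendor.
```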
Even after more than a decade of public cloud computing platforms being available, the bulk of application workloads still run in on-premises IT environments.
Research by Enterprise Strategy Group found that the majority (89%) of organisations still view on-premises infrastructure as important. In fact, many of the applications developed in the public cloud wind up being deployed in production in on-premises IT environments.
There are many good reasons why application workloads should be deployed locally – including better performance, security and compliance requirements.
The nature of the applications being developed and deployed is also fundamentally changing. Digital business transformation initiatives are largely based on real-time applications that process and analyse data as close as possible to the point where that data is created and consumed. As a result, more workloads are being deployed in edge locations and environments.
Users have little tolerance for network latency when accessing applications. Data may need to move between various applications, depending on the use case. However, it is always going to be more efficient to bring the code to where the data is than to move massive amounts of data across bandwidth-constrained wide area networks (WANs). The cost of moving large amounts of data can also become prohibitive.
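A quick back-of-the-envelope calculation makes the point; the figures below (dataset size, WAN bandwidth and egress pricing) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sketch: time and cost of moving a large dataset over a
# WAN versus bringing the code to the data. All figures are assumptions.

dataset_tb = 50            # assumed dataset size in terabytes
wan_mbps = 500             # assumed usable WAN bandwidth in megabits per second
egress_cost_per_gb = 0.09  # assumed cloud egress price in USD per gigabyte

dataset_gb = dataset_tb * 1000
dataset_megabits = dataset_gb * 1000 * 8

transfer_hours = dataset_megabits / wan_mbps / 3600
egress_cost = dataset_gb * egress_cost_per_gb

print(f"Moving {dataset_tb} TB at {wan_mbps} Mb/s takes ~{transfer_hours:.0f} hours")
print(f"Egress at ${egress_cost_per_gb}/GB costs ~${egress_cost:,.0f}")

# Output: roughly 222 hours (over nine days) and about $4,500 for one transfer,
# whereas a container image of a few hundred megabytes moves in minutes.
```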
Many developers prefer public cloud platforms because they enable frictionless provisioning of infrastructure resources. The public cloud can also be a good fit for applications still in development, where resource requirements are not yet known.
However, the public cloud is a one-size-fits-all proposition, and one size does not fit every workload. As a result, organisations will always need to run workloads in the environment that best meets the application requirements and business needs – be it data centres, private clouds, public clouds, or at the edge.
The challenge IT operations teams need to rise to is creating a cloud operating model that spans multiple platforms with well-integrated, modern IT infrastructure solutions that scale in ways that work best for them. As that goal is accomplished, it becomes apparent that the future of enterprise computing is, and always will be, hybrid.
This proved true recently for a broadband network provider, which realised that the public cloud could not meet all of their needs. They found they were spending too much time patching and managing legacy IT. Their initial goal was to move 70% to 80% of their workloads to the public cloud – which proved to be more complicated, time-consuming and expensive than they had anticipated.
At the end of the first year, when they had moved only 15% of their workloads, they realised they needed a new strategy. With the Dell Technologies Cloud Platform, they were able to take advantage of automated lifecycle management and streamlined operations. After seeing the value of having workloads and data living and moving across edge, public and private clouds, they quickly scaled their investment.
Fundamentally, physics is the natural science that studies matter, motion and behaviour through space and time as affected by energy and force. Networks and the data within them operate within a similar construct – information is stored, retrieved, transmitted and manipulated.
Data is of most value when it reaches the right place at the right time – and in certain use cases, data that arrives in real time is even more critical. To account for the laws of network physics, we recommend pursuing a hybrid cloud approach that enables speed of scale, management and mobility across a variety of workloads and clouds, while ensuring security and privacy.