Global data is growing at an incredible rate: 90% of the world’s data was created in the last two years alone, says Bryan Balfe, enterprise account manager at CommVault.

Given the explosion of low-cost storage and mobile devices, there is every expectation that the global data footprint will continue to grow. The cloud is becoming an increasingly viable option for enterprises to store this data – companies of all sizes and markets are rapidly adopting cloud infrastructures for improved cost, speed and agility. It is particularly in the cloud market that Managed Service Providers (MSPs) are increasingly relied upon to bridge the gap between IT vendors and enterprises grappling with data that is growing in both volume and complexity.

In the emerging cloud sector especially, MSPs are trusted by both vendors and enterprises, revolutionising the industry by delivering true business value and tailored data management infrastructure. With firm policies focused on standardisation, cataloguing, automation and budgets, MSPs repeatedly enable enterprises to increase efficiency, reduce management time and cut costs significantly.

Internal service level cataloguing
Most companies have little knowledge of the data they generate and store. To err on the side of caution, many therefore retain unnecessary data, saddling the organisation with redundant long-term storage costs. This is where internal IT departments can learn from MSPs. MSPs prioritise and define Service Level Agreements (SLAs) at the very start of any engagement. As a matter of standard practice, most MSPs will categorise and catalogue data based on what should be retained, and for how long, before it is deleted. This cataloguing is based on both external (e.g. legal compliance) and internal (e.g. data strategy) requirements. For example, financial services organisations that deal with personal information are required by law to keep data securely, sometimes for indefinite periods of time. Likewise, business processes unique to the company might require employee information to be archived, or sales and marketing information stored for future use in data analytics.
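As an illustration, a catalogue of this kind can be expressed as a simple set of retention rules. The following sketch, in Python, is a minimal example; the categories, retention periods and drivers are invented for illustration and do not represent any particular MSP’s policy or legal advice:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RetentionRule:
        """One entry in an internal service level catalogue."""
        category: str                  # what kind of data the rule covers
        retention_days: Optional[int]  # None = keep indefinitely
        driver: str                    # external (legal) or internal (strategy)

    # Hypothetical catalogue entries for the examples mentioned above.
    CATALOGUE = [
        RetentionRule("personal_financial_data", None,
                      "external: statutory requirement, kept indefinitely"),
        RetentionRule("employee_records", 10 * 365,
                      "internal: HR archiving policy"),
        RetentionRule("sales_and_marketing_data", 2 * 365,
                      "internal: retained for future analytics use"),
    ]

The point of such a catalogue is that every dataset maps to an explicit rule, so nothing is retained ‘just in case’.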

MSPs categorise data in a hierarchical system, ensuring that data is tiered and stored on the most cost-efficient and effective platform for pre-established lengths of time. A simplified example is the ‘Gold, Silver, Bronze’ cataloguing of datasets, where ‘Bronze’ processes are applied to less critical data sets and ‘Gold’ processes to more sensitive data. This cataloguing must take into account both the cost of actually storing the data and its accessibility: enterprises need to assess how frequently each type of stored data will be accessed. A recent IDC study reports that in ASEAN, 84 percent of organisations surveyed described access to data as ‘critical’ for their business. A robust cataloguing and archiving system should therefore weigh multiple variables, such as file type, size, ownership, and last-accessed and creation dates.
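A sketch of how those variables might feed a tiering decision follows; the thresholds, owner groups and file extensions are assumptions chosen for the example rather than an established MSP standard:

    import time

    SENSITIVE_OWNERS = {"finance", "legal"}  # assumed sensitive divisions
    SENSITIVE_TYPES = (".db", ".ledger")     # assumed sensitive file types
    DAY = 86400  # seconds

    def classify_tier(path, owner, size_bytes, last_accessed):
        """Assign a dataset to a Gold/Silver/Bronze storage tier using
        catalogue variables: ownership, file type, size and access age."""
        if owner in SENSITIVE_OWNERS or path.endswith(SENSITIVE_TYPES):
            return "gold"    # most sensitive data: most protective processes
        idle_days = (time.time() - last_accessed) / DAY
        if idle_days < 90 or size_bytes < 10**6:
            return "silver"  # recently used or small: keep readily accessible
        return "bronze"      # cold, bulky data: cheapest tier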

Regular cost and data analysis
The cost of storing increasingly large data volumes is becoming unsustainable – so much so that a recent IDC report reveals that 87 percent of companies are keen to reduce storage costs. Yet many organisations lack the commitment to reduce costs effectively, whereas many MSPs promise to deliver cost reduction even at the proof-of-concept stage. The anticipation of cost savings therefore becomes a driving force in any partnership between an enterprise and an MSP, and cost the primary reason for implementation. This motivation, combined with MSPs’ typically eagle-eyed focus on budgets, enables them to deliver massive savings: it is common for MSP partners to help companies slash costs by as much as half, doubling staff productivity and accelerating time-to-revenue in the process.

An enterprise’s IT department must redefine its role within the business as a profitable IT bureau to the various divisions, which raises the controversial concept of departmental billing within an enterprise. This encourages better visibility of the cost and utilisation of storage space by each business division or function, while making departments accountable for their consumption of data services. Additionally, good MSPs recognise that the value of data is subject to change: the ‘gold’ category of data today may drop in value depending on factors such as its creation date.
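For illustration, a departmental billing calculation can be as simple as the sketch below; the per-gigabyte rates and department names are hypothetical:

    # Assumed monthly cost per gigabyte for each storage tier.
    RATE_PER_GB = {"gold": 0.25, "silver": 0.10, "bronze": 0.02}

    def departmental_bill(usage):
        """Charge each division for its tiered storage consumption.
        `usage` maps department -> {tier: gigabytes consumed}."""
        return {
            dept: sum(RATE_PER_GB[tier] * gb for tier, gb in tiers.items())
            for dept, tiers in usage.items()
        }

    print(departmental_bill({
        "marketing": {"silver": 500, "bronze": 4000},  # -> 130.0
        "finance": {"gold": 200, "bronze": 1000},      # -> 70.0
    }))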

It is therefore essential that data be evaluated regularly to ensure that its storage tier still matches its value. Expensive disk space might once have been warranted for data that had to be stored for compliance purposes, but the same tier may no longer be justifiable once the legally dictated timeframe has expired. Crucially, limited IT budgets do not support business needs adequately; reducing data management costs with a more efficient data infrastructure will free up budgets, enabling companies to pursue opportunities that can power business growth.
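A minimal sketch of that periodic re-evaluation might look like this; the seven-year mandate and the gold-to-bronze demotion rule are assumptions made for the example:

    from datetime import datetime, timedelta

    def reevaluate_tier(current_tier, created, retention, now=None):
        """Demote data to a cheaper tier once its mandated
        retention period has expired."""
        now = now or datetime.now()
        if current_tier == "gold" and now - created > retention:
            return "bronze"  # compliance window over: expensive disk no longer justified
        return current_tier

    # Example: a record created in 2008 under an assumed seven-year mandate.
    print(reevaluate_tier("gold",
                          created=datetime(2008, 1, 1),
                          retention=timedelta(days=7 * 365)))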

Automation is key
Data management is a complicated business, and it is becoming increasingly so. When making storage decisions around corporate data, organisations require a degree of foresight about what they expect from their data environments. Data growth, for example, is unavoidable. An IDC survey reveals that Singapore-based enterprises are actually twice as likely as the average APAC organisation to anticipate year-on-year data growth of more than 51 percent. This leads ever more enterprises to leverage the public cloud’s ‘pay as you grow’ models, preventing budgets from being locked up in under-utilised hardware.

In addition to being scalable, data storage platforms must offer some degree of flexibility. In the retail industry, for example, data levels fluctuate: at certain times of the month or year, retailers’ data levels rise significantly, only to fall drastically and remain low until the next ‘peak’ period. In such cases, if an enterprise is to run an efficient data management system, the cloud model it opts for must offer flexibility as well as scalability.

Data trends, particularly Bring Your Own Device (BYOD), are also something that companies need to bear in mind. Companies must consider the additional processes and resources required to support endpoint data back-up and to control access from remote devices such as smartphones. The complexity of data environments means it makes more sense to automate data management processes based on the pre-determined policies set through the service level cataloguing. Reliably automating data policies not only ensures that the policy is seamlessly adhered to, it also reduces management costs by simplifying data management operations. Automating the archiving of data, both structured and unstructured, according to internal cataloguing processes results in massive cost reduction and improved IT department efficiency.
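As a sketch of what such automation could look like, the scheduled job below moves files that have not been accessed within a policy-defined window to an archive tier; the paths and the one-year idle threshold are illustrative assumptions:

    import os
    import shutil
    import time

    ARCHIVE_ROOT = "/mnt/archive"  # hypothetical cheaper archive tier
    DAY = 86400  # seconds

    def archive_stale_files(root, max_idle_days=365):
        """Move files untouched for `max_idle_days` to the archive tier,
        enforcing the catalogue policy without manual intervention."""
        cutoff = time.time() - max_idle_days * DAY
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                src = os.path.join(dirpath, name)
                if os.path.getatime(src) < cutoff:
                    dest = os.path.join(ARCHIVE_ROOT,
                                        os.path.relpath(src, root))
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    shutil.move(src, dest)  # relocate to cheaper storage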

Companies put themselves at a disadvantage when they neglect to run their own IT departments to the same high standards that they expect from a third-party service provider. Generally, there is not enough standardisation in IT departments – particularly in large enterprises. Unless an organisation has full visibility of its data, it is impossible to make informed and effective data-related decisions on cataloguing, cost analysis and automation. By learning from MSPs’ engagement and delivery model, enterprises will be better equipped to ensure that their data is stored both effectively and efficiently.