In an age where big data is the mantra and terabytes quickly become petabytes, surging data volumes are driving up the complexity and cost of data management.
But it doesn’t have to be like that, says Harold van Graan, sales director of Actifio South Africa. By introducing data virtualisation, companies can avoid multiple data copies at the core while improving cross-company access to production data.
At the current rate, by 2016 the world will be producing more digital information than it can store, says van Graan. Actifio predicts a “capacity gap” of over six zettabytes by 2020 – nearly double the quantity of all the data produced in 2013.
“The problem of overwhelming data quantity exists because of the proliferation of multiple physical data copies. If companies can take charge of their copy data, they can improve efficiency, reduce cost and simplify data management. This allows for more effective use in analytics and eases compliance reviews,” he points out.
Businesses have a challenge on their hands: “They require copies for data protection, application development, regulatory compliance, business analytics and disaster recovery,” van Graan states. According to IDC, companies maintain anywhere from 10 to 120 duplicate physical copies of their data. Analysts report that 60% of corporate disk storage holds duplicates rather than primary data, and Gartner has found that storage and data management account for 20% of total IT spend.
“Physical data copies are increasing five times faster than actual primary data; little wonder IDC values the copy data market at $44-billion,” van Graan adds. “Companies are spending five times more on infrastructure managing copies than they are on the original data. Hardware, software, space, energy, people and time – it’s unsustainable and undesirable.”
However, it is easily fixed. “Data bloat can be solved through an approach that maintains a single ‘golden master’ of the data, from which virtual copies can be provisioned for every requirement – testing, data protection, replication and backups,” says van Graan.
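The mechanics behind the “golden master” pattern can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (the class names and block layout are inventions for this example, not Actifio’s product): each virtual copy records only the blocks it changes, while unchanged reads fall through to the single shared master.

```python
# Minimal copy-on-write sketch of the "golden master" idea (illustrative only;
# not Actifio's implementation). One physical master is kept; each virtual copy
# stores only the blocks it overwrites, so many copies cost far less than many masters.

class GoldenMaster:
    def __init__(self, blocks):
        self.blocks = blocks  # the single physical copy, held as a list of data blocks

class VirtualCopy:
    def __init__(self, master):
        self.master = master
        self.delta = {}  # block index -> privately overwritten block (copy-on-write)

    def read(self, i):
        # Reads fall through to the shared master unless this copy has changed the block
        return self.delta.get(i, self.master.blocks[i])

    def write(self, i, data):
        # Writes never touch the master; they land in this copy's private delta
        self.delta[i] = data

master = GoldenMaster(["b0", "b1", "b2", "b3"])
test_env = VirtualCopy(master)  # "provisioned" instantly: no data is moved
test_env.write(2, "patched")
print(test_env.read(2))  # patched (from this copy's delta)
print(test_env.read(0))  # b0 (from the shared golden master)
```

Because a new virtual copy starts as an empty delta, provisioning one is effectively instant, regardless of how large the underlying data set is.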
Virtualisation is familiar as a method of optimising server and storage resources. Applied to data management, it delivers similar performance and capacity-optimisation gains. “In practice, copy data virtualisation reduces storage costs by 80%. At the same time, it makes virtual copies of ‘production quality’ data available immediately to everyone in the business who needs it,” he adds.
That includes regulators, product designers, test and development teams, back-up administrators, finance departments, data-analytics teams, marketing and sales departments. In fact, says van Graan, any approved department or individual who might need to work with company data can access and use a full, virtualised data set. “That’s powerful, because it allows for rapid prototyping, testing and innovation using a complete data set rather than a subset or sample. This is what true agility means for developers and innovators.”
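A back-of-the-envelope calculation shows where savings of the order van Graan cites can come from. The figures below – ten full copies of a 1TB data set, with each virtual copy changing about 5% of its blocks – are assumptions chosen for illustration, not Actifio or IDC numbers:

```python
# Assumed figures for illustration only: 10 full physical copies of a 1 TB data
# set versus one golden master plus virtual copies that each change ~5% of it.
primary_tb = 1.0    # size of the primary ("golden master") data set
copies = 10         # number of copies the business needs
change_rate = 0.05  # fraction of blocks each virtual copy overwrites (assumption)

physical = copies * primary_tb                                # 10.0 TB duplicated
virtualised = primary_tb + copies * change_rate * primary_tb  # 1.5 TB: master + deltas

saving = 1 - virtualised / physical
print(f"physical: {physical:.1f} TB, virtualised: {virtualised:.1f} TB, saving: {saving:.0%}")
# physical: 10.0 TB, virtualised: 1.5 TB, saving: 85%
```

Under these assumptions the saving works out to 85%, in the same region as the 80% figure quoted above; the real number depends on how many copies a business keeps and how far each one diverges from the master.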
Moreover, network strain is reduced. IT staff – traditionally dedicated to managing the data – can be refocused on more meaningful tasks that grow the business. Data management licence costs fall, since back-up agents, de-duplication software and WAN (wide area network) optimisation tools are no longer required. “By eliminating copy data and working off a ‘golden master’, storage capacity is reduced – and along with it, all the attendant management and infrastructure overheads. The net result is a more streamlined organisation driving innovation and improved competitiveness for the business,” he concludes.