Junk data is like a disease eating away at the heart of the modern data centre, says Veeam Southern Africa region manager Warren Olivier: “It’s an invisible tumour chewing up resources, and can cause everything from poor performance to sudden failure of whole storage arrays. Keeping junk data under tight control is critical to maintaining the high availability required by the always-on business.”
Olivier says junk data accumulates naturally in the course of normal operations. “Software installation files are one common culprit – typically you’ll have the same 5 gigabyte (GB) file duplicated in many different places, taking up space without performing any useful function. Then there are virtual machines that are invisible because they’ve been removed from the inventory but not deleted, and orphaned system snapshots. This kind of garbage can easily eat up terabytes (TB) of expensive storage.”
Failure to delete junk data is a feature of “just in case” thinking, adds Olivier. “I’ve walked into plenty of data centres where there’s a legacy network cable nobody quite knows the purpose of, but everyone is too scared to unplug just in case it’s important. Hanging onto useless data is part of the same impulse – it’s a legacy of the days when data protection and availability solutions were a lot less sophisticated, and restoring lost data was a cumbersome, time-consuming and difficult process.”
Olivier says the disconnect between the needs of the modern data centre – such as the ability to deliver 24/7 user access to critical data and applications – and what traditional backup solutions can deliver creates an ‘availability gap’.
“Companies often find that they can’t reap all the rewards of a big investment in virtualisation because legacy data protection solutions are too slow, and high availability solutions are too expensive. They face data loss and long recovery times, which means they cannot meet the increasing demands of the always-on business. In Veeam’s most recent global Availability Survey, 82% of CIOs said they cannot meet their business’s needs and more than 90% are under pressure to recover data faster and back up more often.”
“Fortunately, there are alternatives that can reduce recovery time and recovery point objectives to 15 minutes.”
Tools developed especially for the modern data centre can help overcome the anxiety, says Olivier. “High-speed recovery means you can restore whatever you want, the way you want it. Back up your environment, delete the junk data – and if anybody complains it can be restored in seconds.”
Verified protection adds another layer of comfort, says Olivier. “It’s no use having a backup if you can’t actually restore from it when the crunch comes – so every company should verify every backup to guarantee recovery of every file, application or virtual machine, every time.”
“If you’re trying to run a modern data centre with high availability on a tight budget, just-in-case thinking is holding you back,” concludes Olivier. “Junk data wastes scarce and expensive resources, compromises system stability and can seriously undermine your business case for virtualisation. It’s time to move on to a new understanding of availability.”