Software has become essential to business. Virtually every company, large or small, relies on software to run its business processes, support executive decision-making and, increasingly, interact with customers. Business success now often depends on a company's ability to develop, either in-house or via a third party, the software needed to open new markets and sharpen its competitive advantage.
Inferior software quality is responsible for the loss of $150-billion per year in the United States, and a further $500-billion in the rest of the world, according to IBM research.
The key to improving software quality is better testing, but traditional testing approaches are proving inadequate. Testing costs are rising as wages in outsourcing markets such as India increase steadily, while IT systems themselves are becoming highly complex, with software now forming part of a larger ecosystem.
These ecosystems are characterised by multiple layers of technology, vendor platforms and stakeholders, and complex transactions and dependencies.
In addition, testing teams can no longer keep up with new development methodologies such as agile.
“Traditional testing takes place too late in the process; a ‘big bang’ approach that identifies multiple errors that often require the development team to go right back to the beginning of the process,” says Ziaan Hattingh, MD of IndigoCube. “The expense of correcting errors at this stage can be immense, not to mention the danger of missing project deadlines.”
In response, testing professionals began to test earlier in the process, moving from the traditional “waterfall” methodology to an iterative one. But as environments became more complex and interdependent, this approach lost value, because software must be tested within the context of the entire ecosystem, not in isolation.
Building a dedicated test lab for each software project is not an option either, given the licensing and hardware costs, as well as the time and resources required.
“Virtualisation of the service environment offers an elegant way to overcome these challenges,” Hattingh argues.
“In this way, integration, user interface and performance testing can all be automated to be performed continuously throughout the development process. Project teams can collaborate and share assets. Best of all, the quality of the finished product becomes predictable, while minimising redevelopment costs and business risk.”
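To make the idea concrete, the sketch below is a minimal, hypothetical illustration of service virtualisation in Python, not IndigoCube's or IBM's implementation. A test suite stands up a lightweight stand-in for a third-party payments service (the VirtualPaymentsService, get_balance and /balance endpoint names are invented for illustration), so an integration test can run continuously on every build instead of waiting for access to a shared, licensed back-end.

import json
import threading
import unittest
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualPaymentsService(BaseHTTPRequestHandler):
    # Hypothetical virtualised stand-in for a third-party payments
    # service: it emulates the service's contract, so tests need never
    # touch the real system.
    def do_GET(self):
        if self.path == "/balance/42":
            body = json.dumps({"account": 42, "balance": 100.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

def get_balance(base_url, account_id):
    # The code under test: a thin client that would normally call the
    # real payments system in production.
    with urllib.request.urlopen(f"{base_url}/balance/{account_id}") as resp:
        return json.loads(resp.read())["balance"]

class IntegrationTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Spin up the virtual service on an ephemeral port; a build
        # pipeline could run this suite on every commit rather than
        # once at the end of the project.
        cls.server = HTTPServer(("127.0.0.1", 0), VirtualPaymentsService)
        threading.Thread(target=cls.server.serve_forever, daemon=True).start()
        cls.base_url = f"http://127.0.0.1:{cls.server.server_port}"

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()

    def test_balance_lookup(self):
        self.assertEqual(get_balance(self.base_url, 42), 100.0)

if __name__ == "__main__":
    unittest.main()

Because the stand-in binds to an ephemeral port and needs no licensed hardware or software, many project teams can run suites like this in parallel, which is the cost argument Hattingh makes against per-project test labs.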
Hattingh's views were echoed at a recent seminar hosted by IndigoCube, which also featured a presentation from Stuart Walker, IBM Rational Quality Management Specialist. Walker highlighted the need for companies to respond swiftly to market trends while managing the risk associated with the growing number of regulations with which they must comply.