The advent of enterprise-grade cloud solutions, combined with the precipitous drop in costs for volatile memory storage, has propelled in-memory computing to the forefront of many CIOs’ agendas, writes Alan Collins, portfolio manager: cross-industry business solution at T-Systems South Africa.
Techopedia defines in-memory computing as “the storage of information in the main random access memory of dedicated servers, rather than in complicated relational databases operating on comparatively slow disk drives.”
It promises to give decision-makers and strategists near real-time analytics on their operational data – to better understand every facet of an organisation, from its operations and customers to sales performance and employee productivity, and to react immediately to critical events.
Like most burgeoning technology domains, in-memory computing is passing through a phase of ‘inflated expectations’. Vendors and service providers are hyping up the future of Big Data and the Internet of Things, and positioning in-memory computing as the key to capturing the wondrous new opportunities of the digital economy.
But one of the understated advantages of in-memory computing – and certainly one of the most immediate ways it can create value – is its ability to unearth hidden value in one’s existing Enterprise Resource Planning (ERP) systems.
Generally speaking, ERP platforms and data warehouses are strong at storing information but weak at extracting insight from it.
And this is where in-memory computing comes in.
Working hand-in-glove with a conventional ERP system (whether it’s delivered on-premise or in the Cloud), in-memory solutions run queries at lightning speeds. This enables business leaders to clearly see any pain points or bottlenecks in the organisation’s operations, conduct tasks like profitability analysis in near real-time, and uncover new business opportunities.
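To make the idea concrete, a task like profitability analysis boils down to aggregating over records that are already held in memory rather than read from disk. The sketch below is purely illustrative – the product names and figures are invented – but it shows the kind of single-pass, in-memory query that surfaces a loss-making product line instantly:

```python
from collections import defaultdict

# Hypothetical order records, held in main memory rather than on slow disk.
orders = [
    {"product": "widgets", "revenue": 1200.0, "cost": 800.0},
    {"product": "widgets", "revenue": 950.0, "cost": 700.0},
    {"product": "gadgets", "revenue": 2000.0, "cost": 2100.0},
]

def profit_by_product(records):
    """Aggregate profit per product line in one in-memory pass."""
    totals = defaultdict(float)
    for record in records:
        totals[record["product"]] += record["revenue"] - record["cost"]
    return dict(totals)

print(profit_by_product(orders))
# The gadgets line shows a loss – exactly the kind of pain point such a query exposes.
```

At enterprise scale the same pattern runs inside an in-memory database rather than a Python script, but the principle – query the data where it lives, in RAM – is identical.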
The obvious benefits aside, in-memory computing enables organisations to think differently, ask different questions, and discover more opportunities for operational improvement or innovative approaches.
Algorithms can be set that react to conditions in real-time. For example, a logistics provider could use embedded sensors and actuators to track its fleet of vehicles. Real-time information streams can be used for everything from re-routing, to predictive maintenance requirements – improving the efficiency of its operations and minimising maintenance costs.
The use-cases may be complex and sophisticated, or they may be remarkably simple. One safe bet is that any organisation – irrespective of size or industry – can improve its operations in a number of ways by capitalising on the hidden value of its existing data sources.
In-memory computing is the key that unlocks these opportunities.
For B2B organisations operating inside complex value-chains, in-memory computing opens up opportunities to expose certain operational data to suppliers, partners, distributors, or others in the ecosystem.
A supplier could then, for example, predict when one of your stock inventories will be depleted – and automatically push orders to you without needing to be instructed. In this way, in-memory solutions enable more fluid links between organisations.
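The underlying logic is straightforward: compare the days of stock remaining against the supplier's lead time. The sketch below is a simplified assumption-laden illustration (the quantities and lead time are invented, and real replenishment models are far richer), but it captures the trigger:

```python
def days_until_depleted(stock_on_hand, daily_usage):
    """Estimate days of stock remaining from an average consumption rate."""
    if daily_usage <= 0:
        return float("inf")  # no consumption: stock never runs out
    return stock_on_hand / daily_usage

def should_reorder(stock_on_hand, daily_usage, lead_time_days):
    """Trigger replenishment when stock will run out within the supplier's lead time."""
    return days_until_depleted(stock_on_hand, daily_usage) <= lead_time_days

print(should_reorder(stock_on_hand=40, daily_usage=10, lead_time_days=5))  # True
```

With operational data exposed across the value-chain, the supplier can evaluate this kind of rule continuously against the customer's live inventory figures, rather than waiting for a purchase order.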
Having captured masses of historical data in ERP systems and data warehouses, in-memory computing gives the CIO the opportunity to fully exploit the value of that information.
Various studies show that organisations are generally dissatisfied with the actual value they derive from their ERP systems. But by augmenting them with in-memory computing, this sentiment can become a thing of the past. ERP systems can be brought magically to life – fuelling the organisation with new insights that lead to step-changes in efficiency and innovation.