There isn’t enough computing capability today to process the amount of data being created and stored.

A new study from International Data Corporation (IDC) finds that the processing and transformation required to turn data into useful and valuable insights for today's organizations, and for a new class of workloads, must scale faster than Moore's Law ever predicted.

To address this gap, the computing industry is taking a new path, leveraging alternative computing architectures such as DSPs, GPUs, and FPGAs to accelerate and offload computing tasks and limit the tax on the general-purpose architecture in the system.

These other architectures have been key to enabling artificial intelligence, including the growing use of deep learning models. At the edge, DSPs, FPGAs, and optimized architecture blocks in SoCs have proven more suitable for initial inference applications in robotics, drones, wearables, and other consumer devices such as voice-assisted speakers.

The IDC study, How Much Compute Is in the World and What It Can/Can’t Do, is part of IDC’s emerging Global DataSphere program, which sizes and forecasts data creation, capture, and replication across 70 categories of content-creating things — including IoT devices.

The data is then categorized by the types of data being created to understand trends in data usage, consumption, and storage.

The study builds on more than twenty years of extensive embedded and computing research at IDC, including an embedded market model covering about 300 system markets and the key underlying technologies that enable the value of a system.

The study analyzes the shift in the computing paradigm as artificial intelligence (AI) moves from the datacenter to the edge and endpoint, expanding the choices of computing architectures for each system market as features and optimizations are mapped closer to workloads.

For decades, advances in process technology and silicon design, guided by the industry's dedication to Moore's Law, delivered predictable gains in microprocessor performance and in transistor functionality and integration in systems on chips (SoCs). These advances have been instrumental in establishing the cadence of growth and scale of client computing, smartphones, and cloud infrastructure.

Microprocessors have been at the very core of computing, and today Intel, AMD, and ARM are the bellwethers for the cadence of computing. However, the story does not end there; we are at the beginning of a large market force as AI becomes more ubiquitous across a broad base of industries and drives intelligence and inferencing to the edge.

“AI technology will continue to play a critical role in redefining how computing must be implemented in order to meet the growing diversity of devices and applications,” says Mario Morales, program vice president for Enabling Technologies and Semiconductors at IDC. “Vendors are at the start of their business transformation and what they need from their partners is no longer just products and technology.

“To address the IoT and endpoint opportunity, performance must always find a balance with power and efficiency. Moving forward, vendors and users will require roadmaps and not just chips. This is a fundamental change for technology suppliers in the computing market and only those who adapt will remain relevant.”