Mobile phones are capable of running increasingly graphics-intensive games, but users pay the price in sharply reduced battery life.
In the last decade, mobile gaming has grown into a huge industry. According to Newzoo, the global mobile games market will reach $46.1 billion in 2017, a 19.4% increase from the year before.

Players can enjoy amazing gaming experiences on mobile devices thanks to the increasingly powerful processing capability of modern mobile GPUs. However, those experiences come at a big cost: power consumption.

The power consumption of mobile GPUs linearly increases with the amount of graphics computation. As a result, high-end mobile games with rich graphics content are extremely power hungry and drain batteries very quickly.

To solve this problem, researchers from Microsoft Research Asia (MSRA) and Korea Advanced Institute of Science & Technology (KAIST) have developed a new system, called Raven, to reduce the power consumption of mobile games without compromising user experience.

Raven is based on a key observation about mobile games: many consecutively rendered frames are either perceptually identical or very similar. The differences between those frames are too small for players to perceive.

However, mobile games always render frames at a high rate of 60 frames per second (FPS), no matter how similar consecutive frames are. According to the researchers' measurement study, such perceptually redundant frames can make up more than 50% of all frames in many games.

Clearly, eliminating the rendering of those perceptually redundant frames could significantly reduce power consumption.

Raven is a novel system that leverages human visual perception to scale the frame-rendering rate. To accomplish this, Raven introduces perception-aware scaling (PAS) of frame-rendering rates, an energy-saving technique that lowers a game's frame-rendering rate whenever the upcoming frames are predicted to be perceptually similar enough to the current one.

Raven works by setting up a side channel that tracks the rendered frame sequences, tailoring frame rendering to the user's perception of graphics changes during gameplay. In this way, Raven opportunistically reduces GPU power consumption.

The Raven system consists of three major components which collectively scale the rate of game-frame rendering: Frame Difference Tracker (F-Tracker), Rate Regulator (R-Regulator), and Rate Injector (R-Injector).

The system works in a pipelined fashion.

First, F-Tracker measures the perceptual similarity between two recently rendered frames. Then, R-Regulator predicts how similar the current frame and the next frame(s) will be, based on how similar the current frame is to the previous frame(s). If the next frames are predicted to be similar enough to the current one (as determined by a threshold), R-Injector limits the frame-rendering rate by injecting a delay into the rendering loop, skipping the graphics processing for the unnecessary frame(s). Presently, Raven can skip at most three consecutive frames, allowing the frame rate to drop to as low as 15 FPS.
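
A minimal sketch of this pipeline, assuming a simple rendering loop that Raven can delay, might look as follows. The function names, the naive prediction rule, and the similarity threshold are illustrative assumptions, not details from the paper:

```python
import time

TARGET_FPS = 60
BASE_FRAME_TIME = 1.0 / TARGET_FPS
SIMILARITY_THRESHOLD = 0.95   # assumed cutoff for "perceptually similar enough"
MAX_SKIPPED_FRAMES = 3        # per the article: at most three frames skipped (down to 15 FPS)


def regulate_rate(similarity_history):
    """R-Regulator: predict how similar the upcoming frames will be from the
    similarity of the frames just rendered, and decide how many to skip."""
    if not similarity_history:
        return 0
    predicted = similarity_history[-1]   # naive prediction: reuse the last observed value
    return MAX_SKIPPED_FRAMES if predicted >= SIMILARITY_THRESHOLD else 0


def run(render_frame, capture_small_frame, measure_similarity):
    """Pipelined loop: F-Tracker feeds R-Regulator, whose decision R-Injector
    enforces by stretching the interval between rendered frames."""
    history, previous = [], None
    while True:
        render_frame()                        # the game's normal rendering work
        current = capture_small_frame()       # low-resolution copy used by F-Tracker
        if previous is not None:
            history.append(measure_similarity(previous, current))  # F-Tracker
        previous = current
        frames_to_skip = regulate_rate(history)                    # R-Regulator
        # R-Injector: inject a delay so the redundant frames are never rendered
        time.sleep(BASE_FRAME_TIME * (1 + frames_to_skip))
```

With three frames skipped, the loop runs once every four frame intervals, which corresponds to the 15 FPS floor mentioned above.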

The key challenge Raven addresses is determining frame similarity at a low computational cost. The most direct way to compare two frames is to compute their structural similarity (SSIM) score, but doing so is computationally intensive and therefore consumes a lot of power, particularly for large frames.

Today’s mobile devices, including smartphones, usually have a high display resolution of 1920 × 1080 pixels or greater, which makes computing an SSIM score for every frame infeasible for Raven.
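
For context, the SSIM index compares corresponding local windows $x$ and $y$ of two images using their means, variances and covariance:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$$

where $\mu_x, \mu_y$ are the window means, $\sigma_x^2, \sigma_y^2$ the variances, $\sigma_{xy}$ the covariance, and $C_1, C_2$ small stabilising constants. These statistics must be computed over windows covering the whole frame, so the cost grows with resolution, which is why evaluating SSIM for every frame at 1080p and 60 FPS is prohibitive on a phone.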

To address this challenge, the researchers employed two novel techniques. First, they developed an energy-efficient method to measure perceptual similarity based on the sensitivity of human eyes to colour differences. The method leverages the difference in the luminance component (for example, the Y component in the YUV colour space) between frames. They extensively evaluated the method by comparing it with SSIM under various settings.

The results showed that the luminance-based method efficiently measured perceptual similarity at a low computational cost.
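
As an illustration only (the paper defines the exact metric and thresholds), a luminance-based comparison can be as simple as the mean absolute difference of two frames' Y channels; the function name and normalisation below are assumptions:

```python
import numpy as np

def luminance_similarity(y_a: np.ndarray, y_b: np.ndarray) -> float:
    """Similarity in [0, 1] from the mean absolute difference of two frames'
    Y (luminance) channels, given as uint8 arrays of the same shape."""
    diff = np.abs(y_a.astype(np.int16) - y_b.astype(np.int16))
    return 1.0 - float(diff.mean()) / 255.0

# Example on small 80 x 45 luminance buffers (the virtual-display size
# described in the next paragraph):
bright = np.full((45, 80), 200, dtype=np.uint8)
dark = np.zeros_like(bright)
print(luminance_similarity(bright, bright))  # 1.0  (identical frames)
print(luminance_similarity(bright, dark))    # ~0.22 (very different frames)
```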

Second, the researchers built a virtual display, cloned from the mobile device's main display but with a much lower resolution (for example, 80 × 45 pixels). The system reads the graphical contents of the virtual display for the similarity measurement. Because the resolution of the virtual display is so much smaller, the computational and energy overheads are also much smaller.
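
Using the resolutions quoted above, the saving per comparison is straightforward to quantify:

$$\frac{1920 \times 1080}{80 \times 45} = \frac{2\,073\,600}{3\,600} = 576,$$

so each similarity measurement over the virtual display touches roughly 576 times fewer pixels than a comparison at the native display resolution would.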

These two techniques effectively reduce the energy overhead of Raven.

As a next step, the researchers implemented the Raven system on a Nexus 5X smartphone and, in an 11-person user study, conducted comprehensive experiments using various gaming applications to evaluate Raven's performance. The results showed a 21.8% reduction in energy per game session on average, and up to 34.7% at best, while maintaining a quality user experience.

The paper describing the Raven system, “RAVEN: Perception-aware Optimization of Power Consumption for Mobile Games”, was published and demonstrated at MobiCom 2017. Authors include Chanyou Hwang, Saumay Pushp, Changyoung Koh, Jungpil Yoon, Seungpyo Choi and Junehwa Song from KAIST, and Yunxin Liu from MSRA.