
An Image-space Energy-saving Visualization Scheme for OLED Displays

Haidong Chen 1,∗, Ji Wang 2, Weifeng Chen 3, Huamin Qu 4, Wei Chen 1

∗ Email: [email protected]
1 State Key Laboratory of CAD & CG, Zhejiang University, China
2 Department of Computer Science, Virginia Tech, U.S.
3 College of Informatics, Zhejiang University of Finance & Economics, China
4 Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong

Abstract

Current energy-saving color design approaches can be classified into two categories, namely, context-aware dimming and color

remapping. The former darkens individual regions with respect to the user interactions, and the latter replaces the color set with a

new color set that yields lower energy consumption. Both schemes have drawbacks: color dimming tends to cause loss of perceptual

quality, and color remapping is an offline color design process.

This paper introduces a novel saliency-guided color dimming scheme for OLED displays in both the context of 3D visualization

and 2D visualization. The key idea is to eliminate undesired details while enhancing the visually salient features of each frame

on-the-fly by leveraging the color and spatial information. A parallelizable image-space salient region detection algorithm is

introduced to make the entire process GPU-friendly and real-time. We apply our approach to several representative visualization

scenarios and conduct a preliminary user study. Experimental results demonstrate the effectiveness, efficiency, and quality of our

approach.

Keywords: Energy saving visualization, OLED, Image space, Illustrative visualization

1. Introduction

Among the various components that constitute desktop computers, notebooks, and mobile devices, the display has become a major source of energy consumption, accounting for up to 38% to 50% of the total energy [1, 2].

Compared with the conventional liquid crystal display (LCD)

which requires a high-intensity backlight, the emerging OLED

(organic light-emitting diode) display brings a new opportunity

for energy saving. Unlike LCD, the energy consumption of

OLED is directly dependent on the color of pixels illuminated

on the display. Thus the total energy consumption of an OLED display varies drastically with the displayed content.

In the past decade, a large number of schemes have been

proposed to reduce the energy consumption of the display.

Among these techniques, dimming [3, 4] is a traditional and popular energy-saving scheme that reduces the backlight intensity by tracking user interactions [5, 1] or considering the importance of displayed objects [6]. Due to

its simplicity and effectiveness, dimming has been widely used

in LCD-based mobile devices and can be applied to OLED

displays. Essentially, conventional dimming solutions employ a

context-aware scheme, i.e., the color dimming is performed on

the basis of the displayed objects. This would inevitably lead

to perceptual quality loss because the objects in the scene are

individually considered during the dimming process. Instead of reducing the intensity, color remapping techniques [7, 8]

seek to transform the colors into colors that yield lower energy

consumption and maximally preserve the perceptual quality.

Nevertheless, color remapping cannot be applied to scenarios in which the colors carry specific meanings. For example, in geo-visualization applications green is usually employed to represent forest or meadow. In addition, most color remapping schemes need to solve an optimization problem, which is computationally expensive. Therefore, they can only be used as offline color design tools.

Little attention has been paid to energy-saving color design in the visualization community. The pioneering works [9, 10] adopt the color remapping scheme and transform the colors by maximizing the visual expressiveness. These methods

inevitably inherit the limitations of color remapping.

In this paper, we propose a novel saliency-guided dimming

approach that works in image space and is compatible with

color remapping methods. In other words, our method can be used as a post-process of color remapping techniques in some applications. Thus, it yields additional energy reduction if color remapping has already been used to optimize the color set, for both 2D and 3D visualization applications. The preservation of perceptual quality is achieved by enhancing the visually salient regions during the dimming process, which can be formulated as an image enhancement problem. We introduce a novel

parallelizable algorithm for computing the visual saliency of

each frame in real-time. Adaptive color dimming is then

performed, in which regions with high spatial and color

contrast are explicitly highlighted. This is different from the

image compensation scheme [11, 12] that recovers the image

fidelity after the dimming process. A preliminary user study

demonstrates the effectiveness and acceptance of our method.

In most cases, our approach outperforms the brute-force


dimming (uniform dimming) in terms of both the perceptual

quality and the energy consumption.

In summary, this paper presents an image-space color

dimming approach whose main contributions are twofold:

• An adaptive color dimming scheme that simultaneously

achieves energy reduction and minimization of perceptual

quality loss.

• A real-time visual saliency computation algorithm that can

be fully implemented on the GPU.

The rest of this paper is organized as follows: after a short

discussion of the relevant work in Section 2, we elaborate

our approach in Section 3. Extensive experimental results are

presented in Section 4. We also conduct a preliminary user

study as described in Section 5. Finally, we conclude this paper

in Section 6.

2. Background and Related Work

2.1. OLED Display

Nowadays, LCD is still the most popular flat-panel display.

LCD panels do not illuminate themselves and need a high-intensity backlight, which consumes a great amount of power [12]. In contrast, OLED is an emerging display technology in which the display elements emit light themselves, so no external light source is required. For more details, please

refer to [13].

An OLED display has three independent light emitting

components for three color channels of each pixel. Dong et

al. [7] present a generic form of the energy consumption of a

color OLED display with N pixels as:

E = E_0 + \sum_{i=1}^{N} \left( f(R_i) + g(G_i) + h(B_i) \right)    (1)

where f(·), g(·) and h(·) are the energy consumption functions of the red, green and blue channels, respectively. E_0 accounts for the static energy consumption, which is dominated by the driving current of the control chips and can be estimated by measuring the energy consumption of a completely black screen. f(·), g(·) and h(·) are obtained by measuring the energy consumption of each individual channel at different intensity levels. Figure 1

shows the energy consumption model on a µOLED-32028-P1

AMOLED display.
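To make Equation 1 concrete, the sketch below evaluates it for an RGB frame with NumPy. The per-channel functions here are placeholder power curves, not the measured ones of Figure 1; in practice f(·), g(·) and h(·) must be fitted to measurements on the target panel.

import numpy as np

# Placeholder per-channel power curves (Watt per pixel); the real f, g, h must be
# fitted to panel measurements such as those in Figure 1. These constants are made up.
f = lambda x: 2.1e-7 * x ** 2.2   # red channel
g = lambda x: 1.7e-7 * x ** 2.2   # green channel
h = lambda x: 3.4e-7 * x ** 2.2   # blue channel
E0 = 0.05                         # static power of a completely black screen, Watt

def oled_energy(frame):
    """Equation 1: E = E0 + sum_i ( f(R_i) + g(G_i) + h(B_i) ).
    `frame` is an H x W x 3 array with channel intensities in [0, 1]."""
    r, gr, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return E0 + f(r).sum() + g(gr).sum() + h(b).sum()

frame = np.random.rand(240, 320, 3)   # a random 320x240 test frame
print(oled_energy(frame))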

2.2. Energy Reduction

Recently, green computing has attracted much attention for reducing the energy consumption of display devices. Many existing techniques are solely applicable to LCD. Here we focus on approaches for OLED devices.

Device-level scaling Backlight scaling [14, 15, 16] is a device-level technique originally designed for power reduction on LCD displays. Shin et al. [12] extend the concept to OLED and propose a new technique called dynamic voltage scaling (DVS), which can save up to 52.5% energy while keeping nearly the same human-perceived image quality for the Lena image.

Figure 1: The energy consumption in Watt with respect to each color channel. The statistics are measured on a µOLED-32028-P1 AMOLED display module and are used in our experiments.

Context-aware Dimming Existing dimming solutions are

context-aware in the sense that user interactions and behaviors

determine the start and degree of dimming in corresponding

screen regions. For example, Dalton et al. [5] propose to use low-level sensors to track the user's face: if the user is facing away from the display, the display is turned off. Similarly, Moshnyaga et al. [1] use a video camera to track the user's attention. When the user's attention is diverted from the screen, the display is darkened to some extent. The energy consumption can

also be saved by dimming or turning off the selected areas [6].

Typically, the selected areas may include inactive windows,

objects of no interest, and so on. Essentially, dimming-based techniques neglect the issue of perceptual quality loss.

Accordingly, Choi et al. [11] employ a post-processing image

compensation method to recover the screen readability as much

as possible after dimming. Different from these techniques, our method explicitly highlights the visually important features during dimming.

Color Remapping As illustrated in Figure 1, the energy

consumption of each color channel varies in an exponential

form. Color remapping aims to compute a color set used for

visualization that achieves low energy consumption without

sacrificing the perceptual quality. Chuang et al. [9] present

an energy-aware color set for visualization by formulating

an optimization problem of energy under the constraint of

good perceptual distinguishability. Similarly, Wang et al. [10] introduce a multi-objective optimization approach to find the most energy-saving color scheme for sequential data visualization on OLED displays. Dong et al. [7] also treat energy-saving mobile GUI design as an optimization problem and present a learning-based sampling strategy that accelerates the optimization, achieving 90% accuracy with a 1600-fold reduction in the number of samples. Later, this concept is

introduced into the design of web browser for mobile OLED

displays [8]. Unfortunately, color remapping is not feasible for many applications. For example, it is intractable for natural images, 3D renderings/visualizations, or videos, where the colors cannot be significantly adjusted.


2.3. Tone Mapping

As regular display systems have a low dynamic range, compression is usually required to display a high dynamic range image. This process is known as "tone mapping" [17], which reduces the dynamic range while preserving the local contrast. In recent years, a large number of tone mapping techniques have been developed in the literature. These

techniques can be broadly classified into two categories:

global [18, 19, 20, 21] and local [22, 23, 24, 25]. Because the same mapping function is applied to all pixels, most global techniques suffer from contrast loss. Instead,

local methods use a mapping function that varies spatially to

preserve local contrast. In particular, local methods based on

bilateral filtering [23, 26] are most relevant to our approach.

These methods choose to preserve the important features by

compositing the result of bilateral filtering into the low dynamic

range image. Similarly, our approach explicitly enhances the

local contrast within salient regions while reducing energy

consumption. To the best of our knowledge, this paper is the first

effort to introduce the concept of tone mapping for energy

saving.

3. Our Approach

Our approach is motivated by two observations:

• Dimming remains an effective energy saving scheme and

is widely used in most OLED-based devices.

• Lowering the brightness causes negative influences on

perception of the visualization. One prominent solution would be explicit highlighting of visually salient features.

Generally, a set of color-related features, such as luminance, transparency, and orientation, play essential roles in the human visual system [27]. Any discontinuities of these features can be regarded as the boundaries of objects or other perceptually

important information. In other words, high contrast in these

features encodes significant visual saliencies which constitute

the underlying structure of an image. Besides, spatial features are also very important to human perception.

Based on the aforementioned observations, we propose to

take dimming as our basic scheme for energy reduction and use

visually salient features detected in the color and spatial domains to enhance the perceptual quality of the dimmed scene.

A schematic overview of our approach is shown in Figure 2.

The input of our method is a depth buffer and a color buffer

which can be directly obtained from most 3D visualization

scenarios. For 2D visualization, the color buffer is the only

input. Our approach starts by applying bilateral filtering

iteratively to smooth undesired distractive details within the

buffers. Then, the visually salient spatial and color features are

extracted by a separable difference-of-Gaussians (DoG) operator.

Thereafter, we define the saliency map as a combination of

the detected features in color buffer and/or depth buffer, and

employ it to guide the dimming process.

Figure 2: The pipeline of our approach. Key processes are presented in the purple tabs. The depth in the depth buffer D(p) is encoded with the grey value (darker color represents nearer regions). The top-right shows the final result. The stages are: the color buffer I(p) and depth buffer D(p) are abstracted by bilateral filtering, DoG edges M_c(p) and M_d(p) are extracted and combined into the saliency map S(p), which then guides the dimming to produce the energy-saving output I*(p).

3.1. Abstraction with Bilateral Filter

Typically, high contrast regions in an image encode the

boundaries of objects while low contrast regions contain less

important information. Therefore, when the overall brightness

is low (e.g., when uniform dimming is applied), more effort should be devoted to expressing the visually salient regions.

Abstraction is a process that suppresses undesired details

while preserving the salient features. Inspired by the approaches in [27, 28], we employ the well-studied bilateral

filter as an abstraction means for the visualization.

The bilateral filter, introduced in [29], is a non-linear filtering technique that characterizes features while preserving strong, crisp edges. Essentially, it extends the Gaussian filter by additionally weighting the coefficients by relative intensity; that is, spatially close pixels are weighted less if their intensities differ greatly. For a given image I, the conventional bilateral filter is defined as:

B(I)_p = \frac{1}{W_p} \sum_{q \in N(p)} G_{\sigma_s}(\| p - q \|) \, G_{\sigma_r}(| I_p - I_q |) \, I_q    (2)

W_p = \sum_{q \in N(p)} G_{\sigma_s}(\| p - q \|) \, G_{\sigma_r}(| I_p - I_q |)    (3)

where N(p) denotes the neighborhood pixels of p, W_p is the normalization factor, σ_s is the spatial filter radius, and σ_r is the filter radius in the intensity domain. σ_s and σ_r determine the levels of smoothness.

As the bilateral filter is non-separable, a brute-force computation of Equation 2 is quite slow. Fortunately, with the GPU-friendly bilateral grid data structure [23], bilateral filtering can be approximated by performing Gaussian filtering with spatial bandwidth w_s and intensity bandwidth w_r on a 3D grid. Here, the bilateral grid serves as a high-dimensional representation of the 2D image, combining the 2D spatial domain and a 1D intensity domain. Due to memory limitations, in this paper we regularly down-sample it with spatial sampling rate S_s = 16 and intensity sampling rate S_r = 0.065. As a result, a 512 × 512 image only requires a 32 × 32 × 16 grid.
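To illustrate the abstraction step, the following sketch approximates bilateral filtering with a bilateral grid on a single-channel image in the spirit of [23]: splat into a down-sampled (x, y, intensity) grid, blur the grid with a 3D Gaussian, and slice back with trilinear interpolation. It is a simplified NumPy/SciPy illustration rather than our CUDA implementation, and the grid blur bandwidth is a placeholder.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bilateral_grid_filter(img, s_s=16, s_r=0.065, sigma=1.0):
    """Approximate bilateral filtering of a 2D image with values in [0, 1]
    via a bilateral grid (splat, blur, slice)."""
    h, w = img.shape
    grid_shape = (h // s_s + 2, w // s_s + 2, int(1.0 / s_r) + 2)
    data = np.zeros(grid_shape)
    weight = np.zeros(grid_shape)

    ys, xs = np.mgrid[0:h, 0:w]
    gy = np.round(ys / s_s).astype(int)
    gx = np.round(xs / s_s).astype(int)
    gz = np.round(img / s_r).astype(int)
    np.add.at(data, (gy, gx, gz), img)     # splat intensities into the grid
    np.add.at(weight, (gy, gx, gz), 1.0)   # splat homogeneous weights

    data = gaussian_filter(data, sigma)    # Gaussian blur of the 3D grid
    weight = gaussian_filter(weight, sigma)

    coords = np.stack([ys / s_s, xs / s_s, img / s_r])
    num = map_coordinates(data, coords, order=1)    # trilinear slicing
    den = map_coordinates(weight, coords, order=1)
    return num / np.maximum(den, 1e-8)

With S_s = 16 and S_r = 0.065, a 512 × 512 image maps to roughly the 32 × 32 × 16 grid quoted above.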

3.2. Edge-oriented Saliency Map Generation

As a widely used term, visual saliency refers to the concept

that parts of the scene are pre-attentively distinctive and bring

about immediate significant visual arousal [30]. In the literature

of computer vision, there exist a number of saliency models. However, most of them are computationally expensive. Thus, for the sake of efficiency, we regard the edges in the color buffer and the depth buffer of a colored visualization as salient features, because they exhibit high contrast, which can introduce significant visual arousal.

Detecting edges in a buffer has been extensively studied to

enhance perception and cognition [31]. In this paper, DoG [27]

is employed on the depth buffer and the color buffer. Different from many computationally expensive edge detectors, DoG is simple and can be further accelerated with separable Gaussian


kernels. Instead of using a binary mode, we extend the standard

DoG with a simple transformation to get a smoothed edge

map M(p) such that visual artifacts and noise can be avoided.

Finally, our visually salient feature detector is defined as:

M(p) = \begin{cases} 1 & \text{if } G(p) > 0 \\ 1 + \tanh(\lambda G(p)) & \text{otherwise} \end{cases}    (4)

Here, G(p) = (G_{\sigma_1} - G_{\sigma_2}) ⋆ f(p) is a standard DoG with bandwidths σ_1 and σ_2. A bigger difference between σ_1 and σ_2 admits stronger edges. f(p) can be either the depth buffer or the color buffer. λ is a scaling parameter that also determines the sharpness, or the width, of the detected edges. In the examples presented in this paper, we set σ_1 = 1, σ_2 = 3 and λ = 15.

Essentially, our edge-oriented saliency map S(p) = (1 − τ) M_d(p) + τ M_c(p) is defined as a linear combination of the depth edge map M_d and the color edge map M_c, where τ is an interpolation parameter. When τ approaches 0, spatially important features are emphasized more; on the contrary, when τ approaches 1, crucial features in color space are underlined. Unless otherwise stated, τ is set to 0.5 by default in our experiments. For cases where the depth buffer is not available, e.g., in 2D visualization, only M_c is used.
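A minimal sketch of the edge-oriented saliency map follows, assuming single-channel color and depth buffers with values in [0, 1]; it applies Equation 4 with the default parameters and the linear combination defined above.

import numpy as np
from scipy.ndimage import gaussian_filter

def edge_map(buf, sigma1=1.0, sigma2=3.0, lam=15.0):
    """Smoothed DoG edge map M(p) of Equation 4 for a 2D buffer."""
    g = gaussian_filter(buf, sigma1) - gaussian_filter(buf, sigma2)   # DoG response
    return np.where(g > 0, 1.0, 1.0 + np.tanh(lam * g))

def saliency_map(color_buf, depth_buf=None, tau=0.5):
    """S(p) = (1 - tau) * M_d(p) + tau * M_c(p); only M_c is used when no depth is given."""
    mc = edge_map(color_buf)
    if depth_buf is None:
        return mc
    return (1.0 - tau) * edge_map(depth_buf) + tau * mc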

3.3. Saliency-guided Dimming

The challenge of dimming is how to preserve or even highlight the underlying visual structure of a scene. To that end, the dimming degree around visually salient features should be

strengthened. More importantly, to achieve visual smoothness

and avoid visual artifacts, the dimming should be continuous in

the entire image. It is also desirable that the tradeoff between

energy consumption and image fidelity can be interactively

tuned. Thus, for a given input color buffer I, the output I∗ is:

I*(p) = Y(α, β, p) \left[ α I(p) + (1 − α) \bar{I}(p) \right]    (5)

where Y(α, β, p) is a dimming function defined as:

Y(α, β, p) = β \left[ \mathbf{1}_{[0,1)}(α) S(p) + \mathbf{1}_{[0,1)}(α − 1) \right]    (6)

Here \mathbf{1}_{[0,1)}(x) denotes the indicator function and \bar{I}(p) denotes the color buffer after applying the bilateral filter. α ∈ [0, 1] and β ∈ [0, 1] are two user-adjustable parameters. α controls the degree of detail preservation during the dimming process: a larger α keeps more details in I*. When α = 1, our method degenerates to a uniform dimming technique; on the other hand, a painting-like visualization enhanced with brush strokes is obtained when α = 0. β modulates the global luminance; a smaller β yields lower energy consumption.
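The dimming itself is simple per-pixel arithmetic; a sketch of Equations 5 and 6 is given below. It assumes the helpers sketched above and applies the blend per RGB channel, which is our reading of the equations; the exact channel handling is an implementation detail.

import numpy as np

def saliency_guided_dimming(color, color_filtered, saliency, alpha=0.5, beta=0.8):
    """Equations 5 and 6: I*(p) = Y(alpha, beta, p) * (alpha * I(p) + (1 - alpha) * I_bar(p)).
    `color` and `color_filtered` are H x W x 3 buffers in [0, 1]; `saliency` is S(p)."""
    if alpha < 1.0:
        y = beta * saliency                    # indicator 1_[0,1)(alpha) = 1
    else:
        y = beta * np.ones_like(saliency)      # alpha = 1: uniform dimming
    blend = alpha * color + (1.0 - alpha) * color_filtered
    return np.clip(y[..., None] * blend, 0.0, 1.0)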

Figure 3 illustrates the entire dimming process for a phantom

3D scene: one sphere is in front of a rotated cube and two front

facing cubes. For the sake of clarity, results on the 1D case

specified by the red scanline are presented in Figure 3 (e, f, g).

And results of the simulated uniform dimming are presented in

Figure 3 (d, h) for comparison. It can be easily verified that the

peaks in both Mc(p) and Md(p) represent the boundaries. We

use a constant function to simulate the saliency map S u(p) for

the uniform dimming mode. As can be seen in Figure 3 (g) and

Figure 3 (h), the major difference between ours and the uniform

dimming lies in the regions around the detected peaks. The

local contrasts within these regions are explicitly strengthened

so as to highlight the boundaries (visual salient features) of

the objects. Please notice the local contrast within a region

indicated by the red arrow in Figure 3 (g, h). The regions

indicated by the yellow circles in Figure 3 (c) and Figure

3 (d) are 2D examples that demonstrate the advantages of our

method.


Figure 3: Illustration of our approach for a simple 3D scene shown in (a). The results of our approach on a 1D case specified by the scanline in red are depicted in (e, f, g). (b) The edge-oriented saliency map of (a). (c) Our result. (d) The result of simulated uniform dimming. (e) The depth information and edge detection result along the scanline. (f) The color luminance after applying the bilateral filter and the edge detection result along the scanline. (g) The color luminance and the visual saliency map along the scanline for our method. (h) The color luminance and the saliency map for uniform dimming. In this example, α = 0.5, β = 0.8.

4. Results and Evaluation

All programs in this paper are implemented with C++ and

accelerated by CUDA. The performance is collected on a

PC equipped with an Intel Core 2 Duo 3.0 GHz CPU, 4GB

host memory and an NVidia GTX580 video card with 1.5GB

video memory. A series of visualizations are tested with our approach, including volumetric data visualization, 3D game scene rendering, and 2D geo-visualization. Because all Gaussian operations used in our framework are separable, we convolve each dimension with a 5-tap 1D kernel for both bilateral filtering and DoG.
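For reference, such a separable pass can be written as below; the 5-tap binomial weights are an assumption, since the kernel coefficients are not listed in the paper.

import numpy as np
from scipy.ndimage import convolve1d

KERNEL_5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])   # assumed binomial approximation of a Gaussian
KERNEL_5 = KERNEL_5 / KERNEL_5.sum()

def separable_blur(img):
    """Convolve each image dimension with the 1D kernel in turn, which is equivalent
    to, and cheaper than, a full 5 x 5 2D convolution with the outer-product kernel."""
    out = convolve1d(img, KERNEL_5, axis=0, mode='nearest')
    return convolve1d(out, KERNEL_5, axis=1, mode='nearest')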

4.1. Measuring Energy Consumption Model

The energy consumption model used in our experiments

is built upon three estimation functions f (·), g(·) and

h(·) (Equation 1). We measure these functions on a

µOLED-32028-P1 AMOLED display module from 4D Systems with an Agilent 34410A multimeter and an Agilent E3631A

DC power supply. The resolution of the OLED display is

320×240 with 65K colors. During the measurement, we set

the DC voltage to 5.0 V and track the electrical current values

to calculate the energy consumption by P = UI.

To measure f(·), g(·) and h(·), 32 intensity levels scaled from 0 to 1 are tested for each color channel. In each test,

the OLED display is fully filled with the corresponding color

for 20 seconds. The average energy consumption is recorded

and computed (see Figure 1).
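A sketch of the per-test reduction, assuming a list of current readings (in Ampere) sampled from the multimeter over the 20-second window; the sample values are hypothetical.

import numpy as np

U = 5.0                                      # supply voltage in Volt
currents = np.array([0.062, 0.061, 0.063])   # hypothetical sampled currents in Ampere
power = U * currents                         # P = U * I per sample
print("average power (Watt):", power.mean())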

4.2. Examples of 3D Visualization

We examine two 3D visualization scenarios: volumetric data

visualization and 3D video game scene rendering.

Volumetric data Generally, the visualization of a volumetric dataset does not contain depth information. Instead, we

approximate the depth of the resulting visualization as the depth

of the first hit voxel in the ROI with respect to the employed

transfer function (e.g. the bone in the following example).

Figure 4 shows the results of the Feet dataset (256×256×128).

As can be seen in Figure 4 (b), a halo effect is generated with

our method, meaning that the perception of depth and shape is enhanced even when the global luminance is significantly reduced. More specifically, in this example, the halos make it

easy to distinguish the bone from the skin. The difference of

our result from that of uniform dimming is highlighted by the

yellow rectangles in Figure 4 (b,c).

3D Video Game Scene Video games, especially mobile games, are another important application of our method. We build a 3D game scene for testing by means of the open-source 3D game engine Irrlicht (http://irrlicht.sourceforge.net/). Figure 5

shows the results for one frame. In this example, we set α =

0.75, β = 0.80, and τ = 0.2. In this case, approximately 19%

energy saving is achieved with our scheme. Please pay attention to the wooden railings and the doors in Figure 5 (b, c). It is clear

that the spatial relationship is much easier to understand with

our method while reducing the energy consumption.

4.3. Examples of 2D Visualization

Our saliency-guided dimming technique is also suitable for

visualizations without any depth information.

Figure 6 demonstrates the results of applying our approach to a 2D geo-visualization. The map used in this example is obtained from the Google Maps API. In the field of

geo-visualization, a standard color set is used, e.g., green for

forest. Thus, the color remapping technique is not applicable to

this situation.

Figure 4: A volume visualization of the Feet dataset. (a) The direct volume rendering result. (b) Our result with 17.9% energy saving. (c) The uniform dimming of (a) achieves 16.1% energy reduction. In this example, α = 0.65, β = 0.75.

Figure 5: A 3D game scenario. (a) A screenshot of the video game. (b) Our result. (c) Uniform dimming. Please note that the color contrast between (b) and (c) on the door boundaries and the textured walls is distinctive.

On the other hand, context-aware color dimming

techniques can hardly be used, as user interaction or object specification is not allowed. As shown in Figure 6 (c), uniform dimming inevitably lowers the distinguishability of the map objects, which severely impairs the usability of the map. In contrast, our saliency-guided dimming scheme clarifies the salient regions and makes them more recognizable (Figure 6 (b)). In this

example, α = 0.75, β = 0.5.

4.4. Performance

For all examples demonstrated in this paper, the energy consumption under three configurations is measured: the normal color scheme (NC), our saliency-guided dimming (SGD) scheme, and the simulated uniform dimming (UD) scheme. The collected statistics are summarized in Table 1. Compared with UD, SGD saves more energy. This comes from the fact that the local contrast of SGD within the vicinities of visually salient features is larger than that of UD, which is verified in Figure 3 (g).

One distinctive feature of our approach is that the

computation of the saliency map is highly parallelizable,

making the entire process very fast. For a visualization at the

resolution of 1024×1024, our saliency-guided dimming process

can be accomplished in less than 10 milliseconds (> 100 fps).

Examples    Figure 4   Figure 5   Figure 6
NC          0.379      0.834      1.247
SGD         0.311      0.677      0.446
UD          0.318      0.704      0.453

Table 1: The energy consumption (in Watt) of all examples in this paper.

More detailed performance statistics are listed in Table 2. Here, the running time is collected from the same screenshot of a game scene rendered at different resolutions. We run our approach 100 times on each resolution and record the average time.

Resolution    256²     512²     1024²    2048²
Timing (ms)   1.325    3.033    9.087    35.449

Table 2: The performance (in milliseconds) of our approach.

5. User Study

The major objective of this user study is to assess the

effectiveness and users’ acceptance of our energy-saving

scheme.


Figure 6: (a) An input 2D visualization; (b) Our approach yields 64.2% energy saving; (c) Applying uniform dimming to (a) gets 63.7% energy reduction. In (b)

the local contrast along edges is much larger than that in (c).

5.1. Study Design

5.1.1. Participants

We recruited 24 participants (ages 22 to 33; 9 females, 15 males; 4 undergraduates, 20 graduate students) from our universities. Their academic majors included Computer Science, Mathematics, and Corpus Linguistics. None of the participants had color blindness. Two of them knew the concept of OLED. None of them were familiar with our work before the study.

5.1.2. Apparatus

The user study was conducted on a PC equipped with an

Intel Core i3 3.0 GHz CPU, 8GB host memory and an NVidia

GTX550 Ti video card with 1 GB video memory. In this

user study, two Dell 22-inch LCD displays with a resolution of 1920×1080 were used. One was for answering the questions,

the other was for showing the resulting visualizations.

The reasons we used normal LCD displays to simulate OLED displays in our study are as follows:

• Currently, normal-size OLED displays on the market are rare and quite expensive.

• We assume that the visual effects of current LCD displays and future OLED displays are similar, because users would not tolerate too many changes in visual appearance on a new display.

Once regular-size OLED displays become available in the future, we will conduct a verification study.

5.1.3. Tasks

In this user study, the participants performed two tasks to

assess their performance and preferences.

[T1] Visual Search

In this task, the participants were asked to identify several specific "street map" patterns in the maps processed with three

different display schemes (NC, SGD, UD). Each participant had

to run two trials with only one display scheme.

[T2] Preference Ranking

In this task, all participants had to give their preference orders for two visualizations (volumetric data visualization and 2D geo-visualization, i.e., the map) under three different display schemes (NC, SGD, UD). As many participants were not familiar with energy-saving visualization, several frequently-used criteria were provided for ranking, including: 1)

the clarity of structures presented in the results; 2) the local

contrast in visually salient regions; 3) the blurriness of the

results; 4) the energy consumption.

5.1.4. Procedure

Our study was conducted as a between-subjects experiment, meaning that each participant had to finish T1 with only one display scheme. For each trial, the task completion time and the error rate were measured. In this study, the display scheme was the independent factor.

Before the formal study, a 5-minute training session was conducted for each participant.

After the training, all participants were randomly assigned

to three groups. The participants in the first group had to finish two trials of T1 on maps with the normal color scheme (NC). The participants in the second and third groups had to perform two trials of T1 on maps processed with our saliency-guided dimming scheme (SGD) and the simulated uniform dimming scheme (UD), respectively.

At the beginning of T2, we told each participant the exact energy consumption of each visualization. Then, we recorded their preference orders. Afterwards, the participants were asked to describe their criteria for preference ranking in our post-study survey. In the end, general comments on the whole study and on each energy-saving scheme were collected.

5.2. Results and Analysis

5.2.1. Quantitative Results

Task Completion Time

The logarithmic transformation is a widely-used method to

correct for the non-normal distribution of time performance

data. Thus, we first applied this simple technique to the

task completion time (in seconds) for analysis. Then, the Shapiro-Wilk normality test was conducted on the transformed task completion times of Trial-1 (p = 0.391) and Trial-2 (p = 0.102) in T1. Since both p-values are larger than 0.05, the transformed data can be treated as normally distributed.

Figure 7: (a) Mean task completion time (in seconds) for each trial. Error bars represent standard error. (b) The number of participants who gave each preference rank to each scheme in the map application. (c) The number of participants who gave each preference rank to each scheme in the volumetric data visualization. Note: 1st means most preferred and 3rd means least preferred.

We also ran a one-way ANOVA for the factor display scheme in each trial. We found that it had significant effects on task completion time in Trial-1 (F(2,21) = 3.484, p = 0.049) and Trial-2 (F(2,21) = 5.436, p = 0.013).

Post-hoc comparisons among the three display schemes were then performed. In Trial-1, we found that SGD had a significantly lower task completion time (p = 0.016) than UD. From Figure 7 (a), we can see that participants spent less time with SGD compared to the other two display

schemes. In Trial-2, we found that the task completion time

of NC was significantly higher than that of SGD (p=0.004) and

UD (p=0.034). However, there were no significant differences

between the task completion time of SGD and UD. As shown

in Figure 7(a), we can also find that participants generally spent

less time with SGD compared to the other two display schemes.
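For reproducibility, the time analysis can be sketched with SciPy as below. The arrays are hypothetical placeholders standing in for one trial's per-group completion times, and the uncorrected t-test is only a stand-in, since the exact post-hoc procedure is not specified here.

import numpy as np
from scipy import stats

# Hypothetical task completion times in seconds (8 participants per group).
nc  = np.array([14.2, 12.8, 15.1, 13.0, 16.4, 12.1, 14.9, 13.7])
sgd = np.array([10.3,  9.8, 11.5, 10.9,  9.2, 12.0, 10.1, 11.3])
ud  = np.array([13.5, 15.2, 12.9, 14.8, 16.1, 13.3, 15.7, 14.0])

groups = [np.log(g) for g in (nc, sgd, ud)]           # logarithmic transformation

w, p = stats.shapiro(np.concatenate(groups))          # Shapiro-Wilk normality test
print("Shapiro-Wilk p =", p)

f_stat, p_anova = stats.f_oneway(*groups)             # one-way ANOVA on display scheme
print("ANOVA: F =", f_stat, ", p =", p_anova)

t, p_sgd_ud = stats.ttest_ind(groups[1], groups[2])   # post-hoc stand-in: SGD vs. UD
print("SGD vs UD: p =", p_sgd_ud)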

Error Rate

We also summarized the error rate of three different display

schemes in the Visual Search task. There was no error (0 out

of 24 tests) for the NC scheme, 8.3% (2 out of 24 tests) for the

SGD scheme, and 20.8% (5 out of 24 tests) for the UD scheme.

This observation indicates that the SGD scheme can help users

identify structures in a dimmed scene more clearly compared

with the UD scheme.

Preference Ranking

In T2, we asked participants to provide an overall

preference ranking of three display schemes with energy

saving information in two visualization scenarios: map

(geo-visualization) and volumetric data visualization.

Figure 7 (b) shows the overall preference for the map

application. The Friedman test results exhibited significant

differences among the three display schemes based on the

preference ranking (χ²(2, N=24) = 20.583, p < 0.001). The

follow-up pairwise Wilcoxon tests showed that SGD had a significantly higher preference ranking than NC (p < 0.001) and UD (p < 0.001). There was no significant difference

in preference between NC and UD (p=0.597).

Figure 7 (c) summarizes the user preferences for the

volumetric data visualization. The Friedman test indicated

that there was no significant difference among these three display schemes (χ²(2, N=24) = 5.074, p = 0.079).
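The ranking analysis follows the same pattern; a minimal sketch with hypothetical rank data (rows are participants, columns are the ranks given to NC, SGD, and UD) is shown below.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical preference ranks (1 = most preferred, 3 = least) for 24 participants.
ranks = np.array([rng.permutation([1, 2, 3]) for _ in range(24)])   # columns: NC, SGD, UD

chi2, p = stats.friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
print("Friedman: chi2 =", chi2, ", p =", p)

# Follow-up pairwise Wilcoxon signed-rank tests.
_, p_sgd_nc = stats.wilcoxon(ranks[:, 1], ranks[:, 0])
_, p_sgd_ud = stats.wilcoxon(ranks[:, 1], ranks[:, 2])
print("SGD vs NC: p =", p_sgd_nc, "; SGD vs UD: p =", p_sgd_ud)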

5.2.2. Qualitative Results

Based on the users' qualitative feedback, we find that users prefer to retain important outlines and structural information when the display is dimmed.

I liked the energy saving visualization design when

viewing the map because the outlines were most

important to see. (Subject 23)

I think the cell shaded look was visual appealing

in some cases. I like it in the bones structure and

map especially because it helped you see what was

needed. (Subject 22)

The black boundary shows bone structure better.

(Subject 11)

We also summarize the general comments and suggestions provided by the participants as follows:

• 10 of them agree that the visualization of the NC scheme would be their first choice if the battery were unlimited. However, sacrificing some visibility for longer usage is acceptable.

• 7 participants state that the boundaries make it easier to find streets and blocks in the map application and bones in our volumetric data visualization example.

• 4 of them mention that the SGD scheme changes the

fidelity of the images compared with NC.

5.3. Discussions

Based on the quantitative results and qualitative feedback,

we can see that explicit highlighting of visually salient features

influences user performance and choice in analysis tasks under dimming.

In the map application, the outlines of the map are important cues for users to distinguish streets and blocks. Therefore, SGD has a significantly lower task completion time than the other two display schemes. Meanwhile, the preference ranking results indicate that users significantly prefer SGD over the other two display schemes.


For volumetric data visualization, most participants prefer uniform dimming, which does not conform to our hypothesis. Thus, we further analyzed the academic background of each participant and found that 6 participants are familiar with scientific visualization, 5 of whom rated the result of the SGD scheme 1st. We also checked the comments provided by these users: four of them mention that the boundaries make the structure clearer. Meanwhile, among the participants who did not rate SGD as 1st, 5 said that the boundaries distorted the fidelity of the original image. Based on these observations, we believe that users will adopt our saliency-guided dimming scheme for energy saving once they know the advantages of illustrative visualization.

6. Conclusions

Energy-aware coloring is of great importance: because the display has become a major energy consumer in modern mobile devices and tablets, the demand for energy reduction is becoming increasingly urgent. The recently developed organic light-emitting diode (OLED) displays provide a new opportunity for energy saving.

In this paper, we introduce a novel energy-saving color scheme for visualizations on OLED displays. Our approach is inspired by the concepts of tone mapping and illustrative visualization. By suppressing undesired distractive details while retaining the main visual structures, the dimmed energy-saving visualization can be recognized better than the results of uniform dimming. Because our method is simple and highly parallelizable, it admits online usage with little extra energy spent on computation. Our approach can also be used to pre-dim static visualizations that may be displayed frequently and for a long time. Experiments on several canonical visualization scenarios demonstrate the effectiveness, efficiency, and quality of our approach.

Because our approach is built upon the bilateral filter and the DoG filter, it naturally inherits their limitations. For example, the detected salient features are aggregates of pixels instead of well-defined global strokes. Thus, a high-contrast background may be emphasized even if it is not visually salient. This limitation also suggests a potential future work on robust and efficient salient feature detection. For instance, dynamic features, e.g., moving objects, should be considered as well. We would also like to apply our framework to time-varying data such as videos.

7. Acknowledgements

The authors would like to thank the support from the

National High Technology Research and Development Program

of China (2012AA12090), the Major Program of National

Natural Science Foundation of China (61232012), the National

Natural Science Foundation of China (61003193), and the

National Natural Science Foundation of China (81172124).

References

[1] V. Moshnyaga, E. Morikawa, Lcd display energy reduction by user

monitoring, in: Computer Design: VLSI in Computers and Processors,

2005. ICCD 2005. Proceedings. 2005 IEEE International Conference on,

IEEE, 2005, pp. 94–97.

[2] F. Shearer, Power management in mobile devices, Newnes, 2007.

[3] P. Narra, D. Zinger, An effective led dimming approach, in: IEEE

Industry Applications Conference, Vol. 3, 2004, pp. 1671–1676.

[4] W.-C. Cheng, Y. Hou, M. Pedram, Power minimization in a backlit tft-lcd

display by concurrent brightness and contrast scaling, IEEE Trans. on

Consum. Electron. 50 (1) (2004) 25–32.

[5] A. Dalton, C. Ellis, Sensing user intention and context for energy

management, in: Workshop on Hot Topics in Operating Systems

(HOTOS), 2003.

[6] J. Betts-LaCroix, Selective dimming of oled displays, US Patent 0149223 A1 (2010).

[7] M. Dong, L. Zhong, Power modeling and optimization for oled displays,

IEEE Transactions on Mobile Computing 11 (9) (2012) 1587–1599.

[8] M. Dong, L. Zhong, Chameleon: a color-adaptive web browser for mobile

oled displays, in: Proceedings of the 9th international conference on

Mobile systems, applications, and services, ACM, 2011, pp. 85–98.

[9] J. Chuang, D. Weiskopf, T. Moller, Energy aware color sets, Computer Graphics Forum 28 (2) (2009) 203–211.

[10] J. Wang, X. Lin, C. North, Greenvis: Energy-saving color schemes for

sequential data visualization on oled displays, Tech. rep., Department of

Computer Science, Virginia Tech (2012).

[11] I. Choi, H. Kim, H. Shin, N. Chang, Lpbp: Low-power basis profile of the

java 2 micro edition, in: Proceedings of the 2003 international symposium

on Low power electronics and design, ACM, New York, NY, USA, 2003,

pp. 36–39.

[12] D. Shin, Y. Kim, N. Chang, M. Pedram, Dynamic voltage scaling

of oled displays, in: Design Automation Conference (DAC), 48th

ACM/EDAC/IEEE, IEEE, 2011, pp. 53–58.

[13] S. Forrest, The road to high efficiency organic light emitting devices,

Organic Electronics 4 (2) (2003) 45–48.

[14] N. Chang, I. Choi, H. Shim, Dls: dynamic backlight luminance scaling of

liquid crystal display, IEEE Trans. Very Large Scale Integr. Syst. 12 (8)

(2004) 837–846.

[15] H. Shim, N. Chang, M. Pedram, A backlight power management

framework for battery-operated multimedia systems, IEEE Design & Test

of Computers 21 (5) (2004) 388–396.

[16] W. Lee, K. Patel, M. Pedram, White-led backlight control for motion-blur

reduction and power minimization in large lcd tvs, J. of SID 17 (1) (2009)

37–45.

[17] M. Ashikhmin, A tone mapping algorithm for high contrast images,

in: Proceedings of the 13th Eurographics workshop on Rendering,

Eurographics Association, 2002, pp. 145–156.

[18] J. Cohen, C. Tchou, T. Hawkins, P. Debevec, Real-Time high dynamic

range texture mapping, Springer, 2001.

[19] J. Duan, G. Qiu, Fast tone mapping for high dynamic range images,

in: Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th

International Conference on, Vol. 2, IEEE, 2004, pp. 847–850.

[20] G. Qiu, J. Duan, An optimal tone reproduction curve operator for

the display of high dynamic range images, in: Circuits and Systems,

2005. ISCAS 2005. IEEE International Symposium on, IEEE, 2005, pp.

6276–6279.

[21] G. Qiu, J. Guan, J. Duan, M. Chen, Tone mapping for hdr image using

optimization a new closed form solution, in: Pattern Recognition, 2006.

ICPR 2006. 18th International Conference on, Vol. 1, IEEE, 2006, pp.

996–999.

[22] E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward,

K. Myszkowski, High dynamic range imaging: acquisition, display, and

image-based lighting, Morgan Kaufmann, 2010.

[23] J. Chen, S. Paris, F. Durand, Real-time edge-aware image processing with the bilateral grid, ACM Trans. Graph. 26 (3) (2007).

[24] W.-C. Kao, H.-C. Wang, Tone mapping operator for high dynamic range

imaging, in: Consumer Electronics (ISCE), 2013 IEEE 17th International

Symposium on, IEEE, 2013, pp. 267–268.

[25] Q. Tian, J. Duan, G. Qiu, Gpu-accelerated local tone-mapping for

high dynamic range images, in: Image Processing (ICIP), 2012 19th IEEE

International Conference on, IEEE, 2012, pp. 377–380.


[26] F. Durand, J. Dorsey, Fast bilateral filtering for the display of

high-dynamic-range images, ACM Transactions on Graphics (TOG)

21 (3) (2002) 257–266.

[27] H. Winnemoller, S. C. Olsen, B. Gooch, Real-time video abstraction,

ACM Trans. Graph. 25 (3) (2006) 1221–1226.

[28] J. Kyprianidis, J. Dollner, Image abstraction by structure adaptive

filtering, Proc. EG UK Theory and Practice of Computer Graphics (2008)

51–58.

[29] C. Tomasi, R. Manduchi, Bilateral filtering for gray and color images, in:

Proceedings of the Sixth International Conference on Computer Vision,

ICCV ’98, 1998, pp. 839–846.

[30] T. Kadir, M. Brady, Saliency, scale and image description, Int. J. Comput.

Vision 45 (2) (2001) 83–105.

[31] A. Hertzmann, Introduction to 3d non-photorealistic rendering:

Silhouettes and outlines, Non-Photorealistic Rendering. SIGGRAPH 99

Course Notes.
