Recognition-Difficulty-Aware Hidden Images Based on Clue-map

<ul><li><p>Volume xx (200y), Number z, pp. 1–15</p><p>Recognition-Difficulty-Aware Hidden Images Based on Clue-map</p><p>Yandan Zhao, Hui Du, Xiaogang Jin</p><p>State Key Lab of CAD&amp;CG, Zhejiang University, Hangzhou 310058, China</p><p>Abstract</p><p>Hidden images contain one or several concealed foregrounds that can be recognized with the assistance of clues preserved by artists. Experienced artists train for years to become skilled enough to find appropriate hiding positions in a given image. For amateurs, however, it is not easy to quickly find these positions when they try to create satisfactory hidden images. In this paper, we present an interactive framework that suggests hiding positions and the corresponding results. The suggested results generated by our approach are ordered according to the levels of their recognition difficulty. To this end, we propose a novel approach for assessing the recognition difficulty of hidden images, together with a new hidden-image synthesis method that takes spatial influence into account to make the foreground harmonious with its local surroundings. During the synthesis stage, we extract the characteristics of the foreground as clues based on a visual attention model. We validate the effectiveness of our approach with two user studies, covering the quality of the hidden images and the accuracy of the suggestions.</p><p>Categories and Subject Descriptors (according to ACM CCS): I.4.9 [Image Processing and Computer Vision]: Applications</p><p>1. Introduction</p><p>Hidden images, also referred to as camouflage images, are a form of visual illusion art in which artists embed one or more unapparent figures or foregrounds. At first glance, viewers see only the apparent background; they can recognize the foreground through clues only after watching carefully for a period of time. This can be explained by the feature integration theory [TG80, Wol94]. 
To conceal the foreground, artists retain only a portion of its features, which viewers can integrate as clues for recognition [HP07, HTP07].</p><p>Generating successful and interesting hidden images is not an easy task, even for skilled artists. To provide a convenient tool, previous work [YLK08, CHM10, TZHM11, DJM12] has tried to create pleasing results from natural photos. These methods use a rough luminance distribution or the edge information of the foreground as clues, which may miss some characteristics of the foreground. Skilled artists can find appropriate hiding positions in a given background using their prior knowledge; lacking this expertise, amateurs often find it difficult to locate such positions quickly. Tong et al. [TZHM11] find only a single best hiding position by matching edges extracted from the background and the foreground. The foreground would be hidden perfectly if its edges could be replaced by edges of the background. However, the saliency of the edges is not taken into account, so some trivial edges are inevitably extracted. In most cases, there is little prospect of finding a matching shape for every edge, and if the salient edges on which viewers' recognition relies are not matched, the foreground cannot be concealed well even in the best position. Moreover, shape matching is time-consuming, and blurring artifacts may occur because the foreground is transformed during matching.</p><p>Fig. 1a, an example of hidden images made by an artist, shows how artists select embedding positions and clues. Instead of a single best position, the artist selects several different ones to conceal several eagles, and most of these positions contain rich texture. Because the local texture of the embedding positions differs, the appearances of the eagles are designed to be dissimilar so that each is harmonious with its local surroundings. 
Nevertheless, these quite different eagles can still be recognized, thanks to the carefully designed</p><p>submitted to COMPUTER GRAPHICS Forum (8/2015).</p></li><li><p>Figure 1: Hidden images created by an artist: 9 eagles. (a) Hidden images; (b) Answer.</p><p>clues. The artist selects the regions representing the salient characteristics of the foreground as the clues. The characteristics of eagles are their eyes and beaks, by which we distinguish eagles from other animals. The hidden eagles circled by the green curves in Fig. 1b can be recognized through these characteristics. Unfortunately, automatically selecting the embedding positions and the characteristics of the foreground remains a challenge.</p><p>In this paper, we present a fast hidden-image generation system composed of a recognition-difficulty assessment method and a hidden-image synthesis method. The former evaluates the recognition difficulty of each embedding position by measuring the richness of the texture in the background. Based on this evaluation, our system suggests several appropriate hiding positions and provides the corresponding results. Users can also select positions themselves according to their requirements. Our method generates natural-looking hidden images based on texture-synthesis techniques and preserves the salient characteristics as clues. To select clues as artists do, we propose a method that automatically extracts them based on the focus attention model [IKN98]. This model selects the characteristic information that is likely to attract relatively more attention. The appearance of the hidden foreground varies when it is embedded in different positions, in order to be harmonious with the surroundings. 
To this end, we add a space factor to the hidden-image synthesis, so that the appearance changes as the hiding position shifts.</p><p>Our major contributions can be summarized as follows:</p><p>• A novel computer-suggested hidden-image generation system that automatically provides several appropriate hiding positions and the corresponding results.</p><p>• The first recognition-difficulty assessment method, which estimates the recognition difficulty of each position where the foreground may be embedded.</p><p>• A new hidden-image synthesis method built on a new clue-extraction scheme, which balances preserving the foreground's characteristics against varying the foreground as much as possible as the embedding position changes. The space factor is also taken into account to make the foreground harmonious with the local surroundings.</p><p>• Significantly improved performance of hidden-image synthesis: both our synthesis method and the difficulty assessment method provide real-time feedback.</p><p>2. Related Work</p><p>In this section, we review related work from three aspects: hidden-image synthesis, embedding-position analysis, and applications of visual attention models in recreational art.</p><p>Hidden image synthesis. A collage [GSP07, HZZ11] is one kind of hidden image, constructed from different elements. In this approach, both the collage image and the elements are immediately recognizable, and the collage is not characterized by changing the appearance of the elements. Yoon et al. [YLK08] use stylized line drawings to render the background and the foreground, and then employ the edges of the foreground as clues to find suitable hiding positions by shape matching. 
Instead of hiding a line-art foreground in a line-art background, we aim at hiding textured foregrounds in a natural image. Tong et al. [TZHM11] also utilize edges as clues to aid viewers' recognition and try to find the best hiding position by shape matching. However, their results contain blurring artifacts caused by the transformation. Du et al. [DJM12] likewise employ the edges of the foreground as clues and formulate hidden-image synthesis as a blending optimization problem. When the contrast of the background is quite low, too many details that are distinct from the surroundings are preserved; when the contrast is quite high, the characteristics of the foreground may not be preserved. Chu et al. [CHM10] synthesize hidden images on the basis of texture-synthesis techniques and use the luminance distribution as the clue, which often makes the foreground change only slightly in different positions and sometimes makes it easy to find. To increase recognition difficulty, they have to add distracting segments at random, which may disturb important foreground regions such as the eyes and mouths of animals. Our method also synthesizes hidden images based on texture-synthesis techniques; the difference is that we utilize the focus attention model [IKN98] to extract the clues, which always contain the characteristics of the foreground. All of the above works concern 2D camouflage. Owens et al. [OBF14] camouflage a 3D object into background photographs captured from many viewpoints. Their problem is different from ours: they try to match the texture of a cube with the background from every possible perspective.</p><p>Embedded position analysis. Chu et al. [CHM10] present users with a list of candidate positions and orientations that minimize their energies. Tong et al. [TZHM11] find the best embedding position by matching the edges of the background and the foreground. 
Different from them, our method solves this problem based on the observation that the foreground can be concealed better in regions whose texture is relatively more complicated and colorful. Tong et al. [TZHM11] use the Sobel operator to extract edges, which may include edges useless for recognition; if the matched edges are mostly useless ones, the foreground cannot be concealed well even in the best position. Moreover, both methods leave the recognition difficulty to the users and cannot find the positions in real time. Our method not only provides several relevant suggestions but also estimates the recognition difficulty of each position.</p><p>Applications of visual attention models in recreational art. Visual attention models have been an active research direction with many recreational applications. Change blindness is a psychological phenomenon in which viewers often do not notice some visual changes between images. Ma et al. [MXW13] employ a visual attention model to quantify the degree of blindness between an image pair. They use a region-based saliency model that investigates how the saliency of one region is influenced by other regions in its long-range context. However, this region-based model is not suitable for our application because we need a pixel-based saliency model. Image abstraction (stylization) is a widely acknowledged form of recreational art. To control the level of detail of the abstraction results, DeCarlo et al. [DS02] apply a human perception model, using additional tracking devices to collect eye-movement data. When transferring a photograph into the style of an oil painting, Collomosse et al. [CH02] use a visual attention model to detect the edges of photographs and draw only edges with high attention. Their visual attention model determines importance by gradient information only; the color information that our approach uses is not taken into account in their model. 
To simulate the emphasis effects of real recreational art and to predict viewers' attention, Zhao et al. [ZMJ09] and Hata et al. [HTM12] use the attention model of [IKN98] to control the degree of abstraction and stylization. Itti et al.'s basic model [IKN98] utilizes three feature channels (color, intensity, and orientation) and defines image saliency using center-surround differences across multi-scale image features. This model simulates the neuronal architecture of the visual system and has been shown to correlate with human eye movements. We therefore employ this visual attention model to calculate the foreground's clue-map, which describes the regions likely to be selected as clues to assist viewers' recognition.</p><p>3. Our method</p><p>Given a background image B and a foreground image F, our algorithm generates a hidden image that hides F in B and retains some clues for viewers to recognize. When generating a hidden image, the user needs to specify the hiding position, and a satisfactory hidden image depends highly on this selection. To this end, our hidden-image system provides two components: hidden-position analysis (the red shaded part) and hidden-image synthesis (the green shaded part), as shown in Fig. 2.</p><p>For hidden-position analysis, we analyse the recognition difficulty of each position and compute a heat-map of the recognition difficulty. Fig. 2 shows an example of such a heat-map R: green regions have low recognition difficulty, and red regions have high recognition difficulty. The heat-map offers an important reference for users to select appropriate hiding positions. 
Moreover, our system automatically recommends the hiding positions with the highest recognition difficulties</p></li><li><p>Figure 2: The pipeline of our approach. Components: Foreground (F), Clue-map (C), New foregrounds (F'), Results, Background (B), Hidden positions, Detail-map (H), Recognition difficulty (R).</p><p>and generates hidden results in real time. A visual perception study of hidden images [TBL09] shows that the foreground is more difficult to detect when there is more high-contrast detail in its surroundings. This implies that the recognition difficulty R can be modulated by the richness of the high-contrast detail. Considering that a viewer can only focus on a small area at a time, we compute the recognition difficulty of a specific position from the richness of detail in a small surrounding area. Here, we use a detail-map H to quantify the richness of detail, and a clue-map C to choose the surrounding regions, as illustrated in Fig. 2.</p><p>In hidden-image synthesis, there is a balance between two conflicting factors: immersion, which is responsible for the harmony between the foreground and the surrounding background, and standout, which is responsible for the viewer being able to recognize the hidden foreground. The main task of hidden-image synthesis is to find a solution that achieves a satisfactory balance.</p><p>Similar to Chu et al. [CHM10], we use texture synthesis to replace the texture of the foreground with that of the background. Chu et al. [CHM10] segment the foreground and optimize the luminance distribution of the segments to achieve the balance. They retain the luminance distribution as the recognition clue because the luminance channel can offer sufficient cues for recognition [Pal99]. However, when the luminance c...</p></li></ul>
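The clue-map relies on Itti et al.'s saliency model: center-surround differences computed across a multi-scale pyramid of feature channels. As a rough illustration of that idea only (this is our own sketch, not the paper's implementation), the code below computes a toy single-channel saliency map with NumPy. The function names are ours, and restricting the model to an intensity channel is a simplification: the full model also uses color and orientation channels.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid step)."""
    h, w = img.shape
    h2, w2 = h - h % 2, w - w % 2
    return img[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsampling back to a target shape."""
    ry = -(-shape[0] // img.shape[0])  # ceiling division
    rx = -(-shape[1] // img.shape[1])
    return np.repeat(np.repeat(img, ry, axis=0), rx, axis=1)[:shape[0], :shape[1]]

def saliency_map(intensity, levels=4):
    """Toy center-surround saliency on one intensity channel.

    Accumulates |center - surround| differences between finer and
    coarser pyramid levels, in the spirit of Itti et al.'s model.
    """
    pyramid = [intensity.astype(float)]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    sal = np.zeros(intensity.shape)
    for c in range(levels):                  # finer "center" scale
        for s in range(c + 1, levels + 1):   # coarser "surround" scale
            center = upsample(pyramid[c], intensity.shape)
            surround = upsample(pyramid[s], intensity.shape)
            sal += np.abs(center - surround)
    return sal / sal.max() if sal.max() > 0 else sal
```

A small bright patch on a flat background scores high under this measure, which is the behaviour a clue-map needs: regions likely to attract attention become candidate clues.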
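Section 3 modulates the recognition difficulty R by the richness of high-contrast detail in a small area around each candidate position. Here is a minimal sketch of one plausible reading of that idea, under two assumptions of ours: the detail-map H is taken as per-pixel gradient magnitude, and the neighbourhood pooling is a plain box average (the paper's actual definitions of H and the clue-map weighting are not reproduced here).

```python
import numpy as np

def detail_map(img):
    """Detail-map H: quantify high-contrast detail as the
    per-pixel gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def recognition_difficulty(img, win=8):
    """Heat-map R: average H over a (2*win+1)^2 neighbourhood, so
    each pixel scores the richness of detail around that position."""
    h = detail_map(img)
    size = 2 * win + 1
    pad = np.pad(h, win, mode='edge')
    # Summed-area table (integral image) for fast box averages.
    ii = np.pad(pad.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    box = (ii[size:, size:] - ii[:-size, size:]
           - ii[size:, :-size] + ii[:-size, :-size])
    return box / (size * size)
```

Positions where R is high (rich surrounding texture) would be the ones such a system recommends as hard-to-spot hiding spots, matching the observation from [TBL09] quoted above.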

