
Research Article
Gradient Compressive Sensing for Image Data Reduction in UAV Based Search and Rescue in the Wild

Josip Musić,1 Irena Orović,2 Tea Marasović,1 Vladan Papić,1 and Srdjan Stanković2

1 Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Ruđera Boškovića 32, 21000 Split, Croatia
2 Faculty of Electrical Engineering, University of Montenegro, Džordža Vašingtona bb, 81000 Podgorica, Montenegro

Correspondence should be addressed to Josip Musić; jmusic@fesb.hr

Received 1 April 2016; Revised 30 September 2016; Accepted 11 October 2016

Academic Editor: Agathoklis Giaralis

Copyright © 2016 Josip Musić et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Search and rescue operations usually require significant resources, personnel, equipment, and time. In order to optimize the resources and expenses and to increase the efficiency of operations, the use of unmanned aerial vehicles (UAVs) and aerial photography is considered for fast reconnaissance of large and unreachable terrains. The images are then transmitted to the control center for automatic processing and pattern recognition. Furthermore, due to the limited transmission capacities and significant battery consumption for recording high resolution images, in this paper we consider the use of a smart acquisition strategy with a decreased amount of image pixels, following the compressive sensing paradigm. The images are completely reconstructed in the control center prior to the application of image processing for suspicious object detection. The efficiency of this combined approach depends on the amount of acquired data and also on the complexity of the scenery observed. The proposed approach is tested on various high resolution aerial images, while the achieved results are analyzed using different quality metrics and validation tests. Additionally, a user study is performed on the original images to provide the baseline object detection performance.

1. Introduction

In modern society, people often engage in different outdoor activities for fun or recreation. However, they sometimes overestimate their abilities or even get lost and need help and assistance. In such situations, even if the lost person has a mobile phone, it is difficult to provide an exact location, or the signal strength is too low to be useful. In order to provide medical or other kinds of assistance, the emergency services need to locate the person. The number of such situations increases especially during summer months, when people are more active both on the sea and in the mountains [1]. On average, there were 1,862 calls to emergency services in the UK in 2014. Search and rescue operations in the US over a time period of 15 years were reviewed in [2]. In the considered time period, there were about 65,000 search and rescue incidents involving around 78,000 persons in need of assistance. Similar trends can be found in somewhat less developed countries with a large number of tourists [3]. An exponential growth in the total number of mountain rescue operations was observed for the period between 1991 and 2005. For the period 2002–2005, there were 1,217 operations in total: 12.16% were search operations and 27.12% were rescue operations. Out of all search operations, in 81.75% of cases the persons were found (out of which 10.71% involved fatalities). Among all search actions, in 17.52% of cases search dogs were employed, and in 10.22% of cases military helicopters were used. This large number of operations and the limited time frame require significant resources in terms of money, equipment, and manpower. Hence, search and rescue operations are very time and resource demanding. Usually, a number of ground personnel (including police officers, firefighters, ambulances, and even locals) are involved, as well as military or civilian helicopters with their crews. These resources are in general expensive, and the majority of them are used for the search part of the operation. The search operation also needs to be completed as fast as possible to ensure a positive outcome. Thus, the efforts are mainly focused on optimizing the search procedure in order to provide reliable information to the search team on where to look first/next. In this way, it is



possible to significantly increase the probability of a positive outcome and to ensure lower costs. One of the solutions is to include aerial photography (via UAVs) and associated pattern recognition and image processing algorithms [4]. Namely, UAVs enable fast reconnaissance of large and unreachable areas while taking photos at regular time intervals. UAVs in such scenarios are intended to complement the search procedure and not to completely replace search on the ground [5].

The use of UAVs for supporting search and rescue operations was also suggested in [6], where the authors used different victim detection algorithms and search strategies in a simulated environment. It is also suggested that four features should be taken into account when designing the appropriate search system: (1) quality of image data, (2) level of information exchange between UAVs and/or UAVs and the ground station, (3) UAV autonomy time, and (4) environmental hazards.

In this paper we are concerned with the image quality and the level of information exchange requirements, which are in general in opposition: better quality sensors consume larger network bandwidth and vice versa. Transmission resources (bandwidth, signal availability, and quality) are not readily available in the wild, where search and rescue operations usually take place. Thus, finding a good compression algorithm to reduce the amount of data [7, 8] or using smart cameras with a reduced number of pixels could be of great importance. For instance, using a reduced number of pixels allows consuming less energy from the UAV's battery and thus achieving greater autonomy/range. In [7], three levels of filters with different knowledge (redundancy, task, and priority) were used. Depending on the filter (or combination of filters) applied, the reduction of transmission bandwidth is achieved in the range from 24.07% to 67.83%. The authors, however, noted that in general the MPEG reduction level was not achieved. An additional common difficulty is the presence of noise in the areas where signal coverage is limited [4]. This noise can manifest in different manners, such as salt-and-pepper noise or even as whole blocks of missing pixels. Thus, finding effective ways to deal with such situations is important, since the noise can reduce (or completely cover) the victim's footprint in the image. Therefore, it is of great importance to explore possibilities of reducing the amount of captured and transmitted data while maintaining the level of successful target object detection using specially designed algorithms, as was initially introduced in [9]. Such algorithms should also exhibit a certain level of robustness in the presence of noise. In that sense, we propose using the popular compressive sensing (CS) approach in order to deal with randomly undersampled terrain photos obtained as a result of a smart sensing strategy or of the removal of impulse noise. The CS reconstruction algorithms can deal with images having a large amount of missing or corrupted pixels [10–16]. The missing pixels can be a consequence of the reduced sampling strategy, when a certain amount of pixels at random positions is omitted from observations, or may appear as a consequence of discarding pixels affected by certain errors or noise, as discussed in the sequel [14]. In a general CS scenario, it is possible to observe two noise effects, called the observation noise and the sampling noise [17, 18]. The observation noise appears in the phenomena before the sensing process, while the sampling noise appears as an error on the measurements after the sensing process. As proved in [17], both types of noise cause distortion in the final image, whereas the observation noise in the CS case is less detrimental than the sensing noise. In the considered imaging system for aerial photography, the noise can occasionally be present in the form of salt-and-pepper noise (or spike noise). It is mainly caused by analog-to-digital converter errors, which can be considered observation noise appearing prior to the CS process [18]. It can also be caused by errors in transmission, which indeed correspond to the observation noise. In both cases, our assumption is that we can discard the pixels affected by the salt-and-pepper noise and proceed with the image reconstruction upon receipt in the control center using the small amount of available nonnoisy pixels. Hence, either the images are randomly undersampled in the absence of noise, or the random noisy pixels are discarded, causing the same setup as in the former case. Moreover, if some of the measurements are subject to transmission errors, these can also be discarded upon receipt in the control center. Additionally, in some applications facing other heavy-tailed noise models, robust sampling operators based on weighted myriad estimators can be considered [18].

Therefore, we consider the following scenario: (a) randomly undersampled images (with missing pixels caused by the sampling strategy or by discarding noisy pixels) are obtained and sent over the network; (b) upon reception in the control center, the images are completely restored/reconstructed; and (c) they are used as the input of the image processing algorithm for target object detection. Only the images of interest are flagged by the object detection algorithm (as the images with suspicious objects) and further examined by a human operator. Hence, the proposed system consists of two segments, both running in the control center: image reconstruction and object detection. It is important to emphasize that the output of the first segment influences the performance of the second segment, depending on the number of missing pixels (i.e., the image degradation level). In the paper we examine the performance of both segments under different amounts of missing pixels, with the aim of drawing conclusions on their respective performances and possible improvements.
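To make the data flow concrete, the following minimal Python sketch (our own naming and placeholder steps, not the authors' implementation) wires the three stages together: random undersampling on the UAV, reconstruction in the control center, and object detection on the restored image.

```python
# Minimal end-to-end sketch of the considered scenario; the reconstruction and
# detection functions are placeholders standing in for Sections 2.1 and 2.2.
import numpy as np

def acquire_undersampled(image, keep_ratio=0.5, rng=None):
    """Keep a random subset of pixels; the rest are treated as missing."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(image.shape[:2]) < keep_ratio
    masked = image * (mask[..., None] if image.ndim == 3 else mask)
    return masked, mask

def reconstruct(image, mask):
    """Placeholder for the gradient-based CS reconstruction (Section 2.1.1)."""
    return image  # a real implementation would fill in the missing pixels

def detect_objects(image):
    """Placeholder for the mean-shift-based detector (Section 2.2)."""
    return []  # list of candidate regions to be flagged for the operator

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)                  # stand-in for an aerial photo
    measured, mask = acquire_undersampled(img, keep_ratio=0.2)
    restored = reconstruct(measured, mask)
    candidates = detect_objects(restored)
    print(f"kept {mask.mean():.0%} of pixels, {len(candidates)} candidate regions")
```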

2. Materials and Methods

An overview of the entire proposed approach is presented in Figure 1. In the next sections, the particular algorithms from the figure will be presented and explained in more detail. However, it should be noted that the entire approach is not yet fully integrated into a UAV and adapted for real-time application. This is a topic of ongoing research and will be implemented in the future.

2.1. Compressive Sensing and Image Reconstruction. Compressive sensing appeared as a new sensing paradigm which allows acquiring a significantly smaller number of samples compared to the standard approach based on the Shannon-Nyquist sampling theorem [10–13]. Particularly, compressive sensing assumes that the exact signal can be reconstructed from a very small set of measurements under the condition


Figure 1: General overview of the proposed procedure. The block diagram shows, on the UAV side, image acquisition and image data reduction, linked over a high speed link to the surveillance center where CS image reconstruction takes place, followed by preprocessing (transform image to YCbCr space, apply median filter, split image into subimages), segmentation (subimages mean shift clustering, compose global cluster matrix, cluster global cluster matrix), and decision-making (eliminate large candidate regions, erase single pixel areas, merge nearby segments, ignore clusters with multiple segments, outline the results to the end user).

that the measurements are incoherent and the signal has a sparse representation in a certain transform domain.

Consider a signal in $R^N$ that can be represented using the orthonormal basis vectors $\{\psi_i\}_{i=1}^{N}$. The desired signal $\mathbf{x}$ can then be observed in terms of the transform domain basis functions [10, 11]:

$$\mathbf{x} = \sum_{i=1}^{N} X_i \psi_i, \quad (1)$$

or equivalently in the matrix form

$$\mathbf{x} = \mathbf{\Psi}\mathbf{X}, \quad (2)$$

where the transform domain matrix is $\mathbf{\Psi} = [\psi_1 \mid \psi_2 \mid \cdots \mid \psi_N]$, while $\mathbf{X}$ represents the transform domain vector. In the case of sparse signals, the vector $\mathbf{X}$ contains only $K \ll N$ significant components, while the others are zeros or negligible. Then we may say that $\mathbf{X}$ is a K-sparse representation of $\mathbf{x}$. Instead of measuring the entire signal $\mathbf{x}$, we can measure only a small set of $M < N$ random samples of $\mathbf{x}$ using a linear measurement matrix $\mathbf{\Phi} = [\phi_1 \mid \phi_2 \mid \cdots \mid \phi_N]$. The vector of measurements $\mathbf{y}$ can now be written as follows:

$$\mathbf{y} = \mathbf{\Phi}\mathbf{x} = \mathbf{\Phi}\mathbf{\Psi}\mathbf{X} = \mathbf{\Theta}\mathbf{X}, \quad (3)$$

where $\mathbf{\Theta} = \mathbf{\Phi}\mathbf{\Psi}$ is of size $M \times N$.

The main challenge behind compressive sensing is to design a measurement matrix $\mathbf{\Phi}$ which can provide exact and unique reconstruction of a K-sparse signal, where $M \ge K$. Here we consider the random measurement matrix $\mathbf{\Phi}$ of size $M \times N$ ($M \ll N$) that contains only one value equal to 1 at a random position in each row (while the other values are 0). Particularly, in the $i$th row, the value 1 is on the position of the $i$th random measurement. Consequently, the resulting compressive sensing matrix $\mathbf{\Theta}$ in (3) is usually called a random partial transform matrix [19, 20]. It contains partial basis functions from $\mathbf{\Psi}$ with values at the random positions of measurements. Note that in the case of images, the two-dimensional measurements are simply rearranged into a one-dimensional vector $\mathbf{y}$, while $\mathbf{\Psi}$ should correspond to a two-dimensional transformation.
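The random partial transform model above can be made concrete with a short sketch. The construction below is ours (a 16 × 16 block, an orthonormal 2D DCT basis built as a Kronecker product, and Φ as a row-selection matrix); it only illustrates the relation y = Φx = ΘX.

```python
# Sketch of the random partial 2D DCT measurement model for one N x N block.
import numpy as np
from scipy.fft import idct

N = 16                                    # block size used in the paper
M = 128                                   # number of measurements, M < N*N
rng = np.random.default_rng(0)

# 1D inverse DCT basis; the 2D (flattened) basis is its Kronecker product.
C_inv = idct(np.eye(N), norm="ortho", axis=0)
Psi = np.kron(C_inv, C_inv)               # x = Psi @ X, size (N*N, N*N)

# Phi keeps one randomly chosen pixel per row (a row-selection matrix).
positions = rng.choice(N * N, size=M, replace=False)
Phi = np.zeros((M, N * N))
Phi[np.arange(M), positions] = 1.0

Theta = Phi @ Psi                         # random partial transform matrix

# Measurements of a block x are simply its pixel values at the kept positions.
x = rng.random(N * N)
y = Phi @ x
assert np.allclose(y, x[positions])
```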

Since $M < N$, the solution is ill-posed: the number of equations is smaller than the number of unknowns. If we could determine the support of the $K$ components within $\mathbf{X}$, then the problem would be determined by the $M \ge K$ system of linear equations ($M$ equations with $K$ unknowns). The stable recovery is assured if the compressive sensing matrix $\mathbf{\Theta}$ satisfies the restricted isometry property (RIP). However, in real situations we almost never know the signal support in $\mathbf{X}$, and the signal reconstruction needs to be performed using


certain minimization approaches. The most natural choice assumes the minimization of the $\ell_0$ norm, which is used to search for the sparsest vector $\mathbf{X}$ in the transform domain:

$$\hat{\mathbf{X}} = \arg\min \|\mathbf{X}\|_0 \quad \text{subject to} \quad \mathbf{\Theta}\mathbf{X} = \mathbf{y}. \quad (4)$$

However, this approach is computationally unstable and represents an NP (nondeterministic polynomial-time) hard problem. Therefore, the $\ell_0$ norm minimization is replaced by the $\ell_1$ norm minimization, leading to the convex optimization problem that can be solved using linear programming:

$$\hat{\mathbf{X}} = \arg\min \|\mathbf{X}\|_1 \quad \text{subject to} \quad \mathbf{\Theta}\mathbf{X} = \mathbf{y}. \quad (5)$$

In the sequel, we consider a gradient algorithm for efficient reconstruction of images, which belongs to the group of convex optimization methods [15]. Note that the gradient based methods generally do not require the signals to be strictly sparse in a certain transform domain and in that sense provide significant benefits and relaxations for real-world applications. Particularly, it is known that images are not sparse in any of the transform domains but can be considered compressible in the two-dimensional Discrete Cosine Transform (2D DCT) domain. Hence, the 2D DCT is employed to estimate the direction and value of the gradient used to update the values of missing data towards the exact solution.

2.1.1. Gradient Based Signal Reconstruction Approach. The previous minimization problem can be solved using the gradient based approach. The efficient implementation of this approach can be done on a block by block basis, where the block sizes are $N \times M$ (square blocks with $N = M = 16$ are used in the experiments). The available image measurements within the block are defined by the pixel indices

$$(i, j) \in \Omega, \quad \text{where } \Omega \subset \{(1, 1), (1, 2), \ldots, (N, M)\}, \quad (6)$$

while

$$\mathrm{card}\{\Omega\} = M_a N_a \ll NM. \quad (7)$$

The original (full) image block is denoted as $f(n, m)$. The image measurements are hence defined by

$$f(i, j) \quad \text{for } (n, m) = (i, j) \in \Omega. \quad (8)$$

Let us now observe the initial image matrix $\mathbf{z}$ that contains the available measurements and zero values at the positions of unavailable pixels. The elements of $\mathbf{z}$ can be defined as

$$z(n, m) = f(n, m)\,\delta(n - i)\,\delta(m - j), \quad (9)$$

where $\delta(n)$ is a unit delta function. The gradient method uses the basic concept of the steepest descent method. It treats only the missing pixels, such that their initial zero values are varied in both directions for a certain constant $\Delta$. Then the $\ell_1$ norm is applied in the transform domain to measure the level of sparsity and to calculate the gradient, which is also used to update the values of pixels through the iterations. In that sense, we may observe the matrix $\mathbf{Z}$ comprising $N$ vectors $\mathbf{z}$ formed by the elements $z(n, m)$:

$$\mathbf{Z} = [\mathbf{z}\ \ \mathbf{z}\ \cdots\ \mathbf{z}]. \quad (10)$$

In order to model the process of missing pixel variations for $\pm\Delta$, we can define the two matrices

$$\mathbf{Z}_k^{+} = [\mathbf{z}_k^{+1}\ \mathbf{z}_k^{+2}\ \cdots\ \mathbf{z}_k^{+N}] = \mathbf{Z}_k + \Delta, \qquad \mathbf{Z}_k^{-} = [\mathbf{z}_k^{-1}\ \mathbf{z}_k^{-2}\ \cdots\ \mathbf{z}_k^{-N}] = \mathbf{Z}_k - \Delta, \quad (11)$$

where $k$ denotes the number of iterations. The initial value of $\Delta$ can be set as $\Delta = \max(|\mathbf{z}|)$. The previous matrices can be written in an expanded form:

$$\mathbf{Z}_k^{\pm} = \mathbf{Z}_k \pm \Delta = [\mathbf{z}_k\ \mathbf{z}_k\ \cdots\ \mathbf{z}_k] \pm \Delta\,\mathrm{diag}\left[\delta(n - i)\,\delta(m - j)\right], \quad (12)$$

where

$$\mathbf{Z}_k = \begin{bmatrix} z_k(1, 1) & \cdots & z_k(1, 1) \\ z_k(1, 2) & \cdots & z_k(1, 2) \\ \vdots & & \vdots \\ z_k(N, M) & \cdots & z_k(N, M) \end{bmatrix}, \quad (13)$$

while

$$\mathrm{diag}\left[\delta(n - i)\,\delta(m - j)\right] = \begin{bmatrix} \delta(1 - i_1)\,\delta(1 - j_1) & 0 & \cdots & 0 \\ 0 & \delta(1 - i_1)\,\delta(2 - j_2) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \delta(N - n_N)\,\delta(M - m_M) \end{bmatrix}. \quad (14)$$


Based on the matrices $\mathbf{Z}_k^{+}$ and $\mathbf{Z}_k^{-}$, the gradient vector $\mathbf{G}$ is calculated as

$$\mathbf{G}_k = \frac{1}{2\Delta}\left[\,\left\|\mathrm{DCT2D}\{\mathbf{Z}_k^{+}\}\right\|_1^{\mathrm{col}} - \left\|\mathrm{DCT2D}\{\mathbf{Z}_k^{-}\}\right\|_1^{\mathrm{col}}\,\right], \quad (15)$$

where $\mathrm{DCT2D}\{\cdot\}$ is the 2D DCT calculated over the columns of $\mathbf{Z}_k^{+}$ and $\mathbf{Z}_k^{-}$, while $\|\cdot\|_1^{\mathrm{col}}$ denotes the $\ell_1$ norm calculated over columns. In the final step, the pixel values are updated as follows:

$$\mathbf{z}_{k+1} = \mathbf{z}_k + \mathbf{G}(2\Delta). \quad (16)$$

The gradient is generally proportional to the error between the exact image block $f$ and its estimate $z$. Therefore, the missing values will converge to the true signal values. In order to obtain a high level of reconstruction precision, the step $\Delta$ is decreased when the algorithm convergence slows down. Namely, when the pixel values are very close to the exact values, the gradient will oscillate around the exact value, and the step size should be reduced, for example, $\Delta = \Delta/3$. The stopping criterion can be set using the desired reconstruction accuracy $\varepsilon$, expressed in dB, as follows:

$$\mathrm{MSE} = 10\log_{10}\frac{\sum\left|z_{p} - z_{p-1}\right|^2}{\sum\left|z_{p-1}\right|^2} < \varepsilon, \quad (17)$$

where the Mean Square Error (MSE) is calculated as the difference between the reconstructed signals before and after reducing the step $\Delta$. Here we use the precision $\varepsilon = -100$ dB. The same procedure is repeated for each image block, resulting in the reconstructed image $y$. The computational complexity of the reconstruction algorithm is analyzed in detail in light of a possible implementation of the algorithm in systems (like FPGA) with limited computational resources. The 2D DCT of size $M \times M$ is observed, with $M$ being a power of 2; particularly, $M = 16$ is used here. Hence, for each observed image block, the total number of additions is $(M - M_A)^2\left[(3M/2)(\log_2(M) - 1) + 2\right]^2 + 4$, where $M_A$ denotes the available samples, while the total number of multiplications is $(M - M_A)^2\left[M\log_2(M) - 3M/2 + 4\right]^2 + 7$. Note that the most demanding operation is the DCT calculation, which can be done using a fast algorithm with $(3M/2)(\log_2(M) - 1) + 2$ additions and $M\log_2(M) - 3M/2 + 4$ multiplications. These numbers of operations are squared for the considered 2D signal case.
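The block-wise procedure of Section 2.1.1 can be prototyped as follows. This is our simplified reimplementation under stated assumptions (per-pixel finite differences of the ℓ1 norm of the 2D DCT, a steepest descent update, and a heuristic rule for shrinking Δ); it follows the spirit of (9)–(17) rather than reproducing the authors' exact code.

```python
# Simplified gradient-based reconstruction of one image block with missing pixels.
import numpy as np
from scipy.fft import dctn

def reconstruct_block(block, mask, eps_db=-100.0, max_iter=500):
    """block: 2D array with zeros at missing pixels; mask: True where a pixel is known."""
    z = block.astype(float)
    missing = ~mask
    if not missing.any():
        return z
    delta = max(float(np.abs(z).max()), 1e-6)   # initial step, Delta = max(|z|)
    prev = z.copy()
    for _ in range(max_iter):
        grad = np.zeros_like(z)
        for n, m in zip(*np.nonzero(missing)):
            z_plus, z_minus = z.copy(), z.copy()
            z_plus[n, m] += delta
            z_minus[n, m] -= delta
            # finite-difference estimate of the l1-norm gradient in the DCT domain
            grad[n, m] = (np.abs(dctn(z_plus, norm="ortho")).sum()
                          - np.abs(dctn(z_minus, norm="ortho")).sum()) / (2.0 * delta)
        z = z - grad                    # steepest descent on the sparsity measure
        z[mask] = block[mask]           # available pixels are never modified
        change = np.sum((z - prev) ** 2) / (np.sum(prev ** 2) + 1e-12)
        prev = z.copy()
        if change < 1e-6:               # convergence at this step size slows down
            if 10.0 * np.log10(change + 1e-30) < eps_db:
                break                   # stopping rule in the spirit of (17)
            delta /= 3.0                # refine the step, as suggested in the paper
    return z
```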

The performance of the CS reconstruction algorithm can be seen in Figure 2 for three different numbers of compressed measurements (compared to the original image dimensionality): 80% of measurements, 50% of measurements, and 20% of measurements. Consequently, we may define the corresponding image degradation levels as 20% degradation, 50% degradation, and 80% degradation.

2.2. Suspicious Object Detection Algorithm. Figure 1 includes the general overview of the proposed image processing algorithm. The block diagram implicitly suggests a three-stage operation: the first stage, the preprocessing stage, is represented by the top left part of the diagram; the second stage, the segmentation stage, is represented by the lower left part of the diagram; whereas the third stage, the decision-making stage, is represented by the right part of the block diagram. It should be noted that the algorithm has been deployed for the Croatian Mountain Rescue Service for several months as a field assistance tool.

2.2.1. Image Preprocessing. The preprocessing stage comprises two parts. At the start, images are converted from the original RGB to the YCbCr color format. The traditional RGB color format is not convenient for computer vision applications due to the high correlation between its color components. Next, the blue-difference (Cb) and red-difference (Cr) color components are denoised by applying a 3 × 3 median filter. The image is then divided into nonoverlapping subimages, which are subsequently fed to the segmentation module for further processing. The number of subimages was set to 64, since the number 8 divides both image height and width without residue and ensures nonoverlapping.
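A possible implementation of this stage is sketched below; the BT.601 conversion coefficients and the SciPy median filter are our assumptions, since the paper only specifies the RGB to YCbCr conversion, a 3 × 3 median filter on Cb/Cr, and an 8 × 8 grid of nonoverlapping subimages.

```python
# Sketch of the preprocessing stage: chroma extraction, denoising, and tiling.
import numpy as np
from scipy.ndimage import median_filter

def preprocess(rgb, grid=8):
    """rgb: float image in [0, 1], shape (H, W, 3); returns a list of (Cb, Cr) subimages."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5   # blue-difference component
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5    # red-difference component
    cb = median_filter(cb, size=3)                       # 3x3 median denoising
    cr = median_filter(cr, size=3)
    h, w = cb.shape
    hs, ws = h // grid, w // grid                        # 8 divides height and width
    return [(cb[i*hs:(i+1)*hs, j*ws:(j+1)*ws],
             cr[i*hs:(i+1)*hs, j*ws:(j+1)*ws])
            for i in range(grid) for j in range(grid)]
```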

2.2.2. Segmentation. The segmentation stage comprises two steps. Each subimage is segmented using the mean shift clustering algorithm [21]. Mean shift is an extremely versatile nonparametric iterative procedure for feature space analysis. When used for color image segmentation, the image data is mapped into the feature space, resulting in a cluster pattern. The clusters correspond to the significant features in the image, namely, dominant colors. Using the mean shift algorithm, these clusters can be located, and the dominant colors can therefore be extracted from the image to be used for segmentation.

The clusters are located by applying a search window in the feature space which shifts towards the cluster center. The magnitude and the direction of the shift in the feature space are based on the difference between the window center and the local mean value inside the window. For $n$ data points $x_i$, $i = 1, 2, \ldots, n$, in the $d$-dimensional space $R^d$, the shift is defined as

$$m_h(x) = \frac{\sum_{i=1}^{n} x_i\, g\left(\left\|(x - x_i)/h\right\|^2\right)}{\sum_{i=1}^{n} g\left(\left\|(x - x_i)/h\right\|^2\right)} - x, \quad (18)$$

where $g(x)$ is the kernel, $h$ is a bandwidth parameter which is set to the value 45 (determined experimentally using different terrain-image datasets), $x$ is the center of the kernel (window), and $x_i$ is the element inside the kernel. When the magnitude of the shift becomes small according to the given threshold, the center of the search window is declared a cluster center, and the algorithm is said to have converged for one cluster. This procedure is repeated until all significant clusters have been identified.
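The iteration in (18) can be illustrated with a compact sketch on (Cb, Cr) feature vectors; the Gaussian kernel profile and the stopping tolerance are assumptions of this sketch, while the default bandwidth follows the value 45 quoted above.

```python
# One mean shift mode search: the window center moves by the shift m_h(x) of (18).
import numpy as np

def mean_shift_mode(points, start, h=45.0, tol=1e-3, max_iter=100):
    """points: (n, d) feature vectors; start: (d,) initial window center."""
    x = np.asarray(start, dtype=float).copy()
    for _ in range(max_iter):
        w = np.exp(-np.sum(((points - x) / h) ** 2, axis=1))   # g(||(x - x_i)/h||^2)
        shift = (w[:, None] * points).sum(axis=0) / w.sum() - x
        x += shift
        if np.linalg.norm(shift) < tol:   # small shift: declare a cluster center
            break
    return x
```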

Once all the subimages have been clustered, a global cluster matrix K is formed by merging the resulting cluster matrices obtained during subimage segmentation. This new matrix K is clustered again, using cluster centers (i.e., the mean Cb and Cr of each cluster) from the previous step instead of subimage pixel values as input points. This two-step approach is introduced in order to speed up segmentation


Figure 2: Example of the compressive sensing image reconstruction algorithm's performance on one of the images (10) from the used dataset, showing the degraded and reconstructed images for the 20%, 50%, and 80% degradation levels next to the original. Detection results are also indicated: green squares represent correct detections (in comparison to the original image), red squares represent FNs, and orange squares represent FPs.

while keeping almost the same performance. It assures that the number of points for data clustering stays reasonably low in both steps: the number of pixels in the subimages is naturally $T$ times smaller than the number of pixels in the original image, and the number of cluster centers used in the second step is even smaller than the number of pixels in the first step.

The output of the segmentation stage is a set of candidate regions, each representing a cluster of similarly colored pixels.

The bulk of the computational complexity of the segmentation step is due to this cluster search procedure and is equal to $O(N_{Xsub} \times N_{YSub})$, where $N_{Xsub}$ is the number of pixels along the $X$ axis in the subimage, while $N_{YSub}$ is the number of pixels


along the $Y$ axis in the subimage. All subsequent steps (including the decision-making step) are only concerned with detected clusters, making their computational complexity negligible compared to the complexity of the mean shift algorithm in this step.

2.2.3. Decision-Making. The decision-making stage comprises five steps. In the first step, large candidate regions are eliminated from subsequent processing. The elimination threshold value $N_{th}$ is determined based on the image height, the image width, and the estimated distance between the camera and the observed surface. The premise here is that if such regions were to represent a person, it would mean that the actual person is standing too close to the camera, making the search trivial.

The second step is to remove all those areas inside particular candidate regions that contain only a handful of pixels. In this way, the residual noise presented by some scattered pixels left after median filtering is efficiently eliminated. Then, in the third step, the resulting image is dilated by applying a 5 × 5 mask. This is done to increase the size of the connected pixel areas inside candidate regions, so that similar nearby segments can be merged together.

In the next step, the segments belonging to a cluster with more than three spatially separated areas are excluded from the resulting set of candidate segments, under the assumption that the image would not contain more than three suspicious objects of the same color. Finally, all the remaining segments that were not eliminated in any of the previous four steps are singled out as potential targets.

More formally, the decision-making procedure can be written as follows. An image $I$ consists of a set of clusters $C_i$ obtained by grouping image pixels using only the color values of the chosen color model as feature vector elements:

$$I = \bigcup_{i=1}^{n} C_i, \quad C_i \cap C_j = \emptyset \ \ \forall i, j,\ i \neq j. \quad (19)$$

As mentioned before, clustering according to the similarity of feature vectors is performed using the mean shift algorithm. Each cluster $C_i$ represents a set of spatially connected-component regions, or segments, $S_{ik}$:

$$C_i = \bigcup_{k=1}^{m} S_{ik}, \quad S_{ik} \cap S_{il} = \emptyset \ \ \forall k, l,\ k \neq l. \quad (20)$$

In order to accept a segment $S_{ik}$ as a potential target, the following properties have to be satisfied:

$$p_1:\ \mathrm{Size}(C_i) < N_{\max}, \qquad p_2:\ \mathrm{Size}(S_{ik}) > N_{\min}, \qquad p_3:\ m \le N_A, \quad (21)$$

where $N_{\max}$ and $N_{\min}$ are chosen threshold values, $m$ is the total number of segments within a given cluster, and $N_A$ denotes the maximum allowed number of candidate segments belonging to the same cluster. For our application, $N_{\min}$, $N_{\max}$, and $N_A$ are set to 10, 38000, and 3, respectively. Please note that the $N_{\max}$ value represents 0.317% of the total pixels in the image and was determined empirically (it encompasses some redundancy, i.e., objects of interest are rarely that large).
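A hedged sketch of the acceptance rules in (21), using connected-component labeling for the spatially separated areas within a cluster (our reimplementation, not the deployed code), is given below.

```python
# Apply rules p1-p3: cluster not too large, segment not tiny, at most N_A segments.
import numpy as np
from scipy.ndimage import label

def select_candidates(cluster_map, n_min=10, n_max=38000, n_a=3):
    """cluster_map: 2D integer array of cluster ids (0 = background)."""
    candidates = []
    for cid in np.unique(cluster_map):
        if cid == 0:
            continue
        cluster_mask = cluster_map == cid
        if cluster_mask.sum() >= n_max:        # p1: candidate region too large
            continue
        segments, m = label(cluster_mask)      # spatially connected areas
        if m > n_a:                            # p3: too many separate segments
            continue
        for k in range(1, m + 1):
            seg = segments == k
            if seg.sum() > n_min:              # p2: ignore residual single-pixel areas
                candidates.append(seg)
    return candidates
```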

2.3. Performance Metrics

2.3.1. Image Quality Metric. Several image quality metrics are introduced and used in the experiments in order to give a better overview of the obtained results, as well as more weight to the results. It should also be noted that it is not our intention to make conclusions about the appropriateness of a particular metric or to make their direct comparison, but rather to make the results more accessible to a wider audience.

The Structural Similarity Index (SSI) [22, 23] is inspired by the human visual system, which is highly accustomed to extracting and processing structural information from images. It detects and evaluates structural changes between two signals (images): the reference ($x$) and the reconstructed ($y$) one. This makes SSI very consistent with human visual perception. The obtained SSI values are in the range of 0 to 1, where 0 corresponds to the lowest quality image (compared to the original) and 1 to the best quality image (which only happens for exactly the same image). SSI is calculated on small patches taken from the same locations of the two images. It encompasses three similarity terms between two images: (1) similarity of local luminance/brightness, $l(x, y)$; (2) similarity of local contrast, $c(x, y)$; and (3) similarity of local structure, $s(x, y)$. Formally, it is defined by

$$\mathrm{SSI}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y) = \left(\frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}\right) \cdot \left(\frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}\right) \cdot \left(\frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}\right), \quad (22)$$

where $\mu_x$, $\mu_y$ are the local sample means of the $x$ and $y$ images, $\sigma_x$, $\sigma_y$ are the local sample standard deviations of the $x$ and $y$ images, $\sigma_{xy}$ is the local sample cross-correlation of the $x$ and $y$ images after removing their respective means, and $C_1$, $C_2$, and $C_3$ are small positive constants used for numerical stability and robustness of the metric.

It can be applied to both color and grayscale images, but for simplicity and without loss of generality it was applied to normalized grayscale images in this paper. It should be noted that SSI is widely used in practice for predicting the perceived quality of digital television and cinematic pictures, but its performance is sometimes disputed in comparison to MSE [24]. It is used as the main performance metric in the experiment due to its simple interpretation and its wide usage for image quality assessment.

Peak signal to noise ratio (PSNR) [23] was used as a part of the auxiliary metric set for image quality assessment, which also included MSE and the $\ell_2$ norm. Missing pixels can, in a sense, be considered a salt-and-pepper type of noise, and thus the use of PSNR makes sense, since it is defined as the ratio between the maximum possible power of a signal and the power of the noise corrupting the signal. A larger PSNR indicates a better quality reconstructed image and vice versa. PSNR does not have


a limited range, as is the case with SSI. It is expressed in units of dB and defined by

$$\mathrm{PSNR}(x, y) = 10\log_{10}\left(\frac{\mathrm{maxValue}^2}{\mathrm{MSE}}\right), \quad (23)$$

where maxValue is the maximum range of the pixel, which in a normalized grayscale image is 1, and MSE is the Mean Square Error between $x$ (the reference image) and $y$ (the reconstructed image), defined as

$$\mathrm{MSE}(x, y) = \frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2, \quad (24)$$

where $N$ is the number of pixels in the $x$ or $y$ image (the sizes of both images have to be the same) and $x_i$ and $y_i$ are the normalized values of the $i$th pixel in the $x$ and $y$ image, respectively. MSE is the dominant quantitative performance metric for the assessment of signal quality in the field of signal processing. It is simple to use and interpret, has a clear physical meaning, and is a desirable metric within the statistics and estimation framework. However, its performance has been criticized in dealing with perceptual signals such as images [25]. This is mainly due to the fact that the implicit assumptions related to MSE are in general not met in the context of visual perception. However, it is still often used in the literature when reporting performance in image reconstruction, and thus we include it here for comparison purposes. Larger MSE values indicate lower quality images (compared to the reference one), while smaller values indicate a better quality image. The MSE value range is not limited, unlike that of SSI.

The final metric used in the paper is the $\ell_2$ metric. It is also called the Euclidean distance, and in this work we applied it to the color images. This means that the $\ell_2$ metric represents the Euclidean distance between two points in RGB space: the $i$th pixel in the original image and the corresponding pixel in the reconstructed image. It is expressed over all pixels in the image and is defined as

$$\ell_2(x, y) = \sum_{i=1}^{N}\sqrt{(R_{xi} - R_{yi})^2 + (G_{xi} - G_{yi})^2 + (B_{xi} - B_{yi})^2}, \quad (25)$$

where $N$ is the number of pixels in the $x$ or $y$ image (the sizes of both images have to be the same) for all color channels, $R_{xi}$, $R_{yi}$ are the normalized red color channel values (0-1) of the $i$th pixel, $G_{xi}$, $G_{yi}$ are the normalized green color channel values (0-1) of the $i$th pixel, and $B_{xi}$, $B_{yi}$ are the normalized blue color channel values (0-1) of the $i$th pixel. The larger the value of the $\ell_2$ metric is, the more difference there is between the two images. The $\ell_2$ norm metric is mainly used in image similarity analysis, although there are some situations in which it has been shown that the $\ell_1$ metric can be considered a proper choice as well [26].
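For reference, the metrics (23)–(25) can be computed directly, and a simplified single-window version of (22) is included as well; the stabilizing constants are the conventional choices for images normalized to [0, 1] (an assumption), and the published SSI is evaluated over local patches rather than globally.

```python
# Straightforward implementations of the image quality metrics used above.
import numpy as np

def mse(x, y):
    return np.mean((x - y) ** 2)                               # per (24)

def psnr(x, y, max_value=1.0):
    return 10.0 * np.log10(max_value ** 2 / mse(x, y))         # per (23)

def l2_color(x, y):
    # sum over pixels of the Euclidean distance in normalized RGB space, per (25)
    return np.sqrt(((x - y) ** 2).sum(axis=-1)).sum()

def ssi_global(x, y, c1=1e-4, c2=9e-4, c3=4.5e-4):
    # single-window approximation of (22); the paper evaluates local patches
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
            * (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)
            * (sxy + c3) / (sx * sy + c3))
```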

2.3.2. Detection Quality Metric. The performance of the image processing algorithm for detecting suspicious objects in images for different levels of missing pixels was evaluated

in terms of precision and recall. The standard definitions of these two measures are given by the following equations:

$$\mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \qquad \mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \quad (26)$$

where TP denotes the number of true positives, FP denotes the number of false positives, and FN denotes the number of false negatives. It should be noted that all of these parameters were determined by checking whether or not a matching segment (object) had been found in the original image, without checking if it actually represents a person or an object of interest. More on the accuracy (in terms of recall and precision) of the presented algorithm can be found in Section 3.3.1, where a comparison with human performance on original images was made.

When making conclusions based on the presented recall and precision values, it should be kept in mind that the system is not envisaged as a standalone tool but as a cooperative tool for human operators, aimed at reducing their workload. Thus, FPs do not have a high price, since a human operator can check and dismiss a false alarm. More costly are FNs, since they can potentially mislead the operator.
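Once detections have been matched against those obtained on the original image, the counts plug directly into (26), as in this small helper.

```python
# Recall and precision per (26), guarded against empty denominators.
def recall_precision(tp, fp, fn):
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision
```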

3. Results and Discussion

3.1. Database. For the experiment we used 16 images of 4K resolution (2992 × 4000 pixels) obtained on 3 occasions with a DJI Phantom 3's gyro-stabilized camera. The camera's plane was parallel with the ground and, ideally, the UAV was at a height of 50 m (although this was not always the case, as will be explained later on in Section 3.3). All images were taken in the coastal area of Croatia, in which search and rescue operations often take place. Images 1–7 were taken during Croatian Mountain Rescue Service search and rescue drills (Set 1), images 8–12 were taken during our mockup testing (Set 2), while images 13–16 were taken during an actual search and rescue operation (Set 3).

All images were firstly intentionally degraded to the desired level (ranging between 20% and 80%) in a manner that random pixels were set to white (i.e., they were missing). Images were then reconstructed using the CS approach and tested for object detection performance.
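The degradation procedure can be reproduced along these lines (a sketch: a chosen fraction of randomly selected pixels is set to white, and the mask marks what the CS reconstruction has to fill in).

```python
# Randomly set a fraction of pixels to white to simulate missing samples.
import numpy as np

def degrade(image, level=0.5, rng=None):
    """image: float RGB in [0, 1]; level: fraction of pixels set to white (missing)."""
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    drop = rng.random(image.shape[:2]) < level
    out[drop] = 1.0            # white marks the missing pixels
    return out, drop           # drop is the mask of pixels to reconstruct
```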

3.2. Image Reconstruction. First, we explore how well the CS based reconstruction algorithm performs in terms of image quality metrics. Metrics were calculated for each of the images for two cases: (a) when random missing samples were introduced to the original image (i.e., for the degraded image) and (b) when the original image was reconstructed using the gradient based algorithm. For both cases, the original unaltered image was taken as a reference. The obtained results for all metrics can be found in Figure 3 in the form of enhanced Box plots.

As can be seen from Figure 3(a), the image quality before reconstruction (as measured by SSI) is somewhat low, with the mean value in the range of 0.033 to 0.127, and it is significantly


Figure 3: Image quality metrics for all degradation levels (20%–80%, over all images) before and after reconstruction: (a) SSI, (b) PSNR (dB), (c) MSE, and (d) ℓ2 norm. Red color represents the metric for degraded images, while blue represents postreconstruction values. Black dots represent mean values.

increased after reconstruction, having mean values in the range 0.666 to 0.984. The same trend can be observed for all other quality metrics: PSNR (3.797 dB to 9.838 dB range before and 23.973 dB to 38.100 dB range after reconstruction), MSE (0.107 to 0.428 range before and 1.629e-4 to 0.004 range after reconstruction), and ℓ2 norm (1.943e+3 to 3.896e+3 range before and 75.654 to 384.180 range after reconstruction). Please note that for some cases (like in Figures 3(c) and 3(d)) the distribution for a particular condition is very tight, making its graphical representation quite small. The nonparametric Kruskal-Wallis test [27] was conducted on all metrics for the postreconstruction case in order to determine the statistical significance of the results, with degradation level as the independent variable. Statistical significance was detected in all cases, with the following values: SSI (χ²(6) = 98.34, p < 0.05), PSNR (χ²(6) = 101.17, p < 0.05), MSE (χ²(6) = 101.17, p < 0.05), and ℓ2 norm (χ²(6) = 101.17, p < 0.05). Tukey's honestly significant difference (THSD) post hoc tests (corrected for multiple comparison) were performed, which revealed some interesting patterns. For all metrics, statistical difference was present only for those cases that had at least a 30% degradation difference between them (i.e., the 50% cases were statistically different only from the 20% and 80% cases; please see Figure 2 for visual comparison). We believe this goes towards demonstrating the quality of the obtained reconstruction (in terms of the presented metrics); that is, there is no statistical difference between the 20% and 40% missing sample cases (although their means are different, in favor of the 20% case). It should also be noted that even in the cases of 70% or 80% of pixels missing, the reconstructed image was subjectively good enough (please see Figure 2) that its content could be recognized by the viewer (with SSI means of 0.778 and 0.666, resp.). However, it should be noted that in cases of such high image degradation (especially the 80% case), reconstructed images appeared somewhat smudged (pastel-like effect), with some details (subjectively) lost.
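The omnibus test reported above can be reproduced along the following lines; the values below are placeholders rather than the study data, and a Tukey-type post hoc comparison (e.g., scipy.stats.tukey_hsd in recent SciPy versions) would follow the significant omnibus result.

```python
# Kruskal-Wallis test on a post-reconstruction metric, grouped by degradation level.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
levels = [20, 30, 40, 50, 60, 70, 80]
# placeholder SSI values per degradation level (16 images each), for illustration only
ssi_by_level = [1.0 - 0.004 * lvl + 0.02 * rng.standard_normal(16) for lvl in levels]
stat, p = kruskal(*ssi_by_level)
print(f"chi2({len(levels) - 1}) = {stat:.2f}, p = {p:.4f}")
```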

It is interesting to explore how much gain was obtained with the CS reconstruction. This is depicted in Figure 4 for all metrics. From the presented figures it can be seen that the obtained improvement is significant, while its absolute value varies, as detected by all performance metrics. The improvement range is between 3.730 and 81.637 for SSI, between 3.268 and 9.959 for PSNR, between 9.612e-4 and 0.0205 for MSE, and between 0.029 and 0.143 for the ℓ2 norm. From this, a general trend can be observed: the improvement gain increases with the degradation level. There are a couple of exceptions to this general rule: for SSI, in 12 out of 16 images the gain for the 70% case is larger than for the 80% case. However, this phenomenon is not present in the other metrics (except for the case of PSNR, images 13 and 14, where they are almost identical), which might suggest this is due to some interaction between the metric and the type of environment in the image. Also, the randomness of pixel removal when degrading the image should be considered.


Figure 4: Ratios of image quality metrics (after and before reconstruction) for all degradation levels (20%–80%), plotted per image number (1–16): (a) SSI, (b) PSNR, (c) MSE, and (d) ℓ2 norm.

Figure 4 also reveals another interesting observation that warrants further research: the reconstruction gain and performance might depend on the type of environment/scenery in the image. Taking into consideration that the dataset contains images from three distinct cases (sets), as described earlier (which all have a unique environment in them), and that their ranges are 1–7, 8–12, and 13–16, different patterns (gain amplitudes) can be seen for all metrics.

In order to detect if this observed pattern has statistical significance, the nonparametric Kruskal-Wallis test (with post hoc THSD) was performed in such a way that, for a particular degradation level, the data for the images were grouped into three categories based on which set they belong to. The obtained results revealed that for all metrics and degradation levels there exists a statistical significance (with varying levels of the p value, which we omit here for brevity) of the image set on the reconstruction improvement. Post hoc tests revealed that this difference was between image sets 2 and 3 for the SSI and PSNR metrics, while it was between sets 1 and 2/3 for the ℓ2 norm (between sets 1 and 2-3 for degradation levels 20%, 30%, and 40%, and between sets 1 and 3 for degradation levels 50%, 60%, 70%, and 80%). For the MSE metric, the difference was between sets 1 and 2 for all degradation levels, with the addition of a difference between sets 2 and 3 for the 70% and 80% cases. While all metrics do not agree on where the difference is, they clearly demonstrate that the terrain type influences the algorithm's performance, which could be used (in conjunction with terrain type classification, like the one in [28]) to estimate the expected CS reconstruction performance before deployment.

Another analysis, in line with the last one, can be made to explore whether knowing the image quality before the reconstruction can be used to infer the image quality after the reconstruction. This is depicted in Figure 5 for all quality metrics across all used test images. Figure 5 suggests that there exists a clear relationship between the quality metrics before and after reconstruction, which is to be expected. However, this relationship is nonlinear for all cases, although for the case of PSNR it could be considered (almost) linear. This relationship enables one to estimate the image quality after reconstruction and (in conjunction with the terrain type, as demonstrated before) select the optimal degradation level (in terms of reduced data load) for a particular case while maintaining the image quality


Figure 5: Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction, across all used test images: (a) SSI, (b) PSNR (dB), (c) MSE, and (d) ℓ2 norm. Presented are means for the seven degradation levels in ascending order (from left to right).

Figure 6: Relation between true positives (TPs) and false negatives (FNs), in percentages, for each degradation level (20%–80%) over all images.

at the desired level. Since pixel degradation is random, one should also expect a certain level of deviation from the presented means.

3.3. Object Detection. As stated before, due to the intended application, FNs are more expensive than FPs, and it thus makes sense to first explore the number of total FNs in all images. Figure 6 presents the number of FNs and TPs as a percentage of total detections in the original image (since they always sum up to the same value for a particular image). Please note that the reasons behind choosing detections in the original (nondegraded) images as the ground truth for calculations of FNs and TPs are explained in more detail in Section 3.3.1.

Figure 6 shows that there is a more or less constant downward trend in the TP rate; that is, FN rates increase as the degradation level increases. The exception is the 30% level. This dip in the TP rate at 30% might be considered a fluke, but it appeared in another, unrelated analysis [9] (on a completely different image set with the same CS and detection algorithms). Since the reason for this is currently unclear, it should be investigated in the future. As expected, the FN percentage was the lowest for the 20% case (23.38%) and the highest for the 80% case (46.74%). It has a value of about 35% for the other cases (except the 50% case). This might be considered a pretty large cost in a search and rescue scenario, but while making this type of conclusion it should be kept in mind that comparisons are made with respect to the algorithm's performance on the original image and not the ground truth (which is hard to establish in this type of experiment, where complete control of the environment is not possible).

Some additional insight will be gained in Section 3.3.1, where we conducted a mini user study on the original images to detect how well humans perform on the ideal (and not reconstructed) image. For completeness of presentation, Figure 7, showing a comparison of the numbers of TPs and FPs for all degradation levels, is included.

Additional insight into the algorithm's performance can be obtained by observing the recall and precision values in Figure 8,


Figure 7: Relation of the number of occurrences of false positives (FPs) and false negatives (FNs) for each degradation level (20%–80%) over all images.

Figure 8: Recall and precision as detection performance metrics for all degradation levels (20%–80%) across all images: (a) recall (%), (b) precision (%).

as defined by (26). The highest recall value (76.62%) is achieved in the case of 20% (missing samples or degradation level), followed closely by the 40% case (75.32%). As expected (based on Figure 6), there is a significant dip in the steady downward trend for the 30% case. At the same time, precision is fairly constant (around 72%) over the whole degradation range, with two peaks: for the case of 20% of missing samples (peak value 77.63%) and for 60% of missing samples (peak value 80.33%). No statistical significance (as detected by the Kruskal-Wallis test) was found for recall and precision, with the degradation level considered the independent variable (across all images).

Figure 9: Number of occurrences of false negatives (FNs) and false positives (FPs) for each image (1–16), across all degradation levels.

If individual images and their respective FP and FN rates are examined across all degradation levels, Figure 9 is obtained. Some interesting observations can be made from the figure. First, it can be seen that images 14 and 16 have an unusually large number of FPs and FNs. This should be viewed in light of the fact that these two images belong to the third dataset. This dataset was taken during a real search and rescue operation, during which the UAV operator did not position the UAV at the desired/required height (of 50 m), and strong wind gusts were also present. Also, image 1 has an unusually large number of FNs. If images 1 and 14 are removed from the calculation (since we care more about FNs and not FPs, as explained before), recall and precision values increase by up to 12% and 5%, respectively. The increase in recall is the largest for the 40% case (12%), while it is the lowest for the 50% case (5%); the increase in precision is the largest for the 80% case (5%) and the smallest for the 50% case (2.5%). The second observation that can be made from Figure 9 is that there are a number of cases (2, 4, 7, 10, 11, and 12) where the algorithm's detection performs really well, with a cumulative number of occurrences of FPs and FNs around 5. In the case of image 4, it performed flawlessly (in the image there was one target that was correctly detected at all degradation levels), without any FP or FN. Note that no FNs were present in images 2 and 7.

3.3.1. User Study. While we maintain that, for the evaluation of the compressive sensing image reconstruction's performance, comparison of detection rates in the original nondegraded image (as ground truth) and the reconstructed images is the best choice, the baseline performance of the detection algorithm is presented here for completeness. However, determining the baseline performance (absolute ground truth) proved to be nontrivial, since environmental images from the wild (where search and rescue usually takes place) are hard to control and usually there are unintended objects in the frame (e.g., animals or garbage). Thus, we opted for a 10-subject mini-study in which the decision whether there is an object in the image was made by majority vote; that is, if 5 or more people detected something in a certain place of the image, then it would be considered an object. Subjects were from the faculty


and student population and did not see the images before, nor had they ever participated in search and rescue via image analysis. Subjects were instructed to find people, cars, animals, cloth, bags, or similar things in the image, combining speed and accuracy (i.e., not emphasizing either). They were seated in front of a 23.6-inch LED monitor (Philips 247E) on which they inspected the images and were allowed to zoom in and out of the image as they felt necessary. Images were randomly presented to the subjects to avoid undesired (learning/fatigue) effects.

On average, it took a subject 638.3 s (10 minutes and 38.3 seconds) to go over all images. In order to demonstrate intersubject variability and how demanding the task was, we analyzed precision and recall for the results from the test subject study. For example, if 6 out of 10 subjects marked a part of an image as an object, this meant that there were 4 FNs (i.e., 4 subjects did not detect that object). On the other hand, if 4 test subjects marked an area in an image (and since it did not pass the threshold in the majority voting process), it would be counted as 4 FPs. Analysis conducted in such a manner yielded a recall of 82.22% and a precision of 93.63%. Here again, two images (15 and 16) accounted for more than 50% of the FNs. It should be noted that these results cannot be directly compared to the proposed algorithm, since they rely on significant human intervention.

4. Conclusions

In the paper, gradient based compressive sensing is presented and applied to images acquired from a UAV during search and rescue operations/drills in order to reduce the network/transmission burden. The quality of the CS reconstruction is analyzed, as well as its impact on object detection algorithms in use for search and rescue. All introduced quality metrics showed significant improvement, with varying ratios depending on the degradation level as well as on the type of terrain/environment depicted in a particular image. The dependency of reconstruction quality on terrain type is interesting and opens up the possibility of including terrain type detection algorithms, since then the reconstruction quality could be inferred in advance and an appropriate degradation level (i.e., in smart sensors) selected. Dependency on the quality of the degraded image is also demonstrated. Reconstructed images showed good performance (with varying recall and precision parameter values) within the object detection algorithm, although a slightly higher false negative rate (whose cost is high in search applications) is present. However, there were a few images in the dataset on which the algorithm performed either flawlessly, with no false negatives, or with only a few false positives, whose cost is not big in the current application setup with a human operator checking raised alarms. Of interest is the slight peak in performance at the 30% degradation level (compared to 40% and the general downward trend); this peak was detected in an earlier study on a completely different image set, making a chance finding unlikely. Currently, no explanation for the phenomenon can be provided, and it warrants future research. Nevertheless, we believe that the obtained results are promising (especially in light of the results of the mini user study) and require further research, especially on the detection side. For example, the algorithm could be augmented with a terrain recognition algorithm, which could give cues about terrain type to both the reconstruction algorithm (adjusting the degradation level while keeping the desired level of quality for reconstructed images) and the detection algorithm, augmented with an automatic selection procedure for some parameters like the mean shift bandwidth (for performance optimization). Additionally, automatic threshold estimation using image size and UAV altitude could be used for adaptive configuration of the detection algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] UK Ministry of Defense, "Military search and rescue quarterly statistics: 2015 quarter 1," Statistical Report, 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425819/SAR_Quarter1_2015_Report.pdf.

[2] T. W. Heggie and M. E. Amundson, "Dead men walking: search and rescue in US National Parks," Wilderness and Environmental Medicine, vol. 20, no. 3, pp. 244–249, 2009.

[3] M. Superina and K. Pogacic, "Frequency of the Croatian mountain rescue service involvement in searching for missing persons," Police and Security, vol. 16, no. 3-4, pp. 235–256, 2008 (Croatian).

[4] J. Sokalski, T. P. Breckon, and I. Cowling, "Automatic salient object detection in UAV imagery," in Proceedings of the 25th International Conference on Unmanned Air Vehicle Systems, pp. 11.1–11.12, April 2010.

[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138–145, 2010.

[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142–147, Canterbury, UK, September 2010.

[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in International Society for Optical Engineering, Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, May 2006.

[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223–240, Vienna, Austria, September 2014.

[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739–1743, 2016.

[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.

[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.

[13] M. F. Duarte, M. A. Davenport, D. Takbar et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008.

[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093–2106, 2015.

[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41–43, 2016.

[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497–500, Belgrade, Serbia, November 2014.

[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690–694, Madison, Wis, USA, August 2007.

[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392–408, 2010.

[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1–92, Walter de Gruyter, Berlin, Germany, 2010.

[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707–737, 2013.

[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.

[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366–2369, Istanbul, Turkey, August 2010.

[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81–91, 2011.

[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?" IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.

[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.

[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583–621, 1952.

[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, Minneapolis, Minn, USA, June 2007.




[17] G Reeves and M Gastpar ldquoDifferences between observationand sampling error in sparse signal reconstructionrdquo in Proceed-ings of the IEEESP 14thWorkshop on Statistical Signal Processing(SSP rsquo07) pp 690ndash694 Madison Wis USA August 2007

[18] R E Carrillo K E Barner and T C Aysal ldquoRobust samplingand reconstruction methods for sparse signals in the presenceof impulsive noiserdquo IEEE Journal on Selected Topics in SignalProcessing vol 4 no 2 pp 392ndash408 2010

[19] H Rauhut ldquoCompressive sensing and structured randommatricesrdquo in Theoretical Foundations and Numerical Methodsfor Sparse Recovery M Fornasier Ed vol 9 of Random SeriesComputational and Applied Mathematics pp 1ndash92 Walter deGruyter Berlin Germany 2010

[20] G E Pfander H Rauhut and J A Tropp ldquoThe restricted isom-etry property for time-frequency structured random matricesrdquoProbability Theory and Related Fields vol 156 no 3-4 pp 707ndash737 2013

[21] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[22] ZWang A C Bovik H R Sheikh and E P Simoncelli ldquoImagequality assessment from error visibility to structural similarityrdquoIEEE Transactions on Image Processing vol 13 no 4 pp 600ndash612 2004

[23] A Hore and D Ziou ldquoImage quality metrics PSNR vs SSIMrdquoin Proceedings of the 20th International Conference on PatternRecognition (ICPR rsquo10) pp 2366ndash2369 Istanbul Turkey August2010

[24] R Dosselmann and X D Yang ldquoA comprehensive assessmentof the structural similarity indexrdquo Signal Image and VideoProcessing vol 5 no 1 pp 81ndash91 2011

[25] Z Wang and A C Bovik ldquoMean squared error Love it or leaveitrdquo IEEE Signal Processing Magazine vol 26 no 1 pp 98ndash1172009

[26] R Russell and P Sinha ldquoPerceptually-based comparison ofimage similarity metricsrdquo Tech Rep Massachusetts Institute ofTechnology (MIT)mdashArtificial Intelligence Laboratory 2001

[27] W H Kruskal and W A Wallis ldquoUse of ranks in one-criterion variance analysisrdquo Journal of the American StatisticalAssociation vol 47 no 260 pp 583ndash621 1952

[28] A Angelova L Matthies D Helmick and P Perona ldquoFastterrain classification using variable-length representation forautonomous navigationrdquo in Proceedings of the IEEE ComputerSociety Conference on Computer Vision and Pattern Recognition(CVPR rsquo07) pp 1ndash8 Minneapolis Minn USA June 2007


Figure 1. General overview of the proposed procedure. On the UAV side: image acquisition and image data reduction, followed by transmission over a high-speed link. In the surveillance center: CS image reconstruction, then preprocessing (transform image to YCbCr space, apply median filter, split image into subimages), segmentation (subimages mean shift clustering, compose global cluster matrix, cluster global cluster matrix), and decision-making (eliminate large candidate regions, erase single-pixel areas, merge nearby segments, ignore clusters with multiple segments, outline the results for the end user).

The compressive sensing framework requires that the measurements are incoherent and that the signal has a sparse representation in a certain transform domain.

Consider a signal in $\mathbb{R}^N$ that can be represented using the orthonormal basis vectors $\{\Psi_i\}_{i=1}^{N}$. The desired signal $\mathbf{x}$ can then be observed in terms of the transform domain basis functions [10, 11]:

$\mathbf{x} = \sum_{i=1}^{N} X_i \Psi_i$,  (1)

or equivalently in the matrix form

$\mathbf{x} = \Psi\mathbf{X}$,  (2)

where the transform domain matrix is $\Psi := [\psi_1 \,|\, \psi_2 \,|\, \cdots \,|\, \psi_N]$, while $\mathbf{X}$ represents the transform domain vector. In the case of sparse signals, the vector $\mathbf{X}$ contains only $K \ll N$ significant components, while the others are zero or negligible. Then we may say that $\mathbf{X}$ is a $K$-sparse representation of $\mathbf{x}$. Instead of measuring the entire signal $\mathbf{x}$, we can measure only a small set of $M < N$ random samples of $\mathbf{x}$ using a linear measurement matrix $\Phi := [\phi_1 \,|\, \phi_2 \,|\, \cdots \,|\, \phi_N]$. The vector of measurements $\mathbf{y}$ can now be written as follows:

$\mathbf{y} = \Phi\mathbf{x} = \Phi\Psi\mathbf{X} = \Theta\mathbf{X}$,  (3)

where $\Theta = \Phi\Psi$ is of size $M \times N$.

The main challenge behind compressive sensing is to design a measurement matrix $\Phi$ which can provide exact and unique reconstruction of a $K$-sparse signal, where $M \geq K$. Here we consider a random measurement matrix $\Phi$ of size $M \times N$ ($M \ll N$) that contains only one value equal to 1, at a random position in each row (while the other values are 0). In particular, in the $i$th row the value 1 is at the position of the $i$th random measurement. Consequently, the resulting compressive sensing matrix $\Theta$ in (3) is usually called a random partial transform matrix [19, 20]. It contains partial basis functions from $\Psi$, with values at the random positions of the measurements. Note that in the case of images the two-dimensional measurements are simply rearranged into a one-dimensional vector $\mathbf{y}$, while $\Psi$ should correspond to a two-dimensional transformation.
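To make the acquisition model concrete, the sketch below (our illustration, not the authors' implementation; function and variable names are ours) simulates this measurement scheme by keeping a random subset of pixel positions of an image block, which is equivalent to applying the random partial measurement matrix described above.

```python
import numpy as np

def acquire_random_pixels(block, keep_ratio, rng=None):
    """Simulate CS acquisition: keep a random subset of pixel positions.

    block      -- 2D array (one image block, e.g. 16x16)
    keep_ratio -- fraction of pixels to measure (e.g. 0.5 for 50%)
    Returns (mask, measurements), where mask marks the measured positions.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_pix = block.size
    n_meas = int(round(keep_ratio * n_pix))
    # Random positions of the measurements (the set Omega in the paper)
    idx = rng.choice(n_pix, size=n_meas, replace=False)
    mask = np.zeros(n_pix, dtype=bool)
    mask[idx] = True
    mask = mask.reshape(block.shape)
    measurements = block[mask]          # vector y of measured pixel values
    return mask, measurements

# Example: measure 50% of a random 16x16 block
block = np.random.rand(16, 16)
mask, y = acquire_random_pixels(block, 0.5)
print(mask.sum(), "pixels measured out of", block.size)
```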

Since $M < N$, the solution is ill-posed: the number of equations is smaller than the number of unknowns. If we could determine the support of the $K$ components within $\mathbf{X}$, then the problem would be determined by the $M \geq K$ system of linear equations ($M$ equations with $K$ unknowns). Stable recovery is assured if the compressive sensing matrix $\Theta$ satisfies the restricted isometry property (RIP). However, in real situations we almost never know the signal support in $\mathbf{X}$, and the signal reconstruction needs to be performed using


certain minimization approaches. The most natural choice assumes the minimization of the $\ell_0$ norm, which is used to search for the sparsest vector $\mathbf{X}$ in the transform domain:

$\hat{\mathbf{X}} = \arg\min \|\mathbf{X}\|_0$ subject to $\Theta\mathbf{X} = \mathbf{y}$.  (4)

However, this approach is computationally unstable and represents an NP (nondeterministic polynomial-time) hard problem. Therefore, the $\ell_0$ norm minimization is replaced by the $\ell_1$ norm minimization, leading to a convex optimization problem that can be solved using linear programming:

$\hat{\mathbf{X}} = \arg\min \|\mathbf{X}\|_1$ subject to $\Theta\mathbf{X} = \mathbf{y}$.  (5)

In the sequel we consider a gradient algorithm for efficient reconstruction of images, which belongs to the group of convex optimization methods [15]. Note that the gradient based methods generally do not require the signals to be strictly sparse in a certain transform domain, and in that sense they provide significant benefits and relaxations for real-world applications. In particular, it is known that images are not sparse in any of the transform domains, but they can be considered compressible in the two-dimensional Discrete Cosine Transform (2D DCT) domain. Hence, the 2D DCT is employed to estimate the direction and value of the gradient used to update the values of missing data towards the exact solution.

2.1.1. Gradient Based Signal Reconstruction Approach. The previous minimization problem can be solved using the gradient based approach. An efficient implementation of this approach can be done on a block by block basis, where the block sizes are $N \times M$ (square blocks with $N = M = 16$ are used in the experiments). The available image measurements within the block are defined by the pixel indices

$(i, j) \in \Omega$, where $\Omega \subset \{(1, 1), (1, 2), \ldots, (N, M)\}$,  (6)

while

$\mathrm{card}\{\Omega\} = M_a N_a \ll NM$.  (7)

The original (full) image block is denoted as $f(n, m)$. The image measurements are hence defined by

$f(i, j)$ for $(n, m) = (i, j) \in \Omega$.  (8)

Let us now observe the initial image matrix $\mathbf{z}$ that contains the available measurements and zero values at the positions of unavailable pixels. The elements of $\mathbf{z}$ can be defined as

$z(n, m) = f(n, m)\,\delta(n - i)\,\delta(m - j)$,  (9)

where $\delta(n)$ is a unit delta function. The gradient method uses the basic concept of the steepest descent method. It treats only the missing pixels, such that their initial zero values are varied in both directions by a certain constant $\Delta$. Then the $\ell_1$ norm is applied in the transform domain to measure the level of sparsity and to calculate the gradient, which is also used to update the values of pixels through the iterations. In that sense we may observe the matrix $\mathbf{Z}$ comprising $N$ vectors $\mathbf{z}$ formed by the elements $z(n, m)$:

$\mathbf{Z} = [\mathbf{z}\ \mathbf{z}\ \cdots\ \mathbf{z}]$.  (10)

In order to model the process of missing pixel variations by $\pm\Delta$, we can define the two matrices

$\mathbf{Z}_k^{+} = [\mathbf{z}_k^{+1}\ \mathbf{z}_k^{+2}\ \cdots\ \mathbf{z}_k^{+N}] = \mathbf{Z}_k + \Delta$, $\quad \mathbf{Z}_k^{-} = [\mathbf{z}_k^{-1}\ \mathbf{z}_k^{-2}\ \cdots\ \mathbf{z}_k^{-N}] = \mathbf{Z}_k - \Delta$,  (11)

where $k$ denotes the number of iterations. The initial value of $\Delta$ can be set as $\Delta = \max(|\mathbf{z}|)$. The previous matrices can be written in an expanded form:

$\mathbf{Z}_k^{\pm} = \mathbf{Z}_k \pm \Delta = [\mathbf{z}_k\ \mathbf{z}_k\ \cdots\ \mathbf{z}_k] \pm \Delta\,\mathrm{diag}[\delta(n - i)\delta(m - j)]$,  (12)

where

$\mathbf{Z}_k = \begin{bmatrix} z_k(1,1) & \cdots & z_k(1,1) \\ z_k(1,2) & \cdots & z_k(1,2) \\ \vdots & & \vdots \\ z_k(N,M) & \cdots & z_k(N,M) \end{bmatrix}$,  (13)

while

$\mathrm{diag}[\delta(n - i)\delta(m - j)] = \begin{bmatrix} \delta(1 - i_1)\delta(1 - j_1) & 0 & \cdots & 0 \\ 0 & \delta(1 - i_1)\delta(2 - j_2) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \delta(N - n_N)\delta(M - m_M) \end{bmatrix}$.  (14)


Based on the matrices $\mathbf{Z}_k^{+}$ and $\mathbf{Z}_k^{-}$, the gradient vector $\mathbf{G}$ is calculated as

$\mathbf{G}_k = \frac{1}{2\Delta}\left[\|\mathrm{DCT2D}\{\mathbf{Z}_k^{+}\}\|_1^{\mathrm{col}} - \|\mathrm{DCT2D}\{\mathbf{Z}_k^{-}\}\|_1^{\mathrm{col}}\right]$,  (15)

where $\mathrm{DCT2D}\{\cdot\}$ is the 2D DCT calculated over the columns of $\mathbf{Z}_k^{+}$ and $\mathbf{Z}_k^{-}$, while $\|\cdot\|_1^{\mathrm{col}}$ denotes the $\ell_1$ norm calculated over columns. In the final step, the pixel values are updated as follows:

$\mathbf{z}_{k+1} = \mathbf{z}_k + \mathbf{G}(2\Delta)$.  (16)

The gradient is generally proportional to the error between the exact image block $f$ and its estimate $z$. Therefore, the missing values will converge to the true signal values. In order to obtain a high level of reconstruction precision, the step $\Delta$ is decreased when the algorithm convergence slows down. Namely, when the pixel values are very close to the exact values, the gradient will oscillate around the exact value, and the step size should be reduced, for example, $\Delta \leftarrow \Delta/3$. The stopping criterion can be set using the desired reconstruction accuracy $\varepsilon$, expressed in dB, as follows:

$\mathrm{MSE} = 10\log_{10}\dfrac{\sum|z_p - z_{p-1}|^2}{\sum|z_{p-1}|^2} < \varepsilon$,  (17)

where the Mean Square Error (MSE) is calculated as the difference between the reconstructed signals before and after reducing the step $\Delta$. Here we use the precision $\varepsilon = -100$ dB. The same procedure is repeated for each image block, resulting in the reconstructed image $y$. The computational complexity of the reconstruction algorithm is analyzed in detail in light of a possible implementation of the algorithm in systems (like FPGA) with limited computational resources. The 2D DCT of size $M \times M$ is observed, with $M$ being a power of 2; in particular, $M = 16$ is used here. Hence, for each observed image block the total number of additions is $(M - M_A)^2[(3M/2)(\log_2(M) - 1) + 2]^2 + 4$, where $M_A$ denotes the available samples, while the total number of multiplications is $(M - M_A)^2[M\log_2(M) - 3M/2 + 4]^2 + 7$. Note that the most demanding operation is the DCT calculation, which can be done using a fast algorithm with $(3M/2)(\log_2(M) - 1) + 2$ additions and $M\log_2(M) - 3M/2 + 4$ multiplications. These numbers of operations are squared for the considered 2D signal case.
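For illustration, a simplified Python sketch of the block-wise gradient reconstruction is given below. It is an assumption-laden rendition, not the authors' code: it uses SciPy's dctn for the 2D DCT, a plain steepest-descent update on the ℓ1 sparsity measure, and a stopping rule in the spirit of (17); the paper's exact step and sign conventions for G may differ.

```python
import numpy as np
from scipy.fft import dctn

def l1_dct(block):
    """Sparsity measure: l1 norm of the 2D DCT of the block."""
    return np.abs(dctn(block, norm='ortho')).sum()

def reconstruct_block(z0, mask, eps_db=-100.0, inner_iters=20, max_stages=40):
    """Simplified gradient-based CS reconstruction of one image block.

    z0   -- 2D block with measured pixels in place and zeros at missing positions
    mask -- boolean array, True where the pixel value is available
    """
    z = z0.astype(float).copy()
    missing = list(zip(*np.nonzero(~mask)))
    delta = np.abs(z).max()                     # initial step, as in the paper
    for _ in range(max_stages):
        z_before = z.copy()
        for _ in range(inner_iters):
            grad = np.zeros_like(z)
            for r, c in missing:
                zp, zm = z.copy(), z.copy()
                zp[r, c] += delta
                zm[r, c] -= delta
                # finite-difference estimate of the l1-sparsity gradient, cf. (15)
                grad[r, c] = (l1_dct(zp) - l1_dct(zm)) / (2.0 * delta)
            for r, c in missing:
                z[r, c] -= grad[r, c]           # move towards a sparser block
        delta /= 3.0                            # Delta <- Delta/3 when progress stalls
        # stopping rule in the spirit of (17): relative change over the last stage
        change = np.sum(np.abs(z - z_before) ** 2) / (np.sum(np.abs(z_before) ** 2) + 1e-12)
        if 10.0 * np.log10(change + 1e-30) < eps_db:
            break
    return z

# Example: reconstruct a 16x16 block with roughly 50% of its pixels missing
rng = np.random.default_rng(1)
full = rng.random((16, 16)) * 255.0
mask = rng.random((16, 16)) > 0.5
rec = reconstruct_block(np.where(mask, full, 0.0), mask)
```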

The performance of the CS reconstruction algorithm can be seen in Figure 2 for three different numbers of compressive measurements (compared to the original image dimensionality): 80% of measurements, 50% of measurements, and 20% of measurements. Consequently, we may define the corresponding image degradation levels as 20% degradation, 50% degradation, and 80% degradation.

2.2. Suspicious Object Detection Algorithm. Figure 1 includes the general overview of the proposed image processing algorithm. The block diagram implicitly suggests a three-stage operation: the first stage, preprocessing, is represented by the top left part of the diagram; the second stage, segmentation, is represented by the lower left part of the diagram; and the third stage, decision-making, is represented by the right part of the block diagram. It should be noted that the algorithm has been deployed with the Croatian Mountain Rescue Service for several months as a field assistance tool.

2.2.1. Image Preprocessing. The preprocessing stage comprises two parts. At the start, images are converted from the original RGB to the YCbCr color format. The traditional RGB color format is not convenient for computer vision applications due to the high correlation between its color components. Next, the blue-difference (Cb) and red-difference (Cr) color components are denoised by applying a 3 × 3 median filter. The image is then divided into nonoverlapping subimages, which are subsequently fed to the segmentation module for further processing. The number of subimages was set to 64, since the number 8 divides both the image height and width without a residue and ensures nonoverlapping subimages.
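A minimal sketch of this preprocessing stage, assuming OpenCV is available (the paper does not name a library; function and variable names are ours):

```python
import cv2
import numpy as np

def preprocess(image_bgr, grid=8):
    """Sketch of the preprocessing stage: YCbCr conversion, median filtering
    of the chroma channels, and splitting into grid x grid subimages."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV stores Y, Cr, Cb
    y, cr, cb = cv2.split(ycrcb)
    cb = cv2.medianBlur(cb, 3)          # 3x3 median filter on blue-difference
    cr = cv2.medianBlur(cr, 3)          # 3x3 median filter on red-difference
    h, w = y.shape
    bh, bw = h // grid, w // grid       # 8x8 grid -> 64 nonoverlapping subimages
    subimages = [
        np.dstack([cb[r*bh:(r+1)*bh, c*bw:(c+1)*bw],
                   cr[r*bh:(r+1)*bh, c*bw:(c+1)*bw]])
        for r in range(grid) for c in range(grid)
    ]
    return subimages

# Example with a synthetic image whose dimensions are divisible by 8
img = np.zeros((80, 80, 3), dtype=np.uint8)
print(len(preprocess(img)))   # -> 64
```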

2.2.2. Segmentation. The segmentation stage comprises two steps. Each subimage is segmented using the mean shift clustering algorithm [21]. Mean shift is an extremely versatile nonparametric iterative procedure for feature space analysis. When used for color image segmentation, the image data is mapped into the feature space, resulting in a cluster pattern. The clusters correspond to the significant features in the image, namely, the dominant colors. Using the mean shift algorithm, these clusters can be located, and the dominant colors can therefore be extracted from the image to be used for segmentation.

The clusters are located by applying a search window in the feature space which shifts towards the cluster center. The magnitude and the direction of the shift in feature space are based on the difference between the window center and the local mean value inside the window. For $n$ data points $x_i$, $i = 1, 2, \ldots, n$, in the $d$-dimensional space $R^d$, the shift is defined as

$m_h(x) = \dfrac{\sum_{i=1}^{n} x_i\, g\left(\|(x - x_i)/h\|^2\right)}{\sum_{i=1}^{n} g\left(\|(x - x_i)/h\|^2\right)} - x$,  (18)

where $g(x)$ is the kernel, $h$ is a bandwidth parameter which is set to the value 45 (determined experimentally using different terrain-image datasets), $x$ is the center of the kernel (window), and $x_i$ are the elements inside the kernel. When the magnitude of the shift becomes small according to the given threshold, the center of the search window is declared a cluster center, and the algorithm is said to have converged for one cluster. This procedure is repeated until all significant clusters have been identified.
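For intuition, a toy mean shift implementation of (18) with a Gaussian kernel profile is sketched below (our simplification, with our own names; practical use would rely on an optimized implementation such as the one in [21]):

```python
import numpy as np

def mean_shift_modes(points, h=45.0, tol=1e-3, max_iter=100):
    """Toy mean shift on color feature vectors (e.g. Cb-Cr pairs).

    points -- (n, d) array of feature vectors
    h      -- kernel bandwidth
    Returns one mode (cluster centre) per input point; nearby modes can then
    be merged to obtain the dominant colors.
    """
    def shift(x):
        # Gaussian kernel weights g(||(x - x_i)/h||^2)
        w = np.exp(-np.sum(((x - points) / h) ** 2, axis=1))
        return (w[:, None] * points).sum(axis=0) / w.sum() - x   # m_h(x), eq. (18)

    modes = points.astype(float)
    for i in range(len(modes)):
        x = modes[i]
        for _ in range(max_iter):
            m = shift(x)
            x = x + m
            if np.linalg.norm(m) < tol:      # shift small enough -> converged
                break
        modes[i] = x
    return modes

# Example: two synthetic color blobs in Cb-Cr space
pts = np.vstack([np.random.normal(100, 5, (50, 2)),
                 np.random.normal(180, 5, (50, 2))])
print(np.unique(np.round(mean_shift_modes(pts)), axis=0))
```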

Once all the subimages have been clustered, a global cluster matrix K is formed by merging the resulting cluster matrices obtained during subimage segmentation. This new matrix K is clustered again, using the cluster centers (i.e., the mean Cb and Cr of each cluster) from the previous step instead of subimage pixel values as input points. This two-step approach is introduced in order to speed up segmentation

Figure 2. Example of the compressive sensing image reconstruction algorithm's performance on one of the images (10) from the used dataset, showing the original image and the degraded and reconstructed images for 20%, 50%, and 80% degradation. Detection results are also indicated: green squares represent correct detections (in comparison to the original image), red squares represent FNs, and orange squares represent FPs.

while keeping almost the same performance. It assures that the number of points for data clustering stays reasonably low in both steps: the number of pixels in a subimage is naturally T times smaller than the number of pixels in the original image (with T the number of subimages), and the number of cluster centers used in the second step is even smaller than the number of pixels in the first step.

The output of the segmentation stage is a set of candidate regions, each representing a cluster of similarly colored pixels.

The bulk of the computational complexity of the segmentation step is due to this cluster search procedure and is equal to $O(N_{X\mathrm{sub}} \times N_{Y\mathrm{sub}})$, where $N_{X\mathrm{sub}}$ is the number of pixels along the X axis in the subimage, while $N_{Y\mathrm{sub}}$ is the number of pixels


along the Y axis in the subimage. All subsequent steps (including the decision-making step) are only concerned with the detected clusters, making their computational complexity negligible compared to the complexity of the mean shift algorithm in this step.

2.2.3. Decision-Making. The decision-making stage comprises five steps. In the first step, large candidate regions are eliminated from subsequent processing. The elimination threshold value $N_{\mathrm{th}}$ is determined based on the image height, the image width, and the estimated distance between the camera and the observed surface. The premise here is that if such regions were to represent a person, it would mean that the actual person is standing too close to the camera, making the search trivial.

The second step is to remove all those areas inside particular candidate regions that contain only a handful of pixels. In this way, the residual noise presented by some scattered pixels left after median filtering is efficiently eliminated. Then, in the third step, the resulting image is dilated by applying a 5 × 5 mask. This is done to increase the size of the connected pixel areas inside candidate regions, so that similar nearby segments can be merged together.

In the next step, the segments belonging to a cluster with more than three spatially separated areas are excluded from the resulting set of candidate segments, under the assumption that the image would not contain more than three suspicious objects of the same color. Finally, all the remaining segments that were not eliminated in any of the previous four steps are singled out as potential targets.

More formally, the decision-making procedure can be written as follows. An image $I$ consists of a set of clusters $C_i$, obtained by grouping image pixels using only the color values of the chosen color model as feature vector elements:

$I = \bigcup_{i=1}^{n} C_i$, $\quad C_i \cap C_j = \emptyset$, $\forall i, j,\ i \neq j$.  (19)

As mentioned before, clustering according to the similarity of feature vectors is performed using the mean shift algorithm. Each cluster $C_i$ represents a set of spatially connected-component regions, or segments, $S_{ik}$:

$C_i = \bigcup_{k=1}^{m} S_{ik}$, $\quad S_{ik} \cap S_{il} = \emptyset$, $\forall k, l,\ k \neq l$.  (20)

In order to accept a segment $S_{ik}$ as a potential target, the following properties have to be satisfied:

$p_1{:}\ \mathrm{Size}(C_i) < N_{\max}$, $\quad p_2{:}\ \mathrm{Size}(S_{ik}) > N_{\min}$, $\quad p_3{:}\ m \leq N_A$,  (21)

where $N_{\max}$ and $N_{\min}$ are chosen threshold values, $m$ is the total number of segments within a given cluster, and $N_A$ denotes the maximum allowed number of candidate segments belonging to the same cluster. For our application, $N_{\min}$, $N_{\max}$, and $N_A$ are set to 10, 38000, and 3, respectively. Please note that the $N_{\max}$ value represents 0.317% of the total pixels in the image and was determined empirically (it encompasses some redundancy; i.e., objects of interest are rarely that large).
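A compact sketch of rules p1-p3, assuming the clusters are given as binary masks and using SciPy's connected-component labeling (our choice of tooling; names are ours):

```python
import numpy as np
from scipy import ndimage

def candidate_segments(cluster_masks, n_min=10, n_max=38000, n_a=3):
    """Sketch of the decision rules p1-p3 from (21).

    cluster_masks -- list of boolean 2D arrays, one per color cluster C_i
    Returns a list of boolean masks, one per accepted segment S_ik.
    """
    accepted = []
    for mask in cluster_masks:
        if mask.sum() >= n_max:                 # p1: drop overly large clusters
            continue
        labels, m = ndimage.label(mask)         # spatially connected segments
        if m > n_a:                             # p3: too many segments of one color
            continue
        for k in range(1, m + 1):
            segment = labels == k
            if segment.sum() > n_min:           # p2: ignore tiny (noisy) segments
                accepted.append(segment)
    return accepted

# Tiny example: one 5x5 blob passes all three rules
demo = np.zeros((50, 50), dtype=bool); demo[5:10, 5:10] = True
print(len(candidate_segments([demo])))   # -> 1
```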

2.3. Performance Metrics

2.3.1. Image Quality Metrics. Several image quality metrics are introduced and used in the experiments in order to give a better overview of the obtained results, as well as more weight to the results. It should also be noted that it is not our intention to draw conclusions about the appropriateness of a particular metric or to compare them directly, but rather to make the results more accessible to a wider audience.

The Structural Similarity Index (SSI) [22, 23] is inspired by the human visual system, which is highly accustomed to extracting and processing structural information from images. It detects and evaluates structural changes between two signals (images): the reference ($x$) and the reconstructed ($y$) one. This makes SSI very consistent with human visual perception. Obtained SSI values are in the range of 0 to 1, where 0 corresponds to the lowest quality image (compared to the original) and 1 to the best quality image (which only happens for exactly the same image). SSI is calculated on small patches taken from the same locations of the two images. It encompasses three similarity terms between two images: (1) similarity of local luminance/brightness, $l(x, y)$; (2) similarity of local contrast, $c(x, y)$; and (3) similarity of local structure, $s(x, y)$. Formally, it is defined by

$\mathrm{SSI}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y) = \left(\dfrac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}\right) \cdot \left(\dfrac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}\right) \cdot \left(\dfrac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}\right)$,  (22)

where $\mu_x$, $\mu_y$ are the local sample means of the $x$ and $y$ images, $\sigma_x$, $\sigma_y$ are the local sample standard deviations of the $x$ and $y$ images, $\sigma_{xy}$ is the local sample cross-correlation of the $x$ and $y$ images after removing their respective means, and $C_1$, $C_2$, and $C_3$ are small positive constants used for numerical stability and robustness of the metric.

It can be applied to both color and grayscale images, but for simplicity and without loss of generality it was applied to normalized grayscale images in this paper. It should be noted that SSI is widely used in practice for predicting the perceived quality of digital television and cinematic pictures, but its performance is sometimes disputed in comparison to MSE [24]. It is used as the main performance metric in the experiment due to its simple interpretation and its wide usage for image quality assessment.
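For reference, the metric can be computed, for example, with scikit-image's implementation (our choice of library, not the authors'):

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssi(reference, reconstructed):
    """SSI between two normalized grayscale images (values in [0, 1])."""
    return structural_similarity(reference, reconstructed, data_range=1.0)

# Example with a synthetic image and a slightly noisy copy
ref = np.random.rand(128, 128)
rec = np.clip(ref + np.random.normal(0, 0.05, ref.shape), 0, 1)
print(round(ssi(ref, rec), 3))
```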

The peak signal to noise ratio (PSNR) [23] was used as part of an auxiliary metric set for image quality assessment, which also included MSE and the $\ell_2$ norm. Missing pixels can, in a sense, be considered a salt-and-pepper type of noise, and thus the use of PSNR makes sense, since it is defined as the ratio between the maximum possible power of a signal and the power of the noise corrupting the signal. A larger PSNR indicates a better quality reconstructed image and vice versa. PSNR does not have a limited range, as is the case with SSI. It is expressed in units of dB and defined by

$\mathrm{PSNR}(x, y) = 10\log_{10}\left(\dfrac{\mathrm{maxValue}^2}{\mathrm{MSE}}\right)$,  (23)

where maxValue is the maximum range of a pixel, which in a normalized grayscale image is 1, and MSE is the Mean Square Error between $x$ (the reference image) and $y$ (the reconstructed image), defined as

$\mathrm{MSE}(x, y) = \dfrac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2$,  (24)

where $N$ is the number of pixels in the $x$ or $y$ image (the sizes of both images have to be the same) and $x_i$ and $y_i$ are the normalized values of the $i$th pixel in the $x$ and $y$ image, respectively. MSE is the dominant quantitative performance metric for the assessment of signal quality in the field of signal processing. It is simple to use and interpret, has a clear physical meaning, and is a desirable metric within the statistics and estimation framework. However, its performance has been criticized in dealing with perceptual signals such as images [25]. This is mainly due to the fact that the implicit assumptions related to MSE are in general not met in the context of visual perception. Nevertheless, it is still often used in the literature when reporting performance in image reconstruction, and thus we include it here for comparison purposes. Larger MSE values indicate lower quality images (compared to the reference one), while smaller values indicate a better quality image. The MSE value range is not limited, as is the case with SSI.
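A direct NumPy transcription of (23) and (24) for normalized images might look as follows (illustrative only):

```python
import numpy as np

def mse(x, y):
    """Mean square error between two equally sized normalized images, per (24)."""
    return np.mean((x.astype(float) - y.astype(float)) ** 2)

def psnr(x, y, max_value=1.0):
    """PSNR in dB for normalized images (max_value = 1.0), per (23)."""
    m = mse(x, y)
    return np.inf if m == 0 else 10 * np.log10(max_value ** 2 / m)

ref = np.random.rand(64, 64)
rec = np.clip(ref + np.random.normal(0, 0.02, ref.shape), 0, 1)
print(f"MSE = {mse(ref, rec):.5f}, PSNR = {psnr(ref, rec):.2f} dB")
```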

The final metric used in the paper is the $\ell_2$ metric. It is also called the Euclidean distance, and in this work we applied it to the color images. This means that the $\ell_2$ metric represents the Euclidean distance between two points in RGB space: the $i$th pixel in the original image and the corresponding pixel in the reconstructed image. It is summed over all pixels in the image and is defined as

$\ell_2(x, y) = \sum_{i=1}^{N}\sqrt{(R_{xi} - R_{yi})^2 + (G_{xi} - G_{yi})^2 + (B_{xi} - B_{yi})^2}$,  (25)

where $N$ is the number of pixels in the $x$ or $y$ image (the sizes of both images have to be the same) for all color channels, $R_{xi}$, $R_{yi}$ are the normalized red color channel values (0-1) of the $i$th pixel, $G_{xi}$, $G_{yi}$ are the normalized green color channel values (0-1) of the $i$th pixel, and $B_{xi}$, $B_{yi}$ are the normalized blue color channel values (0-1) of the $i$th pixel. The larger the value of the $\ell_2$ metric is, the more difference there is between the two images. The $\ell_2$ norm metric is mainly used in image similarity analysis, although there are some situations in which it has been shown that the $\ell_1$ metric can be considered a proper choice as well [26].
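Equation (25) translates directly to NumPy (illustrative sketch):

```python
import numpy as np

def l2_color_distance(x_rgb, y_rgb):
    """Sum of per-pixel Euclidean distances in normalized RGB space, per (25)."""
    diff = x_rgb.astype(float) - y_rgb.astype(float)      # shape (H, W, 3)
    return np.sqrt((diff ** 2).sum(axis=2)).sum()

a = np.random.rand(32, 32, 3)
b = np.clip(a + np.random.normal(0, 0.01, a.shape), 0, 1)
print(round(l2_color_distance(a, b), 3))
```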

2.3.2. Detection Quality Metrics. The performance of the image processing algorithm for detecting suspicious objects in images, for different levels of missing pixels, was evaluated in terms of precision and recall. The standard definitions of these two measures are given by the following equations:

$\mathrm{Recall} = \dfrac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}$, $\quad \mathrm{Precision} = \dfrac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}$,  (26)

where TP denotes the number of true positives, FP denotes the number of false positives, and FN denotes the number of false negatives. It should be noted that all of these parameters were determined by checking whether or not a matching segment (object) had been found in the original image, without checking whether it actually represents a person or an object of interest. More on the accuracy (in terms of recall and precision) of the presented algorithm can be found in Section 3.3.1, where a comparison with human performance on the original images was made.

When drawing conclusions based on the presented recall and precision values, it should be kept in mind that the algorithm is not envisaged as a standalone tool but as a cooperative tool for human operators, aimed at reducing their workload. Thus FPs do not have a high price, since a human operator can check and dismiss a false alarm. More costly are FNs, since they can potentially mislead the operator.
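The two measures in (26) reduce to a few lines (the counts below are arbitrary placeholders, not results from the study):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall as defined in (26)."""
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return precision, recall

# Example with arbitrary counts: 59 matches, 17 false alarms, 18 missed objects
print(precision_recall(tp=59, fp=17, fn=18))
```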

3. Results and Discussion

3.1. Database. For the experiment we used 16 images of 4K resolution (2992 × 4000 pixels), obtained on 3 occasions with a DJI Phantom 3's gyro-stabilized camera. The camera's plane was parallel with the ground and, ideally, the UAV was at a height of 50 m (although this was not always the case, as will be explained later in Section 3.3). All images were taken in the coastal area of Croatia, in which search and rescue operations often take place. Images 1-7 were taken during Croatian Mountain Rescue Service search and rescue drills (Set 1), images 8-12 were taken during our mockup testing (Set 2), while images 13-16 were taken during an actual search and rescue operation (Set 3).

All images were first intentionally degraded to the desired level (ranging between 20% and 80%) in such a manner that random pixels were set to white (i.e., treated as missing). The images were then reconstructed using the CS approach and tested for object detection performance.
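A sketch of this degradation procedure (our illustration; the white value of 255 assumes 8-bit images, and names are ours):

```python
import numpy as np

def degrade(image, level, rng=None):
    """Randomly mark `level` (e.g. 0.3 for 30%) of the pixels as missing.

    Returns the degraded image (missing pixels set to white) and the mask of
    still-available pixels, which is what the reconstruction stage needs.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    n_missing = int(round(level * h * w))
    flat = rng.choice(h * w, size=n_missing, replace=False)
    mask = np.ones((h, w), dtype=bool)
    mask[np.unravel_index(flat, (h, w))] = False   # False = missing pixel
    degraded = image.copy()
    degraded[~mask] = 255                          # white placeholder, as in the paper
    return degraded, mask

img = (np.random.rand(100, 100, 3) * 255).astype(np.uint8)
deg, avail = degrade(img, 0.5)
print(avail.mean())   # roughly 0.5 of the pixels remain available
```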

3.2. Image Reconstruction. First we explore how well the CS based reconstruction algorithm performs in terms of image quality metrics. The metrics were calculated for each of the images for two cases: (a) when random missing samples were introduced to the original image (i.e., for the degraded image) and (b) when the original image was reconstructed using the gradient based algorithm. For both cases the original, unaltered image was taken as the reference. The obtained results for all metrics can be found in Figure 3, in the form of enhanced box plots.

As can be seen from Figure 3(a), the image quality before reconstruction (as measured by SSI) is somewhat low, with mean values in the range of 0.033 to 0.127, and it is significantly

Figure 3. Image quality metrics for all degradation levels (20%-80%, over all images) before and after reconstruction: (a) SSI, (b) PSNR (dB), (c) MSE, and (d) ℓ2 norm. Red represents the metric for degraded images, while blue represents postreconstruction values; black dots represent mean values.

increased after reconstruction, having mean values in the range 0.666 to 0.984. The same trend can be observed for all other quality metrics: PSNR (3.797 dB to 9.838 dB range before and 23.973 dB to 38.100 dB range after reconstruction), MSE (0.107 to 0.428 range before and 1.629e-4 to 0.004 range after reconstruction), and ℓ2 norm (1.943e+3 to 3.896e+3 range before and 75.654 to 384.180 range after reconstruction). Please note that for some cases (like in Figures 3(c) and 3(d)) the distribution for a particular condition is very tight, making its graphical representation quite small. The nonparametric Kruskal-Wallis test [27] was conducted on all metrics for the postreconstruction case in order to determine the statistical significance of the results, with degradation level as the independent variable. Statistical significance was detected in all cases, with the following values: SSI (χ²(6) = 98.34, p < 0.05), PSNR (χ²(6) = 101.17, p < 0.05), MSE (χ²(6) = 101.17, p < 0.05), and ℓ2 norm (χ²(6) = 101.17, p < 0.05). Tukey's honestly significant difference (THSD) post hoc tests (corrected for multiple comparisons) were performed, which revealed some interesting patterns. For all metrics, statistical difference was present only for those cases that had at least a 30% degradation difference between them (i.e., 50% cases were statistically different only from the 20% and 80% cases; please see Figure 2 for a visual comparison). We believe this goes towards demonstrating the quality of the obtained reconstruction (in terms of the presented metrics); that is, there is no statistical difference between the 20% and 40% missing sample cases (although their means are different, in favor of the 20% case). It should also be noted that even in the cases of 70% or 80% of pixels missing, the reconstructed image was subjectively good enough (please see Figure 2) that its content could be recognized by the viewer (with SSI means of 0.778 and 0.666, resp.). However, it should be noted that in cases of such high image degradation (especially the 80% case), reconstructed images appeared somewhat smudged (a pastel-like effect), with some details (subjectively) lost.
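As an aside, such a test can be run directly with SciPy; the sketch below uses randomly generated placeholder values rather than the study's data:

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical post-reconstruction SSI values grouped by degradation level
# (random placeholders standing in for the per-image metric values).
rng = np.random.default_rng(0)
levels = [20, 30, 40, 50, 60, 70, 80]
groups = [rng.normal(1.0 - 0.004 * lvl, 0.02, size=16) for lvl in levels]

stat, p = kruskal(*groups)   # H statistic, chi-square distributed with k-1 df
print(f"chi2({len(levels) - 1}) = {stat:.2f}, p = {p:.4f}")
```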

It is interesting to explore how much gain was obtained with the CS reconstruction. This is depicted in Figure 4 for all metrics. From the presented figures it can be seen that the obtained improvement is significant, while its absolute value varies, as detected by all performance metrics. The improvement ratio range is between 3.730 and 81.637 for SSI, between 3.268 and 9.959 for PSNR, between 9.612e-4 and 0.0205 for MSE, and between 0.029 and 0.143 for the ℓ2 norm. From this, a general trend can be observed: the improvement gain increases with the degradation level. There are a couple of exceptions to this general rule: for SSI, in 12 out of 16 images the gain for the 70% case is larger than for the 80% case. However, this phenomenon is not present in the other metrics (except for the case of PSNR, images 13 and 14, where the two are almost identical), which might suggest it is due to some interaction between the metric and the type of environment in the image. The randomness of pixel removal when degrading the image should also be considered.

Figure 4. Ratios of image quality metrics after and before reconstruction, for all degradation levels (20%-80%) over all images: (a) SSI, (b) PSNR, (c) MSE, and (d) ℓ2 norm.

Figure 4 also reveals another interesting observation that warrants further research: reconstruction gain and performance might depend on the type of environment/scenery in the image. Taking into consideration that the dataset contains images from three distinct cases (sets), as described earlier (each with a unique environment), and that their ranges are 1-7, 8-12, and 13-16, different patterns (gain amplitudes) can be seen for all metrics.

In order to detect whether this observed pattern has statistical significance, the nonparametric Kruskal-Wallis test (with post hoc THSD) was performed in such a way that, for a particular degradation level, the data for images were grouped into three categories based on which set they belong to. The obtained results revealed that for all metrics and degradation levels there exists a statistically significant effect (with varying levels of the p value, which we omit here for brevity) of image set on reconstruction improvement. Post hoc tests revealed that this difference was between image sets 2 and 3 for the SSI and PSNR metrics, while it was between set 1 and sets 2-3 for the ℓ2 norm (between set 1 and sets 2-3 for degradation levels 20%, 30%, and 40%, and between sets 1 and 3 for degradation levels 50%, 60%, 70%, and 80%). For the MSE metric the difference was between sets 1 and 2 for all degradation levels, with the addition of a difference between sets 2 and 3 for the 70% and 80% cases. While the metrics do not all agree on where the difference lies, they clearly demonstrate that terrain type influences the algorithm's performance, which could be used (in conjunction with terrain type classification, like the one in [28]) to estimate the expected CS reconstruction performance before deployment.

Another interesting analysis, in line with the previous one, can be made to explore whether knowing the image quality before the reconstruction can be used to infer the image quality after the reconstruction. This is depicted in Figure 5 for all quality metrics across all used test images. Figure 5 suggests that there exists a clear relationship between the quality metrics before and after reconstruction, which is to be expected. However, this relationship is nonlinear in all cases, although for PSNR it could be considered (almost) linear. This relationship enables one to estimate the image quality after reconstruction and (in conjunction with the terrain type, as demonstrated before) select the optimal degradation level (in terms of reduced data load) for a particular case, while maintaining the image quality

Figure 5. Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction, across all used test images: (a) SSI, (b) PSNR (dB), (c) MSE, and (d) ℓ2 norm. Presented are the means for the seven degradation levels, in ascending order (from left to right).

Figure 6. Relation between true positives (TPs) and false negatives (FNs), in percent, for each degradation level over all images.

at the desired level. Since the pixel degradation is random, one should also expect a certain level of deviation from the presented means.

3.3. Object Detection. As stated before, due to the intended application, FNs are more expensive than FPs, and it thus makes sense to first explore the total number of FNs in all images. Figure 6 presents the numbers of FNs and TPs as a percentage of total detections in the original image (since they always sum up to the same value for a particular image). Please note that the reasons behind choosing detections in the original (nondegraded) images as the ground truth for the calculation of FNs and TPs are explained in more detail in Section 3.3.1.

Figure 6 shows that there is a more or less constant downward trend in the TP rate; that is, FN rates increase as the degradation level increases. The exception is the 30% level. This dip in TP rate at 30% might be considered a fluke, but it appeared in another, unrelated analysis [9] (on a completely different image set, with the same CS and detection algorithms). Since the reason for this is currently unclear, it should be investigated in the future. As expected, the FN percentage was the lowest for the 20% case (23.38%) and the highest for the 80% case (46.74%). It has a value of about 35% for the other cases (except the 50% case). This might be considered a fairly large cost in a search and rescue scenario, but while drawing this type of conclusion it should be kept in mind that the comparisons are made with respect to the algorithm's performance on the original image and not the ground truth (which is hard to establish in this type of experiment, where complete control of the environment is not possible).

Some additional insight will be gained in Section 3.3.1, where we conducted a mini user study on the original images to detect how well humans perform on the ideal (not reconstructed) images. For completeness of presentation, Figure 7, showing the comparison of the numbers of TPs and FPs for all degradation levels, is included.

Additional insight into the algorithm's performance can be obtained by observing the recall and precision values in Figure 8,

Figure 7. Relation of the number of occurrences of false positives (FPs) and false negatives (FNs) for each degradation level over all images.

Figure 8. Recall and precision as detection performance metrics for all degradation levels across all images: (a) recall (%), (b) precision (%).

as defined by (26). The highest recall value (76.62%) is achieved in the case of 20% missing samples (degradation level), followed closely by the 40% case (75.32%). As expected (based on Figure 6), there is a significant dip in the steady downward trend for the 30% case. At the same time, precision is fairly constant (around 72%) over the whole degradation range, with two peaks: for the case of 20% of missing samples (peak value 77.63%) and for 60% of missing samples (peak value 80.33%). No statistical significance (as detected by the Kruskal-Wallis test) was found for recall or precision when the degradation level was considered the independent variable (across all images).

Figure 9. Number of occurrences of false negatives (FNs) and false positives (FPs) for all images, across all degradation levels.

If individual images and their respective FP and FN rates are examined across all degradation levels, Figure 9 is obtained. Some interesting observations can be made from the figure. First, it can be seen that images 14 and 16 have an unusually large number of FPs and FNs. This should be viewed in light of the fact that these two images belong to the third dataset, which was taken during a real search and rescue operation, during which the UAV operator did not position the UAV at the desired/required height (of 50 m) and strong wind gusts were also present. Image 1 also has an unusually large number of FNs. If images 1 and 14 are removed from the calculation (since we care more about FNs than FPs, as explained before), recall and precision values increase by up to 12% and 5%, respectively. The increase in recall is the largest for the 40% case (12%) and the lowest for the 50% case (5%), while the increase in precision is the largest for the 80% case (5%) and the smallest for the 50% case (2.5%). A second observation that can be made from Figure 9 is that there are a number of cases (images 2, 4, 7, 10, 11, and 12) where the algorithm's detection performs really well, with a cumulative number of occurrences of FPs and FNs of around 5. In the case of image 4 it performed flawlessly (the image contained one target, which was correctly detected at all degradation levels), without any FP or FN. Note that no FNs were present in images 2 and 7.

3.3.1. User Study. While we maintain that, for the evaluation of the compressive sensing image reconstruction's performance, the comparison of detection rates in the original nondegraded image (as ground truth) and in the reconstructed images is the best choice, the baseline performance of the detection algorithm is presented here for completeness. However, determining baseline performance (absolute ground truth) proved to be nontrivial, since the environment in images from the wild (where search and rescue usually takes place) is hard to control, and usually there are unintended objects in the frame (e.g., animals or garbage). Thus we opted for a 10-subject mini-study in which the decision whether there is an object in the image was made by majority vote; that is, if 5 or more people detected something in a certain place in the image, then it would be considered an object. Subjects were from the faculty


and student population, had not seen the images before, and had never participated in search and rescue via image analysis. Subjects were instructed to find people, cars, animals, cloth bags, or similar things in the image, combining speed and accuracy (i.e., not emphasizing either). They were seated in front of a 23.6-inch LED monitor (Philips 247E) on which they inspected the images, and they were allowed to zoom in and out of the image as they felt necessary. Images were presented to subjects in random order to avoid undesired (learning/fatigue) effects.

On average, it took a subject 638.3 s (10 minutes and 38.3 seconds) to go over all images. In order to demonstrate intersubject variability and how demanding the task was, we analyzed precision and recall for the results of the test subject study. For example, if 6 out of 10 subjects marked a part of an image as an object, this meant that there were 4 FNs (i.e., 4 subjects did not detect that object). On the other hand, if 4 test subjects marked an area in an image (which did not pass the threshold in the majority voting process), it would be counted as 4 FPs. Analysis conducted in such a manner yielded a recall of 82.22% and a precision of 93.63%. Here, again, two images (15 and 16) accounted for more than 50% of the FNs. It should be noted that these results cannot be directly compared to the proposed algorithm, since they rely on significant human intervention.

4. Conclusions

In this paper, gradient based compressive sensing is presented and applied to images acquired from a UAV during search and rescue operations/drills, in order to reduce the network transmission burden. The quality of the CS reconstruction is analyzed, as well as its impact on object detection algorithms in use for search and rescue. All introduced quality metrics showed significant improvement, with varying ratios depending on the degradation level as well as the type of terrain/environment depicted in a particular image. The dependency of reconstruction quality on terrain type is interesting and opens up the possibility of including terrain type detection algorithms, since the reconstruction quality could then be inferred in advance and an appropriate degradation level selected (i.e., in smart sensors). The dependency on the quality of the degraded image is also demonstrated. Reconstructed images showed good performance (with varying recall and precision parameter values) within the object detection algorithm, although a slightly higher false negative rate (whose cost is high in search applications) is present. However, there were a few images in the dataset on which the algorithm performed either flawlessly, with no false negatives, or with a few false positives, whose cost is not large in the current application setup, with a human operator checking raised alarms. Of interest is the slight peak in performance at the 30% degradation level (compared to 40% and the general downward trend); this peak was detected in an earlier study on a completely different image set, making a chance finding unlikely. Currently no explanation for this phenomenon can be provided, and it warrants future research. Nevertheless, we believe that the obtained results are promising (especially in light of the results of the mini user study) and require further research, especially on the detection side. For example, the algorithm could be augmented with a terrain recognition algorithm, which could give cues about terrain type to both the reconstruction algorithm (adjusting the degradation level while keeping the desired level of quality for the reconstructed images) and the detection algorithm, the latter augmented with an automatic selection procedure for some parameters, like the mean shift bandwidth (for performance optimization). Additionally, automatic threshold estimation using the image size and UAV altitude could be used for adaptive configuration of the detection algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] UK Ministry of Defense, "Military search and rescue quarterly statistics: 2015 quarter 1," Statistical Report, 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425819/SAR_Quarter1_2015_Report.pdf.
[2] T. W. Heggie and M. E. Amundson, "Dead men walking: search and rescue in US National Parks," Wilderness and Environmental Medicine, vol. 20, no. 3, pp. 244-249, 2009.
[3] M. Superina and K. Pogacic, "Frequency of the Croatian mountain rescue service involvement in searching for missing persons," Police and Security, vol. 16, no. 3-4, pp. 235-256, 2008 (Croatian).
[4] J. Sokalski, T. P. Breckon, and I. Cowling, "Automatic salient object detection in UAV imagery," in Proceedings of the 25th International Conference on Unmanned Air Vehicle Systems, pp. 11.1-11.12, April 2010.
[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138-145, 2010.
[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142-147, Canterbury, UK, September 2010.
[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, May 2006.
[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223-240, Vienna, Austria, September 2014.
[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739-1743, 2016.
[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707-710, 2007.
[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.
[13] M. F. Duarte, M. A. Davenport, D. Takbar et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.
[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093-2106, 2015.
[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41-43, 2016.
[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497-500, Belgrade, Serbia, November 2014.
[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690-694, Madison, Wis, USA, August 2007.
[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392-408, 2010.
[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1-92, Walter de Gruyter, Berlin, Germany, 2010.
[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707-737, 2013.
[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.
[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366-2369, Istanbul, Turkey, August 2010.
[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81-91, 2011.
[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?" IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98-117, 2009.
[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.
[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583-621, 1952.
[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1-8, Minneapolis, Minn, USA, June 2007.


certain minimization approaches. The most natural choice assumes the minimization of the ℓ0 norm, which is used to search for the sparsest vector X in the transform domain:

\hat{\mathbf{X}} = \arg\min \|\mathbf{X}\|_0 \quad \text{subject to} \quad \boldsymbol{\Theta}\mathbf{X} = \mathbf{y}. \quad (4)

However, this approach is computationally unstable and represents an NP (nondeterministic polynomial-time) hard problem. Therefore, the ℓ0 norm minimization is replaced by ℓ1 norm minimization, leading to a convex optimization problem that can be solved using linear programming:

\hat{\mathbf{X}} = \arg\min \|\mathbf{X}\|_1 \quad \text{subject to} \quad \boldsymbol{\Theta}\mathbf{X} = \mathbf{y}. \quad (5)

In the sequel we consider a gradient algorithm for efficient reconstruction of images, which belongs to the group of convex optimization methods [15]. Note that gradient based methods generally do not require the signals to be strictly sparse in a certain transform domain and in that sense provide significant benefits and relaxations for real-world applications. In particular, it is known that images are not strictly sparse in any of the transform domains but can be considered compressible in the two-dimensional Discrete Cosine Transform (2D DCT) domain. Hence, the 2D DCT is employed to estimate the direction and value of the gradient used to update the values of the missing data toward the exact solution.

2.1.1. Gradient Based Signal Reconstruction Approach. The previous minimization problem can be solved using the gradient based approach. An efficient implementation of this approach works on a block-by-block basis, where the block size is N × M (square blocks with N = M = 16 are used in the experiments). The available image measurements within a block are defined by the pixel indices

(i, j) \in \Omega, \quad \text{where } \Omega \subset \{(1,1), (1,2), \ldots, (N, M)\}, \quad (6)

while

\operatorname{card}\{\Omega\} = M_a N_a \ll NM. \quad (7)

The original (full) image block is denoted by f(n, m). The image measurements are hence defined by

f(i, j) \quad \text{for } (n, m) = (i, j) \in \Omega. \quad (8)

Let us now observe the initial image matrix z that contains the available measurements and zero values at the positions of the unavailable pixels. The elements of z can be defined as

z(n, m) = f(n, m)\,\delta(n - i)\,\delta(m - j), \quad (9)

where δ(n) is the unit delta function. The gradient method uses the basic concept of the steepest descent method. It treats only the missing pixels, such that their initial zero values are varied in both directions by a certain constant Δ. Then the ℓ1 norm is applied in the transform domain to measure the level of sparsity and to calculate the gradient, which is also used to update the values of the pixels through the iterations. In that sense, we may observe the matrix Z comprising N vectors z formed by the elements z(n, m):

\mathbf{Z} = [\mathbf{z}\ \ \mathbf{z}\ \ \cdots\ \ \mathbf{z}]. \quad (10)

In order to model the process of varying the missing pixels by ±Δ, we can define the two matrices

\mathbf{Z}_k^{+} = [\mathbf{z}_{1,k}^{+}\ \ \mathbf{z}_{2,k}^{+}\ \ \cdots\ \ \mathbf{z}_{N,k}^{+}] = \mathbf{Z}_k + \Delta,
\mathbf{Z}_k^{-} = [\mathbf{z}_{1,k}^{-}\ \ \mathbf{z}_{2,k}^{-}\ \ \cdots\ \ \mathbf{z}_{N,k}^{-}] = \mathbf{Z}_k - \Delta, \quad (11)

where k denotes the iteration number. The initial value of Δ can be set as Δ = max(|z|). The previous matrices can be written in an expanded form:

\mathbf{Z}_k^{\pm} = \mathbf{Z}_k \pm \Delta = [\mathbf{z}_k\ \ \mathbf{z}_k\ \ \cdots\ \ \mathbf{z}_k] \pm \Delta\,\operatorname{diag}[\delta(n - i)\delta(m - j)], \quad (12)

where

\mathbf{Z}_k = \begin{bmatrix} z_k(1,1) & z_k(1,1) & \cdots & z_k(1,1) \\ z_k(1,2) & z_k(1,2) & \cdots & z_k(1,2) \\ \vdots & \vdots & \ddots & \vdots \\ z_k(N,M) & z_k(N,M) & \cdots & z_k(N,M) \end{bmatrix}, \quad (13)

while

\operatorname{diag}[\delta(n - i)\delta(m - j)] = \begin{bmatrix} \delta(1 - i_1)\delta(1 - j_1) & 0 & \cdots & 0 \\ 0 & \delta(1 - i_1)\delta(2 - j_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \delta(N - i_N)\delta(M - j_M) \end{bmatrix}. \quad (14)


Based on the matrices Z_k^+ and Z_k^-, the gradient vector G is calculated as

\mathbf{G}_k = \frac{1}{2\Delta}\left[\left\|\operatorname{DCT2D}\{\mathbf{Z}_k^{+}\}\right\|_1^{\mathrm{col}} - \left\|\operatorname{DCT2D}\{\mathbf{Z}_k^{-}\}\right\|_1^{\mathrm{col}}\right], \quad (15)

where DCT2D{·} is the 2D DCT calculated over the columns of Z_k^+ and Z_k^-, while ‖·‖_1^col denotes the ℓ1 norm calculated over the columns. In the final step, the pixel values are updated as follows:

\mathbf{z}_{k+1} = \mathbf{z}_k + \frac{\mathbf{G}_k}{2\Delta}. \quad (16)

The gradient is generally proportional to the error between the exact image block f and its estimate z. Therefore, the missing values will converge toward the true signal values. In order to obtain a high level of reconstruction precision, the step Δ is decreased when the algorithm's convergence slows down. Namely, when the pixel values are very close to the exact values, the gradient will oscillate around the exact value, and the step size should be reduced, for example, Δ = Δ/3. The stopping criterion can be set using the desired reconstruction accuracy ε, expressed in dB, as follows:

\mathrm{MSE} = 10\log_{10}\frac{\sum\left|z_p - z_{p-1}\right|^2}{\sum\left|z_{p-1}\right|^2} < \varepsilon, \quad (17)

where the Mean Square Error (MSE) is calculated as the difference between the reconstructed signals before and after reducing the step Δ. Here we use the precision ε = −100 dB. The same procedure is repeated for each image block, resulting in the reconstructed image y. The computational complexity of the reconstruction algorithm is analyzed in detail in light of a possible implementation of the algorithm in systems with limited computational resources (such as FPGAs). A 2D DCT of size M × M is observed, with M being a power of 2; in particular, M = 16 is used here. Hence, for each observed image block, the total number of additions is (M − M_A)² [(3M/2)(log₂(M) − 1) + 2]² + 4, where M_A denotes the number of available samples, while the total number of multiplications is (M − M_A)² [M log₂(M) − 3M/2 + 4]² + 7. Note that the most demanding operation is the DCT calculation, which can be done using a fast algorithm with (3M/2)(log₂(M) − 1) + 2 additions and M log₂(M) − 3M/2 + 4 multiplications; these numbers of operations are squared for the considered 2D signal case.
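To make the block-wise procedure above concrete, the following is a minimal Python sketch of the reconstruction loop. It follows the recipe described here (±Δ perturbation of the missing pixels, ℓ1 norm of the 2D DCT, step reduction by a factor of 3, and the dB-based stopping rule), but the function name, the step scaling delta * g, and the inner iteration count are illustrative assumptions rather than the authors' exact implementation.

import numpy as np
from scipy.fft import dctn

def reconstruct_block(block, mask, eps_db=-100.0, inner_iter=50):
    """Gradient-based reconstruction of one N x M image block (sketch).

    block : 2-D float array holding the measured pixel values
    mask  : boolean array, True where a pixel was actually measured
    """
    z = np.where(mask, block, 0.0)              # missing pixels start from zero
    delta = float(np.max(np.abs(z))) or 1.0     # initial step, Delta = max(|z|)
    missing = np.argwhere(~mask)

    while True:
        z_before = z.copy()
        for _ in range(inner_iter):
            g = np.zeros_like(z)
            for r, c in missing:                # finite-difference gradient of the DCT l1 norm
                zp, zm = z.copy(), z.copy()
                zp[r, c] += delta
                zm[r, c] -= delta
                g[r, c] = (np.abs(dctn(zp, norm='ortho')).sum()
                           - np.abs(dctn(zm, norm='ortho')).sum()) / (2 * delta)
            z -= delta * g                      # move missing pixels toward a sparser DCT (step scaling assumed)
        # stopping test in the spirit of (17): change in dB between reconstructions of consecutive stages
        change_db = 10 * np.log10(np.sum((z - z_before) ** 2)
                                  / (np.sum(z_before ** 2) + 1e-12))
        if change_db < eps_db:
            return z
        delta /= 3.0                            # convergence slowed down -> reduce the step

Applying such a routine to every 16 × 16 tile and reassembling the tiles would yield the reconstructed image.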

The performance of the CS reconstruction algorithm can be seen in Figure 2 for three different numbers of compressive measurements (compared to the original image dimensionality): 80% of measurements, 50% of measurements, and 20% of measurements. Consequently, we may define the corresponding image degradation levels as 20% degradation, 50% degradation, and 80% degradation.

2.2. Suspicious Object Detection Algorithm. Figure 1 includes the general overview of the proposed image processing algorithm. The block diagram implicitly suggests a three-stage operation: the first stage, the preprocessing stage, is represented by the top left part of the diagram; the second stage, the segmentation stage, is represented by the lower left part of the diagram; and the third stage, the decision-making stage, is represented by the right part of the block diagram. It should be noted that the algorithm has been deployed with the Croatian Mountain Rescue Service for several months as a field assistance tool.

2.2.1. Image Preprocessing. The preprocessing stage comprises two parts. At the start, images are converted from the original RGB to the YCbCr color format. The traditional RGB color format is not convenient for computer vision applications due to the high correlation between its color components. Next, the blue-difference (Cb) and red-difference (Cr) color components are denoised by applying a 3 × 3 median filter. The image is then divided into nonoverlapping subimages, which are subsequently fed to the segmentation module for further processing. The number of subimages was set to 64, since the number 8 divides both the image height and width without residue and ensures nonoverlapping subimages.
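A short sketch of this preprocessing step is given below. The ITU-R BT.601 conversion constants are a common choice and an assumption here (the paper does not state which RGB-to-YCbCr variant was used), as is the helper name.

import numpy as np
from scipy.ndimage import median_filter

def preprocess(rgb):
    """RGB (H, W, 3) array -> denoised Cb/Cr planes split into 8 x 8 subimages."""
    rgb = rgb.astype(np.float64)
    # BT.601 full-range RGB -> YCbCr (assumed variant)
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    # 3 x 3 median filtering of the chroma planes
    cb = median_filter(cb, size=3)
    cr = median_filter(cr, size=3)
    h, w = cb.shape
    # split into 64 nonoverlapping subimages (8 along each axis; assumes 8 divides h and w)
    subs = [(cb[r:r + h // 8, c:c + w // 8], cr[r:r + h // 8, c:c + w // 8])
            for r in range(0, h, h // 8) for c in range(0, w, w // 8)]
    return subs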

2.2.2. Segmentation. The segmentation stage comprises two steps. Each subimage is segmented using the mean shift clustering algorithm [21]. Mean shift is an extremely versatile, nonparametric, iterative procedure for feature space analysis. When used for color image segmentation, the image data are mapped into the feature space, resulting in a cluster pattern. The clusters correspond to the significant features in the image, namely, the dominant colors. Using the mean shift algorithm, these clusters can be located, and the dominant colors can therefore be extracted from the image to be used for segmentation.

The clusters are located by applying a search window in the feature space which shifts towards the cluster center. The magnitude and the direction of the shift in the feature space are based on the difference between the window center and the local mean value inside the window. For n data points x_i, i = 1, 2, ..., n, in the d-dimensional space R^d, the shift is defined as

m_h(x) = \frac{\sum_{i=1}^{n} x_i\, g\!\left(\left\|(x - x_i)/h\right\|^2\right)}{\sum_{i=1}^{n} g\!\left(\left\|(x - x_i)/h\right\|^2\right)} - x, \quad (18)

where g(x) is the kernel, h is a bandwidth parameter, which is set to the value 45 (determined experimentally using different terrain-image datasets), x is the center of the kernel (window), and x_i is an element inside the kernel. When the magnitude of the shift becomes small according to the given threshold, the center of the search window is declared a cluster center, and the algorithm is said to have converged for one cluster. This procedure is repeated until all significant clusters have been identified.
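The shift in (18) and the convergence test can be sketched as follows. The flat kernel g, the convergence tolerance, the iteration cap, and the function names are illustrative assumptions (the paper does not specify the kernel); the feature space here would be the (Cb, Cr) values of the subimage pixels.

import numpy as np

def mean_shift_step(x, points, h=45.0):
    """One mean shift update m_h(x) for a window centred at x (flat kernel assumed)."""
    d2 = np.sum(((points - x) / h) ** 2, axis=1)      # squared normalised distances
    g = (d2 <= 1.0).astype(float)                     # g(t): 1 inside the window, 0 outside
    if g.sum() == 0:
        return np.zeros_like(x)
    return (points * g[:, None]).sum(axis=0) / g.sum() - x

def find_cluster_center(start, points, h=45.0, tol=1e-3, max_iter=200):
    """Shift the window until the mean shift magnitude drops below tol."""
    x = start.astype(float)
    for _ in range(max_iter):
        shift = mean_shift_step(x, points, h)
        x += shift
        if np.linalg.norm(shift) < tol:               # small shift -> declared cluster centre
            break
    return x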

Once all the subimages have been clustered, a global cluster matrix K is formed by merging the resulting cluster matrices obtained during subimage segmentation. This new matrix K is clustered again, using the cluster centers (i.e., the mean Cb and Cr of each cluster) from the previous step instead of the subimage pixel values as input points.

Figure 2: Example of the compressive sensing image reconstruction algorithm's performance on one of the images (image 10) from the used dataset, for degradation levels of 20%, 50%, and 80% (columns: original image, degraded image, reconstructed image). Detection results are also indicated: green squares represent correct detections (in comparison to the original image), red squares represent FNs, and orange squares represent FPs.

This two-step approach is introduced in order to speed up segmentation while keeping almost the same performance. It assures that the number of points for data clustering stays reasonably low in both steps: the number of pixels in a subimage is naturally T times smaller than the number of pixels in the original image, and the number of cluster centers used in the second step is even smaller than the number of pixels used in the first step.

The output of the segmentation stage is a set of candidate regions, each representing a cluster of similarly colored pixels.

The bulk of the computational complexity of the segmentation step is due to this cluster search procedure and is equal to O(N_Xsub × N_Ysub), where N_Xsub is the number of pixels along the X axis of the subimage and N_Ysub is the number of pixels along the Y axis of the subimage. All subsequent steps (including the decision-making step) are concerned only with the detected clusters, making their computational complexity negligible compared to the complexity of the mean shift algorithm in this step.

2.2.3. Decision-Making. The decision-making stage comprises five steps. In the first step, large candidate regions are eliminated from subsequent processing. The elimination threshold value N_th is determined based on the image height, the image width, and the estimated distance between the camera and the observed surface. The premise here is that if such a region were to represent a person, the actual person would be standing too close to the camera, making the search trivial.

The second step is to remove all those areas inside particular candidate regions that contain only a handful of pixels. In this way, the residual noise represented by scattered pixels left after median filtering is efficiently eliminated. Then, in the third step, the resulting image is dilated by applying a 5 × 5 mask. This is done to increase the size of the connected pixel areas inside candidate regions, so that similar nearby segments can be merged together.

In the next step, segments belonging to a cluster with more than three spatially separated areas are excluded from the resulting set of candidate segments, under the assumption that the image would not contain more than three suspicious objects of the same color. Finally, all the remaining segments that were not eliminated in any of the previous four steps are singled out as potential targets.

More formally, the decision-making procedure can be written as follows. An image I consists of a set of clusters C_i, obtained by grouping image pixels using only the color values of the chosen color model as feature vector elements:

I = \bigcup_{i=1}^{n} C_i, \quad C_i \cap C_j = \emptyset\ \ \forall i, j,\ i \neq j. \quad (19)

As mentioned before, clustering according to the similarity of feature vectors is performed using the mean shift algorithm. Each cluster C_i represents a set of spatially connected-component regions, or segments, S_ik:

C_i = \bigcup_{k=1}^{m} S_{ik}, \quad S_{ik} \cap S_{il} = \emptyset\ \ \forall k, l,\ k \neq l. \quad (20)

In order to accept a segment S_ik as a potential target, the following properties have to be satisfied:

p_1:\ \operatorname{Size}(C_i) < N_{\max},
p_2:\ \operatorname{Size}(S_{ik}) > N_{\min},
p_3:\ m \le N_A, \quad (21)

where N_max and N_min are chosen threshold values, m is the total number of segments within a given cluster, and N_A denotes the maximum allowed number of candidate segments belonging to the same cluster. For our application, N_min, N_max, and N_A are set to 10, 38,000, and 3, respectively. Please note that the N_max value represents 0.317% of the total number of pixels in the image and was determined empirically (it encompasses some redundancy; i.e., objects of interest are rarely that large).
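A compact sketch of the acceptance rules p1 through p3 is given below. The mask representation, the helper names, and the ordering of the checks are assumptions, and the dilation/merging steps of the full five-step pipeline are omitted.

import numpy as np
from scipy.ndimage import label

def candidate_segments(cluster_masks, n_min=10, n_max=38000, n_a=3):
    """Apply p1-p3 to a list of boolean cluster masks; return the accepted segment masks."""
    accepted = []
    for mask in cluster_masks:                 # one boolean mask per color cluster C_i
        if mask.sum() >= n_max:                # p1: the whole cluster is too large
            continue
        labeled, m = label(mask)               # split C_i into connected segments S_ik
        if m > n_a:                            # p3: too many spatially separated areas of one color
            continue
        for k in range(1, m + 1):
            segment = labeled == k
            if segment.sum() > n_min:          # p2: discard segments with only a handful of pixels
                accepted.append(segment)
    return accepted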

2.3. Performance Metrics

2.3.1. Image Quality Metric. Several image quality metrics are introduced and used in the experiments in order to give a better overview of the obtained results, as well as more weight to the results. It should also be noted that it is not our intention to draw conclusions about the appropriateness of a particular metric or to make a direct comparison between them, but rather to make the results more accessible to a wider audience.

The Structural Similarity Index (SSI) [22, 23] is inspired by the human visual system, which is highly accustomed to extracting and processing structural information from images. It detects and evaluates structural changes between two signals (images): the reference (x) and the reconstructed (y) one. This makes SSI very consistent with human visual perception. The obtained SSI values are in the range of 0 to 1, where 0 corresponds to the lowest quality image (compared to the original) and 1 to the best quality image (which only happens for exactly the same image). SSI is calculated on small patches taken from the same locations in the two images. It encompasses three similarity terms between the two images: (1) similarity of local luminance/brightness, l(x, y); (2) similarity of local contrast, c(x, y); and (3) similarity of local structure, s(x, y). Formally, it is defined by

\mathrm{SSI}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y) = \left(\frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}\right) \cdot \left(\frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}\right) \cdot \left(\frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}\right), \quad (22)

where μ_x, μ_y are the local sample means of the x and y images; σ_x, σ_y are the local sample standard deviations of the x and y images; σ_xy is the local sample cross-correlation of the x and y images after removing their respective means; and C_1, C_2, and C_3 are small positive constants used for numerical stability and robustness of the metric.

It can be applied to both color and grayscale images, but for simplicity and without loss of generality it was applied to normalized grayscale images in this paper. It should be noted that SSI is widely used in practice for predicting the perceived quality of digital television and cinematic pictures, but its performance is sometimes disputed in comparison to MSE [24]. It is used as the main performance metric in the experiments due to its simple interpretation and its wide usage for image quality assessment.
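A simplified sketch of the metric for normalized grayscale images follows. The window size and the stability constants are common SSIM-style defaults and are assumptions, not values taken from the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def ssi(x, y, win=8, C1=1e-4, C2=9e-4, C3=4.5e-4):
    """Simplified SSI of two normalized grayscale images in [0, 1] (local uniform windows)."""
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    sd_x = np.sqrt(np.maximum(var_x, 0))
    sd_y = np.sqrt(np.maximum(var_y, 0))
    lum = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)     # luminance term l(x, y)
    con = (2 * sd_x * sd_y + C2) / (var_x + var_y + C2)             # contrast term c(x, y)
    struc = (cov_xy + C3) / (sd_x * sd_y + C3)                      # structure term s(x, y)
    return float(np.mean(lum * con * struc))                        # averaged over local patches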

Peak signal-to-noise ratio (PSNR) [23] was used as a part of an auxiliary metric set for image quality assessment, which also included MSE and the ℓ2 norm. Missing pixels can, in a sense, be considered salt-and-pepper type noise, and thus the use of PSNR makes sense, since it is defined as the ratio between the maximum possible power of a signal and the power of the noise corrupting the signal. A larger PSNR indicates a better quality reconstructed image, and vice versa. PSNR does not have a limited range, as is the case with SSI. It is expressed in units of dB and is defined by

\mathrm{PSNR}(x, y) = 10\log_{10}\left(\frac{\mathrm{maxValue}^2}{\mathrm{MSE}}\right), \quad (23)

where maxValue is the maximum range of a pixel, which in a normalized grayscale image is 1, and MSE is the Mean Square Error between x (the reference image) and y (the reconstructed image), defined as

\mathrm{MSE}(x, y) = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - y_i\right)^2, \quad (24)

where N is the number of pixels in the x or y image (the sizes of both images have to be the same) and x_i and y_i are the normalized values of the i-th pixel in the x and y image, respectively. MSE is the dominant quantitative performance metric for the assessment of signal quality in the field of signal processing. It is simple to use and interpret, has a clear physical meaning, and is a desirable metric within the statistics and estimation framework. However, its performance has been criticized when dealing with perceptual signals such as images [25]. This is mainly due to the fact that the implicit assumptions related to MSE are in general not met in the context of visual perception. However, it is still often used in the literature when reporting performance in image reconstruction, and thus we include it here for comparison purposes. Larger MSE values indicate lower quality images (compared to the reference one), while smaller values indicate a better quality image. Unlike SSI, the MSE value range is not limited.
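For reference, (23) and (24) translate directly into a few lines of code (normalized grayscale images assumed; helper names are ours).

import numpy as np

def mse(x, y):
    """Mean square error between two equally sized normalized grayscale images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, max_value=1.0):
    """PSNR in dB; max_value = 1.0 for normalized grayscale images."""
    m = mse(x, y)
    return float('inf') if m == 0 else 10.0 * np.log10(max_value ** 2 / m)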

The final metric used in the paper is the ℓ2 metric. It is also called the Euclidean distance, and in this work we applied it to the color images. This means that the ℓ2 metric represents the Euclidean distance between two points in RGB space: the i-th pixel in the original image and the corresponding pixel in the reconstructed image. It is summed over all pixels in the image and is defined as

\ell_2(x, y) = \sum_{i=1}^{N}\sqrt{\left(R_{xi} - R_{yi}\right)^2 + \left(G_{xi} - G_{yi}\right)^2 + \left(B_{xi} - B_{yi}\right)^2}, \quad (25)

where N is the number of pixels in the x or y image (the sizes of both images have to be the same) for all color channels; R_xi, R_yi are the normalized red color channel values (0–1) of the i-th pixel; G_xi, G_yi are the normalized green color channel values (0–1) of the i-th pixel; and B_xi, B_yi are the normalized blue color channel values (0–1) of the i-th pixel. The larger the value of the ℓ2 metric, the more difference there is between the two images. The ℓ2 norm metric is mainly used in image similarity analysis, although there are some situations in which the ℓ1 metric has been shown to be a proper choice as well [26].
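The ℓ2 metric in (25) can likewise be computed in a vectorized form (RGB images with channel values normalized to [0, 1] assumed).

import numpy as np

def l2_metric(x_rgb, y_rgb):
    """Sum over pixels of the Euclidean distance between corresponding RGB triplets."""
    per_pixel = np.sqrt(np.sum((x_rgb - y_rgb) ** 2, axis=-1))   # distance in RGB space per pixel
    return float(per_pixel.sum())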

2.3.2. Detection Quality Metric. The performance of the image processing algorithm for detecting suspicious objects in images with different levels of missing pixels was evaluated in terms of precision and recall. The standard definitions of these two measures are given by the following equations:

\mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \qquad \mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \quad (26)

where TP denotes the number of true positives, FP denotes the number of false positives, and FN denotes the number of false negatives. It should be noted that all of these parameters were determined by checking whether or not a matching segment (object) had been found in the original image, without checking whether it actually represents a person or an object of interest. More on the accuracy (in terms of recall and precision) of the presented algorithm can be found in Section 3.3.1, where a comparison with human performance on the original images is made.
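A trivial helper for (26), shown with purely illustrative counts (not values from the experiments):

def detection_metrics(tp, fp, fn):
    """Recall and precision from raw detection counts, as in (26)."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

# detection_metrics(tp=8, fp=2, fn=3) -> (0.727..., 0.8)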

When drawing conclusions based on the presented recall and precision values, it should be kept in mind that the algorithm is not envisaged as a standalone tool but as a cooperative tool for human operators, aimed at reducing their workload. Thus, FPs do not carry a high price, since a human operator can check a detection and dismiss it if it is a false alarm. More costly are FNs, since they can potentially mislead the operator.

3. Results and Discussion

3.1. Database. For the experiment we used 16 images of 4K resolution (2992 × 4000 pixels), obtained on 3 occasions with a DJI Phantom 3's gyro-stabilized camera. The camera's plane was parallel with the ground, and ideally the UAV was at a height of 50 m (although this was not always the case, as will be explained later in Section 3.3). All images were taken in the coastal area of Croatia, in which search and rescue operations often take place. Images 1–7 were taken during Croatian Mountain Rescue Service search and rescue drills (Set 1), images 8–12 were taken during our mockup testing (Set 2), and images 13–16 were taken during an actual search and rescue operation (Set 3).

All images were first intentionally degraded to the desired level (ranging between 20% and 80%) in such a manner that randomly chosen pixels were set to white (i.e., they were treated as missing). The images were then reconstructed using the CS approach and tested for object detection performance.
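A minimal sketch of this degradation step is shown below, assuming a float image with values in [0, 1]; the function name and the returned mask layout are our own conventions.

import numpy as np

def degrade(image, level, rng=None):
    """Randomly discard a `level` (0-1) fraction of the pixels; return the degraded image and mask."""
    rng = np.random.default_rng(rng)
    mask = rng.random(image.shape[:2]) >= level        # True = pixel kept (measured)
    degraded = image.copy()
    degraded[~mask] = 1.0                              # missing pixels shown as white, as described above
    return degraded, mask

# degraded, mask = degrade(img, 0.5)   # 50% degradation level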

3.2. Image Reconstruction. First, we explore how well the CS based reconstruction algorithm performs in terms of image quality metrics. The metrics were calculated for each image for two cases: (a) when random missing samples were introduced into the original image (i.e., for the degraded image) and (b) when the original image was reconstructed using the gradient based algorithm. For both cases, the original, unaltered image was taken as the reference. The obtained results for all metrics can be found in Figure 3, in the form of enhanced box plots.

Figure 3: Image quality metrics for all degradation levels (20–80%, over all images) before and after reconstruction: (a) SSI, (b) PSNR (dB), (c) MSE, (d) ℓ2 norm. Red represents the metric for degraded images, blue the postreconstruction values, and black dots the mean values.

As can be seen from Figure 3(a), the image quality before reconstruction (as measured by SSI) is somewhat low, with mean values in the range of 0.033 to 0.127, and it is significantly increased after reconstruction, with mean values in the range of 0.666 to 0.984. The same trend can be observed for all other quality metrics: PSNR (3.797 dB to 9.838 dB range before and 23.973 dB to 38.100 dB range after reconstruction), MSE (0.107 to 0.428 range before and 1.629e−4 to 0.004 range after reconstruction), and ℓ2 norm (1.943e+3 to 3.896e+3 range before and 75.654 to 384.180 range after reconstruction). Please note that in some cases (as in Figures 3(c) and 3(d)) the distribution for a particular condition is very tight, making its graphical representation quite small. The nonparametric Kruskal-Wallis test [27] was conducted on all metrics for the postreconstruction case, with degradation level as the independent variable, in order to determine the statistical significance of the results. Statistical significance was detected in all cases, with the following values: SSI (χ²(6) = 98.34, p < 0.05), PSNR (χ²(6) = 101.17, p < 0.05), MSE (χ²(6) = 101.17, p < 0.05), and ℓ2 norm (χ²(6) = 101.17, p < 0.05). Tukey's honestly significant difference (THSD) post hoc tests (corrected for multiple comparison) were performed, which revealed some interesting patterns. For all metrics, a statistical difference was present only between cases that differed by at least 30% in degradation level (i.e., the 50% cases were statistically different only from the 20% and 80% cases; please see Figure 2 for a visual comparison). We believe this goes toward demonstrating the quality of the obtained reconstruction (in terms of the presented metrics); that is, there is no statistical difference between the 20% and 40% missing sample cases (although their means differ, in favor of the 20% case). It should also be noted that even with 70% or 80% of the pixels missing, the reconstructed image was subjectively good enough (please see Figure 2) for its content to be recognized by the viewer (with SSI means of 0.778 and 0.666, resp.). However, in cases of such high image degradation (especially the 80% case), the reconstructed images appeared somewhat smudged (a pastel-like effect), with some details (subjectively) lost.
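A sketch of this test using SciPy is shown below; the data layout (a mapping from degradation level to per-image metric values) is an assumption, and the THSD post hoc comparisons are not included.

from scipy.stats import kruskal

def degradation_effect(metric_by_level):
    """Kruskal-Wallis H test with degradation level as the independent variable.

    metric_by_level : dict mapping degradation level (e.g. 20, 30, ..., 80) to the
    list of per-image metric values obtained at that level (illustrative layout).
    """
    groups = [metric_by_level[level] for level in sorted(metric_by_level)]
    h_stat, p_value = kruskal(*groups)      # H statistic and p value
    return h_stat, p_value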

It is interesting to explore how much gain was obtained with the CS reconstruction. This is depicted in Figure 4 for all metrics. From the presented figures it can be seen that the obtained improvement is significant, although its absolute value varies, and it is detected by all performance metrics. The improvement ratio is between 3.730 and 81.637 for SSI, between 3.268 and 9.959 for PSNR, between 9.612e−4 and 0.0205 for MSE, and between 0.029 and 0.143 for the ℓ2 norm. From this, a general trend can be observed: the improvement gain increases with the degradation level. There are a couple of exceptions to this general rule: for SSI, in 12 out of 16 images the gain for the 70% case is larger than for the 80% case. However, this phenomenon is not present in the other metrics (except for PSNR on images 13 and 14, where the two are almost identical), which might suggest that it is due to some interaction between the metric and the type of environment in the image. The randomness of pixel removal when degrading the image should also be considered.

Figure 4: Ratios of image quality metrics (after/before reconstruction) for all degradation levels (20–80%), shown per image (1–16): (a) SSI, (b) PSNR, (c) MSE, (d) ℓ2 norm.

Figure 4 also reveals another interesting observation that warrants further research: reconstruction gain and performance might depend on the type of environment/scenery in the image. Taking into consideration that the dataset contains images from three distinct cases (sets), as described earlier (each with a unique environment), and that their ranges are 1–7, 8–12, and 13–16, different patterns (gain amplitudes) can be seen for all metrics.

In order to detect whether this observed pattern has statistical significance, the nonparametric Kruskal-Wallis test (with post hoc THSD) was performed in such a way that, for a particular degradation level, the data for the images were grouped into three categories based on which set they belong to. The obtained results revealed that, for all metrics and degradation levels, there exists a statistically significant effect (with varying p values, which we omit here for brevity) of image set on reconstruction improvement. Post hoc tests revealed that this difference was between image sets 2 and 3 for the SSI and PSNR metrics, while it was between set 1 and sets 2/3 for the ℓ2 norm (between set 1 and sets 2-3 for degradation levels 20%, 30%, and 40%, and between sets 1 and 3 for degradation levels 50%, 60%, 70%, and 80%). For the MSE metric, the difference was between sets 1 and 2 for all degradation levels, with the addition of a difference between sets 2 and 3 for the 70% and 80% cases. While the metrics do not agree on where the difference lies, they clearly demonstrate that terrain type influences the algorithm's performance, which could be used (in conjunction with terrain type classification, like the one in [28]) to estimate the expected CS reconstruction performance before deployment.

Another interesting analysis, in line with the last one, can be made to explore whether knowing the image quality before reconstruction can be used to infer the image quality after reconstruction. This is depicted in Figure 5 for all quality metrics, across all used test images. Figure 5 suggests that there exists a clear relationship between the quality metrics before and after reconstruction, which is to be expected. However, this relationship is nonlinear in all cases, although for PSNR it could be considered (almost) linear. This relationship enables one to estimate the image quality after reconstruction and (in conjunction with the terrain type, as demonstrated before) to select the optimal degradation level (in terms of reduced data load) for a particular case, while maintaining the image quality at the desired level. Since pixel degradation is random, a certain level of deviation from the presented means should also be expected.
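As a sketch of how such an estimate could be operationalized, one could fit a simple curve to the before/after means; the quadratic degree and the helper name are assumptions (the text only states that the relationship is nonlinear, and roughly linear for PSNR).

import numpy as np

def fit_quality_map(pre_values, post_values, degree=2):
    """Fit a simple polynomial mapping metric-before -> metric-after reconstruction."""
    coeffs = np.polyfit(pre_values, post_values, degree)
    return np.poly1d(coeffs)

# predict_ssi = fit_quality_map(ssi_degraded_means, ssi_reconstructed_means)
# predict_ssi(0.08)   # estimated post-reconstruction SSI for a new degraded image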

Figure 5: Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction, across all used test images: (a) SSI, (b) PSNR (dB), (c) MSE, (d) ℓ2 norm. Presented are the means for the seven degradation levels, in ascending order (from left to right).

Figure 6: Relation between true positives (TPs) and false negatives (FNs), as percentages, for each degradation level over all images.

3.3. Object Detection. As stated before, due to the intended application, FNs are more expensive than FPs, and it thus makes sense to first explore the total number of FNs in all images. Figure 6 presents the numbers of FNs and TPs as a percentage of the total detections in the original image (since they always sum up to the same value for a particular image). Please note that the reasons behind choosing the detections in the original (nondegraded) images as the ground truth for the calculation of FNs and TPs are explained in more detail in Section 3.3.1.

Figure 6 shows that there is a more or less constant downward trend in the TP rate; that is, the FN rate increases as the degradation level increases. The exception is the 30% level. This dip in the TP rate at 30% might be considered a fluke, but it also appeared in another, unrelated analysis [9] (on a completely different image set, with the same CS and detection algorithms). Since the reason for this is currently unclear, it should be investigated in the future. As expected, the FN percentage was the lowest for the 20% case (23.38%) and the highest for the 80% case (46.74%). It has a value of about 35% for the other cases (except the 50% case). This might be considered a rather large cost in a search and rescue scenario, but when making this type of conclusion it should be kept in mind that the comparisons are made with respect to the algorithm's performance on the original image and not the ground truth (which is hard to establish in this type of experiment, where complete control of the environment is not possible).

Some additional insight will be gained in Section 3.3.1, where we conducted a mini user study on the original images to detect how well humans perform on the ideal (not reconstructed) image. For completeness of presentation, Figure 7, showing a comparison of the numbers of TPs and FPs for all degradation levels, is included.

Additional insight into the algorithm's performance can be obtained by observing the recall and precision values in Figure 8, as defined by (26).

Figure 7: Relation of the number of occurrences of false positives (FPs) and false negatives (FNs) for each degradation level over all images.

Figure 8: Recall (a) and precision (b) as detection performance metrics for all degradation levels, across all images.

The highest recall value (76.62%) is achieved in the case of 20% (missing samples, or degradation level), followed closely by the 40% case (75.32%). As expected (based on Figure 5), there is a significant dip in the steady downward trend for the 30% case. At the same time, precision is fairly constant (around 72%) over the whole degradation range, with two peaks: for the case of 20% of missing samples (peak value 77.63%) and for 60% of missing samples (peak value 80.33%). No statistical significance (as detected by the Kruskal-Wallis test) was found for recall or precision when the degradation level was considered the independent variable (across all images).

Figure 9: Number of occurrences of false negatives (FNs) and false positives (FPs) for all images (1–16), across all degradation levels.

If individual images and their respective FP and FN rates are examined across all degradation levels, Figure 9 is obtained. Some interesting observations can be made from the figure. First, it can be seen that images 14 and 16 have an unusually large number of FPs and FNs. This should be viewed in light of the fact that these two images belong to the third dataset. This dataset was taken during a real search and rescue operation, during which the UAV operator did not position the UAV at the desired/required height (of 50 m) and strong wind gusts were also present. Also, image 1 has an unusually large number of FNs. If images 1 and 14 are removed from the calculation (since we care more about FNs than FPs, as explained before), the recall and precision values increase by up to 12% and 5%, respectively. The increase in recall is the largest for the 40% case (12%) and the lowest for the 50% case (5%), while the increase in precision is the largest for the 80% case (5%) and the smallest for the 50% case (2.5%). The second observation that can be made from Figure 9 is that there are a number of cases (images 2, 4, 7, 10, 11, and 12) where the detection algorithm performs really well, with a cumulative number of occurrences of FPs and FNs of around 5. In the case of image 4, it performed flawlessly (the image contained one target, which was correctly detected at all degradation levels), without any FP or FN. Note that no FNs were present in images 2 and 7.

3.3.1. User Study. While we maintain that, for the evaluation of the compressive sensing image reconstruction's performance, a comparison of detection rates in the original nondegraded image (as ground truth) and the reconstructed images is the best choice, the baseline performance of the detection algorithm is presented here for completeness. However, determining the baseline performance (absolute ground truth) proved to be nontrivial, since environmental images from the wild (where search and rescue usually takes place) are hard to control and usually contain unintended objects in the frame (e.g., animals or garbage). Thus, we opted for a 10-subject mini study in which the decision whether there is an object in the image was made by majority vote; that is, if 5 or more people detected something in a certain place in the image, it was considered an object. Subjects were drawn from the faculty and student population, had not seen the images before, and had never participated in search and rescue via image analysis. Subjects were instructed to find people, cars, animals, cloth bags, or similar things in the image, combining speed and accuracy (i.e., not emphasizing either). They were seated in front of a 23.6-inch LED monitor (Philips 247E) on which they inspected the images and were allowed to zoom in and out of the image as they felt necessary. Images were presented to the subjects in random order to avoid undesired (learning/fatigue) effects.

On average, it took a subject 638.3 s (10 minutes and 38.3 seconds) to go over all images. In order to demonstrate intersubject variability and how demanding the task was, we analyzed precision and recall for the results of the test subject study. For example, if 6 out of 10 subjects marked a part of an image as an object, this meant that there were 4 FNs (i.e., 4 subjects did not detect that object). On the other hand, if 4 test subjects marked an area in an image (which therefore did not pass the threshold in the majority voting process), it was counted as 4 FPs. The analysis conducted in such a manner yielded a recall of 82.22% and a precision of 93.63%. Here, again, two images (15 and 16) accounted for more than 50% of the FNs. It should be noted that these results cannot be directly compared to the proposed algorithm, since they rely on significant human intervention.

4. Conclusions

In the paper, gradient based compressive sensing is presented and applied to images acquired from a UAV during search and rescue operations/drills, in order to reduce the network/transmission burden. The quality of the CS reconstruction is analyzed, as well as its impact on object detection algorithms used in search and rescue. All introduced quality metrics showed significant improvement, with varying ratios depending on the degradation level as well as the type of terrain/environment depicted in a particular image. The dependency of reconstruction quality on terrain type is interesting and opens up the possibility of including terrain type detection algorithms, since then the reconstruction quality could be inferred in advance and an appropriate degradation level (i.e., in smart sensors) selected. The dependency on the quality of the degraded image is also demonstrated. Reconstructed images showed good performance (with varying recall and precision values) within the object detection algorithm, although a slightly higher false negative rate (whose cost is high in search applications) is present. However, there were a few images in the dataset on which the algorithm performed either flawlessly, with no false negatives, or with only a few false positives, whose cost is not large in the current application setup, with a human operator checking raised alarms. Of interest is the slight peak in performance at the 30% degradation level (compared to 40% and the general downward trend); this peak was detected in an earlier study on a completely different image set, making a chance finding unlikely. Currently, no explanation for the phenomenon can be provided, and it warrants future research. Nevertheless, we believe that the obtained results are promising (especially in light of the results of the mini user study) and require further research, especially on the detection side. For example, the algorithm could be augmented with a terrain recognition algorithm, which could give cues about terrain type to both the reconstruction algorithm (adjusting the degradation level while keeping the desired level of quality for the reconstructed images) and the detection algorithm, augmented with an automatic selection procedure for some parameters, like the mean shift bandwidth (for performance optimization). Additionally, automatic threshold estimation using the image size and UAV altitude could be used for adaptive configuration of the detection algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] UK Ministry of Defense, "Military search and rescue quarterly statistics: 2015 quarter 1," Statistical Report, 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425819/SAR_Quarter1_2015_Report.pdf.

[2] T. W. Heggie and M. E. Amundson, "Dead men walking: search and rescue in US National Parks," Wilderness and Environmental Medicine, vol. 20, no. 3, pp. 244–249, 2009.

[3] M. Superina and K. Pogacic, "Frequency of the Croatian mountain rescue service involvement in searching for missing persons," Police and Security, vol. 16, no. 3-4, pp. 235–256, 2008 (Croatian).

[4] J. Sokalski, T. P. Breckon, and I. Cowling, "Automatic salient object detection in UAV imagery," in Proceedings of the 25th International Conference on Unmanned Air Vehicle Systems, pp. 11.1–11.12, April 2010.

[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138–145, 2010.

[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142–147, Canterbury, UK, September 2010.

[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, International Society for Optical Engineering, May 2006.

[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223–240, Vienna, Austria, September 2014.

[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739–1743, 2016.

[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.

[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.

[13] M. F. Duarte, M. A. Davenport, D. Takbar et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008.

[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093–2106, 2015.

[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41–43, 2016.

[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497–500, Belgrade, Serbia, November 2014.

[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690–694, Madison, Wis, USA, August 2007.

[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392–408, 2010.

[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1–92, Walter de Gruyter, Berlin, Germany, 2010.

[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707–737, 2013.

[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.

[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366–2369, Istanbul, Turkey, August 2010.

[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81–91, 2011.

[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?" IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.

[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.

[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583–621, 1952.

[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, Minneapolis, Minn, USA, June 2007.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 5: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

Mathematical Problems in Engineering 5

Based on the matrices Z+119896 and Zminus119896 the gradient vector G iscalculated as

G119896 = 12Δ [1003817100381710038171003817DCT2D Z+119896 1003817100381710038171003817col1 minus 1003817100381710038171003817DCT2D Zminus119896 1003817100381710038171003817col1 ] (15)

where DCT2Dsdot is 2D DCT being calculated over columnsof Z+119896 and Zminus119896 while sdot col1 denotes the ℓ1 norm calculatedover columns In the final step the pixels values are updatedas follows

z119896+1 = z119896 + G(2Δ) (16)

The gradient is generally proportional to the error betweenthe exact image block 119891 and its estimate 119911 Therefore themissing values will converge to the true signal values In orderto obtain a high level of the reconstruction precision the stepΔ is decreased when the algorithm convergence slows downNamely when the pixels values are very close to the exactvalues the gradient will oscillate around the exact value andthe step size should be reduced for example Δ = Δ3 Thestopping criterion can be set using the desired reconstructionaccuracy 120576 expressed in dB as follows

MSE = 10 log10sum10038161003816100381610038161003816119911119901 minus 119911119901minus1100381610038161003816100381610038162sum 10038161003816100381610038161003816119911119901minus1100381610038161003816100381610038162

lt 120576 (17)

where Mean Square Error (MSE) is calculated as a differencebetween the reconstructed signals before and after reducingstep Δ Here we use the precision 120576 = minus100 dB The sameprocedure is repeated for each image block resulting in thereconstructed image 119910 Computational complexity of thereconstruction algorithm is analyzed in detail in light ofpossible implementation of the algorithm in systems (likeFPGA) with limited computational resources The 2D DCTof size 119872 times 119872 is observed with 119872 being the power of2 Particularly 119872 = 16 is used here Hence for eachobserved image block the total number of additions is (119872 minus119872119860)2[(31198722)(log2(119872)minus1)+2]2+4 where119872119860 denotes theavailable samples while the total number ofmultiplications is(119872 minus119872119860)2[119872log2(119872)minus31198722+4]2+7 Note that themostdemanding operation is DCT calculation which can be doneusing fast algorithm with (31198722)(log2(119872) minus 1) + 2 additionsand119872log2(119872)minus31198722+4multiplicationsThese numbers ofoperations are squared for the considered 2D signal case

The performance of CS reconstruction algorithm can beseen in Figure 2 for three different numbers of compressedmeasurements (compared to the original image dimension-ality) 80 of measurements 50 of measurements and20 of measurements Consequently we may define thecorresponding image degradation levels as 20 degradation50 degradation and 80 degradation

22 Suspicious Object Detection Algorithm Figure 1 includesthe general overview of the proposed image processingalgorithm The block diagram implicitly suggests a three-stage operation the first stage being the preprocessing stage isrepresented by the top left part of the diagram and the second

stage being the segmentation stage is represented by the lowerleft part of the diagram whereas the third stage being thedecision-making stage is represented by the right part of theblock diagram It should be noted that the algorithm has beendeployed for Croatian Mountain Rescue Service for severalmonths as field assistance tool

221 Image Preprocessing Thepreprocessing stage comprisestwo parts At the start images are converted from the original119877119866119861 to 119884119862119887119862119903 color format Traditional 119877119866119861 color format isnot convenient for computer vision applications due to thehigh correlation between its color components Next blue-difference (119862119887) and red-difference (119862119903) color componentsare denoised by applying a 3 times 3 median filter The imageis then divided into nonoverlapping subimages which aresubsequently fed to the segmentation module for furtherprocessing Number of subimages was set to 64 since thenumber 8 divides both image height and width without theresidue and ensures nonoverlapping

222 Segmentation The segmentation stage comprises twosteps Each subimage is segmented using the mean shiftclustering algorithm [21] Mean shift is extremely versatilenonparametric iterative procedure for feature space analysisWhen used for color image segmentation the image datais mapped into the feature space resulting in a clusterpattern The clusters correspond to the significant featuresin the image namely dominant colors Using mean shiftalgorithm these clusters can be located and dominant colorscan therefore be extracted from the image to be used forsegmentation

The clusters are located by applying a search window inthe feature space which shifts towards the cluster center Themagnitude and the direction of the shift in feature space arebased on the difference between the window center and thelocal mean value inside the window For 119899 data points 119909119894 119894 =1 2 119899 in the d-dimensional space 119877119889 the shift is definedas

119898ℎ (119909) = sum119899119894=1 119909119894119892 (1003817100381710038171003817(119909 minus 119909119894) ℎ10038171003817100381710038172)sum119899119894=1 119892 (1003817100381710038171003817(119909 minus 119909119894) ℎ10038171003817100381710038172) minus 119909 (18)

where 119892(119909) is the kernel ℎ is a bandwidth parameter whichis set to the value 45 (determined experimentally usingdifferent terrain-image datasets) 119909 is the center of the kernel(window) and 119909119894 is the element inside kernel When themagnitude of the shift becomes small according to the giventhreshold the center of the search window is declared ascluster center and the algorithm is said to have convergedfor one clusterThis procedure is repeated until all significantclusters have been identified

Once all the subimages have been clustered a globalcluster matrix K is formed by merging resulting clustermatrices obtained during subimage segmentation This newmatrix K is clustered again using cluster centers (ie mean119862119887 and 119862119903 of each cluster) from the previous step insteadof subimage pixel values as input points This two-stepapproach is introduced in order to speed up segmentation


Figure 2: Example of the compressive sensing image reconstruction algorithm's performance on one of the images (image 10) from the used dataset, showing the degraded and reconstructed images for 20%, 50%, and 80% degradation alongside the original. Detection results are also indicated: green squares represent correct detections (in comparison with the original image), red squares represent FNs, and orange squares represent FPs.

It assures that the number of points for data clustering stays reasonably low in both steps. The number of pixels in a subimage is naturally $T$ times smaller than the number of pixels in the original image (where $T$ is the number of subimages), and the number of cluster centers used in the second step is even smaller than the number of pixels in the first step.

The output of the segmentation stage is a set of candidate regions, each representing a cluster of similarly colored pixels.

The bulk of the computational complexity of the segmentation step is due to this cluster search procedure and is equal to $O(N_{X\mathrm{sub}} \times N_{Y\mathrm{sub}})$, where $N_{X\mathrm{sub}}$ is the number of pixels along the $X$ axis in the subimage and $N_{Y\mathrm{sub}}$ is the number of pixels along the $Y$ axis in the subimage.


All subsequent steps (including the decision-making step) operate only on the detected clusters, making their computational complexity negligible compared to the complexity of the mean shift algorithm in this step.

2.2.3. Decision-Making. The decision-making stage comprises five steps. In the first step, large candidate regions are eliminated from subsequent processing. The elimination threshold value $N_{\mathrm{th}}$ is determined based on the image height, the image width, and the estimated distance between the camera and the observed surface. The premise here is that if such a region were to represent a person, the actual person would be standing too close to the camera, making the search trivial.

The second step removes all areas inside particular candidate regions that contain only a handful of pixels. In this way, the residual noise represented by scattered pixels left after median filtering is efficiently eliminated. Then, in the third step, the resulting image is dilated by applying a 5 × 5 mask. This is done to increase the size of the connected pixel areas inside candidate regions so that similar nearby segments can be merged together.

In the next step, the segments belonging to a cluster with more than three spatially separated areas are excluded from the resulting set of candidate segments, under the assumption that the image would not contain more than three suspicious objects of the same color. Finally, all the remaining segments that were not eliminated in any of the previous four steps are singled out as potential targets.
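A hedged sketch of steps two and three (small-area removal and 5 × 5 dilation) is shown below; the reliance on scikit-image and OpenCV, the helper name, and the reuse of the later-defined threshold $N_{\min}$ as the minimum area are assumptions for illustration only.

```python
import numpy as np
import cv2
from skimage.morphology import remove_small_objects

def clean_candidate_mask(candidate_mask, min_area=10):
    """Steps two and three of decision-making on a binary candidate mask.

    Sketch only: `min_area` (here reusing N_min = 10) and the choice of
    libraries are assumptions, not the authors' exact code.
    """
    # Step two: drop areas with only a handful of pixels (residual noise).
    cleaned = remove_small_objects(candidate_mask.astype(bool), min_size=min_area)

    # Step three: dilate with a 5x5 mask so nearby segments can merge.
    kernel = np.ones((5, 5), np.uint8)
    dilated = cv2.dilate(cleaned.astype(np.uint8), kernel, iterations=1)
    return dilated.astype(bool)
```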

More formally, the decision-making procedure can be written as follows. An image $I$ consists of a set of clusters $C_i$ obtained by grouping image pixels using only the color values of the chosen color model as feature vector elements:
$$I = \bigcup_{i=1}^{n} C_i, \qquad C_i \cap C_j = \emptyset \quad \forall i, j,\ i \neq j. \quad (19)$$
As mentioned before, clustering according to the similarity of feature vectors is performed using the mean shift algorithm. Each cluster $C_i$ represents a set of spatially connected-component regions, or segments, $S_{ik}$:
$$C_i = \bigcup_{k=1}^{m} S_{ik}, \qquad S_{ik} \cap S_{il} = \emptyset \quad \forall k, l,\ k \neq l. \quad (20)$$

In order to accept a segment $S_{ik}$ as a potential target, the following properties have to be satisfied:
$$\begin{aligned}
p_1 &: \ \operatorname{Size}(C_i) < N_{\max},\\
p_2 &: \ \operatorname{Size}(S_{ik}) > N_{\min},\\
p_3 &: \ m \leq N_A,
\end{aligned} \quad (21)$$
where $N_{\max}$ and $N_{\min}$ are chosen threshold values, $m$ is the total number of segments within a given cluster, and $N_A$ denotes the maximum allowed number of candidate segments belonging to the same cluster. For our application, $N_{\min}$, $N_{\max}$, and $N_A$ are set to 10, 38000, and 3, respectively. Please note that the $N_{\max}$ value represents 0.317% of the total pixels in the image and was determined empirically (it includes some redundancy; i.e., objects of interest are rarely that large).
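The acceptance test can be summarized in a short sketch; the data layout (a mapping from cluster id to the pixel counts of its segments) is an assumption, while the default thresholds mirror the values quoted above.

```python
def select_candidate_segments(clusters, n_max=38000, n_min=10, n_a=3):
    """Apply acceptance properties p1-p3 to the segmented clusters.

    `clusters` is assumed to map a cluster id to the list of pixel counts of
    its spatially connected segments; the defaults follow the reported
    thresholds (N_min = 10, N_max = 38000, N_A = 3).
    """
    candidates = []
    for cluster_id, segment_sizes in clusters.items():
        cluster_size = sum(segment_sizes)
        if cluster_size >= n_max:          # p1: reject overly large clusters
            continue
        if len(segment_sizes) > n_a:       # p3: too many separate areas
            continue
        for k, size in enumerate(segment_sizes):
            if size > n_min:               # p2: ignore tiny residual-noise areas
                candidates.append((cluster_id, k))
    return candidates
```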

2.3. Performance Metrics

2.3.1. Image Quality Metrics. Several image quality metrics are introduced and used in the experiments in order to give a better overview of the obtained results and to lend them more weight. It should also be noted that it is not our intention to draw conclusions about the appropriateness of a particular metric or to compare them directly, but rather to make the results more accessible to a wider audience.

The Structural Similarity Index (SSI) [22, 23] is inspired by the human visual system, which is highly attuned to extracting and processing structural information from images. It detects and evaluates structural changes between two signals (images), the reference ($x$) and the reconstructed ($y$) one, which makes SSI very consistent with human visual perception. SSI values lie in the range from 0 to 1, where 0 corresponds to the lowest quality image (compared to the original) and 1 to the best quality image (attained only when the two images are identical). SSI is calculated on small patches taken from the same locations of the two images. It combines three similarity terms between two images: (1) similarity of local luminance/brightness, $l(x, y)$; (2) similarity of local contrast, $c(x, y)$; and (3) similarity of local structure, $s(x, y)$. Formally, it is defined by

$$\mathrm{SSI}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y) = \left(\frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}\right) \cdot \left(\frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}\right) \cdot \left(\frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}\right), \quad (22)$$

where $\mu_x$, $\mu_y$ are the local sample means of the $x$ and $y$ images, $\sigma_x$, $\sigma_y$ are the local sample standard deviations of the $x$ and $y$ images, $\sigma_{xy}$ is the local sample cross-correlation of the $x$ and $y$ images after removing their respective means, and $C_1$, $C_2$, and $C_3$ are small positive constants used for numerical stability and robustness of the metric.

It can be applied to both color and grayscale images, but for simplicity and without loss of generality it was applied to normalized grayscale images in this paper. It should be noted that SSI is widely used in practice for predicting the perceived quality of digital television and cinematic pictures, although its performance relative to MSE is sometimes disputed [24]. It is used as the main performance metric in the experiment due to its simple interpretation and its wide usage for image quality assessment.
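As an illustration of how SSI is computed in practice, the following sketch uses scikit-image's SSIM implementation as a stand-in for (22), applied to normalized grayscale images as in the paper; the helper name is ours.

```python
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity

def ssi(original_rgb, reconstructed_rgb):
    """SSI between a reference and a reconstructed image.

    Sketch only: scikit-image's SSIM stands in for (22) and is applied to
    normalized grayscale images, as done in the paper.
    """
    x = rgb2gray(original_rgb)       # grayscale values in [0, 1]
    y = rgb2gray(reconstructed_rgb)
    return structural_similarity(x, y, data_range=1.0)
```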

Peak signal to noise ratio (PSNR) [23] was used as part of an auxiliary metric set for image quality assessment, which also included MSE and the ℓ2 norm. Missing pixels can, in a sense, be considered salt-and-pepper type noise, and thus the use of PSNR makes sense, since it is defined as the ratio between the maximum possible power of a signal and the power of the corrupting noise. A larger PSNR indicates a better quality reconstructed image and vice versa. Unlike SSI, PSNR does not have a limited range. It is expressed in units of dB and defined by


$$\mathrm{PSNR}(x, y) = 10 \log_{10}\!\left(\frac{\mathrm{maxValue}^2}{\mathrm{MSE}}\right), \quad (23)$$

where maxValue is the maximum pixel value, which in a normalized grayscale image equals 1, and MSE is the Mean Square Error between $x$ (the reference image) and $y$ (the reconstructed image), defined as
$$\mathrm{MSE}(x, y) = \frac{1}{N} \sum_{i=1}^{N} (x_i - y_i)^2, \quad (24)$$

where $N$ is the number of pixels in the $x$ or $y$ image (both images must have the same size) and $x_i$ and $y_i$ are the normalized values of the $i$th pixel in the $x$ and $y$ image, respectively. MSE is the dominant quantitative performance metric for the assessment of signal quality in the field of signal processing. It is simple to use and interpret, has a clear physical meaning, and is a desirable metric within the statistics and estimation framework. However, its performance has been criticized when dealing with perceptual signals such as images [25], mainly because the implicit assumptions behind MSE are in general not met in the context of visual perception. Nevertheless, it is still often used in the literature when reporting image reconstruction performance, and thus we include it here for comparison purposes. Larger MSE values indicate lower quality images (compared to the reference), while smaller values indicate better quality. Unlike SSI, the MSE value range is not bounded.

The final metric used in the paper is the ℓ2 metric, also called the Euclidean distance, which we here apply to the color images. This means that the ℓ2 metric represents the Euclidean distance between two points in RGB space: the $i$th pixel in the original image and the corresponding pixel in the reconstructed image. It is accumulated over all pixels in the image and is defined as
$$\ell_2(x, y) = \sum_{i=1}^{N} \sqrt{(R_{xi} - R_{yi})^2 + (G_{xi} - G_{yi})^2 + (B_{xi} - B_{yi})^2}, \quad (25)$$

where $N$ is the number of pixels in the $x$ or $y$ image (both images must have the same size) for all color channels, $R_{xi}$, $R_{yi}$ are the normalized red channel values (0-1) of the $i$th pixel, $G_{xi}$, $G_{yi}$ are the normalized green channel values (0-1) of the $i$th pixel, and $B_{xi}$, $B_{yi}$ are the normalized blue channel values (0-1) of the $i$th pixel. The larger the value of the ℓ2 metric, the greater the difference between the two images. The ℓ2 norm metric is mainly used in image similarity analysis, although in some situations the ℓ1 metric has been shown to be a proper choice as well [26].
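For reference, the three auxiliary metrics can be computed directly from (23)-(25) as in the sketch below; the function names and the assumption of images normalized to [0, 1] are ours.

```python
import numpy as np

def psnr_mse(x_gray, y_gray, max_value=1.0):
    """MSE (24) and PSNR (23) for normalized grayscale images of equal size."""
    mse = np.mean((x_gray - y_gray) ** 2)
    psnr = 10.0 * np.log10(max_value ** 2 / mse) if mse > 0 else np.inf
    return psnr, mse

def l2_color(x_rgb, y_rgb):
    """l2 metric (25): per-pixel Euclidean distance in RGB, summed over pixels.

    Both inputs are assumed to be float arrays with channel values in [0, 1].
    """
    diff = np.asarray(x_rgb, dtype=float) - np.asarray(y_rgb, dtype=float)
    return float(np.sqrt((diff ** 2).sum(axis=-1)).sum())
```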

2.3.2. Detection Quality Metrics. The performance of the image processing algorithm for detecting suspicious objects in images with different levels of missing pixels was evaluated in terms of precision and recall. The standard definitions of these two measures are
$$\mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \qquad \mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \quad (26)$$

where TP denotes the number of true positives, FP the number of false positives, and FN the number of false negatives. It should be noted that all of these counts were determined by checking whether or not a matching segment (object) had been found in the original image, without checking whether it actually represents a person or an object of interest. More on the accuracy (in terms of recall and precision) of the presented algorithm can be found in Section 3.3.1, where a comparison with human performance on the original images is made.
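A trivial helper expressing (26), with the counts interpreted as described above (matching against detections in the original image), might look as follows; it is illustrative only.

```python
def recall_precision(tp, fp, fn):
    """Recall and precision per (26); TP/FP/FN are counted against
    detections found in the original (nondegraded) image."""
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    return recall, precision
```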

When drawing conclusions based on the presented recall and precision values, it should be kept in mind that the algorithm is not envisaged as a standalone tool but as a cooperative tool for human operators, aimed at reducing their workload. Thus, FPs do not carry a high cost, since a human operator can check and dismiss a false alarm. FNs are more costly, since they can potentially mislead the operator.

3. Results and Discussion

3.1. Database. For the experiment we used 16 images of 4K resolution (2992 × 4000 pixels) obtained on 3 occasions with a DJI Phantom 3's gyro-stabilized camera. The camera's plane was parallel with the ground and, ideally, the UAV was at a height of 50 m (although this was not always the case, as will be explained later in Section 3.3). All images were taken in the coastal area of Croatia, in which search and rescue operations often take place. Images 1-7 were taken during Croatian Mountain Rescue Service search and rescue drills (Set 1), images 8-12 were taken during our mockup testing (Set 2), and images 13-16 were taken during an actual search and rescue operation (Set 3).

All images were first intentionally degraded to the desired level (ranging between 20% and 80%) by setting randomly chosen pixels to white (i.e., marking them as missing). The images were then reconstructed using the CS approach and tested for object detection performance.
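The degradation procedure can be sketched as follows; marking missing pixels as white and returning a mask of the available samples reflects the description above, while the helper name and the use of NumPy's random generator are assumptions.

```python
import numpy as np

def degrade(image, degradation_level, rng=None):
    """Randomly discard a fraction of pixels, marking them as white (missing).

    `degradation_level` is the fraction of missing pixels (0.2-0.8 in the
    experiments); the returned mask records which pixels were kept, since the
    CS reconstruction needs the positions of the available samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    missing = rng.random((h, w)) < degradation_level

    degraded = image.copy()
    degraded[missing] = 1.0 if np.issubdtype(image.dtype, np.floating) else 255
    return degraded, ~missing
```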

3.2. Image Reconstruction. First, we explore how well the CS based reconstruction algorithm performs in terms of image quality metrics. The metrics were calculated for each of the images for two cases: (a) when random missing samples were introduced to the original image (i.e., for the degraded image) and (b) when the original image was reconstructed using the gradient based algorithm. For both cases the original, unaltered image was taken as a reference. The obtained results for all metrics can be found in Figure 3 in the form of enhanced box plots.

As can be seen from Figure 3(a), the image quality before reconstruction (as measured by SSI) is rather low, with mean values in the range of 0.033 to 0.127, and it is significantly increased after reconstruction, with mean values in the range 0.666 to 0.984.


Figure 3: Image quality metrics for all degradation levels (20%-80%, over all images) before and after reconstruction: (a) SSI, (b) PSNR (dB), (c) MSE, (d) ℓ2 norm. Red represents the metric for degraded images, blue the postreconstruction values, and black dots the mean values.

The same trend can be observed for all other quality metrics: PSNR (range of 3.797 dB to 9.838 dB before and 23.973 dB to 38.100 dB after reconstruction), MSE (range of 0.107 to 0.428 before and 1.629e-4 to 0.004 after reconstruction), and ℓ2 norm (range of 1.943e+3 to 3.896e+3 before and 75.654 to 384.180 after reconstruction). Please note that for some cases (as in Figures 3(c) and 3(d)) the distribution for a particular condition is very tight, making its graphical representation quite small. The nonparametric Kruskal-Wallis test [27] was conducted on all metrics for the postreconstruction case in order to determine the statistical significance of the results, with degradation level as the independent variable. Statistical significance was detected in all cases, with the following values: SSI ($\chi^2(6) = 98.34$, $p < 0.05$), PSNR ($\chi^2(6) = 101.17$, $p < 0.05$), MSE ($\chi^2(6) = 101.17$, $p < 0.05$), and ℓ2 norm ($\chi^2(6) = 101.17$, $p < 0.05$). Tukey's honestly significant difference (THSD) post hoc tests (corrected for multiple comparisons) were performed and revealed some interesting patterns. For all metrics, a statistical difference was present only between cases that differed by at least 30% in degradation level (e.g., the 50% case was statistically different only from the 20% and 80% cases; please see Figure 2 for a visual comparison). We believe this demonstrates the quality of the obtained reconstruction (in terms of the presented metrics); that is, there is no statistical difference between the 20% and 40% missing-sample cases (although their means differ in favor of the 20% case). It should also be noted that even with 70% or 80% of the pixels missing, the reconstructed image was subjectively good enough (please see Figure 2) for its content to be recognized by a viewer (with SSI means of 0.778 and 0.666, resp.). However, in cases of such high image degradation (especially the 80% case), the reconstructed images appeared somewhat smudged (a pastel-like effect), with some details (subjectively) lost.
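The statistical test used here is available in SciPy; a hedged sketch of how the per-level groups could be fed to it is shown below (the grouping data structure is an assumption, and the post hoc THSD step is omitted).

```python
from scipy.stats import kruskal

def test_degradation_effect(metric_by_level):
    """Kruskal-Wallis H-test with degradation level as the independent variable.

    `metric_by_level` is assumed to map a degradation level (20-80) to the
    list of post-reconstruction metric values over all 16 images.
    """
    groups = [metric_by_level[level] for level in sorted(metric_by_level)]
    h_stat, p_value = kruskal(*groups)
    return h_stat, p_value
```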

It is interesting to explore how much gain was obtained with CS reconstruction. This is depicted in Figure 4 for all metrics. From the presented figures it can be seen that the obtained improvement is significant and, while its absolute value varies, it is detected by all performance metrics. The improvement ratio is between 3.730 and 81.637 for SSI, between 3.268 and 9.959 for PSNR, between 9.612e-4 and 0.0205 for MSE, and between 0.029 and 0.143 for the ℓ2 norm. From this a general trend can be observed: the improvement gain increases with the degradation level. There are a couple of exceptions to this general rule: for SSI, in 12 out of 16 images the gain for the 70% case is larger than for the 80% case. However, this phenomenon is not present in the other metrics (except for PSNR in images 13 and 14, where the two are almost identical), which might suggest that it is due to some interaction between the metric and the type of environment in the image. The randomness of pixel removal when degrading the image should also be considered.


Figure 4: Ratios of image quality metrics for all degradation levels (over all images) after and before reconstruction, per image: (a) SSI, (b) PSNR, (c) MSE, (d) ℓ2 norm.

Figure 4 also reveals another interesting observation that warrants further research: reconstruction gain and performance might depend on the type of environment/scenery in the image. Taking into consideration that the dataset contains images from three distinct cases (sets) as described earlier (each with a unique environment), covering images 1-7, 8-12, and 13-16, different patterns (gain amplitudes) can be seen for all metrics.

In order to detect whether this observed pattern has statistical significance, the nonparametric Kruskal-Wallis test (with post hoc THSD) was performed so that, for a particular degradation level, the image data were grouped into three categories based on which set they belong to. The obtained results revealed that, for all metrics and degradation levels, the image set has a statistically significant effect on reconstruction improvement (with varying $p$ values, which we omit here for brevity). Post hoc tests revealed that this difference was between image sets 2 and 3 for the SSI and PSNR metrics, while it was between set 1 and sets 2/3 for the ℓ2 norm (between set 1 and sets 2-3 for degradation levels 20%, 30%, and 40%, and between sets 1 and 3 for degradation levels 50%, 60%, 70%, and 80%). For the MSE metric, the difference was between sets 1 and 2 for all degradation levels, with an additional difference between sets 2 and 3 for the 70% and 80% cases. While the metrics do not agree on where the difference lies, they clearly demonstrate that terrain type influences the algorithm's performance, which could be used (in conjunction with terrain type classification, like the one in [28]) to estimate expected CS reconstruction performance before deployment.

Another interesting analysis, in line with the previous one, explores whether knowing the image quality before reconstruction can be used to infer the image quality after reconstruction. This is depicted in Figure 5 for all quality metrics across all used test images. Figure 5 suggests that there exists a clear relationship between the quality metrics before and after reconstruction, which is to be expected. However, this relationship is nonlinear in all cases, although for PSNR it could be considered (almost) linear. This relationship enables one to estimate the image quality after reconstruction and (in conjunction with terrain type, as demonstrated before) to select the optimal degradation level (in terms of reduced data load) for a particular case while maintaining the image quality at the desired level. Since pixel degradation is random, one should also expect a certain level of deviation from the presented means.
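One simple way to operationalize this idea, purely as an illustration and not the authors' method, is to interpolate over measured (before, after) metric pairs for a given terrain type:

```python
import numpy as np

def estimate_reconstructed_quality(ssi_degraded, ssi_before_samples, ssi_after_samples):
    """Estimate post-reconstruction SSI from pre-reconstruction SSI.

    A sketch of the idea behind Figure 5: given measured (before, after)
    pairs for a terrain type, interpolate the expected reconstructed quality
    for a new degraded image. The sample arrays and the use of np.interp are
    assumptions; any monotone regression could replace them.
    """
    order = np.argsort(ssi_before_samples)
    return np.interp(ssi_degraded,
                     np.asarray(ssi_before_samples)[order],
                     np.asarray(ssi_after_samples)[order])
```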


Figure 5: Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction across all used test images: (a) SSI, (b) PSNR (dB), (c) MSE, (d) ℓ2 norm. Presented are means for the seven degradation levels in ascending order (from left to right).

Figure 6: Relation between true positives (TPs) and false negatives (FNs), as a percentage, for each degradation level over all images.


3.3. Object Detection. As stated before, due to the intended application, FNs are more costly than FPs, and it thus makes sense to first explore the total number of FNs over all images. Figure 6 presents the numbers of FNs and TPs as percentages of the total detections in the original image (since they always sum up to the same value for a particular image). Please note that the reasons behind choosing detections in the original (nondegraded) images as the ground truth for the calculation of FNs and TPs are explained in more detail in Section 3.3.1.

Figure 6 shows a more or less constant downward trend in the TP rate; that is, FN rates increase as the degradation level increases. The exception is the 30% level. This dip in TP rate at 30% might be considered a fluke, but it also appeared in another, unrelated analysis [9] (on a completely different image set with the same CS and detection algorithms). Since the reason for this is currently unclear, it should be investigated in the future. As expected, the FN percentage was the lowest for the 20% case (23.38%) and the highest for the 80% case (46.74%); it is around 35% for the other cases (except the 50% case). This might be considered a rather large cost in a search and rescue scenario, but when drawing this type of conclusion it should be kept in mind that comparisons are made with respect to the algorithm's performance on the original image and not against ground truth (which is hard to establish in this type of experiment, where complete control of the environment is not possible).

Some additional insight is provided in Section 3.3.1, where we conducted a mini user study on the original images to determine how well humans perform on the ideal (not reconstructed) images. For completeness of presentation, Figure 7, showing a comparison of the numbers of TPs and FPs for all degradation levels, is included.

Additional insight into the algorithm's performance can be obtained by observing the recall and precision values in Figure 8, as defined by (26).


Figure 7: Number of occurrences of false positives (FPs) and false negatives (FNs) for each degradation level over all images.

Figure 8: Recall (a) and precision (b), in percent, as detection performance metrics for all degradation levels across all images.

The highest recall value (76.62%) is achieved in the case of 20% missing samples (degradation level), followed closely by the 40% case (75.32%). As expected (based on Figure 6), there is a significant dip in the steady downward trend for the 30% case. At the same time, precision is fairly constant (around 72%) over the whole degradation range, with two peaks: for the case of 20% of missing samples (peak value 77.63%) and for 60% of missing samples (peak value 80.33%). No statistical significance (as detected by the Kruskal-Wallis test) was found for recall or precision when the degradation level was considered the independent variable (across all images).

Figure 9: Number of occurrences of false negatives (FNs) and false positives (FPs) for each image across all degradation levels.

If individual images and their respective FP and FN rates are examined across all degradation levels, Figure 9 is obtained. Some interesting observations can be made from the figure. First, it can be seen that images 14 and 16 have an unusually large number of FPs and FNs. This should be viewed in light of the fact that these two images belong to the third dataset, which was taken during a real search and rescue operation during which the UAV operator did not position the UAV at the desired/required height (of 50 m) and strong wind gusts were present. Image 1 also has an unusually large number of FNs. If images 1 and 14 are removed from the calculation (since, as explained before, we care more about FNs than FPs), the recall and precision values increase by up to 12% and 5%, respectively. The increase in recall is the largest for the 40% case (12%) and the lowest for the 50% case (5%), while the increase in precision is the largest for the 80% case (5%) and the smallest for the 50% case (2.5%). The second observation that can be made from Figure 9 is that there are a number of cases (images 2, 4, 7, 10, 11, and 12) where the detection algorithm performs really well, with a cumulative number of FP and FN occurrences of around 5. In the case of image 4 it performed flawlessly (the image contained one target that was correctly detected at all degradation levels), without any FP or FN. Note that no FNs were present in images 2 and 7.

3.3.1. User Study. While we maintain that, for the evaluation of the compressive sensing image reconstruction's performance, comparing detection rates in the original nondegraded image (as ground truth) and in the reconstructed images is the best choice, the baseline performance of the detection algorithm is presented here for completeness. However, determining the baseline performance (absolute ground truth) proved to be nontrivial, since environmental images from the wild (where search and rescue usually takes place) are hard to control and there are usually unintended objects in the frame (e.g., animals or garbage). Thus, we opted for a 10-subject mini-study in which the decision of whether there is an object at a certain place in the image was made by majority vote; that is, if 5 or more people detected something at a certain place in the image, it was considered an object.


Subjects were drawn from the faculty and student population; they had not seen the images before, nor had they ever participated in search and rescue via image analysis. Subjects were instructed to find people, cars, animals, cloth, bags, or similar things in the image, combining speed and accuracy (i.e., emphasizing neither). They were seated in front of a 23.6-inch LED monitor (Philips 247E) on which they inspected the images and were allowed to zoom in and out of the image as they felt necessary. Images were presented to the subjects in random order to avoid undesired (learning/fatigue) effects.

On average, it took a subject 638.3 s (10 minutes and 38.3 seconds) to go over all images. In order to demonstrate intersubject variability and how demanding the task was, we analyzed precision and recall for the results of the test subject study. For example, if 6 out of 10 subjects marked a part of an image as an object, this meant that there were 4 FNs (i.e., 4 subjects did not detect that object). On the other hand, if 4 test subjects marked an area in an image (which therefore did not pass the threshold in the majority voting process), it was counted as 4 FPs. The analysis conducted in such a manner yielded a recall of 82.22% and a precision of 93.63%. Here, again, two images (15 and 16) accounted for more than 50% of the FNs. It should be noted that these results cannot be directly compared to the proposed algorithm, since they rely on significant human intervention.
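The bookkeeping behind the majority vote and the resulting FN/FP counts can be sketched as follows; the data layout (per-location vote counts) is an assumption, while the threshold of 5 out of 10 subjects follows the description above.

```python
def user_study_counts(votes_per_location, n_subjects=10, threshold=5):
    """Turn per-location vote counts into confirmed objects, FNs, and FPs.

    `votes_per_location` is assumed to map an image location to the number of
    subjects (out of 10) who marked it. Locations reaching the majority
    threshold become objects; counts then follow the examples above
    (e.g., 6/10 votes -> an object with 4 FNs; 4/10 votes -> 4 FPs).
    """
    objects, fn, fp = [], 0, 0
    for location, votes in votes_per_location.items():
        if votes >= threshold:
            objects.append(location)
            fn += n_subjects - votes   # subjects who missed a confirmed object
        else:
            fp += votes                # marks on locations not confirmed as objects
    return objects, fn, fp
```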

4. Conclusions

In this paper, gradient based compressive sensing is presented and applied to images acquired from a UAV during search and rescue operations/drills in order to reduce the network transmission burden. The quality of the CS reconstruction is analyzed, as well as its impact on object detection algorithms used in search and rescue. All introduced quality metrics showed significant improvement, with ratios varying depending on the degradation level as well as on the type of terrain/environment depicted in a particular image. The dependency of reconstruction quality on terrain type is interesting and opens up the possibility of including terrain type detection algorithms, since the reconstruction quality could then be inferred in advance and an appropriate degradation level selected (i.e., in smart sensors). The dependency on the quality of the degraded image is also demonstrated. Reconstructed images showed good performance (with varying recall and precision values) within the object detection algorithm, although a slightly higher false negative rate (whose cost is high in search applications) is present. However, there were a few images in the dataset on which the algorithm performed either flawlessly, with no false negatives, or with only a few false positives, whose cost is not high in the current application setup, where a human operator checks the raised alarms. Of interest is the slight peak in performance at the 30% degradation level (compared to the 40% level and the general downward trend); this peak was also detected in an earlier study on a completely different image set, making a chance finding unlikely. Currently, no explanation for the phenomenon can be provided, and it warrants future research. Nevertheless, we believe that the obtained results are promising (especially in light of the results of the mini user study) and call for further research, especially on the detection side. For example, the algorithm could be augmented with a terrain recognition algorithm, which could give cues about terrain type to both the reconstruction algorithm (adjusting the degradation level while keeping the desired level of quality for reconstructed images) and the detection algorithm, which could further be augmented with an automatic selection procedure for some parameters, like the mean shift bandwidth (for performance optimization). Additionally, automatic threshold estimation using the image size and UAV altitude could be used for adaptive configuration of the detection algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] UK Ministry of Defense, "Military search and rescue quarterly statistics: 2015 quarter 1," Statistical Report, 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425819/SAR_Quarter1_2015_Report.pdf.
[2] T. W. Heggie and M. E. Amundson, "Dead men walking: search and rescue in US National Parks," Wilderness and Environmental Medicine, vol. 20, no. 3, pp. 244-249, 2009.
[3] M. Superina and K. Pogacic, "Frequency of the Croatian mountain rescue service involvement in searching for missing persons," Police and Security, vol. 16, no. 3-4, pp. 235-256, 2008 (Croatian).
[4] J. Sokalski, T. P. Breckon, and I. Cowling, "Automatic salient object detection in UAV imagery," in Proceedings of the 25th International Conference on Unmanned Air Vehicle Systems, pp. 11.1-11.12, April 2010.
[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138-145, 2010.
[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142-147, Canterbury, UK, September 2010.
[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, International Society for Optical Engineering, May 2006.
[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223-240, Vienna, Austria, September 2014.
[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739-1743, 2016.
[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707-710, 2007.
[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.
[13] M. F. Duarte, M. A. Davenport, D. Takbar et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.
[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093-2106, 2015.
[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41-43, 2016.
[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497-500, Belgrade, Serbia, November 2014.
[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690-694, Madison, Wis, USA, August 2007.
[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392-408, 2010.
[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1-92, Walter de Gruyter, Berlin, Germany, 2010.
[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707-737, 2013.
[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.
[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366-2369, Istanbul, Turkey, August 2010.
[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81-91, 2011.
[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?" IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98-117, 2009.
[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.
[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583-621, 1952.
[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1-8, Minneapolis, Minn, USA, June 2007.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 6: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

6 Mathematical Problems in Engineering

Orig

inal

imag

e10

Degraded image Reconstructed image

20

50

80

Figure 2 Example of compressive sensing image reconstruction algorithmrsquos performance on one of the images (10) from used datasetDetection results are also indicated green squares represent correct detections (in comparison to original image) red squares represent FNsand orange squares represent FPs

while keeping almost the same performance It assures thatthe number of points for data clustering stays reasonably lowin both stepsThe number of pixels in subimages is naturally119879 times smaller than the number of pixels in the originalimage and the number of cluster centers used in the secondstep is even smaller than the number of pixels in the first step

The output of the segmentation stage is a set of candidateregions each representing a cluster of similarly colored pixels

The bulk of computational complexity of segmentationstep is due to this cluster search procedure and is equal to119874(119873119883sub times 119873119884Sub) where 119873119883sub is number of pixels along119883 axis in the subimage while 119873119884Sub is number of pixels

Mathematical Problems in Engineering 7

along 119884 axis in the subimage All subsequent steps (includingdecision-making step) are only concerned with detectedclusters making their computational complexity negligiblecompared to complexity of mean shift algorithm in this step

223 Decision-Making The decision-making stage com-prises five steps In the first step large candidate regionsare eliminated from subsequent processing The eliminationthreshold value119873th is determined based on the image heightimage width and the estimated distance between the cameraand the observed surface The premise here is that if suchregions were to represent a person it would mean that theactual person is standing too close to the camera making thesearch trivial

The second step is to remove all those areas inside partic-ular candidate regions that contain only a handful of pixelsIn this way the residual noise presented by some scatteredpixels left aftermedian filtering is efficiently eliminatedThenin the third step the resulting image is dilated by applying a5 times 5mask This is done to increase the size of the connectedpixel areas inside candidate regions so that the similar nearbysegments can be merged together

In the next step the segments belonging to the clusterwith more than three spatially separated areas are excludedfrom the resulting set of candidate segments under theassumption that the imagewould not containmore than threesuspicious objects of the same color Finally all the remainingsegments that were not eliminated in any of the previous foursteps are singled out as potential targets

More formally the decision-making procedure can bewritten as follows An image 119868 consists of a set of clusters 119862119894obtained by grouping image pixels using only color values ofthe chosen color model as feature vector elements

119868 = 119899⋃119894=1

119862119894 119862119894 cap 119862119895 = 0 forall119894 119895 119894 = 119895 (19)

As mentioned before clustering according to similarity offeature vectors is performed usingmean shift algorithm Eachcluster 119862119894 represents a set of spatially connected-componentregions or segments 119878119894119896

119862119894 = 119898⋃119896=1

119878119894119896 119878119894119896 cap 119878119894119897 = 0 forall119896 119897 119896 = 119897 (20)

In order to accept segment 119878119894119896 as potential target thefollowing properties have to be satisfied

1199011 Size (119862119894) lt 119873max1199012 Size (119878119894119896) gt 119873min1199013 119898 le 119873119860

(21)

where 119873max and 119873min are chosen threshold values m isthe total number of segments within a given cluster and119873119860 denotes the maximum allowed number of candidatesegments belonging to the same cluster For our application119873min 119873max and119873119860 are set to 10 38000 and 3 respectivelyPlease note that 119873max value represents 0317 of total pixelsin the image andwas determined empirically (it encompassessome redundancy ie objects of interest are rarely that large)

23 Performance Metrics

231 Image QualityMetric Several image qualitymetrics areintroduced andused in the experiments in order to give betteroverview of obtained results as well as more weight to theresults It should also be noted that it is not our intention tomake conclusions about appropriateness of particular metricor to make their direct comparison but rather to make theresults more accessible to wider audience

Structural Similarity Index (SSI) [22 23] is inspired byhuman visual system which is highly accustomed to extract-ing and processing structural information from images Itdetects and evaluates structural changes between two signals(images) reference (119909) and reconstructed (119910) one Thismakes SSI very consistent with human visual perceptionObtained SSI values are in the range of 0 and 1 where 0corresponds to lowest quality image (compared to original)and 1 best quality image (which only happens for the exactlythe same image) SSI is calculated on small patches takenfrom the same locations of two images It encompassesthree similarity terms between two images (1) similarity oflocal luminancebrightnessmdash119897(119909 119910) (2) similarity of localcontrastmdash119888(119909 119910) and (3) similarity of local structuremdash119904(119909 119910) Formally it is defined by

SSI (119909 119910) = 119897 (119909 119910) sdot 119888 (119909 119910) sdot 119904 (119909 119910)= ( 2120583119909120583119910 + 11986211205832119909 + 1205832119910 + 1198621) sdot (

2120590119909120590119910 + 11986221205902119909 + 1205902119910 + 1198622)

sdot ( 120590119909119910 + 1198623120590119909120590119910 + 1198623) (22)

where120583119909 120583119910 are local samplemeans of119909 and119910 image120590119909 120590119910are local sample standard deviations of 119909 and 119910 images 120590119909119910is local sample cross-correlation of 119909 and 119910 images afterremoving their respective means and 1198621 1198622 and 1198623 aresmall positive constants used for numerical stability androbustness of the metric

It can be applied on both color and grayscale images butfor simplicity and without loss of generality it was appliedon normalized grayscale images in the paper It should benoted that SSI is widely used in predicting the perceivedquality of digital television and cinematic pictures in practicalapplication but its performance is sometimes disputed incomparison to MSE [24] It is used as main performancemetric in the experiment due to simple interpretation and itswide usage for image quality assessment

Peak signal to noise ratio (PSNR) [23]was used as a part ofauxiliary metric set for image quality assessment which alsoincluded MSE and ℓ2 norm Missing pixels can in a sensebe considered a salt-and-pepper type noise and thus useof PSNR makes sense since it is defined as a ratio betweenthe maximum possible power of a signal and the power ofnoise corrupted signal Larger PSNR indicates better qualityreconstructed image and vice versa PSNR does not have

8 Mathematical Problems in Engineering

limited range as is the case with SSI It is expressed in unitsof dB and defined by

PSNR (119909 119910) = 10 log10 (maxValue2

MSE) (23)

where maxValue is maximum range of the pixel which innormalized grayscale image is 1 andMSE is theMean SquareError between 119909 (referent image) and 119910 (reconstructedimage) defined as

MSE (119909 119910) = 1119873119873sum119894=1

(119909119894 minus 119910119894)2 (24)

where119873 is number of pixels in the 119909 or 119910 image (sizes of bothimages have to be the same) and 119909119894 and 119910119894 are normalizedvalues of 119894th pixel in the 119909 and 119910 image respectivelyMSE is dominant quantitative performance metric for theassessment of signal quality in the field of signal processingIt is simple to use and interpret has a clear physical meaningand is a desirable metric within statistics and estimationframework However its performance has been criticized indealing with perceptual signals such as images [25] Thisis mainly due to the fact that implicit assumptions relatedwith MSE are in general not met in the context of visualperception However it is still often used in the literaturewhen reporting performance in image reconstruction andthus we include it here for comparison purposes LargerMSEvalues indicate lower quality images (compared to referenceone) while smaller values indicate better quality image MSEvalue range is not limited as is the case with SSI

Final metric used in the paper is ℓ2 metric It is alsocalled the Euclidian distance and in the work we applied itto the color images This means that ℓ2 metric representsEuclidian distance between two points in 119877119866119861 space the 119894thpixel in the original image and the corresponding pixel in thereconstructed image It is expressed for all pixels in the imageand is defined as

ℓ2 (119909 119910)= 119873sum119894=1

radic(119877119909119894 minus 119877119910119894)2 + (119866119909119894 minus 119866119910119894)2 + (119861119909119894 minus 119861119910119894)2 (25)

where119873 is number of pixels in the 119909 or 119910 image (sizes of bothimages have to be the same) for all color channels 119877119909119894 119877119910119894are normalized red color channel values (0-1) of the 119894th pixel119866119909119894 119866119910119894 are normalized green color channel values (0-1) ofthe 119894th pixel and 119861119909119894 119861119910119894 are normalized blue color channelvalues (0-1) of the 119894th pixelThe larger the value of ℓ2metric isthemore difference there is between two imagesThe ℓ2 normmetric is mainly used in image similarity analysis althoughthere are some situations in which it has been shown that ℓ1metric can be considered as a proper choice as well [26]

232 Detection Quality Metric The performance of imageprocessing algorithm for detecting suspicious objects inimages for different levels of missing pixels was evaluated

in terms of precision and recall The standard definitions ofthese two measures are given by the following equations

Recall = TPTP + FN

Precision = TP

TP + FP

(26)

where TP denotes the number of true positives FP denotesthe number of false positives and FN denotes the number offalse negatives It should be noted that all of these parameterswere determined by checking whether or not a matchingsegment (object) has been found in the original imagewithout checking if it actually represents a person or anobject of interest More on accuracy (in terms of recalland precision) of the presented algorithm can be found inSection 331 where comparisonwith human performance onoriginal images was made

When making conclusions based on presented recall andprecision values it should be kept in mind that it is notenvisaged as a standalone tool but as cooperative tool forhuman operators aimed at reducing their workloadThus FPvalues do not have a high price since human operator cancheck it and dismiss it if it is a false alarm More costly areFNs since they can potentially mislead the operator

3 Results and Discussion

31 Database For the experiment we used 16 images of 4 Kresolution (2992 times 4000 pixels) obtained at 3 occasions withDJI Phantom 3rsquos gyro stabilized camera Camerarsquos plane wasparallel with the ground and ideally UAV was at the heightof 50m (although this was not always the case as will beexplained later on in Section 33) All images were taken incoastal area of Croatia in which search and rescue operationsoften take place Images 1ndash7 were taken during CroatianMountain Rescue Service search and rescue drills (Set 1)and images 8ndash12 were taken during our mockup testing (Set2) while images 13ndash16 were taken during actual search andrescue operation (Set 3)

All images were firstly intentionally degraded withdesired level (ranging between 20 and 80) in a mannerthat random pixels were set to white (ie they were missing)Images were then reconstructed using CS approach andtested for object detection performance

32 Image Reconstruction First we explore how well CSbased reconstruction algorithm performs in terms of imagequality metrics Metrics were calculated for each of theimages for two cases (a) when random missing sampleswere introduced to the original image (ie for degradedimage) and (b) when original image was reconstructed usinggradient based algorithm For both cases original unalteredimage was taken as a reference Obtained results for allmetrics can be found in Figure 3 in a form of enhanced Boxplots

As can be seen from Figure 3(a) the image quality beforereconstruction (as measured by SSI) is somewhat low withmean value in the range of 0033 to 0127 and it is significantly

Mathematical Problems in Engineering 9

0

02

04

06

08

1SS

I

30 40 50 60 70 8020Degradation level ()

(a) SSI

30 40 50 60 70 8020Degradation level ()

05

1015202530354045

PSN

R (d

B)

(b) PSNR

30 40 50 60 70 8020Degradation level ()

0

01

02

03

04

05

06

07

MSE

(c) MSE

30 40 50 60 70 8020Degradation level ()

0

1000

2000

3000

4000

5000

985747 2no

rm

(d) ℓ2 norm

Figure 3 Image quality metric for all degradation levels (over all images) before and after reconstruction Red color represents metric fordegraded images while blue represents postreconstruction values Black dots represent mean values

increased after reconstruction having mean values in therange 0666 to 0984 Same trend can be observed for all otherquality metrics PSNR (3797 dB to 9838 dB range beforeand 23973 dB to 38100 dB range after reconstruction) MSE(0107 to 0428 range before and 1629119890 minus 4 to 0004 rangeafter reconstruction) and ℓ2 norm (1943119890 + 3 to 3896119890 +3 range before and 75654 to 384180 range after reconstruc-tion) Please note that for some cases (like in Figures 3(c)and 3(d)) the distribution for particular condition is verytight making its graphical representation quite small Thenonparametric Kruskal-Wallis test [27] was conducted on allmetrics for postreconstruction case in order to determinestatistical significance of the results with degradation levelas independent variable Statistical significance was detectedin all cases with the following values SSI (1205942(6) = 9834119901 lt 005) PSNR (1205942(6) = 10117 119901 lt 005) MSE (1205942(6)= 10117 119901 lt 005) and ℓ2 norm (1205942(6) = 10117 119901 lt005) Tukeyrsquos honestly significant difference (THSD) post hoctests (corrected for multiple comparison) were performedwhich revealed some interesting patterns For all metricsstatistical difference was present only for those cases thathad at least 30 degradation difference between them (ie50 cases were statistically different only from 20 and80 cases please see Figure 2 for visual comparison) Webelieve this goes towards demonstrating quality of obtainedreconstruction (in terms of presented metrics) that is thereis no statistical difference between 20 and 40 missing

sample cases (although their means are different in favor of20 case) It should also be noted that even in cases of 70or80 of pixels missing reconstructed image was subjectivelygood enough (please see Figure 2) so that its content could berecognized by the viewer (with SSI means of 0778 and 0666resp) However it should be noted that in cases of such highimage degradation (especially for 80 case) reconstructedimages appeared somewhat smudged (pastel like effect) withsome details (subjectively) lost

It is interesting to explore how much gain was obtainedwith CS reconstruction This is depicted in Figure 4 for allmetrics From presented figures it can be seen that obtainedimprovement is significant and while its absolute valuevaries detected by all performance metrics The improve-ment range is between 3730 and 81637 for SSI between 3268and 9959 for PSNR between 9612119890 minus 4 and 00205 for MSEand between 0029 and 0143 for ℓ2 norm From this a generaltrend can be observed the improvement gain increases withthe degradation level There are a couple of exceptions tothis general rule for SSI in 12 out of 16 images gain for 70is larger than for 80 case However this phenomenon isnot present in other metrics (except for the case of PSNRimages 13 and 14 where they are almost identical) whichmight suggest this is due to some interaction between metricand type of environment in the image Also randomness ofpixel removal when degrading image should be considered

10 Mathematical Problems in Engineering

0

20

40

60

80SS

I rat

io

2 4 6 8 10 12 14 160Image number

20304050

607080

(a) SSI

0

2

4

6

8

10

PSN

R ra

tio

2 4 6 8 10 12 14 160Image number

20304050

607080

(b) PSNR

0

0005

001

0015

002

MSE

ratio

2 4 6 8 10 12 14 160Image number

20304050

607080

(c) MSE

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

0002004006008

01012014016

20304050

607080

985747 2no

rm ra

tio

(d) ℓ2 norm

Figure 4 Ratios of image quality metrics for all degradation levels (over all images) after and before reconstruction

Figure 4 also revels another interesting observation that war-rants further research reconstruction gain and performancemight depend on type of environmentscenery in the imageTaking into consideration that dataset contains images fromthree distinct cases (sets) as described earlier (which all haveunique environment in them) and that their range is 1ndash7 8ndash12 and 13ndash16 different patterns (gain amplitudes) can be seenfor all metrics

In order to detect if this observed pattern has statisticalsignificance nonparametric Kruskal-Wallis test (with posthoc THSD) was performed in a way that for particulardegradation level data for images were grouped into threecategories based onwhich set they belong to Obtained resultsrevealed that for allmetrics anddegradation levels there existsstatistical significance (with varying levels of 119901 value whichwe omit here for brevity) of image set on reconstructionimprovement Post hoc tests revealed that this difference wasbetween image sets 2 and 3 for SSI and PSNRmetrics while itwas between sets 1 and 23 for ℓ2 norm (between sets 1 and 2-3for degradation levels 20 30 and 40 and between sets1 and 3 for degradation levels 50 60 70 and 80) For

MSE metric the difference was between sets 1 and 2 for alldegradation levels with addition of difference between sets2 and 3 for 70 and 80 cases While all metrics do notagree with where the difference is they clearly demonstratethat terrain type influences algorithms performance whichcould be used (in conjunction with terrain type classificationlike the one in [28]) to estimate expected CS reconstructionperformance before deployment

Another interesting analysis in line with the last one canbe made so to explore if knowing image quality before thereconstruction can be used to infer image quality after thereconstruction This is depicted in Figure 5 for all qualitymetrics across all used test images Figure 5 suggests that thereexists clear relationship between quality metrics before andafter reconstruction which is to be expected However thisrelationship is nonlinear for all cases although for case ofPSNR it could be considered (almost) linearThis relationshipenables one to estimate image quality after reconstructionand (in conjunction with terrain type as demonstratedbefore) select optimal degradation level (in terms of reduceddata load) for particular case whilemaintaining image quality

Figure 5: Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction across all used test images. Presented are means for the seven degradation levels in ascending order (from left to right). Panels: (a) SSI, (b) PSNR (dB), (c) MSE, and (d) ℓ2 norm.

Figure 6: Relation between true positives (TPs) and false negatives (FNs), as a percentage, for each degradation level over all images.

Since pixel degradation is random, one should also expect a certain degree of deviation from the presented means.
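A minimal sketch of how this could be exploited in practice (our illustration only; the quadratic fit and the 0.8 quality target are assumptions, since the paper only notes that the relationship is nonlinear): fit a simple curve to the means in Figure 5 and use it to pick the highest degradation level that still meets a quality target.

```python
# Sketch: map pre-reconstruction quality to expected post-reconstruction
# quality, then choose the most aggressive degradation level that still
# satisfies an assumed target value.
import numpy as np

def fit_quality_map(q_degraded, q_reconstructed, degree=2):
    # q_* are the mean metric values over the seven degradation levels
    return np.poly1d(np.polyfit(q_degraded, q_reconstructed, degree))

def max_admissible_level(levels, q_degraded, quality_map, target=0.8):
    # Highest degradation level whose predicted reconstructed quality >= target
    admissible = [lvl for lvl, q in zip(levels, q_degraded)
                  if quality_map(q) >= target]
    return max(admissible) if admissible else None
```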

3.3. Object Detection. As stated before, due to the intended application, FNs are more expensive than FPs, and it thus makes sense to first explore the total number of FNs in all images. Figure 6 presents the number of FNs and TPs as a percentage of the total detections in the original image (since they always sum up to the same value for a particular image). Please note that the reasons behind choosing the detections in the original (nondegraded) images as the ground truth for calculating FNs and TPs are explained in more detail in Section 3.3.1.
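The following sketch reflects our reading of this evaluation protocol (not the authors' implementation): segments detected in the original image act as ground truth, and a detection in the reconstructed image counts as a TP when it overlaps a ground-truth segment. Representing segments as (x, y, w, h) bounding boxes is a simplifying assumption for illustration.

```python
# Sketch of TP/FP/FN counting against detections in the original image.
def boxes_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def count_detections(gt_segments, detected_segments):
    tp = sum(any(boxes_overlap(g, d) for d in detected_segments)
             for g in gt_segments)
    fn = len(gt_segments) - tp
    fp = sum(not any(boxes_overlap(d, g) for g in gt_segments)
             for d in detected_segments)
    return tp, fp, fn

def recall_precision(tp, fp, fn):
    # Standard definitions, cf. (26).
    return tp / (tp + fn), tp / (tp + fp)
```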

Figure 6 depicts a more or less constant downward trend in the TP rate; that is, FN rates increase as the degradation level increases. The exception is the 30% level. This dip in TP rate at 30% might be considered a fluke, but it also appeared in another, unrelated analysis [9] (on a completely different image set with the same CS and detection algorithms). Since the reason for this is currently unclear, it should be investigated in the future. As expected, the FN percentage was the lowest for the 20% case (23.38%) and the highest for the 80% case (46.74%). It has a value of about 35% for the other cases (except the 50% case). This might be considered a rather large cost in a search and rescue scenario, but when drawing this type of conclusion it should be kept in mind that the comparisons are made with respect to the algorithm's performance on the original image and not against ground truth (which is hard to establish in this type of experiment, where complete control of the environment is not possible).

Some additional insight will be gained in Section 3.3.1, where we conducted a mini user study on the original images to determine how well humans perform on the ideal (and not reconstructed) images. For completeness of presentation, Figure 7, showing a comparison of the numbers of TPs and FPs for all degradation levels, is included.

Additional insight into the algorithm's performance can be obtained by observing the recall and precision values in Figure 8, as defined by (26).

Figure 7: Relation of the number of occurrences of false positives (FPs) and false negatives (FNs) for each degradation level over all images.

Figure 8: Recall and precision as detection performance metrics for all degradation levels across all images. Panels: (a) recall (%), (b) precision (%).

The highest recall value (76.62%) is achieved in the case of 20% (missing samples or degradation level), followed closely by the 40% case (75.32%). As expected (based on Figure 5), there is a significant dip in the steady downward trend for the 30% case. At the same time, precision is fairly constant (around 72%) over the whole degradation range, with two peaks: for the case of 20% of missing samples (peak value 77.63%) and for 60% of missing samples (peak value 80.33%). No statistical significance (as detected by the Kruskal-Wallis test) was found for recall and precision when the degradation level was considered the independent variable (across all images).

Figure 9: Number of occurrences of false negatives (FNs) and false positives (FPs) for all images across all degradation levels.

If individual images and their respective FP and FN rates are examined across all degradation levels, Figure 9 is obtained. Some interesting observations can be made from the figure. First, it can be seen that images 14 and 16 have an unusually large number of FPs and FNs. This should be viewed in light of the fact that these two images belong to the third dataset. This dataset was taken during a real search and rescue operation, during which the UAV operator did not position the UAV at the desired/required height (of 50 m) and strong wind gusts were also present. Image 1 also has an unusually large number of FNs. If images 1 and 14 are removed from the calculation (since we care more about FNs than FPs, as explained before), recall and precision values increase by up to 12% and 5%, respectively. The increase in recall is the largest for the 40% case (12%) and the lowest for the 50% case (5%), while the increase in precision is the largest for the 80% case (5%) and the smallest for the 50% case (2.5%). The second observation that can be made from Figure 9 is that there are a number of cases (images 2, 4, 7, 10, 11, and 12) where the detection algorithm performs really well, with a cumulative number of occurrences of FPs and FNs around 5. In the case of image 4, it performed flawlessly (the image contained one target, which was correctly detected at all degradation levels) without any FP or FN. Note that no FNs were present in images 2 and 7.

3.3.1. User Study. While we maintain that, for evaluation of the compressive sensing image reconstruction's performance, comparison of detection rates in the original nondegraded image (as ground truth) and in the reconstructed images is the best choice, the baseline performance of the detection algorithm is presented here for completeness. However, determining baseline performance (absolute ground truth) proved to be nontrivial, since environmental images from the wild (where search and rescue usually takes place) are hard to control and there are usually unintended objects in the frame (e.g., animals or garbage). Thus, we opted for a 10-subject mini study in which the decision whether there is an object in the image was made by majority vote; that is, if 5 or more people detected something in a certain place in the image, then it would be considered an object. Subjects were drawn from the faculty and student population, had not seen the images before, and had never participated in search and rescue via image analysis. Subjects were instructed to find people, cars, animals, cloth, bags, or similar things in the image, combining speed and accuracy (i.e., not emphasizing either). They were seated in front of a 23.6-inch LED monitor (Philips 247E) on which they inspected the images and were allowed to zoom in and out of the image as they felt necessary. Images were presented to the subjects in random order to avoid undesired (learning/fatigue) effects.

On average, it took a subject 638.3 s (10 minutes and 38.3 seconds) to go over all images. In order to demonstrate intersubject variability and how demanding the task was, we analyzed precision and recall for the results of the test subject study. For example, if 6 out of 10 subjects marked a part of an image as an object, this meant that there were 4 FNs (i.e., 4 subjects did not detect that object). On the other hand, if 4 test subjects marked an area in an image (which therefore did not pass the threshold in the majority voting process), this would be counted as 4 FPs. Analysis conducted in such a manner yielded a recall of 82.22% and a precision of 93.63%. Here, again, two images (15 and 16) accounted for more than 50% of the FNs. It should be noted that these results cannot be directly compared to the proposed algorithm, since they rely on significant human intervention.
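A short sketch of this aggregation (our formalization of the description above; the data layout is an assumption): each subject contributes a set of marked locations, marks judged to refer to the same spot share an identifier, and 5-of-10 majority voting defines the objects, while the residual votes define the per-subject FNs and FPs.

```python
# Sketch of the majority-vote aggregation used in the mini user study.
from collections import Counter

def aggregate_marks(marks_per_subject, n_subjects=10, threshold=5):
    votes = Counter(loc for subject_marks in marks_per_subject
                    for loc in subject_marks)
    objects = {loc for loc, v in votes.items() if v >= threshold}
    fn = sum(n_subjects - votes[loc] for loc in objects)           # subjects who missed an accepted object
    fp = sum(v for loc, v in votes.items() if loc not in objects)  # marks on areas that failed the vote
    return objects, fn, fp
```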

4. Conclusions

In this paper, gradient based compressive sensing is presented and applied to images acquired from a UAV during search and rescue operations/drills in order to reduce the network transmission burden. The quality of the CS reconstruction is analyzed, as well as its impact on object detection algorithms used in search and rescue. All introduced quality metrics showed significant improvement, with ratios varying depending on the degradation level as well as on the type of terrain/environment depicted in a particular image. The dependency of reconstruction quality on terrain type is interesting and opens up the possibility of including terrain type detection algorithms, since then the reconstruction quality could be inferred in advance and an appropriate degradation level (i.e., in smart sensors) selected. The dependency on the quality of the degraded image is also demonstrated. Reconstructed images showed good performance (with varying recall and precision values) within the object detection algorithm, although a slightly higher false negative rate (whose cost is high in search applications) is present. However, there were a few images in the dataset on which the algorithm performed either flawlessly, with no false negatives, or with only a few false positives, whose cost is not large in the current application setup, with a human operator checking raised alarms. Of interest is the slight peak in performance at the 30% degradation level (compared to 40% and the general downward trend); this peak was also detected in an earlier study on a completely different image set, making a chance finding unlikely. Currently, no explanation for the phenomenon can be provided, and it warrants future research. Nevertheless, we believe that the obtained results are promising (especially in light of the results of the mini user study) and require further research, especially on the detection side. For example, the algorithm could be augmented with a terrain recognition algorithm, which could give cues about terrain type to both the reconstruction algorithm (adjusting the degradation level while keeping the desired level of quality for the reconstructed images) and the detection algorithm, which could further be augmented with an automatic selection procedure for some parameters, like the mean shift bandwidth (for performance optimization). Additionally, automatic threshold estimation using the image size and UAV altitude could be used for adaptive configuration of the detection algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] UK Ministry of Defense, "Military search and rescue quarterly statistics 2015 quarter 1," Statistical Report, 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425819/SAR_Quarter1_2015_Report.pdf.
[2] T. W. Heggie and M. E. Amundson, "Dead men walking: search and rescue in US National Parks," Wilderness and Environmental Medicine, vol. 20, no. 3, pp. 244-249, 2009.
[3] M. Superina and K. Pogacic, "Frequency of the Croatian mountain rescue service involvement in searching for missing persons," Police and Security, vol. 16, no. 3-4, pp. 235-256, 2008 (Croatian).
[4] J. Sokalski, T. P. Breckon, and I. Cowling, "Automatic salient object detection in UAV imagery," in Proceedings of the 25th International Conference on Unmanned Air Vehicle Systems, pp. 11.1-11.12, April 2010.
[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138-145, 2010.
[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142-147, Canterbury, UK, September 2010.
[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in International Society for Optical Engineering, Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, May 2006.
[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223-240, Vienna, Austria, September 2014.
[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739-1743, 2016.
[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707-710, 2007.
[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.
[13] M. F. Duarte, M. A. Davenport, D. Takbar et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.
[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093-2106, 2015.
[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41-43, 2016.
[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497-500, Belgrade, Serbia, November 2014.
[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690-694, Madison, Wis, USA, August 2007.
[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392-408, 2010.
[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1-92, Walter de Gruyter, Berlin, Germany, 2010.
[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707-737, 2013.
[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.
[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366-2369, Istanbul, Turkey, August 2010.
[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81-91, 2011.
[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?" IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98-117, 2009.
[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.
[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583-621, 1952.
[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1-8, Minneapolis, Minn, USA, June 2007.



It is interesting to explore how much gain was obtainedwith CS reconstruction This is depicted in Figure 4 for allmetrics From presented figures it can be seen that obtainedimprovement is significant and while its absolute valuevaries detected by all performance metrics The improve-ment range is between 3730 and 81637 for SSI between 3268and 9959 for PSNR between 9612119890 minus 4 and 00205 for MSEand between 0029 and 0143 for ℓ2 norm From this a generaltrend can be observed the improvement gain increases withthe degradation level There are a couple of exceptions tothis general rule for SSI in 12 out of 16 images gain for 70is larger than for 80 case However this phenomenon isnot present in other metrics (except for the case of PSNRimages 13 and 14 where they are almost identical) whichmight suggest this is due to some interaction between metricand type of environment in the image Also randomness ofpixel removal when degrading image should be considered

10 Mathematical Problems in Engineering

0

20

40

60

80SS

I rat

io

2 4 6 8 10 12 14 160Image number

20304050

607080

(a) SSI

0

2

4

6

8

10

PSN

R ra

tio

2 4 6 8 10 12 14 160Image number

20304050

607080

(b) PSNR

0

0005

001

0015

002

MSE

ratio

2 4 6 8 10 12 14 160Image number

20304050

607080

(c) MSE

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

0002004006008

01012014016

20304050

607080

985747 2no

rm ra

tio

(d) ℓ2 norm

Figure 4 Ratios of image quality metrics for all degradation levels (over all images) after and before reconstruction

Figure 4 also revels another interesting observation that war-rants further research reconstruction gain and performancemight depend on type of environmentscenery in the imageTaking into consideration that dataset contains images fromthree distinct cases (sets) as described earlier (which all haveunique environment in them) and that their range is 1ndash7 8ndash12 and 13ndash16 different patterns (gain amplitudes) can be seenfor all metrics

In order to detect if this observed pattern has statisticalsignificance nonparametric Kruskal-Wallis test (with posthoc THSD) was performed in a way that for particulardegradation level data for images were grouped into threecategories based onwhich set they belong to Obtained resultsrevealed that for allmetrics anddegradation levels there existsstatistical significance (with varying levels of 119901 value whichwe omit here for brevity) of image set on reconstructionimprovement Post hoc tests revealed that this difference wasbetween image sets 2 and 3 for SSI and PSNRmetrics while itwas between sets 1 and 23 for ℓ2 norm (between sets 1 and 2-3for degradation levels 20 30 and 40 and between sets1 and 3 for degradation levels 50 60 70 and 80) For

MSE metric the difference was between sets 1 and 2 for alldegradation levels with addition of difference between sets2 and 3 for 70 and 80 cases While all metrics do notagree with where the difference is they clearly demonstratethat terrain type influences algorithms performance whichcould be used (in conjunction with terrain type classificationlike the one in [28]) to estimate expected CS reconstructionperformance before deployment

Another interesting analysis in line with the last one canbe made so to explore if knowing image quality before thereconstruction can be used to infer image quality after thereconstruction This is depicted in Figure 5 for all qualitymetrics across all used test images Figure 5 suggests that thereexists clear relationship between quality metrics before andafter reconstruction which is to be expected However thisrelationship is nonlinear for all cases although for case ofPSNR it could be considered (almost) linearThis relationshipenables one to estimate image quality after reconstructionand (in conjunction with terrain type as demonstratedbefore) select optimal degradation level (in terms of reduceddata load) for particular case whilemaintaining image quality

Mathematical Problems in Engineering 11

SSI re

cons

truc

ted

065

07

075

08

085

09

095

1

004 005 006 007 008 009 01 011 012 013003SSIdegraded

(a) SSI

22242628303234363840

PSN

R rec

onstr

ucte

d(d

B)

5 6 7 8 9 104PSNRdegraded (dB)

(b) PSNR

times10minus3

005

115

225

335

4

MSE

reco

nstr

ucte

d

015 02 025 03 035 04 04501MSEdegraded

(c) MSE

2200 2400 2600 2800 3000 3200 3400 3600 3800 4000200050

100

150

200

250

300

350

400

985747 2no

rmre

cons

truc

ted

9857472 normdegraded

(d) ℓ2 norm

Figure 5 Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction across all used testimages Presented are means for seven degradation levels in ascending order (from left to right)

TPFN

0

20

40

60

80

100

Perc

enta

ge

30 40 50 60 70 8020Degradation level ()

Figure 6 Relation between true positives (TPs) and false negatives(FNs) for particular degradation level over all images

at desired level Since pixel degradation is random oneshould also expect certain level of deviation from presentedmeans

33 Object Detection As stated before due to intendedapplication FNs are more expensive than FPs and it thusmakes sense to first explore number of total FNs in all imagesFigure 6 presents number of FNs and TPs in percentageof total detections in the original image (since they always

sum up to the same value for particular image) Pleasenote that reasons behind choosing detections in the original(nondegraded) images as the ground truth for calculations ofFNs and TPs are explained in more detail in Section 331

Figure 6 depicts that there is more or less a constantdownward trend in TP rate that is FN rates increase asdegradation level increases Exception is 30 levelThis dip inTP rate at 30might be considered a fluke but it appeared inanother unrelated analysis [9] (on completely different imageset with same CS and detection algorithms) Since currentlyreason for this is unclear it should be investigated in thefuture As expected FN percentage was the lowest for 20case (2338) and the highest for 80 case (4674) It hasvalue of about 35 for other cases (except 50 case) Thismight be considered a pretty large cost in search and rescuescenario but while making this type of conclusion it shouldbe kept in mind that comparisons are made in respect toalgorithms performance on original image and not groundtruth (which is hard to establish in this type of experimentwhere complete control of environment is not possible)

Some additional insight will be gained in Section 331where we conducted miniuser study on original image todetect how well humans perform on ideal (and not recon-structed) image For completeness of presentation Figure 7showing comparison of numbers of TPs and FPs for alldegradation levels is included

Additional insight in algorithmrsquos performance can beobtained by observing recall and precision values in Figure 8

12 Mathematical Problems in Engineering

FPFN

30 40 50 60 70 8020Degradation level ()

0

10

20

30

40

50

Num

ber o

f occ

urre

nces

Figure 7 Relation of number of occurrences of false positives (FPs)and false negatives (FNs) for particular degradation level over allimages

Degradation level ()20 30 40 50 60 70 80

01020304050607080

Reca

ll (

)

(a) Recall

01020304050607080

Prec

ision

()

30 40 50 60 70 8020Degradation level ()

(b) Precision

Figure 8 Recall and precision as detection performancemetrics forall degradation levels across all images

as defined by (26) The highest recall value (7662) isachieved in the case of 20 (missing samples or degradationlevel) followed closely by 40 case (7532) As expected(based on Figure 5) there is a significant dip in steadydownward trend for 30 case At the same time precisionis fairly constant (around 72) over the whole degradationrange with two peaks for the case of 20 of missing samples(peak value 7763) and for 60 of missing samples (peakvalue 8033) No statistical significance (as detected byKruskal-Wallis test) was detected for recall and precision inwhich degradation level was considered independent variable(across all images)

FPFN

05

1015202530354045

Num

ber o

f occ

urre

nces

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

Figure 9 Number of occurrences of false negatives (FNs) and falsepositives (FPs) for all images across all degradation levels

If individual images and their respective FP and FNrates are examined across all degradation levels Figure 9 isobtained Some interesting observations can be made fromthe figure First it can be seen that images 14 and 16have unusually large number of FPs and FNs This shouldbe viewed in light that these two images belong to thirddataset This dataset was taken during real search and rescueoperation during which UAV operator did not position UAVat the desiredrequired height (of 50m) and also strong windgusts were present Also image 1 has unusually large numberof FNs If images 1 and 14 are removed from the calculation(since we care more about FNs and not FPs as explainedbefore) recall and precision values increase up to 12 and5 respectively Increase in recall is the largest for 40 case(12) while it is the lowest for 50 case (5) while increasein precision is the largest for 80 case (5) and the smallestfor 50 case (25) Second observation that can be madefrom Figure 9 is that there are number of cases (2 4 7 10 11and 12) where algorithm detection performs really well withcumulative number of occurrences of FPs and FNs around 5In case of image 4 it performed flawlessly (in image therewas one target that was correctly detected in all degradationlevels) without any FP or FN Note that no FNs were presentin images 2 and 7

331 User Study While we maintain that for evaluationof compressive sensing image reconstructionrsquos performancecomparison of detection rates in the original nondegradedimage (as ground truth) and reconstructed images are thebest choice baseline performance of detection algorithmis presented here for completeness However determiningbaseline performance (absolute ground truth) proved to benontrivial since environmental images from the wild (wheresearch and rescue usually takes place) is hard to controland usually there are unintended objects in the frame (eganimals or garbage) Thus we opted for 10-subject mini-study in which the decision whether there is an object inthe image was made by majority vote that is if 5 or morepeople detected something in certain place of the image thenit would be considered an object Subjects were from faculty

Mathematical Problems in Engineering 13

and student population and did not see the images before norhave ever participated in search and rescue via image analysisSubjects were instructed to find people cars animals clothbags or similar things in the image combining speed andaccuracy (ie not emphasizing either) They were seated infront of 236-inch LEDmonitor (Philips 247E) on which theyinspected images and were allowed to zoom in and out of theimage as they felt necessary Images were randomly presentedto subject to avoid undesired (learningfatigue) effects

On average it took a subject 6383 s (10 minutes and 383seconds) to go over all images In order to demonstrate inter-subject variability and how demanding the task was we ana-lyzed precision and recall for results from test subject studyFor example if 6 out of 10 subjects marked a part of an imageas an object this meant that there were 4 FNs (ie 4 subjectsdid not detect that object)On the other hand if 4 test subjectsmarked an area in an image (and since it did not pass thresh-old in majority voting process) it would be counted as 4 FPsAnalysis conducted in such manner yielded recall of 8222and precision of 9363Here again two images (15 and 16)accounted for more than 50 of FNs It should be noted thatthese results cannot be directly compared to the proposedalgorithm since they rely on significant human intervention

4 Conclusions

In the paper gradient based compressive sensing is presentedand applied to images acquired from UAV during search andrescue operationsdrills in order to reduce networktrans-mission burden Quality of CS reconstruction is analyzed aswell as its impact on object detection algorithms in use withsearch and rescue All introduced quality metrics showedsignificant improvement with varying ratios depending ondegradation level as well as type of terrainenvironmentdepicted in particular image Dependency of reconstructionquality on terrain type is interesting and opens up possibilityof inclusion of terrain type detection algorithms and sincethen reconstruction quality could be inferred in advanceand appropriate degradation level (ie in smart sensors)selected Dependency on quality of degraded image is alsodemonstrated Reconstructed images showed good perfor-mance (with varying recall and precision parameter values)within object detection algorithm although slightly higherfalse negative rate (whose cost is high in search applications)is present However there were few images in the dataset onwhich algorithm performed either flawlessly with no falsenegatives or with few false positives whose cost is not bigin current application setup with human operator checkingraised alarms Of interest is slight peak in performanceof 30 degradation level (compared to 40 and generaldownward trend)mdashthis peak was detected in earlier studyon completely different image set making this find by chanceunlikely Currently no explanation for the phenomenon canbe provided and it warrants future research Nevertheless webelieve that obtained results are promising (especially in lightof results of miniuser study) and require further researchespecially on the detection side For example algorithm couldbe augmented with terrain recognition algorithm whichcould give cues about terrain type to both the reconstruction

algorithm (adjusting degradation level while keeping desiredlevel of quality for reconstructed images) and detectionalgorithm augmented with automatic selection procedure forsome parameters likemean shift bandwidth (for performanceoptimization) Additionally automatic threshold estimationusing image size and UAV altitude could be used for adaptiveconfiguration of detection algorithm

Competing Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] UK Ministry of Defense ldquoMilitary search and rescue quarterlystatistics 2015 quarter 1rdquo Statistical Report 2015 httpswwwgovukgovernmentuploadssystemuploadsattachment datafile425819SAR Quarter1 2015 Reportpdf

[2] T W Heggie and M E Amundson ldquoDead men walkingsearch and rescue in US National Parksrdquo Wilderness andEnvironmental Medicine vol 20 no 3 pp 244ndash249 2009

[3] M Superina and K Pogacic ldquoFrequency of the Croatianmountain rescue service involvement in searching for missingpersonsrdquo Police and Security vol 16 no 3-4 pp 235ndash256 2008(Croatian)

[4] J Sokalski T P Breckon and I Cowling ldquoAutomatic salientobject detection in UAV imageryrdquo in Proceedings of the 25thInternational Conference on Unmanned Air Vehicle Systems pp111ndash1112 April 2010

[5] H Turic H Dujmic and V Papic ldquoTwo-stage segmentation ofaerial images for search and rescuerdquo InformationTechnology andControl vol 39 no 2 pp 138ndash145 2010

[6] S Waharte and N Trigoni ldquoSupporting search and rescueoperations with UAVsrdquo in Proceedings of the InternationalConference on Emerging Security Technologies (EST rsquo10) pp 142ndash147 Canterbury UK September 2010

[7] C Williams and R R Murphy ldquoKnowledge-based videocompression for search and rescue robots amp multiple sensornetworksrdquo in International Society for Optical EngineeringUnmanned Systems Technology VIII vol 6230 of Proceedings ofSPIE May 2006

[8] G S Martins D Portugal and R P Rocha ldquoOn the usage ofgeneral-purpose compression techniques for the optimizationof inter-robot communicationrdquo in Proceedings of the 11th Inter-national Conference on Informatics in Control Automation andRobotics (ICINCO rsquo14) pp 223ndash240ViennaAustria September2014

[9] J Music T Marasovic V Papic I Orovic and S StankovicldquoPerformance of compressive sensing image reconstruction forsearch and rescuerdquo IEEEGeoscience and Remote Sensing Lettersvol 13 no 11 pp 1739ndash1743 2016

[10] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation Theory vol 52 no 4 pp 1289ndash1306 2006

[11] R Chartrand ldquoExact reconstruction of sparse signals via non-convexminimizationrdquo IEEE Signal Processing Letters vol 14 no10 pp 707ndash710 2007

[12] S Foucart and H Rauhut A Mathematical Introduction toCompressive Sensing Springer 2013

[13] M F Duarte M A Davenport D Takbar et al ldquoSingle-pixelimaging via compressive sampling building simpler smaller

14 Mathematical Problems in Engineering

and less-expensive digital camerasrdquo IEEE Signal ProcessingMagazine vol 25 no 2 pp 83ndash91 2008

[14] L Stankovic ldquoISAR image analysis and recovery with unavail-able or heavily corrupted datardquo IEEE Transactions on Aerospaceand Electronic Systems vol 51 no 3 pp 2093ndash2106 2015

[15] M BrajovicM Dakovic I Orovic and S Stankovic ldquoGradient-based signal reconstruction algorithm in Hermite transformdomainrdquo Electronics Letters vol 52 no 1 pp 41ndash43 2016

[16] I Stankovic I Orovic and S Stankovic ldquoImage reconstructionfrom a reduced set of pixels using a simplified gradient algo-rithmrdquo in Proceedings of the 22nd Telecommunications ForumTelfor (TELFOR rsquo14) pp 497ndash500 Belgrade Serbia November2014

[17] G Reeves and M Gastpar ldquoDifferences between observationand sampling error in sparse signal reconstructionrdquo in Proceed-ings of the IEEESP 14thWorkshop on Statistical Signal Processing(SSP rsquo07) pp 690ndash694 Madison Wis USA August 2007

[18] R E Carrillo K E Barner and T C Aysal ldquoRobust samplingand reconstruction methods for sparse signals in the presenceof impulsive noiserdquo IEEE Journal on Selected Topics in SignalProcessing vol 4 no 2 pp 392ndash408 2010

[19] H Rauhut ldquoCompressive sensing and structured randommatricesrdquo in Theoretical Foundations and Numerical Methodsfor Sparse Recovery M Fornasier Ed vol 9 of Random SeriesComputational and Applied Mathematics pp 1ndash92 Walter deGruyter Berlin Germany 2010

[20] G E Pfander H Rauhut and J A Tropp ldquoThe restricted isom-etry property for time-frequency structured random matricesrdquoProbability Theory and Related Fields vol 156 no 3-4 pp 707ndash737 2013

[21] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[22] ZWang A C Bovik H R Sheikh and E P Simoncelli ldquoImagequality assessment from error visibility to structural similarityrdquoIEEE Transactions on Image Processing vol 13 no 4 pp 600ndash612 2004

[23] A Hore and D Ziou ldquoImage quality metrics PSNR vs SSIMrdquoin Proceedings of the 20th International Conference on PatternRecognition (ICPR rsquo10) pp 2366ndash2369 Istanbul Turkey August2010

[24] R Dosselmann and X D Yang ldquoA comprehensive assessmentof the structural similarity indexrdquo Signal Image and VideoProcessing vol 5 no 1 pp 81ndash91 2011

[25] Z Wang and A C Bovik ldquoMean squared error Love it or leaveitrdquo IEEE Signal Processing Magazine vol 26 no 1 pp 98ndash1172009

[26] R Russell and P Sinha ldquoPerceptually-based comparison ofimage similarity metricsrdquo Tech Rep Massachusetts Institute ofTechnology (MIT)mdashArtificial Intelligence Laboratory 2001

[27] W H Kruskal and W A Wallis ldquoUse of ranks in one-criterion variance analysisrdquo Journal of the American StatisticalAssociation vol 47 no 260 pp 583ndash621 1952

[28] A Angelova L Matthies D Helmick and P Perona ldquoFastterrain classification using variable-length representation forautonomous navigationrdquo in Proceedings of the IEEE ComputerSociety Conference on Computer Vision and Pattern Recognition(CVPR rsquo07) pp 1ndash8 Minneapolis Minn USA June 2007

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 8: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

8 Mathematical Problems in Engineering

limited range as is the case with SSI It is expressed in unitsof dB and defined by

PSNR (119909 119910) = 10 log10 (maxValue2

MSE) (23)

where maxValue is maximum range of the pixel which innormalized grayscale image is 1 andMSE is theMean SquareError between 119909 (referent image) and 119910 (reconstructedimage) defined as

MSE (119909 119910) = 1119873119873sum119894=1

(119909119894 minus 119910119894)2 (24)

where119873 is number of pixels in the 119909 or 119910 image (sizes of bothimages have to be the same) and 119909119894 and 119910119894 are normalizedvalues of 119894th pixel in the 119909 and 119910 image respectivelyMSE is dominant quantitative performance metric for theassessment of signal quality in the field of signal processingIt is simple to use and interpret has a clear physical meaningand is a desirable metric within statistics and estimationframework However its performance has been criticized indealing with perceptual signals such as images [25] Thisis mainly due to the fact that implicit assumptions relatedwith MSE are in general not met in the context of visualperception However it is still often used in the literaturewhen reporting performance in image reconstruction andthus we include it here for comparison purposes LargerMSEvalues indicate lower quality images (compared to referenceone) while smaller values indicate better quality image MSEvalue range is not limited as is the case with SSI

The final metric used in the paper is the ℓ2 metric, also called the Euclidean distance, which in this work we apply to color images. In this setting the ℓ2 metric represents the Euclidean distance between two points in RGB space: the $i$th pixel in the original image and the corresponding pixel in the reconstructed image. Summed over all pixels in the image, it is defined as

$$\ell_2(x, y) = \sum_{i=1}^{N}\sqrt{(R_{x_i}-R_{y_i})^2 + (G_{x_i}-G_{y_i})^2 + (B_{x_i}-B_{y_i})^2}, \quad (25)$$

where $N$ is the number of pixels in the $x$ or $y$ image (both images must have the same size in all color channels), $R_{x_i}, R_{y_i}$ are the normalized red channel values (0-1) of the $i$th pixel, $G_{x_i}, G_{y_i}$ are the normalized green channel values (0-1) of the $i$th pixel, and $B_{x_i}, B_{y_i}$ are the normalized blue channel values (0-1) of the $i$th pixel. The larger the value of the ℓ2 metric, the greater the difference between the two images. The ℓ2 norm metric is mainly used in image similarity analysis, although in some situations the ℓ1 metric has been shown to be a proper choice as well [26].
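A corresponding sketch for the color-image ℓ2 metric from (25) might look as follows, assuming NumPy arrays with channels last and values normalized to 0-1 (these conventions are ours, not stated in the paper).

```python
import numpy as np

def l2_rgb(x: np.ndarray, y: np.ndarray) -> float:
    """Sum over all pixels of the Euclidean distance in RGB space, per (25).

    Expects arrays of shape (H, W, 3) with values normalized to [0, 1].
    """
    assert x.shape == y.shape and x.shape[-1] == 3, "images must match and be RGB"
    diff = x.astype(np.float64) - y.astype(np.float64)
    per_pixel = np.sqrt(np.sum(diff ** 2, axis=-1))  # distance for each pixel
    return float(per_pixel.sum())
```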

2.3.2. Detection Quality Metric. The performance of the image processing algorithm for detecting suspicious objects in images at different levels of missing pixels was evaluated in terms of precision and recall. The standard definitions of these two measures are

$$\mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, \qquad \mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}, \quad (26)$$

where TP denotes the number of true positives, FP the number of false positives, and FN the number of false negatives. It should be noted that all of these quantities were determined by checking whether or not a matching segment (object) had been found in the original image, without checking whether it actually represents a person or another object of interest. More on the accuracy (in terms of recall and precision) of the presented algorithm can be found in Section 3.3.1, where a comparison with human performance on the original images is made.
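For completeness, a small sketch of how precision and recall from (26) could be tallied is shown below; the example counts are purely illustrative and only meant to demonstrate the definitions.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision and recall as defined in (26); returns 0 when a denominator is 0."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 59 matched detections, 17 spurious ones, 18 missed objects (illustrative numbers)
p, r = precision_recall(tp=59, fp=17, fn=18)
print(f"precision = {100 * p:.2f}%, recall = {100 * r:.2f}%")
```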

When drawing conclusions from the presented recall and precision values, it should be kept in mind that the algorithm is not envisaged as a standalone tool but as a cooperative tool for human operators, aimed at reducing their workload. Thus FPs do not carry a high price, since a human operator can check and dismiss a false alarm. FNs are more costly, since they can potentially mislead the operator.

3. Results and Discussion

3.1. Database. For the experiment we used 16 images of 4K resolution (2992 × 4000 pixels) obtained on 3 occasions with a DJI Phantom 3's gyro-stabilized camera. The camera's plane was parallel with the ground and, ideally, the UAV was at a height of 50 m (although this was not always the case, as will be explained in Section 3.3). All images were taken in the coastal area of Croatia, in which search and rescue operations often take place. Images 1-7 were taken during Croatian Mountain Rescue Service search and rescue drills (Set 1), images 8-12 were taken during our mockup testing (Set 2), and images 13-16 were taken during an actual search and rescue operation (Set 3).

All images were first intentionally degraded to the desired level (ranging between 20% and 80%) by setting randomly chosen pixels to white (i.e., treating them as missing). The images were then reconstructed using the CS approach and tested for object detection performance.
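A minimal sketch of this degradation step is given below; the exact way missing pixels were marked in the original experiments is not specified beyond "set to white", so the mask handling here is an assumption used only for illustration.

```python
import numpy as np

def degrade(image: np.ndarray, level: float, seed=None):
    """Randomly mark a fraction `level` (0-1) of the pixels as missing (set to white).

    Returns the degraded image and the boolean mask of missing pixels,
    which a CS reconstruction routine would treat as unavailable samples.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    missing = rng.random((h, w)) < level       # True where the pixel is dropped
    degraded = image.copy()
    degraded[missing] = 1.0                    # white in a normalized image
    return degraded, missing

# Example: drop 40% of the pixels of a normalized RGB image
img = np.random.default_rng(1).random((2992, 4000, 3))
deg, mask = degrade(img, level=0.40, seed=42)
print(f"fraction missing: {mask.mean():.3f}")
```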

3.2. Image Reconstruction. First we explore how well the CS based reconstruction algorithm performs in terms of the image quality metrics. The metrics were calculated for each image in two cases: (a) when random missing samples were introduced into the original image (i.e., for the degraded image) and (b) when the original image was reconstructed using the gradient based algorithm. In both cases the original, unaltered image was taken as the reference. The obtained results for all metrics are shown in Figure 3 in the form of enhanced box plots.

As can be seen from Figure 3(a), the image quality before reconstruction (as measured by SSI) is rather low, with mean values in the range of 0.033 to 0.127.

Figure 3: Image quality metrics for all degradation levels (20%-80%, over all images) before and after reconstruction, shown as enhanced box plots: (a) SSI, (b) PSNR (dB), (c) MSE, (d) ℓ2 norm. Red represents the metric for degraded images, blue represents post-reconstruction values; black dots represent mean values.

After reconstruction the SSI is significantly increased, with mean values in the range 0.666 to 0.984. The same trend can be observed for all other quality metrics: PSNR (range 3.797 dB to 9.838 dB before and 23.973 dB to 38.100 dB after reconstruction), MSE (range 0.107 to 0.428 before and 1.629e-4 to 0.004 after reconstruction), and ℓ2 norm (range 1.943e+3 to 3.896e+3 before and 75.654 to 384.180 after reconstruction). Please note that in some cases (as in Figures 3(c) and 3(d)) the distribution for a particular condition is very tight, making its graphical representation quite small. The nonparametric Kruskal-Wallis test [27] was conducted on all metrics for the post-reconstruction case in order to determine the statistical significance of the results, with the degradation level as the independent variable. Statistical significance was detected in all cases, with the following values: SSI (χ²(6) = 98.34, p < 0.05), PSNR (χ²(6) = 101.17, p < 0.05), MSE (χ²(6) = 101.17, p < 0.05), and ℓ2 norm (χ²(6) = 101.17, p < 0.05). Tukey's honestly significant difference (THSD) post hoc tests (corrected for multiple comparisons) were then performed and revealed some interesting patterns. For all metrics, a statistical difference was present only between cases that differed by at least 30% in degradation level (i.e., the 50% case was statistically different only from the 20% and 80% cases; see Figure 2 for a visual comparison). We believe this demonstrates the quality of the obtained reconstruction (in terms of the presented metrics); that is, there is no statistical difference between the 20% and 40% missing sample cases (although their means differ in favor of the 20% case). It should also be noted that even with 70% or 80% of the pixels missing, the reconstructed image was subjectively good enough (see Figure 2) for its content to be recognized by a viewer (with SSI means of 0.778 and 0.666, resp.). However, at such high degradation levels (especially the 80% case) the reconstructed images appeared somewhat smudged (a pastel-like effect), with some details (subjectively) lost.
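The statistical procedure described above can be sketched as follows; the grouping of per-image metric values by degradation level mirrors the description in the text, while the array contents and the use of SciPy are our own assumptions for illustration.

```python
import numpy as np
from scipy.stats import kruskal

# ssi[level] holds the post-reconstruction SSI values of the 16 images
# for one degradation level; the numbers below are placeholders.
levels = [20, 30, 40, 50, 60, 70, 80]
rng = np.random.default_rng(0)
ssi = {lvl: rng.uniform(0.65, 0.99, size=16) for lvl in levels}

# Kruskal-Wallis H-test with degradation level as the independent variable
stat, p_value = kruskal(*[ssi[lvl] for lvl in levels])
print(f"H = {stat:.2f}, p = {p_value:.4f}")
# A post hoc pairwise comparison (the paper uses Tukey's HSD) would follow here,
# e.g. with scipy.stats.tukey_hsd in SciPy >= 1.8, to see which levels differ.
```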

It is interesting to explore how much gain was obtained with the CS reconstruction. This is depicted in Figure 4 for all metrics. From the presented figures it can be seen that the improvement is substantial and is detected by all performance metrics, although its absolute value varies. The improvement ratio is between 3.730 and 81.637 for SSI, between 3.268 and 9.959 for PSNR, between 9.612e-4 and 0.0205 for MSE, and between 0.029 and 0.143 for ℓ2 norm. From this a general trend can be observed: the improvement gain increases with the degradation level. There are a couple of exceptions to this rule: for SSI, in 12 out of 16 images the gain for the 70% case is larger than for the 80% case. This phenomenon is not present in the other metrics (except for PSNR in images 13 and 14, where the two are almost identical), which might suggest it is due to some interaction between the metric and the type of environment in the image. The randomness of the pixel removal when degrading the image should also be considered.


Figure 4: Ratios of image quality metrics after to before reconstruction for all degradation levels (20%-80%), plotted per image number (1-16): (a) SSI ratio, (b) PSNR ratio, (c) MSE ratio, (d) ℓ2 norm ratio.

Figure 4 also reveals another interesting observation that warrants further research: the reconstruction gain and performance might depend on the type of environment/scenery in the image. Taking into consideration that the dataset contains images from three distinct cases (sets) as described earlier (each with a unique environment) and that their ranges are 1-7, 8-12, and 13-16, different patterns (gain amplitudes) can be seen for all metrics.

In order to test whether this observed pattern is statistically significant, the nonparametric Kruskal-Wallis test (with post hoc THSD) was performed in such a way that, for a particular degradation level, the data for the images were grouped into three categories based on the set they belong to. The obtained results revealed that, for all metrics and degradation levels, the image set has a statistically significant effect on the reconstruction improvement (with varying p values, which we omit here for brevity). Post hoc tests revealed that this difference was between image sets 2 and 3 for the SSI and PSNR metrics, while for the ℓ2 norm it was between set 1 and sets 2/3 (between sets 1 and 2-3 for degradation levels 20%, 30%, and 40%, and between sets 1 and 3 for degradation levels 50%, 60%, 70%, and 80%). For the MSE metric the difference was between sets 1 and 2 for all degradation levels, with an additional difference between sets 2 and 3 for the 70% and 80% cases. While the metrics do not all agree on where the difference lies, they clearly demonstrate that terrain type influences the algorithm's performance, which could be used (in conjunction with terrain type classification like the one in [28]) to estimate the expected CS reconstruction performance before deployment.

Another analysis in line with the previous one explores whether knowing the image quality before reconstruction can be used to infer the image quality after reconstruction. This is depicted in Figure 5 for all quality metrics across all used test images. Figure 5 suggests that there is a clear relationship between the quality metrics before and after reconstruction, which is to be expected. However, this relationship is nonlinear in all cases, although for PSNR it could be considered (almost) linear. This relationship makes it possible to estimate image quality after reconstruction and (in conjunction with terrain type, as demonstrated above) to select the optimal degradation level (in terms of reduced data load) for a particular case while maintaining image quality at the desired level.
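The kind of before/after relationship shown in Figure 5 could, for instance, be captured with a simple curve fit so that the expected post-reconstruction quality is estimated from the degraded-image quality; the polynomial model and the sample values below are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Mean SSI of the degraded image vs. mean SSI after reconstruction,
# one pair per degradation level (illustrative values only).
ssi_degraded = np.array([0.127, 0.106, 0.087, 0.070, 0.055, 0.043, 0.033])
ssi_reconstructed = np.array([0.984, 0.965, 0.940, 0.905, 0.860, 0.778, 0.666])

# Fit a low-order polynomial to describe the nonlinear relationship.
coeffs = np.polyfit(ssi_degraded, ssi_reconstructed, deg=2)
model = np.poly1d(coeffs)

# Predict the post-reconstruction SSI for a new degraded-image SSI value.
print(f"predicted SSI after reconstruction: {model(0.08):.3f}")
```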


Figure 5: Dependency of image quality metrics after reconstruction on the corresponding metrics before reconstruction, across all used test images: (a) SSI, (b) PSNR (dB), (c) MSE, (d) ℓ2 norm. Presented are means for the seven degradation levels in ascending order (from left to right).

Figure 6: Relation between true positives (TPs) and false negatives (FNs), in percent, for each degradation level (20%-80%) over all images.

Since the pixel degradation is random, a certain level of deviation from the presented means should also be expected.

3.3. Object Detection. As stated before, due to the intended application FNs are more expensive than FPs, and it thus makes sense to first explore the total number of FNs over all images. Figure 6 presents the numbers of FNs and TPs as percentages of the total detections in the original image (since they always sum to the same value for a particular image). Please note that the reasons for choosing the detections in the original (nondegraded) images as the ground truth for the FN and TP calculations are explained in more detail in Section 3.3.1.

Figure 6 shows a more or less constant downward trend in the TP rate; that is, FN rates increase as the degradation level increases. The exception is the 30% level. This dip in the TP rate at 30% might be considered a fluke, but it also appeared in another, unrelated analysis [9] (on a completely different image set with the same CS and detection algorithms). Since the reason for this is currently unclear, it should be investigated in the future. As expected, the FN percentage was the lowest for the 20% case (23.38%) and the highest for the 80% case (46.74%). It is about 35% for the other cases (except the 50% case). This might be considered a rather large cost in a search and rescue scenario, but when drawing this type of conclusion it should be kept in mind that comparisons are made with respect to the algorithm's performance on the original image and not against ground truth (which is hard to establish in this type of experiment, where complete control of the environment is not possible).

Some additional insight is provided in Section 3.3.1, where we conducted a mini user study on the original images to determine how well humans perform on the ideal (not reconstructed) images. For completeness of presentation, Figure 7, showing a comparison of the numbers of FPs and FNs for all degradation levels, is included.

Additional insight into the algorithm's performance can be obtained by observing the recall and precision values in Figure 8, as defined by (26).


Figure 7: Number of occurrences of false positives (FPs) and false negatives (FNs) for each degradation level (20%-80%) over all images.

Figure 8: Recall and precision (%) as detection performance metrics for all degradation levels (20%-80%) across all images: (a) recall, (b) precision.

The highest recall value (76.62%) is achieved in the 20% case (of missing samples, i.e., degradation level), followed closely by the 40% case (75.32%). As expected (based on Figure 6), there is a noticeable dip in the otherwise steady downward trend for the 30% case. At the same time, precision is fairly constant (around 72%) over the whole degradation range, with two peaks: for 20% of missing samples (peak value 77.63%) and for 60% of missing samples (peak value 80.33%). No statistical significance (as tested by the Kruskal-Wallis test) was detected for recall or precision when the degradation level was considered the independent variable (across all images).

Figure 9: Number of occurrences of false negatives (FNs) and false positives (FPs) for each image (1-16) across all degradation levels.

If individual images and their respective FP and FN counts are examined across all degradation levels, Figure 9 is obtained. Some interesting observations can be made from the figure. First, images 14 and 16 have an unusually large number of FPs and FNs. This should be viewed in light of the fact that these two images belong to the third dataset, which was taken during a real search and rescue operation during which the UAV operator did not position the UAV at the desired/required height (of 50 m) and strong wind gusts were present. Image 1 also has an unusually large number of FNs. If images 1 and 14 are removed from the calculation (since, as explained before, we care more about FNs than FPs), recall and precision values increase by up to 12% and 5%, respectively. The increase in recall is largest for the 40% case (12%) and lowest for the 50% case (5%), while the increase in precision is largest for the 80% case (5%) and smallest for the 50% case (2.5%). The second observation that can be made from Figure 9 is that there are a number of images (2, 4, 7, 10, 11, and 12) on which the detection algorithm performs really well, with a cumulative number of FP and FN occurrences of around 5. On image 4 it performed flawlessly (the image contained one target, which was correctly detected at all degradation levels) without any FP or FN. Note that no FNs were present in images 2 and 7.

3.3.1. User Study. While we maintain that, for evaluating the performance of compressive sensing image reconstruction, comparing detection rates between the original nondegraded image (as ground truth) and the reconstructed images is the best choice, the baseline performance of the detection algorithm is presented here for completeness. Determining this baseline (absolute ground truth) proved to be nontrivial, since environmental images from the wild (where search and rescue usually takes place) are hard to control and usually contain unintended objects in the frame (e.g., animals or garbage). We therefore opted for a 10-subject mini study in which the decision on whether there is an object at a given location in the image was made by majority vote; that is, if 5 or more people detected something at a certain place in the image, it was considered an object. Subjects were drawn from the faculty


and student population, had not seen the images before, and had never participated in search and rescue via image analysis. Subjects were instructed to find people, cars, animals, cloth, bags, or similar items in the image, balancing speed and accuracy (i.e., emphasizing neither). They were seated in front of a 23.6-inch LED monitor (Philips 247E) on which they inspected the images and were allowed to zoom in and out of the image as they felt necessary. Images were presented to each subject in random order to avoid undesired (learning/fatigue) effects.

On average it took a subject 638.3 s (10 minutes and 38.3 seconds) to go over all images. In order to demonstrate intersubject variability and how demanding the task was, we analyzed precision and recall for the results of the test subject study. For example, if 6 out of 10 subjects marked a part of an image as an object, this meant that there were 4 FNs (i.e., 4 subjects did not detect that object). On the other hand, if 4 test subjects marked an area in an image (which therefore did not pass the threshold in the majority voting process), it was counted as 4 FPs. The analysis conducted in this manner yielded a recall of 82.22% and a precision of 93.63%. Here again two images (15 and 16) accounted for more than 50% of the FNs. It should be noted that these results cannot be directly compared to the proposed algorithm, since they rely on significant human intervention.
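The majority-vote bookkeeping described above can be sketched as follows; representing the subject markings as per-location counts is our own simplification for illustration.

```python
def vote_counts(marks_per_location: dict, n_subjects: int = 10, threshold: int = 5):
    """Tally FNs and FPs from subject markings per candidate location.

    marks_per_location maps a location id to how many of the n_subjects marked it.
    A location counts as an object when at least `threshold` subjects marked it;
    the subjects who missed it then count as FNs, otherwise all marks count as FPs.
    """
    fn = fp = 0
    for count in marks_per_location.values():
        if count >= threshold:
            fn += n_subjects - count   # subjects that missed an accepted object
        else:
            fp += count                # marks on a location not accepted as an object
    return fn, fp

# Example: one location marked by 6 subjects, another by 4
print(vote_counts({"loc_a": 6, "loc_b": 4}))   # -> (4, 4)
```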

4. Conclusions

In this paper gradient based compressive sensing was presented and applied to images acquired from a UAV during search and rescue operations/drills, in order to reduce the network transmission burden. The quality of the CS reconstruction was analyzed, as well as its impact on object detection algorithms used in search and rescue. All introduced quality metrics showed significant improvement, with ratios varying depending on the degradation level as well as on the type of terrain/environment depicted in a particular image. The dependency of reconstruction quality on terrain type is interesting and opens up the possibility of including terrain type detection algorithms, since the reconstruction quality could then be inferred in advance and an appropriate degradation level selected (i.e., in smart sensors). The dependency on the quality of the degraded image was also demonstrated. The reconstructed images showed good performance (with varying recall and precision values) within the object detection algorithm, although a slightly higher false negative rate (whose cost is high in search applications) was present. However, there were a few images in the dataset on which the algorithm performed either flawlessly, with no false negatives, or with only a few false positives, whose cost is not high in the current application setup, where a human operator checks the raised alarms. Of interest is the slight peak in performance at the 30% degradation level (compared to 40% and the general downward trend); this peak was also detected in an earlier study on a completely different image set, making a chance finding unlikely. Currently no explanation for this phenomenon can be provided, and it warrants future research. Nevertheless, we believe the obtained results are promising (especially in light of the mini user study) and merit further research, especially on the detection side. For example, the algorithm could be augmented with a terrain recognition algorithm, which could give cues about terrain type both to the reconstruction algorithm (adjusting the degradation level while keeping the desired quality of the reconstructed images) and to the detection algorithm, itself augmented with an automatic selection procedure for some parameters, like the mean shift bandwidth (for performance optimization). Additionally, automatic threshold estimation using image size and UAV altitude could be used for adaptive configuration of the detection algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] UK Ministry of Defense, "Military search and rescue quarterly statistics: 2015 quarter 1," Statistical Report, 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425819/SAR_Quarter1_2015_Report.pdf.

[2] T. W. Heggie and M. E. Amundson, "Dead men walking: search and rescue in US National Parks," Wilderness and Environmental Medicine, vol. 20, no. 3, pp. 244-249, 2009.

[3] M. Superina and K. Pogacic, "Frequency of the Croatian mountain rescue service involvement in searching for missing persons," Police and Security, vol. 16, no. 3-4, pp. 235-256, 2008 (Croatian).

[4] J. Sokalski, T. P. Breckon, and I. Cowling, "Automatic salient object detection in UAV imagery," in Proceedings of the 25th International Conference on Unmanned Air Vehicle Systems, pp. 11.1-11.12, April 2010.

[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138-145, 2010.

[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142-147, Canterbury, UK, September 2010.

[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, International Society for Optical Engineering, May 2006.

[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223-240, Vienna, Austria, September 2014.

[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739-1743, 2016.

[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.

[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707-710, 2007.

[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.

[13] M. F. Duarte, M. A. Davenport, D. Takbar, et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.

[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093-2106, 2015.

[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41-43, 2016.

[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497-500, Belgrade, Serbia, November 2014.

[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690-694, Madison, Wis, USA, August 2007.

[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392-408, 2010.

[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1-92, Walter de Gruyter, Berlin, Germany, 2010.

[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707-737, 2013.

[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, 2002.

[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.

[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366-2369, Istanbul, Turkey, August 2010.

[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81-91, 2011.

[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?," IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98-117, 2009.

[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.

[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583-621, 1952.

[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1-8, Minneapolis, Minn, USA, June 2007.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 9: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

Mathematical Problems in Engineering 9

0

02

04

06

08

1SS

I

30 40 50 60 70 8020Degradation level ()

(a) SSI

30 40 50 60 70 8020Degradation level ()

05

1015202530354045

PSN

R (d

B)

(b) PSNR

30 40 50 60 70 8020Degradation level ()

0

01

02

03

04

05

06

07

MSE

(c) MSE

30 40 50 60 70 8020Degradation level ()

0

1000

2000

3000

4000

5000

985747 2no

rm

(d) ℓ2 norm

Figure 3 Image quality metric for all degradation levels (over all images) before and after reconstruction Red color represents metric fordegraded images while blue represents postreconstruction values Black dots represent mean values

increased after reconstruction having mean values in therange 0666 to 0984 Same trend can be observed for all otherquality metrics PSNR (3797 dB to 9838 dB range beforeand 23973 dB to 38100 dB range after reconstruction) MSE(0107 to 0428 range before and 1629119890 minus 4 to 0004 rangeafter reconstruction) and ℓ2 norm (1943119890 + 3 to 3896119890 +3 range before and 75654 to 384180 range after reconstruc-tion) Please note that for some cases (like in Figures 3(c)and 3(d)) the distribution for particular condition is verytight making its graphical representation quite small Thenonparametric Kruskal-Wallis test [27] was conducted on allmetrics for postreconstruction case in order to determinestatistical significance of the results with degradation levelas independent variable Statistical significance was detectedin all cases with the following values SSI (1205942(6) = 9834119901 lt 005) PSNR (1205942(6) = 10117 119901 lt 005) MSE (1205942(6)= 10117 119901 lt 005) and ℓ2 norm (1205942(6) = 10117 119901 lt005) Tukeyrsquos honestly significant difference (THSD) post hoctests (corrected for multiple comparison) were performedwhich revealed some interesting patterns For all metricsstatistical difference was present only for those cases thathad at least 30 degradation difference between them (ie50 cases were statistically different only from 20 and80 cases please see Figure 2 for visual comparison) Webelieve this goes towards demonstrating quality of obtainedreconstruction (in terms of presented metrics) that is thereis no statistical difference between 20 and 40 missing

sample cases (although their means are different in favor of20 case) It should also be noted that even in cases of 70or80 of pixels missing reconstructed image was subjectivelygood enough (please see Figure 2) so that its content could berecognized by the viewer (with SSI means of 0778 and 0666resp) However it should be noted that in cases of such highimage degradation (especially for 80 case) reconstructedimages appeared somewhat smudged (pastel like effect) withsome details (subjectively) lost

It is interesting to explore how much gain was obtainedwith CS reconstruction This is depicted in Figure 4 for allmetrics From presented figures it can be seen that obtainedimprovement is significant and while its absolute valuevaries detected by all performance metrics The improve-ment range is between 3730 and 81637 for SSI between 3268and 9959 for PSNR between 9612119890 minus 4 and 00205 for MSEand between 0029 and 0143 for ℓ2 norm From this a generaltrend can be observed the improvement gain increases withthe degradation level There are a couple of exceptions tothis general rule for SSI in 12 out of 16 images gain for 70is larger than for 80 case However this phenomenon isnot present in other metrics (except for the case of PSNRimages 13 and 14 where they are almost identical) whichmight suggest this is due to some interaction between metricand type of environment in the image Also randomness ofpixel removal when degrading image should be considered

10 Mathematical Problems in Engineering

0

20

40

60

80SS

I rat

io

2 4 6 8 10 12 14 160Image number

20304050

607080

(a) SSI

0

2

4

6

8

10

PSN

R ra

tio

2 4 6 8 10 12 14 160Image number

20304050

607080

(b) PSNR

0

0005

001

0015

002

MSE

ratio

2 4 6 8 10 12 14 160Image number

20304050

607080

(c) MSE

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

0002004006008

01012014016

20304050

607080

985747 2no

rm ra

tio

(d) ℓ2 norm

Figure 4 Ratios of image quality metrics for all degradation levels (over all images) after and before reconstruction

Figure 4 also revels another interesting observation that war-rants further research reconstruction gain and performancemight depend on type of environmentscenery in the imageTaking into consideration that dataset contains images fromthree distinct cases (sets) as described earlier (which all haveunique environment in them) and that their range is 1ndash7 8ndash12 and 13ndash16 different patterns (gain amplitudes) can be seenfor all metrics

In order to detect if this observed pattern has statisticalsignificance nonparametric Kruskal-Wallis test (with posthoc THSD) was performed in a way that for particulardegradation level data for images were grouped into threecategories based onwhich set they belong to Obtained resultsrevealed that for allmetrics anddegradation levels there existsstatistical significance (with varying levels of 119901 value whichwe omit here for brevity) of image set on reconstructionimprovement Post hoc tests revealed that this difference wasbetween image sets 2 and 3 for SSI and PSNRmetrics while itwas between sets 1 and 23 for ℓ2 norm (between sets 1 and 2-3for degradation levels 20 30 and 40 and between sets1 and 3 for degradation levels 50 60 70 and 80) For

MSE metric the difference was between sets 1 and 2 for alldegradation levels with addition of difference between sets2 and 3 for 70 and 80 cases While all metrics do notagree with where the difference is they clearly demonstratethat terrain type influences algorithms performance whichcould be used (in conjunction with terrain type classificationlike the one in [28]) to estimate expected CS reconstructionperformance before deployment

Another interesting analysis in line with the last one canbe made so to explore if knowing image quality before thereconstruction can be used to infer image quality after thereconstruction This is depicted in Figure 5 for all qualitymetrics across all used test images Figure 5 suggests that thereexists clear relationship between quality metrics before andafter reconstruction which is to be expected However thisrelationship is nonlinear for all cases although for case ofPSNR it could be considered (almost) linearThis relationshipenables one to estimate image quality after reconstructionand (in conjunction with terrain type as demonstratedbefore) select optimal degradation level (in terms of reduceddata load) for particular case whilemaintaining image quality

Mathematical Problems in Engineering 11

SSI re

cons

truc

ted

065

07

075

08

085

09

095

1

004 005 006 007 008 009 01 011 012 013003SSIdegraded

(a) SSI

22242628303234363840

PSN

R rec

onstr

ucte

d(d

B)

5 6 7 8 9 104PSNRdegraded (dB)

(b) PSNR

times10minus3

005

115

225

335

4

MSE

reco

nstr

ucte

d

015 02 025 03 035 04 04501MSEdegraded

(c) MSE

2200 2400 2600 2800 3000 3200 3400 3600 3800 4000200050

100

150

200

250

300

350

400

985747 2no

rmre

cons

truc

ted

9857472 normdegraded

(d) ℓ2 norm

Figure 5 Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction across all used testimages Presented are means for seven degradation levels in ascending order (from left to right)

TPFN

0

20

40

60

80

100

Perc

enta

ge

30 40 50 60 70 8020Degradation level ()

Figure 6 Relation between true positives (TPs) and false negatives(FNs) for particular degradation level over all images

at desired level Since pixel degradation is random oneshould also expect certain level of deviation from presentedmeans

33 Object Detection As stated before due to intendedapplication FNs are more expensive than FPs and it thusmakes sense to first explore number of total FNs in all imagesFigure 6 presents number of FNs and TPs in percentageof total detections in the original image (since they always

sum up to the same value for particular image) Pleasenote that reasons behind choosing detections in the original(nondegraded) images as the ground truth for calculations ofFNs and TPs are explained in more detail in Section 331

Figure 6 depicts that there is more or less a constantdownward trend in TP rate that is FN rates increase asdegradation level increases Exception is 30 levelThis dip inTP rate at 30might be considered a fluke but it appeared inanother unrelated analysis [9] (on completely different imageset with same CS and detection algorithms) Since currentlyreason for this is unclear it should be investigated in thefuture As expected FN percentage was the lowest for 20case (2338) and the highest for 80 case (4674) It hasvalue of about 35 for other cases (except 50 case) Thismight be considered a pretty large cost in search and rescuescenario but while making this type of conclusion it shouldbe kept in mind that comparisons are made in respect toalgorithms performance on original image and not groundtruth (which is hard to establish in this type of experimentwhere complete control of environment is not possible)

Some additional insight will be gained in Section 331where we conducted miniuser study on original image todetect how well humans perform on ideal (and not recon-structed) image For completeness of presentation Figure 7showing comparison of numbers of TPs and FPs for alldegradation levels is included

Additional insight in algorithmrsquos performance can beobtained by observing recall and precision values in Figure 8

12 Mathematical Problems in Engineering

FPFN

30 40 50 60 70 8020Degradation level ()

0

10

20

30

40

50

Num

ber o

f occ

urre

nces

Figure 7 Relation of number of occurrences of false positives (FPs)and false negatives (FNs) for particular degradation level over allimages

Degradation level ()20 30 40 50 60 70 80

01020304050607080

Reca

ll (

)

(a) Recall

01020304050607080

Prec

ision

()

30 40 50 60 70 8020Degradation level ()

(b) Precision

Figure 8 Recall and precision as detection performancemetrics forall degradation levels across all images

as defined by (26) The highest recall value (7662) isachieved in the case of 20 (missing samples or degradationlevel) followed closely by 40 case (7532) As expected(based on Figure 5) there is a significant dip in steadydownward trend for 30 case At the same time precisionis fairly constant (around 72) over the whole degradationrange with two peaks for the case of 20 of missing samples(peak value 7763) and for 60 of missing samples (peakvalue 8033) No statistical significance (as detected byKruskal-Wallis test) was detected for recall and precision inwhich degradation level was considered independent variable(across all images)

FPFN

05

1015202530354045

Num

ber o

f occ

urre

nces

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

Figure 9 Number of occurrences of false negatives (FNs) and falsepositives (FPs) for all images across all degradation levels

If individual images and their respective FP and FNrates are examined across all degradation levels Figure 9 isobtained Some interesting observations can be made fromthe figure First it can be seen that images 14 and 16have unusually large number of FPs and FNs This shouldbe viewed in light that these two images belong to thirddataset This dataset was taken during real search and rescueoperation during which UAV operator did not position UAVat the desiredrequired height (of 50m) and also strong windgusts were present Also image 1 has unusually large numberof FNs If images 1 and 14 are removed from the calculation(since we care more about FNs and not FPs as explainedbefore) recall and precision values increase up to 12 and5 respectively Increase in recall is the largest for 40 case(12) while it is the lowest for 50 case (5) while increasein precision is the largest for 80 case (5) and the smallestfor 50 case (25) Second observation that can be madefrom Figure 9 is that there are number of cases (2 4 7 10 11and 12) where algorithm detection performs really well withcumulative number of occurrences of FPs and FNs around 5In case of image 4 it performed flawlessly (in image therewas one target that was correctly detected in all degradationlevels) without any FP or FN Note that no FNs were presentin images 2 and 7

331 User Study While we maintain that for evaluationof compressive sensing image reconstructionrsquos performancecomparison of detection rates in the original nondegradedimage (as ground truth) and reconstructed images are thebest choice baseline performance of detection algorithmis presented here for completeness However determiningbaseline performance (absolute ground truth) proved to benontrivial since environmental images from the wild (wheresearch and rescue usually takes place) is hard to controland usually there are unintended objects in the frame (eganimals or garbage) Thus we opted for 10-subject mini-study in which the decision whether there is an object inthe image was made by majority vote that is if 5 or morepeople detected something in certain place of the image thenit would be considered an object Subjects were from faculty

Mathematical Problems in Engineering 13

and student population and did not see the images before norhave ever participated in search and rescue via image analysisSubjects were instructed to find people cars animals clothbags or similar things in the image combining speed andaccuracy (ie not emphasizing either) They were seated infront of 236-inch LEDmonitor (Philips 247E) on which theyinspected images and were allowed to zoom in and out of theimage as they felt necessary Images were randomly presentedto subject to avoid undesired (learningfatigue) effects

On average it took a subject 6383 s (10 minutes and 383seconds) to go over all images In order to demonstrate inter-subject variability and how demanding the task was we ana-lyzed precision and recall for results from test subject studyFor example if 6 out of 10 subjects marked a part of an imageas an object this meant that there were 4 FNs (ie 4 subjectsdid not detect that object)On the other hand if 4 test subjectsmarked an area in an image (and since it did not pass thresh-old in majority voting process) it would be counted as 4 FPsAnalysis conducted in such manner yielded recall of 8222and precision of 9363Here again two images (15 and 16)accounted for more than 50 of FNs It should be noted thatthese results cannot be directly compared to the proposedalgorithm since they rely on significant human intervention

4 Conclusions

In the paper gradient based compressive sensing is presentedand applied to images acquired from UAV during search andrescue operationsdrills in order to reduce networktrans-mission burden Quality of CS reconstruction is analyzed aswell as its impact on object detection algorithms in use withsearch and rescue All introduced quality metrics showedsignificant improvement with varying ratios depending ondegradation level as well as type of terrainenvironmentdepicted in particular image Dependency of reconstructionquality on terrain type is interesting and opens up possibilityof inclusion of terrain type detection algorithms and sincethen reconstruction quality could be inferred in advanceand appropriate degradation level (ie in smart sensors)selected Dependency on quality of degraded image is alsodemonstrated Reconstructed images showed good perfor-mance (with varying recall and precision parameter values)within object detection algorithm although slightly higherfalse negative rate (whose cost is high in search applications)is present However there were few images in the dataset onwhich algorithm performed either flawlessly with no falsenegatives or with few false positives whose cost is not bigin current application setup with human operator checkingraised alarms Of interest is slight peak in performanceof 30 degradation level (compared to 40 and generaldownward trend)mdashthis peak was detected in earlier studyon completely different image set making this find by chanceunlikely Currently no explanation for the phenomenon canbe provided and it warrants future research Nevertheless webelieve that obtained results are promising (especially in lightof results of miniuser study) and require further researchespecially on the detection side For example algorithm couldbe augmented with terrain recognition algorithm whichcould give cues about terrain type to both the reconstruction

algorithm (adjusting degradation level while keeping desiredlevel of quality for reconstructed images) and detectionalgorithm augmented with automatic selection procedure forsome parameters likemean shift bandwidth (for performanceoptimization) Additionally automatic threshold estimationusing image size and UAV altitude could be used for adaptiveconfiguration of detection algorithm

Competing Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] UK Ministry of Defense ldquoMilitary search and rescue quarterlystatistics 2015 quarter 1rdquo Statistical Report 2015 httpswwwgovukgovernmentuploadssystemuploadsattachment datafile425819SAR Quarter1 2015 Reportpdf

[2] T W Heggie and M E Amundson ldquoDead men walkingsearch and rescue in US National Parksrdquo Wilderness andEnvironmental Medicine vol 20 no 3 pp 244ndash249 2009

[3] M Superina and K Pogacic ldquoFrequency of the Croatianmountain rescue service involvement in searching for missingpersonsrdquo Police and Security vol 16 no 3-4 pp 235ndash256 2008(Croatian)

[4] J Sokalski T P Breckon and I Cowling ldquoAutomatic salientobject detection in UAV imageryrdquo in Proceedings of the 25thInternational Conference on Unmanned Air Vehicle Systems pp111ndash1112 April 2010

[5] H Turic H Dujmic and V Papic ldquoTwo-stage segmentation ofaerial images for search and rescuerdquo InformationTechnology andControl vol 39 no 2 pp 138ndash145 2010

[6] S Waharte and N Trigoni ldquoSupporting search and rescueoperations with UAVsrdquo in Proceedings of the InternationalConference on Emerging Security Technologies (EST rsquo10) pp 142ndash147 Canterbury UK September 2010

[7] C Williams and R R Murphy ldquoKnowledge-based videocompression for search and rescue robots amp multiple sensornetworksrdquo in International Society for Optical EngineeringUnmanned Systems Technology VIII vol 6230 of Proceedings ofSPIE May 2006

[8] G S Martins D Portugal and R P Rocha ldquoOn the usage ofgeneral-purpose compression techniques for the optimizationof inter-robot communicationrdquo in Proceedings of the 11th Inter-national Conference on Informatics in Control Automation andRobotics (ICINCO rsquo14) pp 223ndash240ViennaAustria September2014

[9] J Music T Marasovic V Papic I Orovic and S StankovicldquoPerformance of compressive sensing image reconstruction forsearch and rescuerdquo IEEEGeoscience and Remote Sensing Lettersvol 13 no 11 pp 1739ndash1743 2016

[10] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation Theory vol 52 no 4 pp 1289ndash1306 2006

[11] R Chartrand ldquoExact reconstruction of sparse signals via non-convexminimizationrdquo IEEE Signal Processing Letters vol 14 no10 pp 707ndash710 2007

[12] S Foucart and H Rauhut A Mathematical Introduction toCompressive Sensing Springer 2013

[13] M F Duarte M A Davenport D Takbar et al ldquoSingle-pixelimaging via compressive sampling building simpler smaller

14 Mathematical Problems in Engineering

and less-expensive digital camerasrdquo IEEE Signal ProcessingMagazine vol 25 no 2 pp 83ndash91 2008

[14] L Stankovic ldquoISAR image analysis and recovery with unavail-able or heavily corrupted datardquo IEEE Transactions on Aerospaceand Electronic Systems vol 51 no 3 pp 2093ndash2106 2015

[15] M BrajovicM Dakovic I Orovic and S Stankovic ldquoGradient-based signal reconstruction algorithm in Hermite transformdomainrdquo Electronics Letters vol 52 no 1 pp 41ndash43 2016

[16] I Stankovic I Orovic and S Stankovic ldquoImage reconstructionfrom a reduced set of pixels using a simplified gradient algo-rithmrdquo in Proceedings of the 22nd Telecommunications ForumTelfor (TELFOR rsquo14) pp 497ndash500 Belgrade Serbia November2014

[17] G Reeves and M Gastpar ldquoDifferences between observationand sampling error in sparse signal reconstructionrdquo in Proceed-ings of the IEEESP 14thWorkshop on Statistical Signal Processing(SSP rsquo07) pp 690ndash694 Madison Wis USA August 2007

[18] R E Carrillo K E Barner and T C Aysal ldquoRobust samplingand reconstruction methods for sparse signals in the presenceof impulsive noiserdquo IEEE Journal on Selected Topics in SignalProcessing vol 4 no 2 pp 392ndash408 2010

[19] H Rauhut ldquoCompressive sensing and structured randommatricesrdquo in Theoretical Foundations and Numerical Methodsfor Sparse Recovery M Fornasier Ed vol 9 of Random SeriesComputational and Applied Mathematics pp 1ndash92 Walter deGruyter Berlin Germany 2010

[20] G E Pfander H Rauhut and J A Tropp ldquoThe restricted isom-etry property for time-frequency structured random matricesrdquoProbability Theory and Related Fields vol 156 no 3-4 pp 707ndash737 2013

[21] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[22] ZWang A C Bovik H R Sheikh and E P Simoncelli ldquoImagequality assessment from error visibility to structural similarityrdquoIEEE Transactions on Image Processing vol 13 no 4 pp 600ndash612 2004

[23] A Hore and D Ziou ldquoImage quality metrics PSNR vs SSIMrdquoin Proceedings of the 20th International Conference on PatternRecognition (ICPR rsquo10) pp 2366ndash2369 Istanbul Turkey August2010

[24] R Dosselmann and X D Yang ldquoA comprehensive assessmentof the structural similarity indexrdquo Signal Image and VideoProcessing vol 5 no 1 pp 81ndash91 2011

[25] Z Wang and A C Bovik ldquoMean squared error Love it or leaveitrdquo IEEE Signal Processing Magazine vol 26 no 1 pp 98ndash1172009

[26] R Russell and P Sinha ldquoPerceptually-based comparison ofimage similarity metricsrdquo Tech Rep Massachusetts Institute ofTechnology (MIT)mdashArtificial Intelligence Laboratory 2001

[27] W H Kruskal and W A Wallis ldquoUse of ranks in one-criterion variance analysisrdquo Journal of the American StatisticalAssociation vol 47 no 260 pp 583ndash621 1952

[28] A Angelova L Matthies D Helmick and P Perona ldquoFastterrain classification using variable-length representation forautonomous navigationrdquo in Proceedings of the IEEE ComputerSociety Conference on Computer Vision and Pattern Recognition(CVPR rsquo07) pp 1ndash8 Minneapolis Minn USA June 2007

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 10: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

10 Mathematical Problems in Engineering

0

20

40

60

80SS

I rat

io

2 4 6 8 10 12 14 160Image number

20304050

607080

(a) SSI

0

2

4

6

8

10

PSN

R ra

tio

2 4 6 8 10 12 14 160Image number

20304050

607080

(b) PSNR

0

0005

001

0015

002

MSE

ratio

2 4 6 8 10 12 14 160Image number

20304050

607080

(c) MSE

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

0002004006008

01012014016

20304050

607080

985747 2no

rm ra

tio

(d) ℓ2 norm

Figure 4 Ratios of image quality metrics for all degradation levels (over all images) after and before reconstruction

Figure 4 also revels another interesting observation that war-rants further research reconstruction gain and performancemight depend on type of environmentscenery in the imageTaking into consideration that dataset contains images fromthree distinct cases (sets) as described earlier (which all haveunique environment in them) and that their range is 1ndash7 8ndash12 and 13ndash16 different patterns (gain amplitudes) can be seenfor all metrics

In order to detect if this observed pattern has statisticalsignificance nonparametric Kruskal-Wallis test (with posthoc THSD) was performed in a way that for particulardegradation level data for images were grouped into threecategories based onwhich set they belong to Obtained resultsrevealed that for allmetrics anddegradation levels there existsstatistical significance (with varying levels of 119901 value whichwe omit here for brevity) of image set on reconstructionimprovement Post hoc tests revealed that this difference wasbetween image sets 2 and 3 for SSI and PSNRmetrics while itwas between sets 1 and 23 for ℓ2 norm (between sets 1 and 2-3for degradation levels 20 30 and 40 and between sets1 and 3 for degradation levels 50 60 70 and 80) For

MSE metric the difference was between sets 1 and 2 for alldegradation levels with addition of difference between sets2 and 3 for 70 and 80 cases While all metrics do notagree with where the difference is they clearly demonstratethat terrain type influences algorithms performance whichcould be used (in conjunction with terrain type classificationlike the one in [28]) to estimate expected CS reconstructionperformance before deployment

Another interesting analysis in line with the last one canbe made so to explore if knowing image quality before thereconstruction can be used to infer image quality after thereconstruction This is depicted in Figure 5 for all qualitymetrics across all used test images Figure 5 suggests that thereexists clear relationship between quality metrics before andafter reconstruction which is to be expected However thisrelationship is nonlinear for all cases although for case ofPSNR it could be considered (almost) linearThis relationshipenables one to estimate image quality after reconstructionand (in conjunction with terrain type as demonstratedbefore) select optimal degradation level (in terms of reduceddata load) for particular case whilemaintaining image quality

Mathematical Problems in Engineering 11

Figure 5: Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction across all used test images: (a) SSI, (b) PSNR (dB), (c) MSE, (d) ℓ2 norm. Presented are means for the seven degradation levels in ascending order (from left to right).

Figure 6: Relation between true positives (TPs) and false negatives (FNs), in percent, for each degradation level (20–80%) over all images.

Since the pixel degradation is random, one should also expect a certain level of deviation from the presented means.

3.3. Object Detection. As stated before, due to the intended application, FNs are more expensive than FPs, and it thus makes sense to first explore the total number of FNs in all images. Figure 6 presents the numbers of FNs and TPs as a percentage of the total detections in the original image (since they always sum up to the same value for a particular image). Please note that the reasons behind choosing the detections in the original (nondegraded) images as the ground truth for the calculation of FNs and TPs are explained in more detail in Section 3.3.1.
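For clarity, the following sketch shows one way such TP/FN bookkeeping against the original-image detections could be implemented; the centroid-distance matching criterion and its threshold are assumptions, since the paper does not specify the matching rule.

```python
# Illustrative sketch only: counting TPs, FNs, and FPs for one reconstructed
# image, using the detections from the original (nondegraded) image as ground
# truth. Matching detections by centroid distance (max_dist) is an assumption;
# the paper does not specify its matching criterion.
from math import hypot

def count_tp_fn_fp(original_dets, reconstructed_dets, max_dist=20.0):
    """Both arguments are lists of (x, y) detection centroids in pixels."""
    unmatched = list(reconstructed_dets)
    tp = 0
    for ox, oy in original_dets:
        match = next((d for d in unmatched
                      if hypot(d[0] - ox, d[1] - oy) <= max_dist), None)
        if match is not None:
            unmatched.remove(match)
            tp += 1
    fn = len(original_dets) - tp   # original detections with no counterpart
    fp = len(unmatched)            # reconstructed detections with no counterpart
    return tp, fn, fp

# TP% and FN% as in Figure 6 always sum to 100% per image by construction:
tp, fn, fp = count_tp_fn_fp([(120, 80), (300, 210)], [(118, 83)])
tp_pct = 100.0 * tp / (tp + fn)
fn_pct = 100.0 - tp_pct
```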

Figure 6 shows a more or less constant downward trend in the TP rate; that is, FN rates increase as the degradation level increases. The exception is the 30% level. This dip in the TP rate at 30% might be considered a fluke, but it also appeared in another, unrelated analysis [9] (on a completely different image set with the same CS and detection algorithms). Since the reason for this is currently unclear, it should be investigated in the future. As expected, the FN percentage was the lowest for the 20% case (23.38%) and the highest for the 80% case (46.74%). It has a value of about 35% for the other cases (except the 50% case). This might be considered a rather large cost in a search and rescue scenario, but when making this type of conclusion it should be kept in mind that comparisons are made with respect to the algorithm's performance on the original image and not against ground truth (which is hard to establish in this type of experiment, where complete control of the environment is not possible).

Some additional insight will be gained in Section 3.3.1, where we conducted a mini user study on the original images to determine how well humans perform on the ideal (and not reconstructed) images. For completeness of presentation, Figure 7, which compares the numbers of FPs and FNs for all degradation levels, is included.

Additional insight into the algorithm's performance can be obtained by observing the recall and precision values, as defined by (26), in Figure 8.


Figure 7: Relation of the number of occurrences of false positives (FPs) and false negatives (FNs) for each degradation level over all images.

Figure 8: Recall and precision as detection performance metrics for all degradation levels across all images: (a) recall (%), (b) precision (%).

The highest recall value (76.62%) is achieved for the case of 20% (missing samples, or degradation level), followed closely by the 40% case (75.32%). As expected (based on Figure 6), there is a significant dip in the otherwise steady downward trend for the 30% case. At the same time, precision is fairly constant (around 72%) over the whole degradation range, with two peaks: for the case of 20% of missing samples (peak value 77.63%) and for 60% of missing samples (peak value 80.33%). No statistically significant effect on recall or precision was detected by the Kruskal-Wallis test when the degradation level was considered the independent variable (across all images).
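For reference, the recall and precision values discussed here follow the standard definitions; a minimal sketch of their computation from cumulative TP, FN, and FP counts is given below, with placeholder counts rather than the paper's data.

```python
# Minimal sketch of the standard recall and precision definitions referred to
# by (26), computed from cumulative TP, FN, and FP counts for one degradation
# level; the counts below are placeholders, not the values reported above.
def recall_precision(tp: int, fn: int, fp: int) -> tuple:
    recall = 100.0 * tp / (tp + fn) if (tp + fn) else 0.0
    precision = 100.0 * tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

tp_total, fn_total, fp_total = 59, 18, 17   # hypothetical totals over 16 images
r, p = recall_precision(tp_total, fn_total, fp_total)
print(f"recall = {r:.2f}%, precision = {p:.2f}%")
```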

Figure 9: Number of occurrences of false negatives (FNs) and false positives (FPs) for all images across all degradation levels.

If individual images and their respective FP and FN rates are examined across all degradation levels, Figure 9 is obtained. Some interesting observations can be made from the figure. First, it can be seen that images 14 and 16 have an unusually large number of FPs and FNs. This should be viewed in light of the fact that these two images belong to the third dataset. This dataset was taken during a real search and rescue operation, during which the UAV operator did not position the UAV at the desired/required height (of 50 m) and strong wind gusts were also present. Image 1 also has an unusually large number of FNs. If images 1 and 14 are removed from the calculation (since we care more about FNs than FPs, as explained before), recall and precision values increase by up to 12% and 5%, respectively. The increase in recall is the largest for the 40% case (12%) and the lowest for the 50% case (5%), while the increase in precision is the largest for the 80% case (5%) and the smallest for the 50% case (2.5%). The second observation that can be made from Figure 9 is that there are a number of cases (images 2, 4, 7, 10, 11, and 12) where the detection algorithm performs really well, with a cumulative number of occurrences of FPs and FNs of around 5. In the case of image 4 it performed flawlessly (the image contained one target, which was correctly detected at all degradation levels), without any FP or FN. Note that no FNs were present in images 2 and 7.

3.3.1. User Study. While we maintain that, for the evaluation of the compressive sensing image reconstruction's performance, the comparison of detection rates in the original nondegraded image (as ground truth) and in the reconstructed images is the best choice, the baseline performance of the detection algorithm is presented here for completeness. However, determining the baseline performance (absolute ground truth) proved to be nontrivial, since environmental images from the wild (where search and rescue usually takes place) are hard to control and usually contain unintended objects in the frame (e.g., animals or garbage). Thus, we opted for a 10-subject ministudy in which the decision whether there is an object in the image was made by majority vote; that is, if 5 or more people detected something at a certain place in the image, it was considered an object. Subjects were drawn from the faculty and student populations, had not seen the images before, and had never participated in search and rescue via image analysis. Subjects were instructed to find people, cars, animals, cloth bags, or similar objects in the image, balancing speed and accuracy (i.e., not emphasizing either). They were seated in front of a 23.6-inch LED monitor (Philips 247E), on which they inspected the images, and were allowed to zoom in and out of an image as they felt necessary. Images were presented to the subjects in random order to avoid undesired (learning/fatigue) effects.

On average, it took a subject 638.3 s (10 minutes and 38.3 seconds) to go over all images. In order to demonstrate intersubject variability and how demanding the task was, we analyzed precision and recall for the results of the test subject study. For example, if 6 out of 10 subjects marked a part of an image as an object, this meant that there were 4 FNs (i.e., 4 subjects did not detect that object). On the other hand, if 4 test subjects marked an area in an image (which did not pass the threshold in the majority voting process), it was counted as 4 FPs. The analysis conducted in this manner yielded a recall of 82.22% and a precision of 93.63%. Here, again, two images (15 and 16) accounted for more than 50% of the FNs. It should be noted that these results cannot be directly compared to the proposed algorithm, since they rely on significant human intervention.
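The FN/FP bookkeeping of the user study can be summarized with the following sketch, which mirrors the majority-vote rule described above; the vote counts are illustrative placeholders.

```python
# Illustrative sketch of the majority-vote bookkeeping described above (not the
# authors' scripts). `votes` maps each candidate region in an image to how many
# of the 10 subjects marked it; the values are placeholders.
N_SUBJECTS = 10
THRESHOLD = 5   # 5 or more votes => the region is treated as a real object

votes = {"region_a": 6, "region_b": 4, "region_c": 10}

tp = sum(v for v in votes.values() if v >= THRESHOLD)               # correct marks
fn = sum(N_SUBJECTS - v for v in votes.values() if v >= THRESHOLD)  # missed real objects
fp = sum(v for v in votes.values() if v < THRESHOLD)                # marks of non-objects

recall = 100.0 * tp / (tp + fn)
precision = 100.0 * tp / (tp + fp)
print(f"recall = {recall:.2f}%, precision = {precision:.2f}%")
```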

4. Conclusions

In this paper, gradient-based compressive sensing is presented and applied to images acquired from a UAV during search and rescue operations/drills in order to reduce the network/transmission burden. The quality of the CS reconstruction is analyzed, as well as its impact on the object detection algorithm used for search and rescue. All considered quality metrics showed significant improvement, with ratios varying with the degradation level as well as with the type of terrain/environment depicted in a particular image. The dependency of reconstruction quality on terrain type is interesting and opens up the possibility of including terrain type detection algorithms, since the reconstruction quality could then be inferred in advance and an appropriate degradation level selected (e.g., in smart sensors). The dependency on the quality of the degraded image is also demonstrated. Reconstructed images showed good performance (with varying recall and precision values) within the object detection algorithm, although a slightly higher false negative rate (whose cost is high in search applications) is present. However, there were a few images in the dataset on which the algorithm performed either flawlessly, with no false negatives, or with only a few false positives, whose cost is not large in the current application setup, where a human operator checks the raised alarms. Of interest is the slight peak in performance at the 30% degradation level (compared to 40% and the general downward trend); this peak was also detected in an earlier study on a completely different image set, making a chance finding unlikely. Currently, no explanation for this phenomenon can be provided, and it warrants future research. Nevertheless, we believe that the obtained results are promising (especially in light of the results of the mini user study) and merit further research, especially on the detection side. For example, the algorithm could be augmented with a terrain recognition algorithm, which could give cues about the terrain type to both the reconstruction algorithm (adjusting the degradation level while keeping the desired level of quality for the reconstructed images) and the detection algorithm, with the latter additionally augmented with an automatic selection procedure for some parameters, such as the mean shift bandwidth (for performance optimization). Additionally, automatic threshold estimation using the image size and UAV altitude could be used for adaptive configuration of the detection algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] UK Ministry of Defense, "Military search and rescue quarterly statistics: 2015 quarter 1," Statistical Report, 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425819/SAR_Quarter1_2015_Report.pdf.

[2] T. W. Heggie and M. E. Amundson, "Dead men walking: search and rescue in US National Parks," Wilderness and Environmental Medicine, vol. 20, no. 3, pp. 244–249, 2009.

[3] M. Superina and K. Pogacic, "Frequency of the Croatian mountain rescue service involvement in searching for missing persons," Police and Security, vol. 16, no. 3-4, pp. 235–256, 2008 (Croatian).

[4] J. Sokalski, T. P. Breckon, and I. Cowling, "Automatic salient object detection in UAV imagery," in Proceedings of the 25th International Conference on Unmanned Air Vehicle Systems, pp. 11.1–11.12, April 2010.

[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138–145, 2010.

[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142–147, Canterbury, UK, September 2010.

[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in International Society for Optical Engineering, Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, May 2006.

[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223–240, Vienna, Austria, September 2014.

[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739–1743, 2016.

[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.

[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.

[13] M. F. Duarte, M. A. Davenport, D. Takbar et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008.

[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093–2106, 2015.

[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41–43, 2016.

[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497–500, Belgrade, Serbia, November 2014.

[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690–694, Madison, Wis, USA, August 2007.

[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392–408, 2010.

[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1–92, Walter de Gruyter, Berlin, Germany, 2010.

[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707–737, 2013.

[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.

[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366–2369, Istanbul, Turkey, August 2010.

[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81–91, 2011.

[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?" IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.

[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.

[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583–621, 1952.

[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, Minneapolis, Minn, USA, June 2007.


Page 11: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

Mathematical Problems in Engineering 11

SSI re

cons

truc

ted

065

07

075

08

085

09

095

1

004 005 006 007 008 009 01 011 012 013003SSIdegraded

(a) SSI

22242628303234363840

PSN

R rec

onstr

ucte

d(d

B)

5 6 7 8 9 104PSNRdegraded (dB)

(b) PSNR

times10minus3

005

115

225

335

4

MSE

reco

nstr

ucte

d

015 02 025 03 035 04 04501MSEdegraded

(c) MSE

2200 2400 2600 2800 3000 3200 3400 3600 3800 4000200050

100

150

200

250

300

350

400

985747 2no

rmre

cons

truc

ted

9857472 normdegraded

(d) ℓ2 norm

Figure 5 Dependency of image quality metrics after reconstruction on image quality metrics before reconstruction across all used testimages Presented are means for seven degradation levels in ascending order (from left to right)

TPFN

0

20

40

60

80

100

Perc

enta

ge

30 40 50 60 70 8020Degradation level ()

Figure 6 Relation between true positives (TPs) and false negatives(FNs) for particular degradation level over all images

at desired level Since pixel degradation is random oneshould also expect certain level of deviation from presentedmeans

33 Object Detection As stated before due to intendedapplication FNs are more expensive than FPs and it thusmakes sense to first explore number of total FNs in all imagesFigure 6 presents number of FNs and TPs in percentageof total detections in the original image (since they always

sum up to the same value for particular image) Pleasenote that reasons behind choosing detections in the original(nondegraded) images as the ground truth for calculations ofFNs and TPs are explained in more detail in Section 331

Figure 6 depicts that there is more or less a constantdownward trend in TP rate that is FN rates increase asdegradation level increases Exception is 30 levelThis dip inTP rate at 30might be considered a fluke but it appeared inanother unrelated analysis [9] (on completely different imageset with same CS and detection algorithms) Since currentlyreason for this is unclear it should be investigated in thefuture As expected FN percentage was the lowest for 20case (2338) and the highest for 80 case (4674) It hasvalue of about 35 for other cases (except 50 case) Thismight be considered a pretty large cost in search and rescuescenario but while making this type of conclusion it shouldbe kept in mind that comparisons are made in respect toalgorithms performance on original image and not groundtruth (which is hard to establish in this type of experimentwhere complete control of environment is not possible)

Some additional insight will be gained in Section 331where we conducted miniuser study on original image todetect how well humans perform on ideal (and not recon-structed) image For completeness of presentation Figure 7showing comparison of numbers of TPs and FPs for alldegradation levels is included

Additional insight in algorithmrsquos performance can beobtained by observing recall and precision values in Figure 8

12 Mathematical Problems in Engineering

FPFN

30 40 50 60 70 8020Degradation level ()

0

10

20

30

40

50

Num

ber o

f occ

urre

nces

Figure 7 Relation of number of occurrences of false positives (FPs)and false negatives (FNs) for particular degradation level over allimages

Degradation level ()20 30 40 50 60 70 80

01020304050607080

Reca

ll (

)

(a) Recall

01020304050607080

Prec

ision

()

30 40 50 60 70 8020Degradation level ()

(b) Precision

Figure 8 Recall and precision as detection performancemetrics forall degradation levels across all images

as defined by (26) The highest recall value (7662) isachieved in the case of 20 (missing samples or degradationlevel) followed closely by 40 case (7532) As expected(based on Figure 5) there is a significant dip in steadydownward trend for 30 case At the same time precisionis fairly constant (around 72) over the whole degradationrange with two peaks for the case of 20 of missing samples(peak value 7763) and for 60 of missing samples (peakvalue 8033) No statistical significance (as detected byKruskal-Wallis test) was detected for recall and precision inwhich degradation level was considered independent variable(across all images)

FPFN

05

1015202530354045

Num

ber o

f occ

urre

nces

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

Figure 9 Number of occurrences of false negatives (FNs) and falsepositives (FPs) for all images across all degradation levels

If individual images and their respective FP and FNrates are examined across all degradation levels Figure 9 isobtained Some interesting observations can be made fromthe figure First it can be seen that images 14 and 16have unusually large number of FPs and FNs This shouldbe viewed in light that these two images belong to thirddataset This dataset was taken during real search and rescueoperation during which UAV operator did not position UAVat the desiredrequired height (of 50m) and also strong windgusts were present Also image 1 has unusually large numberof FNs If images 1 and 14 are removed from the calculation(since we care more about FNs and not FPs as explainedbefore) recall and precision values increase up to 12 and5 respectively Increase in recall is the largest for 40 case(12) while it is the lowest for 50 case (5) while increasein precision is the largest for 80 case (5) and the smallestfor 50 case (25) Second observation that can be madefrom Figure 9 is that there are number of cases (2 4 7 10 11and 12) where algorithm detection performs really well withcumulative number of occurrences of FPs and FNs around 5In case of image 4 it performed flawlessly (in image therewas one target that was correctly detected in all degradationlevels) without any FP or FN Note that no FNs were presentin images 2 and 7

331 User Study While we maintain that for evaluationof compressive sensing image reconstructionrsquos performancecomparison of detection rates in the original nondegradedimage (as ground truth) and reconstructed images are thebest choice baseline performance of detection algorithmis presented here for completeness However determiningbaseline performance (absolute ground truth) proved to benontrivial since environmental images from the wild (wheresearch and rescue usually takes place) is hard to controland usually there are unintended objects in the frame (eganimals or garbage) Thus we opted for 10-subject mini-study in which the decision whether there is an object inthe image was made by majority vote that is if 5 or morepeople detected something in certain place of the image thenit would be considered an object Subjects were from faculty

Mathematical Problems in Engineering 13

and student population and did not see the images before norhave ever participated in search and rescue via image analysisSubjects were instructed to find people cars animals clothbags or similar things in the image combining speed andaccuracy (ie not emphasizing either) They were seated infront of 236-inch LEDmonitor (Philips 247E) on which theyinspected images and were allowed to zoom in and out of theimage as they felt necessary Images were randomly presentedto subject to avoid undesired (learningfatigue) effects

On average it took a subject 6383 s (10 minutes and 383seconds) to go over all images In order to demonstrate inter-subject variability and how demanding the task was we ana-lyzed precision and recall for results from test subject studyFor example if 6 out of 10 subjects marked a part of an imageas an object this meant that there were 4 FNs (ie 4 subjectsdid not detect that object)On the other hand if 4 test subjectsmarked an area in an image (and since it did not pass thresh-old in majority voting process) it would be counted as 4 FPsAnalysis conducted in such manner yielded recall of 8222and precision of 9363Here again two images (15 and 16)accounted for more than 50 of FNs It should be noted thatthese results cannot be directly compared to the proposedalgorithm since they rely on significant human intervention

4 Conclusions

In the paper gradient based compressive sensing is presentedand applied to images acquired from UAV during search andrescue operationsdrills in order to reduce networktrans-mission burden Quality of CS reconstruction is analyzed aswell as its impact on object detection algorithms in use withsearch and rescue All introduced quality metrics showedsignificant improvement with varying ratios depending ondegradation level as well as type of terrainenvironmentdepicted in particular image Dependency of reconstructionquality on terrain type is interesting and opens up possibilityof inclusion of terrain type detection algorithms and sincethen reconstruction quality could be inferred in advanceand appropriate degradation level (ie in smart sensors)selected Dependency on quality of degraded image is alsodemonstrated Reconstructed images showed good perfor-mance (with varying recall and precision parameter values)within object detection algorithm although slightly higherfalse negative rate (whose cost is high in search applications)is present However there were few images in the dataset onwhich algorithm performed either flawlessly with no falsenegatives or with few false positives whose cost is not bigin current application setup with human operator checkingraised alarms Of interest is slight peak in performanceof 30 degradation level (compared to 40 and generaldownward trend)mdashthis peak was detected in earlier studyon completely different image set making this find by chanceunlikely Currently no explanation for the phenomenon canbe provided and it warrants future research Nevertheless webelieve that obtained results are promising (especially in lightof results of miniuser study) and require further researchespecially on the detection side For example algorithm couldbe augmented with terrain recognition algorithm whichcould give cues about terrain type to both the reconstruction

algorithm (adjusting degradation level while keeping desiredlevel of quality for reconstructed images) and detectionalgorithm augmented with automatic selection procedure forsome parameters likemean shift bandwidth (for performanceoptimization) Additionally automatic threshold estimationusing image size and UAV altitude could be used for adaptiveconfiguration of detection algorithm

Competing Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] UK Ministry of Defense ldquoMilitary search and rescue quarterlystatistics 2015 quarter 1rdquo Statistical Report 2015 httpswwwgovukgovernmentuploadssystemuploadsattachment datafile425819SAR Quarter1 2015 Reportpdf

[2] T W Heggie and M E Amundson ldquoDead men walkingsearch and rescue in US National Parksrdquo Wilderness andEnvironmental Medicine vol 20 no 3 pp 244ndash249 2009

[3] M Superina and K Pogacic ldquoFrequency of the Croatianmountain rescue service involvement in searching for missingpersonsrdquo Police and Security vol 16 no 3-4 pp 235ndash256 2008(Croatian)

[4] J Sokalski T P Breckon and I Cowling ldquoAutomatic salientobject detection in UAV imageryrdquo in Proceedings of the 25thInternational Conference on Unmanned Air Vehicle Systems pp111ndash1112 April 2010

[5] H Turic H Dujmic and V Papic ldquoTwo-stage segmentation ofaerial images for search and rescuerdquo InformationTechnology andControl vol 39 no 2 pp 138ndash145 2010

[6] S Waharte and N Trigoni ldquoSupporting search and rescueoperations with UAVsrdquo in Proceedings of the InternationalConference on Emerging Security Technologies (EST rsquo10) pp 142ndash147 Canterbury UK September 2010

[7] C Williams and R R Murphy ldquoKnowledge-based videocompression for search and rescue robots amp multiple sensornetworksrdquo in International Society for Optical EngineeringUnmanned Systems Technology VIII vol 6230 of Proceedings ofSPIE May 2006

[8] G S Martins D Portugal and R P Rocha ldquoOn the usage ofgeneral-purpose compression techniques for the optimizationof inter-robot communicationrdquo in Proceedings of the 11th Inter-national Conference on Informatics in Control Automation andRobotics (ICINCO rsquo14) pp 223ndash240ViennaAustria September2014

[9] J Music T Marasovic V Papic I Orovic and S StankovicldquoPerformance of compressive sensing image reconstruction forsearch and rescuerdquo IEEEGeoscience and Remote Sensing Lettersvol 13 no 11 pp 1739ndash1743 2016

[10] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation Theory vol 52 no 4 pp 1289ndash1306 2006

[11] R Chartrand ldquoExact reconstruction of sparse signals via non-convexminimizationrdquo IEEE Signal Processing Letters vol 14 no10 pp 707ndash710 2007

[12] S Foucart and H Rauhut A Mathematical Introduction toCompressive Sensing Springer 2013

[13] M F Duarte M A Davenport D Takbar et al ldquoSingle-pixelimaging via compressive sampling building simpler smaller

14 Mathematical Problems in Engineering

and less-expensive digital camerasrdquo IEEE Signal ProcessingMagazine vol 25 no 2 pp 83ndash91 2008

[14] L Stankovic ldquoISAR image analysis and recovery with unavail-able or heavily corrupted datardquo IEEE Transactions on Aerospaceand Electronic Systems vol 51 no 3 pp 2093ndash2106 2015

[15] M BrajovicM Dakovic I Orovic and S Stankovic ldquoGradient-based signal reconstruction algorithm in Hermite transformdomainrdquo Electronics Letters vol 52 no 1 pp 41ndash43 2016

[16] I Stankovic I Orovic and S Stankovic ldquoImage reconstructionfrom a reduced set of pixels using a simplified gradient algo-rithmrdquo in Proceedings of the 22nd Telecommunications ForumTelfor (TELFOR rsquo14) pp 497ndash500 Belgrade Serbia November2014

[17] G Reeves and M Gastpar ldquoDifferences between observationand sampling error in sparse signal reconstructionrdquo in Proceed-ings of the IEEESP 14thWorkshop on Statistical Signal Processing(SSP rsquo07) pp 690ndash694 Madison Wis USA August 2007

[18] R E Carrillo K E Barner and T C Aysal ldquoRobust samplingand reconstruction methods for sparse signals in the presenceof impulsive noiserdquo IEEE Journal on Selected Topics in SignalProcessing vol 4 no 2 pp 392ndash408 2010

[19] H Rauhut ldquoCompressive sensing and structured randommatricesrdquo in Theoretical Foundations and Numerical Methodsfor Sparse Recovery M Fornasier Ed vol 9 of Random SeriesComputational and Applied Mathematics pp 1ndash92 Walter deGruyter Berlin Germany 2010

[20] G E Pfander H Rauhut and J A Tropp ldquoThe restricted isom-etry property for time-frequency structured random matricesrdquoProbability Theory and Related Fields vol 156 no 3-4 pp 707ndash737 2013

[21] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[22] ZWang A C Bovik H R Sheikh and E P Simoncelli ldquoImagequality assessment from error visibility to structural similarityrdquoIEEE Transactions on Image Processing vol 13 no 4 pp 600ndash612 2004

[23] A Hore and D Ziou ldquoImage quality metrics PSNR vs SSIMrdquoin Proceedings of the 20th International Conference on PatternRecognition (ICPR rsquo10) pp 2366ndash2369 Istanbul Turkey August2010

[24] R Dosselmann and X D Yang ldquoA comprehensive assessmentof the structural similarity indexrdquo Signal Image and VideoProcessing vol 5 no 1 pp 81ndash91 2011

[25] Z Wang and A C Bovik ldquoMean squared error Love it or leaveitrdquo IEEE Signal Processing Magazine vol 26 no 1 pp 98ndash1172009

[26] R Russell and P Sinha ldquoPerceptually-based comparison ofimage similarity metricsrdquo Tech Rep Massachusetts Institute ofTechnology (MIT)mdashArtificial Intelligence Laboratory 2001

[27] W H Kruskal and W A Wallis ldquoUse of ranks in one-criterion variance analysisrdquo Journal of the American StatisticalAssociation vol 47 no 260 pp 583ndash621 1952

[28] A Angelova L Matthies D Helmick and P Perona ldquoFastterrain classification using variable-length representation forautonomous navigationrdquo in Proceedings of the IEEE ComputerSociety Conference on Computer Vision and Pattern Recognition(CVPR rsquo07) pp 1ndash8 Minneapolis Minn USA June 2007

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 12: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

12 Mathematical Problems in Engineering

FPFN

30 40 50 60 70 8020Degradation level ()

0

10

20

30

40

50

Num

ber o

f occ

urre

nces

Figure 7 Relation of number of occurrences of false positives (FPs)and false negatives (FNs) for particular degradation level over allimages

Degradation level ()20 30 40 50 60 70 80

01020304050607080

Reca

ll (

)

(a) Recall

01020304050607080

Prec

ision

()

30 40 50 60 70 8020Degradation level ()

(b) Precision

Figure 8 Recall and precision as detection performancemetrics forall degradation levels across all images

as defined by (26) The highest recall value (7662) isachieved in the case of 20 (missing samples or degradationlevel) followed closely by 40 case (7532) As expected(based on Figure 5) there is a significant dip in steadydownward trend for 30 case At the same time precisionis fairly constant (around 72) over the whole degradationrange with two peaks for the case of 20 of missing samples(peak value 7763) and for 60 of missing samples (peakvalue 8033) No statistical significance (as detected byKruskal-Wallis test) was detected for recall and precision inwhich degradation level was considered independent variable(across all images)

FPFN

05

1015202530354045

Num

ber o

f occ

urre

nces

2 3 4 5 6 7 8 9 10 11 12 13 14 15 161Image number

Figure 9 Number of occurrences of false negatives (FNs) and falsepositives (FPs) for all images across all degradation levels

If individual images and their respective FP and FNrates are examined across all degradation levels Figure 9 isobtained Some interesting observations can be made fromthe figure First it can be seen that images 14 and 16have unusually large number of FPs and FNs This shouldbe viewed in light that these two images belong to thirddataset This dataset was taken during real search and rescueoperation during which UAV operator did not position UAVat the desiredrequired height (of 50m) and also strong windgusts were present Also image 1 has unusually large numberof FNs If images 1 and 14 are removed from the calculation(since we care more about FNs and not FPs as explainedbefore) recall and precision values increase up to 12 and5 respectively Increase in recall is the largest for 40 case(12) while it is the lowest for 50 case (5) while increasein precision is the largest for 80 case (5) and the smallestfor 50 case (25) Second observation that can be madefrom Figure 9 is that there are number of cases (2 4 7 10 11and 12) where algorithm detection performs really well withcumulative number of occurrences of FPs and FNs around 5In case of image 4 it performed flawlessly (in image therewas one target that was correctly detected in all degradationlevels) without any FP or FN Note that no FNs were presentin images 2 and 7

331 User Study While we maintain that for evaluationof compressive sensing image reconstructionrsquos performancecomparison of detection rates in the original nondegradedimage (as ground truth) and reconstructed images are thebest choice baseline performance of detection algorithmis presented here for completeness However determiningbaseline performance (absolute ground truth) proved to benontrivial since environmental images from the wild (wheresearch and rescue usually takes place) is hard to controland usually there are unintended objects in the frame (eganimals or garbage) Thus we opted for 10-subject mini-study in which the decision whether there is an object inthe image was made by majority vote that is if 5 or morepeople detected something in certain place of the image thenit would be considered an object Subjects were from faculty

Mathematical Problems in Engineering 13

and student population and did not see the images before norhave ever participated in search and rescue via image analysisSubjects were instructed to find people cars animals clothbags or similar things in the image combining speed andaccuracy (ie not emphasizing either) They were seated infront of 236-inch LEDmonitor (Philips 247E) on which theyinspected images and were allowed to zoom in and out of theimage as they felt necessary Images were randomly presentedto subject to avoid undesired (learningfatigue) effects

On average it took a subject 6383 s (10 minutes and 383seconds) to go over all images In order to demonstrate inter-subject variability and how demanding the task was we ana-lyzed precision and recall for results from test subject studyFor example if 6 out of 10 subjects marked a part of an imageas an object this meant that there were 4 FNs (ie 4 subjectsdid not detect that object)On the other hand if 4 test subjectsmarked an area in an image (and since it did not pass thresh-old in majority voting process) it would be counted as 4 FPsAnalysis conducted in such manner yielded recall of 8222and precision of 9363Here again two images (15 and 16)accounted for more than 50 of FNs It should be noted thatthese results cannot be directly compared to the proposedalgorithm since they rely on significant human intervention

4 Conclusions

In the paper gradient based compressive sensing is presentedand applied to images acquired from UAV during search andrescue operationsdrills in order to reduce networktrans-mission burden Quality of CS reconstruction is analyzed aswell as its impact on object detection algorithms in use withsearch and rescue All introduced quality metrics showedsignificant improvement with varying ratios depending ondegradation level as well as type of terrainenvironmentdepicted in particular image Dependency of reconstructionquality on terrain type is interesting and opens up possibilityof inclusion of terrain type detection algorithms and sincethen reconstruction quality could be inferred in advanceand appropriate degradation level (ie in smart sensors)selected Dependency on quality of degraded image is alsodemonstrated Reconstructed images showed good perfor-mance (with varying recall and precision parameter values)within object detection algorithm although slightly higherfalse negative rate (whose cost is high in search applications)is present However there were few images in the dataset onwhich algorithm performed either flawlessly with no falsenegatives or with few false positives whose cost is not bigin current application setup with human operator checkingraised alarms Of interest is slight peak in performanceof 30 degradation level (compared to 40 and generaldownward trend)mdashthis peak was detected in earlier studyon completely different image set making this find by chanceunlikely Currently no explanation for the phenomenon canbe provided and it warrants future research Nevertheless webelieve that obtained results are promising (especially in lightof results of miniuser study) and require further researchespecially on the detection side For example algorithm couldbe augmented with terrain recognition algorithm whichcould give cues about terrain type to both the reconstruction

algorithm (adjusting degradation level while keeping desiredlevel of quality for reconstructed images) and detectionalgorithm augmented with automatic selection procedure forsome parameters likemean shift bandwidth (for performanceoptimization) Additionally automatic threshold estimationusing image size and UAV altitude could be used for adaptiveconfiguration of detection algorithm

Competing Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] UK Ministry of Defense ldquoMilitary search and rescue quarterlystatistics 2015 quarter 1rdquo Statistical Report 2015 httpswwwgovukgovernmentuploadssystemuploadsattachment datafile425819SAR Quarter1 2015 Reportpdf

[2] T W Heggie and M E Amundson ldquoDead men walkingsearch and rescue in US National Parksrdquo Wilderness andEnvironmental Medicine vol 20 no 3 pp 244ndash249 2009

[3] M Superina and K Pogacic ldquoFrequency of the Croatianmountain rescue service involvement in searching for missingpersonsrdquo Police and Security vol 16 no 3-4 pp 235ndash256 2008(Croatian)

[4] J Sokalski T P Breckon and I Cowling ldquoAutomatic salientobject detection in UAV imageryrdquo in Proceedings of the 25thInternational Conference on Unmanned Air Vehicle Systems pp111ndash1112 April 2010

[5] H Turic H Dujmic and V Papic ldquoTwo-stage segmentation ofaerial images for search and rescuerdquo InformationTechnology andControl vol 39 no 2 pp 138ndash145 2010

[6] S Waharte and N Trigoni ldquoSupporting search and rescueoperations with UAVsrdquo in Proceedings of the InternationalConference on Emerging Security Technologies (EST rsquo10) pp 142ndash147 Canterbury UK September 2010

[7] C Williams and R R Murphy ldquoKnowledge-based videocompression for search and rescue robots amp multiple sensornetworksrdquo in International Society for Optical EngineeringUnmanned Systems Technology VIII vol 6230 of Proceedings ofSPIE May 2006

[8] G S Martins D Portugal and R P Rocha ldquoOn the usage ofgeneral-purpose compression techniques for the optimizationof inter-robot communicationrdquo in Proceedings of the 11th Inter-national Conference on Informatics in Control Automation andRobotics (ICINCO rsquo14) pp 223ndash240ViennaAustria September2014

[9] J Music T Marasovic V Papic I Orovic and S StankovicldquoPerformance of compressive sensing image reconstruction forsearch and rescuerdquo IEEEGeoscience and Remote Sensing Lettersvol 13 no 11 pp 1739ndash1743 2016

[10] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation Theory vol 52 no 4 pp 1289ndash1306 2006

[11] R Chartrand ldquoExact reconstruction of sparse signals via non-convexminimizationrdquo IEEE Signal Processing Letters vol 14 no10 pp 707ndash710 2007

[12] S Foucart and H Rauhut A Mathematical Introduction toCompressive Sensing Springer 2013

[13] M F Duarte M A Davenport D Takbar et al ldquoSingle-pixelimaging via compressive sampling building simpler smaller

14 Mathematical Problems in Engineering

and less-expensive digital camerasrdquo IEEE Signal ProcessingMagazine vol 25 no 2 pp 83ndash91 2008

[14] L Stankovic ldquoISAR image analysis and recovery with unavail-able or heavily corrupted datardquo IEEE Transactions on Aerospaceand Electronic Systems vol 51 no 3 pp 2093ndash2106 2015

[15] M BrajovicM Dakovic I Orovic and S Stankovic ldquoGradient-based signal reconstruction algorithm in Hermite transformdomainrdquo Electronics Letters vol 52 no 1 pp 41ndash43 2016

[16] I Stankovic I Orovic and S Stankovic ldquoImage reconstructionfrom a reduced set of pixels using a simplified gradient algo-rithmrdquo in Proceedings of the 22nd Telecommunications ForumTelfor (TELFOR rsquo14) pp 497ndash500 Belgrade Serbia November2014

[17] G Reeves and M Gastpar ldquoDifferences between observationand sampling error in sparse signal reconstructionrdquo in Proceed-ings of the IEEESP 14thWorkshop on Statistical Signal Processing(SSP rsquo07) pp 690ndash694 Madison Wis USA August 2007

[18] R E Carrillo K E Barner and T C Aysal ldquoRobust samplingand reconstruction methods for sparse signals in the presenceof impulsive noiserdquo IEEE Journal on Selected Topics in SignalProcessing vol 4 no 2 pp 392ndash408 2010

[19] H Rauhut ldquoCompressive sensing and structured randommatricesrdquo in Theoretical Foundations and Numerical Methodsfor Sparse Recovery M Fornasier Ed vol 9 of Random SeriesComputational and Applied Mathematics pp 1ndash92 Walter deGruyter Berlin Germany 2010

[20] G E Pfander H Rauhut and J A Tropp ldquoThe restricted isom-etry property for time-frequency structured random matricesrdquoProbability Theory and Related Fields vol 156 no 3-4 pp 707ndash737 2013

[21] D Comaniciu and P Meer ldquoMean shift a robust approachtoward feature space analysisrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 24 no 5 pp 603ndash6192002

[22] ZWang A C Bovik H R Sheikh and E P Simoncelli ldquoImagequality assessment from error visibility to structural similarityrdquoIEEE Transactions on Image Processing vol 13 no 4 pp 600ndash612 2004

[23] A Hore and D Ziou ldquoImage quality metrics PSNR vs SSIMrdquoin Proceedings of the 20th International Conference on PatternRecognition (ICPR rsquo10) pp 2366ndash2369 Istanbul Turkey August2010

[24] R Dosselmann and X D Yang ldquoA comprehensive assessmentof the structural similarity indexrdquo Signal Image and VideoProcessing vol 5 no 1 pp 81ndash91 2011

[25] Z Wang and A C Bovik ldquoMean squared error Love it or leaveitrdquo IEEE Signal Processing Magazine vol 26 no 1 pp 98ndash1172009

[26] R Russell and P Sinha ldquoPerceptually-based comparison ofimage similarity metricsrdquo Tech Rep Massachusetts Institute ofTechnology (MIT)mdashArtificial Intelligence Laboratory 2001

[27] W H Kruskal and W A Wallis ldquoUse of ranks in one-criterion variance analysisrdquo Journal of the American StatisticalAssociation vol 47 no 260 pp 583ndash621 1952

[28] A Angelova L Matthies D Helmick and P Perona ldquoFastterrain classification using variable-length representation forautonomous navigationrdquo in Proceedings of the IEEE ComputerSociety Conference on Computer Vision and Pattern Recognition(CVPR rsquo07) pp 1ndash8 Minneapolis Minn USA June 2007

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 13: Research Article Gradient Compressive Sensing for …downloads.hindawi.com/journals/mpe/2016/6827414.pdfResearch Article Gradient Compressive Sensing for Image Data Reduction in UAV

Mathematical Problems in Engineering 13

and student population and did not see the images before norhave ever participated in search and rescue via image analysisSubjects were instructed to find people cars animals clothbags or similar things in the image combining speed andaccuracy (ie not emphasizing either) They were seated infront of 236-inch LEDmonitor (Philips 247E) on which theyinspected images and were allowed to zoom in and out of theimage as they felt necessary Images were randomly presentedto subject to avoid undesired (learningfatigue) effects

On average it took a subject 6383 s (10 minutes and 383seconds) to go over all images In order to demonstrate inter-subject variability and how demanding the task was we ana-lyzed precision and recall for results from test subject studyFor example if 6 out of 10 subjects marked a part of an imageas an object this meant that there were 4 FNs (ie 4 subjectsdid not detect that object)On the other hand if 4 test subjectsmarked an area in an image (and since it did not pass thresh-old in majority voting process) it would be counted as 4 FPsAnalysis conducted in such manner yielded recall of 8222and precision of 9363Here again two images (15 and 16)accounted for more than 50 of FNs It should be noted thatthese results cannot be directly compared to the proposedalgorithm since they rely on significant human intervention

4 Conclusions

In the paper gradient based compressive sensing is presentedand applied to images acquired from UAV during search andrescue operationsdrills in order to reduce networktrans-mission burden Quality of CS reconstruction is analyzed aswell as its impact on object detection algorithms in use withsearch and rescue All introduced quality metrics showedsignificant improvement with varying ratios depending ondegradation level as well as type of terrainenvironmentdepicted in particular image Dependency of reconstructionquality on terrain type is interesting and opens up possibilityof inclusion of terrain type detection algorithms and sincethen reconstruction quality could be inferred in advanceand appropriate degradation level (ie in smart sensors)selected Dependency on quality of degraded image is alsodemonstrated Reconstructed images showed good perfor-mance (with varying recall and precision parameter values)within object detection algorithm although slightly higherfalse negative rate (whose cost is high in search applications)is present However there were few images in the dataset onwhich algorithm performed either flawlessly with no falsenegatives or with few false positives whose cost is not bigin current application setup with human operator checkingraised alarms Of interest is slight peak in performanceof 30 degradation level (compared to 40 and generaldownward trend)mdashthis peak was detected in earlier studyon completely different image set making this find by chanceunlikely Currently no explanation for the phenomenon canbe provided and it warrants future research Nevertheless webelieve that obtained results are promising (especially in lightof results of miniuser study) and require further researchespecially on the detection side For example algorithm couldbe augmented with terrain recognition algorithm whichcould give cues about terrain type to both the reconstruction

algorithm (adjusting degradation level while keeping desiredlevel of quality for reconstructed images) and detectionalgorithm augmented with automatic selection procedure forsome parameters likemean shift bandwidth (for performanceoptimization) Additionally automatic threshold estimationusing image size and UAV altitude could be used for adaptiveconfiguration of detection algorithm

Competing Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] UK Ministry of Defense ldquoMilitary search and rescue quarterlystatistics 2015 quarter 1rdquo Statistical Report 2015 httpswwwgovukgovernmentuploadssystemuploadsattachment datafile425819SAR Quarter1 2015 Reportpdf

[2] T W Heggie and M E Amundson ldquoDead men walkingsearch and rescue in US National Parksrdquo Wilderness andEnvironmental Medicine vol 20 no 3 pp 244ndash249 2009

[3] M Superina and K Pogacic ldquoFrequency of the Croatianmountain rescue service involvement in searching for missingpersonsrdquo Police and Security vol 16 no 3-4 pp 235ndash256 2008(Croatian)

[4] J Sokalski T P Breckon and I Cowling ldquoAutomatic salientobject detection in UAV imageryrdquo in Proceedings of the 25thInternational Conference on Unmanned Air Vehicle Systems pp111ndash1112 April 2010

[5] H. Turic, H. Dujmic, and V. Papic, "Two-stage segmentation of aerial images for search and rescue," Information Technology and Control, vol. 39, no. 2, pp. 138–145, 2010.

[6] S. Waharte and N. Trigoni, "Supporting search and rescue operations with UAVs," in Proceedings of the International Conference on Emerging Security Technologies (EST '10), pp. 142–147, Canterbury, UK, September 2010.

[7] C. Williams and R. R. Murphy, "Knowledge-based video compression for search and rescue robots & multiple sensor networks," in Unmanned Systems Technology VIII, vol. 6230 of Proceedings of SPIE, International Society for Optical Engineering, May 2006.

[8] G. S. Martins, D. Portugal, and R. P. Rocha, "On the usage of general-purpose compression techniques for the optimization of inter-robot communication," in Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO '14), pp. 223–240, Vienna, Austria, September 2014.

[9] J. Music, T. Marasovic, V. Papic, I. Orovic, and S. Stankovic, "Performance of compressive sensing image reconstruction for search and rescue," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1739–1743, 2016.

[10] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.

[11] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.

[12] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, 2013.

[13] M. F. Duarte, M. A. Davenport, D. Takbar, et al., "Single-pixel imaging via compressive sampling: building simpler, smaller, and less-expensive digital cameras," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008.

[14] L. Stankovic, "ISAR image analysis and recovery with unavailable or heavily corrupted data," IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 3, pp. 2093–2106, 2015.

[15] M. Brajovic, M. Dakovic, I. Orovic, and S. Stankovic, "Gradient-based signal reconstruction algorithm in Hermite transform domain," Electronics Letters, vol. 52, no. 1, pp. 41–43, 2016.

[16] I. Stankovic, I. Orovic, and S. Stankovic, "Image reconstruction from a reduced set of pixels using a simplified gradient algorithm," in Proceedings of the 22nd Telecommunications Forum Telfor (TELFOR '14), pp. 497–500, Belgrade, Serbia, November 2014.

[17] G. Reeves and M. Gastpar, "Differences between observation and sampling error in sparse signal reconstruction," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 690–694, Madison, Wis, USA, August 2007.

[18] R. E. Carrillo, K. E. Barner, and T. C. Aysal, "Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392–408, 2010.

[19] H. Rauhut, "Compressive sensing and structured random matrices," in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9 of Radon Series on Computational and Applied Mathematics, pp. 1–92, Walter de Gruyter, Berlin, Germany, 2010.

[20] G. E. Pfander, H. Rauhut, and J. A. Tropp, "The restricted isometry property for time-frequency structured random matrices," Probability Theory and Related Fields, vol. 156, no. 3-4, pp. 707–737, 2013.

[21] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603–619, 2002.

[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

[23] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2366–2369, Istanbul, Turkey, August 2010.

[24] R. Dosselmann and X. D. Yang, "A comprehensive assessment of the structural similarity index," Signal, Image and Video Processing, vol. 5, no. 1, pp. 81–91, 2011.

[25] Z. Wang and A. C. Bovik, "Mean squared error: love it or leave it?" IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.

[26] R. Russell and P. Sinha, "Perceptually-based comparison of image similarity metrics," Tech. Rep., Massachusetts Institute of Technology (MIT), Artificial Intelligence Laboratory, 2001.

[27] W. H. Kruskal and W. A. Wallis, "Use of ranks in one-criterion variance analysis," Journal of the American Statistical Association, vol. 47, no. 260, pp. 583–621, 1952.

[28] A. Angelova, L. Matthies, D. Helmick, and P. Perona, "Fast terrain classification using variable-length representation for autonomous navigation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, Minneapolis, Minn, USA, June 2007.
