
Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 625219, 9 pages, http://dx.doi.org/10.1155/2014/625219

Research Article
Evaluating the Role of Content in Subjective Video Quality Assessment

Milan Mirkovic, Petar Vrgovic, Dubravko Culibrk, Darko Stefanovic, and Andras Anderla

University of Novi Sad, Faculty of Technical Sciences, Trg Dositeja Obradovica 6, 21000 Novi Sad, Serbia

Correspondence should be addressed to Milan Mirkovic; mirkovicmilan@gmail.com

Received 31 August 2013; Accepted 22 October 2013; Published 9 January 2014

Academic Editors: E. A. Marengo, C. Saravanan, and A. Torsello

Copyright © 2014 Milan Mirkovic et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Video quality as perceived by human observers is the ground truth when Video Quality Assessment (VQA) is in question. It is dependent on many variables, one of them being the content of the video that is being evaluated. Despite the evidence that content has an impact on the quality score a sequence receives from human evaluators, currently available VQA databases mostly consist of sequences which fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include video content which might affect the judgment of evaluators when perceived video quality is in question. Our findings indicate that considerable differences exist between the two sets on selected factors, which leads us to conclude that videos featuring a different type of content than the currently employed ones might be more appropriate for VQA.

1. Introduction

Recently, there has been a tremendous increase in the usage of mobile devices to access the Internet and its services. Fierce competition on the end-user electronics market has caused smartphones to become accessible to everyone, and competition among telecommunications providers has made broadband Internet access cheap and ubiquitous. As a consequence, the vast majority of the population in developed countries now owns a cell phone [1], while one quarter of smartphone owners also possess a tablet [2]. These devices are becoming more potent and versatile by the day, as technological advances make enhanced integrated components (wireless and GPS modules, cameras, different sensors, etc.) affordable and more powerful at the same time. The result of this evolution is not only a change in people's habits where day-to-day tasks are concerned, but also a shift in the way some traditional services (such as TV, mail, or telephony) are perceived. This shift is especially noticeable where video content is concerned, as more and more of it gets consumed "on the move" [3]. In fact, some estimates have it that 86% of global consumer traffic by 2016 will be generated by streaming video content [4]. Such a high percentage inevitably raises the issue of the quality of the delivered content, as perceived by the end-users.

In essence, there are two ways to express the quality of a given video content: the first is to leverage objective measurements that are inherent to a given sequence in order to derive the "objective quality" of the video, while the other is to rely on subjective assessment scores obtained from human evaluators. Ideally, one should be able to use the former to predict the latter. Video Quality Assessment (VQA) algorithms attempt to do this automatically (to assess perceptual degradations introduced by signal processing and transmission operations performed on video sequences and calculate the probable score that would be assigned by human observers), but their performance sometimes leaves much to be desired, and there is considerable room for improvement [5]. In fact, although the objective measurements that algorithms rely on have high reliability and internal validity, it is sometimes questioned whether those measurements have useful implications in day-to-day use.

The ground truth data that is used to measure the performance of quality assessment algorithms, which in the domain of VQA takes the form of degraded sequences and Mean Opinion Scores (MOS), is gathered mostly in laboratory tests on human observers (i.e., naive evaluators are asked to grade various aspects of the presented stimuli).



However, there is an array of cognitive biases specific to the human mind, such as the contrast error or the halo effect [6], which influence the evaluation process in a way that can jeopardize planned research designs. These cognitive biases are often content specific, meaning that the content of the presented external stimuli influences the evaluators' judgment when assessing the properties of that or another related external stimulus [7]. Furthermore, certain video contents can induce a high level of emotional activation [8], and that integral (i.e., content-induced) emotion can additionally influence cognition bias, which is dependent on the type of induced emotion [9]. If the visual perception process is observed relative to the observer and the content, then it makes sense to take an approach that treats visual perception as influenced not only by objectively measurable properties of a stimulus but also by subjective elements induced by the stimulus within the observer. This user-centered and content-influenced approach to evaluating the properties of visual stimuli is expected to yield results that can be used when designing future multimedia solutions, as some of the most adaptive delivery mechanisms for streaming multimedia content do not explicitly consider user-perceived quality when making adaptation decisions, in spite of the proven fact that users' perception of adapting multimedia is dynamic in nature [10]. Although some efforts exist when it comes to understanding user experience and leveraging it to remodel video quality evaluation processes [11], as well as where mobile video delivery is concerned [12], little is said about the effects of the presented content on subjective evaluation [13].

The research presented in this paper aims to define the mental response properties of video stimuli that are pertinent to their content and to propose variables that ought to be taken into account when dealing with subjective video quality assessments. In order to do that, a standard video quality database commonly used for VQA has been compared to a custom-constructed one on the basis of various content-specific properties that have been shown to influence subjective evaluation, such as the user's familiarity with the content and the expectancy of the content (cognitive component), induced emotions (affective component), and the user's intent to watch the video again or to recommend it to a friend (conative component). The results indicate a significant difference between the two databases on the observed components and identify the properties of videos that have the highest discriminative power when it comes to subjective video quality perception. In addition, a relationship between video content and the perceived video quality was identified.

The rest of the paper is organized as follows. Section 2 provides an overview of the relevant published work and the current state of the domain. Section 3 describes the methodology employed and the proposed approach in detail. Section 4 presents experimentally obtained results and provides a discussion. Section 5 contains conclusions and an outline of possible future research directions.

2. Related Work

2.1. Sequences Commonly Used in Video Quality Assessment Research. A recent survey by Winkler provides a fairly exhaustive list of publicly available video quality databases [14], which consist of video sequences typically used within the research community for video quality assessment tasks. The advantage of using these databases is that the researcher gets a set of video sequences (clips) that have been annotated with subjective quality ratings (derived either using one of the standardized methods proposed by ITU-T [15] or by applying a novel method), which are sometimes tedious and/or time-consuming to obtain, as they require a number of human assessors to watch and grade the respective sequences. A potential disadvantage is that the content of the sequences in those databases is usually not selected in a manner that takes into account the "human" components, motivation, or induced emotions that might affect the grading outcome; that is, it seems to be selected mostly on the basis of covering different scenarios that are known to bear weight on impairments induced by the transport stream or the coding/decoding process (e.g., complex or detailed backgrounds, high motion or camera movement, etc.). For example, the EPFL-PoliMI Video Quality Assessment Database [16], the VQEG HDTV Database [17], the Poly/NYU Packet Loss (PL) Database [18], and the LIVE Video Quality Database [19, 20] all offer compressed videos corrupted (impaired) by simulated transmission over error-prone networks. The MMSP 3D Video Quality Assessment Database [21], besides being the first public-domain database on 3D video quality, focuses on the effects of different camera distances, while the Poly/NYU Video Quality Databases [18] contain videos with different frame-rates and quantization parameters. The sequences in these databases exhibit different artifacts (such as ringing, blurring, and blocking, just to name a few) that have been shown to affect human perception of video quality, and they have even been characterized and quantified using different objective parameters such as Spatial Information (an indicator of edge energy), Colorfulness (a perceptual indicator of the variety and intensity of colors in the image), and Motion Vectors (an indicator of motion energy for video) [14]. However, the level of perceived degradation seems to be a compound effect of different factors pertinent to the introduced impairments (which can be objectively measured), to properties of the Human Visual System (HVS), and possibly to the emotional state of the assessor, sometimes influenced by the presented content.
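To make the notion of such content descriptors concrete, the sketch below computes a Spatial Information (SI) value in the spirit of ITU-T Rec. P.910 (maximum over frames of the spatial standard deviation of the Sobel-filtered frame). The frame data here is synthetic, and the exact definitions used by the cited databases may differ; this is an illustrative sketch only.

# Sketch: P.910-style Spatial Information (SI) for a clip of grayscale frames.
# `frames` is assumed to be an iterable of 2-D numpy arrays (float values).
import numpy as np
from scipy import ndimage

def spatial_information(frames):
    """SI = max over time of the spatial std. dev. of the Sobel-filtered frame."""
    si_per_frame = []
    for frame in frames:
        gx = ndimage.sobel(frame, axis=0)   # vertical gradient
        gy = ndimage.sobel(frame, axis=1)   # horizontal gradient
        magnitude = np.hypot(gx, gy)        # per-pixel edge energy
        si_per_frame.append(magnitude.std())
    return max(si_per_frame)

# Example with random frames standing in for a real 10 s clip.
rng = np.random.default_rng(0)
demo_frames = [rng.random((432, 576)) * 255 for _ in range(30)]
print("SI:", spatial_information(demo_frames))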

2.2. The Relationship between Content and Video Quality Perception. Even though the majority of video quality models and objective metrics rely heavily on the low-level sensory processing of the HVS, research has shown that low-level models are not sufficient to characterize video quality judgments. For example, a study by McCarthy et al. [22] has shown that assessors are willing to accept objectively poor temporal video quality (e.g., a low frame-rate) if they are thoroughly interested in the content of the sequence. In that experiment, participants who were soccer (football) fans rated low frame-rate (6 frames per second) video as acceptable 80% of the time, even though the motion was not fluid. A similar effect to low frame rate was experienced by assessors in another study [23], where the content of videos presented to them was "frozen" occasionally due to simulated network packet loss. Participants in this experiment (who observed videos on mobile devices) were not so keen to accept this impairment and isolated it as one of the top reasons for giving low scores to videos where it was prominent. This study, however, used LIVE Video Quality Database videos (which were sometimes described as "dull" by the participants).

When information assimilation is the primary aim of a multimedia presentation, content plays an important role in the perception of multimedia video quality, as reported by Gulliver et al. [24]. They discovered that the content of the sequence has a more significant effect on a user's level of information transfer than either the frame rate or the display device type. However, when information transfer is left out of the equation, participants in the same study found frame-rate and device type important for perceived video quality, demonstrating that they were able to distinguish between their subjective enjoyment of a video clip and the level of quality which they perceived the video clip to possess. This implies that there is a relationship between clip content and user-perceived video quality, but the components of the equation leading to a final score need to be evaluated carefully.

Finally, Jumisko et al. [13] show that recognition of the content has an effect on perceived video quality evaluation. They found that video clips which were recognized by the participants were generally given lower ratings compared to unrecognized clips, while interesting contents collected higher ratings compared to ones deemed uninteresting (whether they were recognized or not). It is argued that evaluators with previous knowledge about the genre are more demanding about the acceptability of quality. Not in discordance with similar studies in the field [25], they found that the audio component (when available in the experiment) compensated to a fair degree for the impairments in the visual part of the video, and at the same time impairments in speech were found to be very distracting. This is justified by the fact that, pertinent to that particular experimental design, the audio contains the relevant information and the visual component only supports it (e.g., music videos and news with a narrating voice in the background).

2.3. Factors Influencing Visual Perception. Visual perception may be broadly defined as the mental organization and interpretation of sensory information received via an individual's sight. Human visual perception, while being relatively objective by its means (the sight) and constant in terms of the properties of the object being watched (light, surfaces, and textures), is still often significantly subjective due to various factors residing in the observer's mind. Since perception requires interpretation of any received information, and since interpretation is a complex phenomenon that is unique to every individual, it is evident that visual perception is heavily influenced by internal and external subjective factors that are often hard to grasp. Visual perception is still not fully understood by contemporary science: there are competing theories that concentrate on different aspects of it and offer explanations that are often in discordance with one another [26]. Furthermore, there are a number of physiological and psychological factors that are to be taken into account when assessing visual perception, spanning from fatigue [27] and substance intoxication [28] to the observer's expectations and motivation [29].

Also, some cognitive biases are known to distort our interpretation and assessment of what we see and evaluate [30]: attention bias (focusing only on one part of the information set), choice-supportive bias (taking past choices as templates for new ones), conservatism (underestimating high values while overestimating low ones), contrast effect (enhancement or reduction of one object's perception as a result of comparing it with a recently contrasted object), curse of knowledge (advanced knowledge diminishes the ability to perceive something from a common perspective), halo effect (the tendency to observe and evaluate latter aspects in the light of formerly observed aspects), negativity bias (more attention is given to unpleasant information than to pleasant information), recentness bias (the tendency to weigh recent events more heavily than earlier events), and selective perception (where expectations influence perception).

Finally, bottom-up processing is known to be driven by the stimulus presented [31], and some stimuli are intrinsically conspicuous or salient (i.e., outliers) in a given context. Saliency is independent of the nature of the particular task, operates very rapidly, and, if a stimulus is sufficiently salient, it will pop out of a visual scene. This knowledge has been leveraged to incorporate saliency information into objective image quality metrics [32], as it is reasonable to assume that artifacts appearing in less salient regions are less visible, and thus less annoying, than artifacts affecting the region of interest. While this assumption holds when natural saliency is in question (when observers are not given a particular task), there are indications that the quality assessment task (i.e., when observers specifically look for impairments) modifies the natural saliency of images, so the masking effect of saliency for distortions in background regions should be taken with some reserve when modeling subjective quality experiments [33].
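As an illustration of the general idea (not the specific metric proposed in [32]), the sketch below weights per-pixel squared error by a saliency map, so that distortions in non-salient regions are down-weighted; all inputs are synthetic placeholders.

# Sketch: saliency-weighted distortion score. `reference`, `distorted`, and
# `saliency` are assumed to be same-sized 2-D numpy arrays (saliency in [0, 1]).
import numpy as np

def saliency_weighted_mse(reference, distorted, saliency, eps=1e-8):
    """Weight per-pixel squared error by saliency so artifacts in salient
    regions dominate the score."""
    error = (reference.astype(float) - distorted.astype(float)) ** 2
    return float((saliency * error).sum() / (saliency.sum() + eps))

# Toy example: an artifact confined to a non-salient corner barely registers.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
dist = ref.copy()
dist[:8, :8] += 0.5                                 # artifact in the corner
sal = np.zeros((64, 64)); sal[24:40, 24:40] = 1.0   # salient center region
print(saliency_weighted_mse(ref, dist, sal))        # close to 0: artifact masked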

3. Methodology

3.1. Defining the Test Set. As related research reveals, assessors are by no means indifferent to the content they evaluate. This has potential implications for real-world applications, since subjective perception of the quality of some "neutral" content, selected on the basis of its suitability for introducing particular impairments to the video stream (i.e., standardized VQA databases), might differ from the quality perception of content that the assessor is familiar with or interested in and pays more attention to (i.e., content that the assessor regularly consumes, such as TV shows and movies). To test this hypothesis, we created an alternative set of videos for comparison with the commonly used ones, which ought to activate mental responses that are not activated to a greater extent within commonly used VQA databases and should thus reflect more truthfully the perceived quality of real-life video sequences as experienced by the end-users when impaired. We will refer to this set as the proposed set. As familiar video content was shown to sometimes get a lower quality rating at the same impairment level than unrecognized content, we included a number of videos that should be fairly familiar to assessors, instead of focusing solely on content that was supposed to induce different emotions or other mental responses. First, based on the reports of the national television service [34] as well as on the results of a survey conducted by a media research group [35], we identified several categories of TV program that are most often viewed by average citizens of Serbia (as participants were all Serbian citizens). The identified categories were:

(1) TV series,
(2) movies,
(3) informative shows,
(4) entertainment,
(5) news,
(6) music shows,
(7) sports, and
(8) other.

While most of the categories are self-explanatory, the "Informative shows" category comprises content that covers a specific topic or event (e.g., shows most often encountered on the "Discovery", "Travel", or "Animal Planet" channels). Using television program ratings as a guideline [34], we selected 4 to 6 sequences representative of each category and downloaded them from YouTube (the number varied because some of the categories were more popular with the viewers than others). Second, we leveraged social networks (Facebook, Twitter, and Google+) as well as YouTube itself to gain insight into which videos (in the public domain) are deemed popular within different communities (global when it comes to YouTube, and local when social networks are in question). Videos that were circulating the social networks that the authors are a part of, and which were highly popular within the community at the time the experiment was devised (i.e., had a lot of views), were downloaded, assigned to appropriate categories, and included in the test set. All of the videos obtained for the experiment were part of the public domain, and their usage for scientific purposes was not prohibited. Also, they were downloaded in 480p format (while already compressed by YouTube using H.264). Finally, we opted for the LIVE Video Quality Database as a representative source of video sequences commonly used for VQA tasks, since it is widely accepted and cited within the video quality assessment scientific community, regularly updated, and devised with automatic VQA algorithms in mind [36]. All 10 video sequences available in the database were used in our test set. To make for a fair comparison, the LIVE Video Quality Database sequences were downloaded from YouTube as well, in the 480p format, rather than using the unimpaired database version. These videos will be referred to as the LIVEVQDB or "standard" video set. All of the videos used in the experiment were scaled (and cropped when necessary) to a common resolution of 576 × 432 pixels (4:3 aspect ratio). The audio component of the sequences was left out (i.e., only the visual component remained). All sequences were between 10 s and 11 s long. Thus, the final test set comprised 66 sequences in total: 56 of them were classified into one of the aforementioned categories, and the additional 10 from the LIVE Video Quality Database were assigned to the "LIVEVQDB" category. The distribution of the final set of videos is provided in Table 1.

Table 1: Distribution of videos in the test set.

Category name        Number of videos
TV series            4
Movies               4
Informative shows    8
Entertainment        6
News                 8
Music shows          6
Sports               5
Other                15
LIVEVQDB             10
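The scaling/cropping step described above could be scripted roughly as follows; the file names, the codec choice, and the use of OpenCV are illustrative assumptions rather than the authors' actual pipeline (audio is dropped simply because OpenCV does not carry it).

# Sketch: rescale and center-crop a clip to 576 x 432, video only.
# Input/output paths are hypothetical placeholders.
import cv2

TARGET_W, TARGET_H = 576, 432

def scale_and_crop(in_path="clip_480p.mp4", out_path="clip_576x432.avi"):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"MJPG"),
                             fps, (TARGET_W, TARGET_H))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Scale so the frame covers the target size, then crop the center.
        scale = max(TARGET_W / w, TARGET_H / h)
        resized = cv2.resize(frame, (round(w * scale), round(h * scale)))
        y0 = (resized.shape[0] - TARGET_H) // 2
        x0 = (resized.shape[1] - TARGET_W) // 2
        writer.write(resized[y0:y0 + TARGET_H, x0:x0 + TARGET_W])
    cap.release()
    writer.release()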

3.2. Experimental Design and Setup. As the first research goal was to determine whether the LIVEVQDB video set is similar, in its ability to activate mental responses, to a set of real-life videos observed by media consumers, an experimental research design with two independent sets was adopted. The two sets were compared on a number of relevant variables that identified the level of mental activation. Mental response in this context was regarded as any kind of an individual's conscious mental activity that is directly induced by a specific visual stimulus presented to that individual. From the various classifications of mental activities available, the widest nomenclature was used in this research: the tripartite classification of mental activities (responses) into cognition, affection, and conation. This classification has been in use since the eighteenth century and is still widely used nowadays to address various aspects of human mental states and responses [37].

Cognitive mental activities are mostly observed as the "rational" or "objective" part of the mind. These activities are thought to be responsible for processing information that people get from their sensory systems via attention and memory. In order to measure cognitive response, we propose variables we named "interesting" and "familiar", to measure the potential of a stimulus to attract attention and to assess its novelty or the lack of it. In contrast to the cognitive activities, affective activities (often called emotional reactions) are thought to be very subjective and relative to the person who is receiving certain stimuli. As identified in the western cultures by Ekman et al. [38] and reaffirmed universally by Fridlund et al. [39], there are six basic emotions that people express independently of their culture and other external factors: anger, disgust, fear, happiness, sadness, and surprise. We have adopted this classification when researching emotional response to the video stimuli, thus obtaining six more variables (one for each of the basic emotions). Conative mental activities are the ones that drive subjects towards certain activities, which means that the conative part of the mind uses the cognitive and affective parts to fuel its role. Therefore, it is necessary to observe conative responses in order to fully assess cognitive and affective states. For this component, three variables were designed: "appealing" (denoting the subjective level of attractiveness of a particular video), "watch again" (willingness to watch the video again), and "share video" (willingness to share the video with other people).

A measurement of subjectively perceived video quality was expressed in terms of the Mean Opinion Score (MOS), which is calculated as the average score for a video sequence over all assessors. It was measured on a 5-point quality scale ranging from 1 (bad) to 5 (excellent). Finally, to be able to keep track of content that was recognized by the assessors, a variable "seen before" was introduced. The complete set of variables, and the scales used to measure them, is presented in Table 2.

Table 2: Variables used in the experiment.

Variable name    Scale
MOS              1–5 Likert scale
Seen before      Multiple choice question
Interesting      1–7 Likert scale
Familiar         1–7 Likert scale
Anger            1–7 Likert scale
Fear             1–7 Likert scale
Sadness          1–7 Likert scale
Disgust          1–7 Likert scale
Happiness        1–7 Likert scale
Surprise         1–7 Likert scale
Appealing        1–7 Likert scale
Share            Multiple choice question (recoded to 3-point ordinal scale)
Watch again      Multiple choice question (recoded to 3-point ordinal scale)
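As a concrete illustration of the MOS variable, a minimal sketch of the computation over a hypothetical ratings matrix (assessors by sequences) is shown below; the numbers are invented.

# Sketch: Mean Opinion Score per sequence from a ratings matrix.
# `ratings` is hypothetical: rows = assessors, columns = video sequences,
# values on the 1-5 quality scale described above.
import numpy as np

ratings = np.array([
    [4, 3, 5, 2],
    [5, 3, 4, 2],
    [4, 2, 4, 3],
])  # 3 assessors x 4 sequences

mos = ratings.mean(axis=0)     # average over assessors
print(np.round(mos, 2))        # [4.33 2.67 4.33 2.33]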

The experimental setup in general, and the laboratory setup in particular, closely resembled the one proposed by the International Telecommunication Union (ITU) in its recommendations for VQA tasks [40]. Sequences were presented to assessors on a 20″ monitor (SAMSUNG S20B300B), which was operated at its native resolution of 1600 × 900 pixels. Custom in-house software was developed that enables playback of the sequences against a neutral gray background and voting on each video (i.e., filling in a questionnaire). For each assessor, the video sequence playback order was randomized to avoid any ordering effects. After presenting the assessor with the sequence, the software displays a questionnaire where the assessor is asked to rate selected features either on a 1–7 scale (1 being the lowest/worst and 7 being the highest/best score, clearly labeled in the assessor's language beside the scale) or by selecting the appropriate answer (depending on the question). The only exception to this was the MOS scale, which ranged from 1 to 5. Since the questionnaire comprised 13 questions, and it was thus impossible to replicate the exact ITU-proposed setup for single-stimulus experiments without conducting multiple runs, we placed the question regarding perceived video quality on top (i.e., it was the first question), thereby complying with the standard procedure as much as possible. Questions regarding the cognitive, affective, and conative components followed. Upon completing the questionnaire, the software played the next video. Assessors were allowed to see each sequence only once (i.e., replay was not possible, nor were there any additional runs). The test consisted of one session of about 45 minutes, including training. Before the actual test, oral instructions were given to the subjects, and a training session was run that consisted of three videos and a questionnaire following each one, where the subjects were free to ask any questions regarding the test procedure (including clarification of the questionnaire questions). 20 subjects (8 male and 12 female) participated in the test, their ages ranging from 20 to 64. None of them were familiar with video processing or had previously participated in similar tests. All of the subjects reported normal or corrected vision prior to testing.

4. Results

After the experiment, a number of different analyses were performed. All of them were conducted with two considerations in mind: (1) we operated with a sample size that could possibly be considered small in comparison to established norms for the behavioral sciences, but that is at the same time considered more than adequate for VQA tasks, and (2) we operated dominantly with an ordinal level of measurement (for the dependent variables).

4.1. Mental Responses Relative to the Video Sets. The data obtained were observed relative to the two video sets: (1) the proposed one and (2) the LIVEVQDB set. Separate dimensions were compared first, followed by a comparison of subsequently computed variables for the three mental response categories.

Figure 1 displays average scores over different categories for different variables. The LIVEVQDB set is presented in red (filled-in area), while the videos comprising the proposed set are divided into their respective content categories and presented by lines. Just by looking at the averages, it seems that the videos comprising LIVEVQDB scored considerably lower on almost all measured variables.

[Figure 1: Average scores for different variables (grouped by video categories). Radar axes: Interesting, Familiar, Anger, Disgust, Fear, Happiness, Sadness, Surprise, Appeal, MOS (scale 0.00-6.00); categories: LIVEVQDB, Entertainment, Informative, Movies, Music, News, Other, Series, Sports.]

To test this hypothesis against the alternative one (that the two observed populations are the same), we ran the Mann-Whitney U test, the results of which are shown in Tables 3-5. Indeed, the test demonstrated that these sets have a very low chance of originating from the same population, suggesting that the proposed set of videos stimulates mental reactions that are significantly more intense than the mental reactions stimulated by the LIVEVQDB set.
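The kind of comparison reported in Tables 3-5 can be run with a standard Mann-Whitney U implementation; the sketch below uses SciPy on hypothetical pooled ratings (the array sizes mirror the 56 × 20 and 10 × 20 rating counts, but the values are synthetic).

# Sketch: Mann-Whitney U test comparing the two video sets on one dimension.
# `proposed` and `livevqdb` are hypothetical 1-7 ratings pooled over assessors.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
proposed = rng.integers(1, 8, size=1120)   # 56 videos x 20 assessors
livevqdb = rng.integers(1, 5, size=200)    # 10 videos x 20 assessors

u_stat, p_value = mannwhitneyu(proposed, livevqdb, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")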

As shown in Table 3, the proposed video set achieved higher ranks on both of the cognitive variables observed ("interesting" and "familiar"). A reason that videos comprising this set were found to be more interesting probably lies in their rich and versatile content, which is why they were chosen as good candidates to arouse the attention of assessors in the first place. They were also marked as more familiar to observers than the ones from the LIVEVQDB set, most likely because none of the subjects had participated in VQA experiments before and thus had low odds of having encountered videos from the LIVEVQDB set before.


Table 3: Comparison of cognitive dimensions by the two sets.

             Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Interesting  Proposed    706.91      791737.50      60022.50         .000
             LIVEVQDB    400.61      80122.50
Familiar     Proposed    684.53      766671.00      85089.00         .000
             LIVEVQDB    525.95      105189.00

Table 4: Comparison of affective dimensions by the two sets.

           Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Anger      Proposed    670.14      750557.00      101203.00        .001
           LIVEVQDB    606.52      121303.00
Disgust    Proposed    678.64      760071.50      91688.50         .000
           LIVEVQDB    558.94      111788.50
Fear       Proposed    673.29      754080.50      97679.50         .000
           LIVEVQDB    588.90      117779.50
Happiness  Proposed    678.66      760100.50      91659.50         .000
           LIVEVQDB    558.80      111759.50
Sadness    Proposed    677.47      758766.50      92993.50         .000
           LIVEVQDB    565.47      113093.50
Surprise   Proposed    691.99      775029.00      76731.00         .000
           LIVEVQDB    484.16      96831.00

The proposed set also achieved higher ranks on all of the six basic emotions observed, as shown in Table 4. This finding is probably a result of the original media producer's intent to activate certain affective states, depending on the nature of a video. Conversely, the LIVEVQDB set might have been produced with the intention to primarily capture different content that is known to bear weight in terms of introduced impairments when coding/decoding and video transmission are in focus, thus neglecting possible emotional responses from the assessors.

The proposed video set was also found to activate the conative dimensions of subjects' mental activity to a greater extent than the LIVEVQDB set did. These dimensions are especially important because they cast a new light on both the cognitive and the affective dimensions. If the conative part of one's mind is in an idle state, the chances are that the cognitive and affective activation did not happen, even if the subject states the opposite. When somebody is presented with interesting content, he or she is likely to do something after seeing it: either watch it again or share it with relevant others.

The mean rank differences can be observed in Table 5, but it should be noted that one part of the videos from the proposed set was relatively unpleasant to watch, activating emotions like sadness, fear, anger, or disgust. These dimensions of affective response were found to be significantly negatively correlated or uncorrelated with the conative dimensions in this research, so the ranks on the conative dimensions would be even higher if we were to select only pleasant and appealing videos, thus showing an even greater difference from the standard set.

Table 5: Comparison of conative dimensions by the two sets.

             Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Appealing    Proposed    693.20      776385.50      75374.50         .000
             LIVEVQDB    477.37      95474.50
Watch again  Proposed    684.10      766196.00      85564.00         .000
             LIVEVQDB    528.32      105664.00
Share video  Proposed    682.23      764107.00      87653.00         .000
             LIVEVQDB    538.77      107753.00

After the individual analysis, the dimensions were summed into the corresponding mental components, forming three variables named cognitive component (comprising 2 dimensions), affective activation (comprising 6 dimensions), and conative activation (comprising 3 dimensions). Again, the Mann-Whitney U test demonstrated significantly higher mental activation induced by the proposed set than the mental activation induced by the LIVEVQDB set (Table 6).

Table 6: Mental components relative to the two video sets.

                      Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Cognitive component   Proposed    703.00      787358.00      64402.00         .000
                      LIVEVQDB    422.51      84502.00
Affective activation  Proposed    701.98      786217.50      65542.50         .000
                      LIVEVQDB    428.21      85642.50
Conative activation   Proposed    695.29      778728.50      73031.50         .000
                      LIVEVQDB    465.66      93131.50

Finally, the relationship between the mental activation dimensions and perceived video quality (MOS) was observed. Since all of the dependent variables were of ordinal type, Spearman's rho measure of correlation was used, following common guidelines of interpretation for the behavioral sciences [41, 42]. Moderate correlations, significant at the .01 level (2-tailed), were found between video quality assessment and the following dimensions: interesting (.472), happiness (.415), surprise (.306), appealing (.497), watch again (.393), and share video (.395), while the other dimensions mostly correlated with low coefficients.

Also, video quality assessment was correlated with the three calculated mental components, showing moderate correlation with the cognitive component (.412) and conative activation (.496), while low correlation was found with affective activation (.232), again all significant at the .01 level (2-tailed).

Additionally, Spearman's rho correlations were calculated for the three mental components, suggesting a strong correlation between the cognitive and the conative components (.741), while also suggesting a moderate correlation of these two components with the affective component (.351 with cognitive and .462 with conative).
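The rank correlations reported here can be computed as sketched below; the score vectors are synthetic stand-ins for the per-rating data, not the study's measurements.

# Sketch: Spearman's rho between MOS and another ordinal dimension.
# `mos` and `interesting` are hypothetical paired scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
interesting = rng.integers(1, 8, size=200)
mos = np.clip(np.round(interesting * 0.5 + rng.normal(0, 1, 200)), 1, 5)

rho, p_value = spearmanr(mos, interesting)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")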

4.2. Video Quality Assessment. Unknown to the assessors, all videos in the full test set were obtained from the same source and with the same quality settings (H.264, 480p) and were hence of roughly the same visual quality. Even though we could not control for the impairments and objective video quality of the source sequences (apart from the LIVEVQDB database), close visual inspection of the obtained material assured us that nonexperts should hardly be able to tell the difference between the quality of the sequences presented to them (i.e., any differences they observed would stem from sources not pertinent to visual impairments). To further confirm our assumption, a subset of 20 sequences was played to three independent video quality experts. Among these videos, 10 were randomly selected from the proposed set, while the other 10 were LIVEVQDB sequences (all of them obtained from YouTube to account for any codec effects). Playing order was randomized, and the experts were asked to vote only on the quality of the videos (i.e., they did not take the full experiment). Subsequently, an unpaired t-test was run on the MOS obtained, and it did not show statistically significant differences between the samples, thus confirming our initial assumptions.

Since a correlation was established between different variables and MOS, and the LIVEVQDB videos scored consistently lower than the videos from the proposed set, we tried to further investigate and explain these differences.

On average, videos comprising the proposed set scored a MOS value of 3.47, while LIVEVQDB videos scored 2.80. This is a large difference on a 5-point scale, but it only gets larger when individual components shown to affect perceived quality are considered. For example, when interestingness is observed, the best-ranked LIVEVQDB video was placed 39th (out of 66 videos in the full set). A comparison between the top 10 ranked "interesting" videos (as deemed by assessors) and the sequences from the LIVEVQDB set reveals a mean difference of 1.34 points (2.80 MOS for the LIVEVQDB set versus 4.13 MOS for the proposed set), which was further shown by a t-test to be statistically significant (P < 0.0001, with a 95% confidence interval between 1.04 and 1.64). What is even more interesting is that 7 out of 8 categories (for the proposed video set) found their place in the "top 10 interesting videos". The only category that did not make it was the "movies" category.
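A comparison of this kind (unpaired t-test plus a 95% confidence interval on the mean MOS difference) can be sketched as follows; the two score vectors are synthetic stand-ins, not the study's data.

# Sketch: unpaired t-test and a 95% CI for the difference of mean scores.
# `top10_interesting` and `livevqdb` are hypothetical per-rating MOS values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
top10_interesting = rng.normal(4.13, 0.8, size=200)
livevqdb = rng.normal(2.80, 0.8, size=200)

t_stat, p_value = stats.ttest_ind(top10_interesting, livevqdb)

n1, n2 = len(top10_interesting), len(livevqdb)
diff = top10_interesting.mean() - livevqdb.mean()
pooled_var = ((n1 - 1) * top10_interesting.var(ddof=1)
              + (n2 - 1) * livevqdb.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
margin = stats.t.ppf(0.975, n1 + n2 - 2) * se
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, "
      f"95% CI = ({diff - margin:.2f}, {diff + margin:.2f})")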

When the comparison of the top 10 MOS-rated videos pertinent to different emotions versus the LIVEVQDB videos is concerned, the results comply with the previously discovered correlations. Thus, when a positive emotion such as happiness was induced by a sequence, it yielded a high quality score (MOS = 3.96). Videos that caused surprise were also ranked as being of higher quality (MOS = 3.81), while negative emotions caused videos to plummet on the quality scale (sadness (MOS = 2.86), fear (MOS = 3.00), disgust (MOS = 2.86), and anger (MOS = 2.89)). Interestingly, one of the videos from the LIVEVQDB set found its place among the top 10 videos that the assessors were angry with.

5. Conclusions and Future Research

The most important conclusion we drew from the results obtained in the experiment is that the content of a video sequence has a strong impact on activating different cognitive, affective, and conative components within assessors. These components, in turn, have been shown (both by this and previous research) to play an integral role in VQA tasks, whether assessors are aware of it or not. Thus, we provide a number of recommendations that ought to be taken into account when conducting subjective video quality assessment experiments.

First, video content has to be considered very carefully when trying to measure perceived video quality, as failing to do so might introduce a bias which will be reflected in the final MOS obtained. It is thus vital to choose the correct type of content for addressing the research question at hand (at any rate, to choose the type of content that is most likely to be encountered by viewers in real-world conditions). Having this in mind, the "content" variable needs to be monitored whenever possible. Second, since the emotions of disgust, fear, anger, and sadness are found to have a different impact on conative mental activities than the emotions of happiness and surprise, additional attention should be paid if conative activities are to be observed as dependent variables (especially when it comes to commercially funded research).

Finally, we have shown that subjective video quality assessment might not be such a reliable measure of video quality if video content is not controlled for. This has potential consequences both for the subjective video quality models already devised that do not take the content variable into account and for automatic approaches (i.e., algorithms) that try to predict MOS based on existing (possibly biased) results.

Since a relationship between video content and perceived video quality was identified, new research directions open up for further investigation. Some of those include identifying and quantifying the strength of this relationship, investigating to what degree a video can be impaired without assessors noticing it (relative to video content), and determining whether content plays the same role in the subjective perception of video quality when sequences are impaired with different levels and types of artifacts. Also, a study at a larger scale might reveal other factors, pertinent to different populations (gender-wise, culture-wise, and demographics-wise), that impact the subjective perception of video quality and should thus be taken into account when VQA tasks are in question.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research presented in this paper was supported by the FP7 IRSES Project QoSTREAM (Video Quality Driven Multimedia Streaming in Mobile Wireless Networks) and by the Serbian Ministry of Education, Science and Technology, projects nos. III43002 and III44003.

References

[1] A. Lenhart, K. Purcell, A. Smith, and K. Zickuhr, "Social media & mobile internet use among teens and young adults," Tech. Rep., Pew Internet & American Life Project, Washington, DC, USA, 2010, http://web.pewinternet.org/.
[2] ComScore, "Today's US Tablet Owner Revealed," 2012, http://www.comscore.com/Insights.
[3] K. O'Hara, A. S. Mitchell, and A. Vorbau, "Consuming video on mobile devices," in Proceedings of the 25th SIGCHI Conference on Human Factors in Computing Systems, pp. 857–866, ACM, May 2007.
[4] CISCO, "Cisco Visual Networking Index: Forecast and Methodology, 2011–2016," Tech. Rep., 2012, http://www.cisco.com.
[5] K. Seshadrinathan and A. Bovik, "An information theoretic video quality metric based on motion models," in Proceedings of the 3rd International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2007.
[6] R. J. Corsini, The Dictionary of Psychology, Psychology Press, 2002.
[7] H. Hagtvedt and V. M. Patrick, "Art infusion: the influence of visual art on the perception and evaluation of consumer products," Journal of Marketing Research, vol. 45, no. 3, pp. 379–389, 2008.
[8] A. M. Giannini, F. Ferlazzo, R. Sgalla, P. Cordellieri, F. Baralla, and S. Pepe, "The use of videos in road safety training: cognitive and emotional effects," Accident Analysis & Prevention, vol. 52, pp. 111–117, 2013.
[9] I. Blanchette and A. Richards, "The influence of affect on higher level cognition: a review of research on interpretation, judgement, decision making and reasoning," Cognition & Emotion, vol. 24, no. 4, pp. 561–595, 2010.
[10] N. Cranley, P. Perry, and L. Murphy, "User perception of adapting video quality," International Journal of Human Computer Studies, vol. 64, no. 8, pp. 637–647, 2006.
[11] R. R. Pastrana-Vidal, J. C. Gicquel, J. L. Blin, and H. Cherifi, "Predicting subjective video quality from separated spatial and temporal assessment," in Human Vision and Electronic Imaging XI, vol. 6057 of Proceedings of SPIE, pp. 276–286, January 2006.
[12] M. Kapa, L. Happe, and F. Jakab, "Prediction of quality of user experience for video streaming over IP networks," Cyber Journals, pp. 22–35, 2012.
[13] S. H. Jumisko, V. P. Ilvonen, and K. A. Vaananen-Vainio-Mattila, "Effect of TV content in subjective assessment of video quality on mobile devices," in IS&T Electronic Imaging: Multimedia on Mobile Devices, vol. 5684 of Proceedings of SPIE, pp. 243–254, January 2005.
[14] S. Winkler, "Analysis of public image and video databases for quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 616–625, 2012.
[15] ITU-T Recommendation, "Subjective video quality assessment methods for multimedia applications," Tech. Rep., 1999.
[16] F. De Simone, "EPFL-PoliMI video quality assessment database," 2009, http://vqa.como.polimi.it/.
[17] VQEG, "Report on the validation of video quality models for high definition video content," Tech. Rep., June 2010, http://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx.
[18] Y. Wang, "Poly NYU video quality databases," Quality Assessment Database, 2008, http://vision.poly.edu/index.html/index.php?n=HomePage.VideoLab.
[19] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "Study of subjective and objective quality assessment of video," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1427–1441, 2010.
[20] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "A subjective study to evaluate video quality assessment algorithms," in Human Vision and Electronic Imaging XV, Proceedings of SPIE, January 2010.
[21] L. Goldmann, F. De Simone, and T. Ebrahimi, "A comprehensive database and subjective evaluation methodology for quality of experience in stereoscopic video," in Three-Dimensional Image Processing (3DIP) and Applications, vol. 7526 of Proceedings of SPIE, January 2010.
[22] J. D. McCarthy, M. A. Sasse, and D. Miras, "Sharp or smooth? Comparing the effects of quantization vs. frame rate for streamed video," in Proceedings of the Conference on Human Factors in Computing Systems, pp. 535–542, ACM, April 2004.
[23] S. Sladojevic, D. Culibrk, M. Mirkovic, D. Ruiz Coll, and G. Borba, "Logging real packet reception patterns for end-to-end quality of experience assessment in wireless multimedia transmission," in Proceedings of the IEEE International Workshop on Emerging Multimedia Systems and Applications (EMSA '13), San Jose, Calif, USA, July 2013.
[24] S. R. Gulliver, T. Serif, and G. Ghinea, "Pervasive and standalone computing: the perceptual effects of variable multimedia quality," International Journal of Human Computer Studies, vol. 60, no. 5-6, pp. 640–665, 2004.
[25] S. Rihs, "The influence of audio on perceived picture quality and subjective audio-video delay tolerance," in MOSAIC Handbook, pp. 183–187, 1996.
[26] I. E. Gordon, Theories of Visual Perception, Taylor & Francis, 3rd edition, 2005.
[27] S. Hancock and L. McNaughton, "Effects of fatigue on ability to process visual information by experienced orienteers," Perceptual and Motor Skills, vol. 62, no. 2, pp. 491–498, 1986.
[28] C. S. Weinstein and H. J. Shaffer, "Neurocognitive aspects of substance abuse treatment: a psychotherapist's primer," Psychotherapy, vol. 30, no. 2, pp. 317–333, 1993.
[29] E. Balcetis and D. Dunning, "See what you want to see: motivational influences on visual perception," Journal of Personality and Social Psychology, vol. 91, no. 4, pp. 612–625, 2006.
[30] A. Parducci, "Response bias and contextual effects: when biased," Advances in Psychology, vol. 68, pp. 207–219, 1990.
[31] E. A. Styles, Attention, Perception and Memory: An Integrated Introduction, Taylor & Francis Routledge, New York, NY, USA, 2005.
[32] D. Culibrk, M. Mirkovic, V. Zlokolica, M. Pokric, V. Crnojevic, and D. Kukolj, "Salient motion features for video quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 948–958, 2011.
[33] J. Redi, H. Liu, R. Zunino, and I. Heynderickx, "Interactions of visual attention and quality perception," in Human Vision and Electronic Imaging XVI, Proceedings of SPIE, January 2011.
[34] RTS, "Istraživanje (izveštaji o gledanosti)" [Viewership research reports], http://www.rts.rs/page/rts/sr/CIPA/news/171/Istra%C5%BEivanje.html.
[35] Nielsen Television Audience Measurement, Nielsen Audience Measurement Serbia, http://www.agbnielsen.net/whereweare/dynPage.asp?lang=local&country=Serbia&id=354.
[36] LIVE Video Quality Database, http://live.ece.utexas.edu/research/quality/live_video.html.
[37] E. R. Hilgard, "The trilogy of mind: cognition, affection, and conation," Journal of the History of the Behavioral Sciences, vol. 16, no. 2, pp. 107–117, 1980.
[38] P. Ekman, W. V. Friesen, and P. Ellsworth, Emotion in the Human Face: Guidelines for Research and an Integration of Findings, Pergamon Press, New York, NY, USA, 1972.
[39] J. Fridlund, P. Ekman, and H. Oster, "Facial expressions of emotion," in Nonverbal Behavior and Communication, pp. 143–223, 2nd edition, 1987.
[40] ITU-R BT.500-11, "Methodology for the subjective assessment of the quality of television pictures," Tech. Rep., International Telecommunication Union, Geneva, Switzerland, 2002.
[41] J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Routledge, 1988.
[42] D. S. Dunn, Statistics and Data Analysis for the Behavioral Sciences, McGraw-Hill, 2001.


Page 2: Research Article Evaluating the Role of Content in ...downloads.hindawi.com/journals/tswj/2014/625219.pdf · Research Article Evaluating the Role of Content in Subjective Video Quality

2 The Scientific World Journal

However there is an array of cognitive biases specific toa human mind such as contrast error or halo effect [6]which influences the evaluating processes in a way that canjeopardize the planned research designs These cognitivebiases are often content specific meaning that the contentof the presented external stimuli influences the evaluatorsrsquojudgment when assessing that or another related externalstimulus properties [7] Furthermore certain video contentscan induce high level of emotional activation [8] and thatintegral emotion (that is content induced) can additionallyinfluence cognition bias which is dependent on the type ofinduced emotion [9] If visual perception process is observedrelative to the observer and content then it makes sense totake an approach thatwill observe visual perception processesas influenced not only by objectively measurable propertiesof a stimuli but also by subjective elements induced by thestimuli within the observer This user-centered and content-influenced approach in evaluating visual stimuli propertiesis expected to yield results that can be used when designingfuture multimedia solutions as some of the most adaptivedelivery mechanisms for streaming multimedia content donot explicitly consider user-perceived quality when makingadaptation decisions in spite of a proven fact that usersrsquonature of perception of adapting multimedia is dynamic [10]Although some efforts exist when it comes to understandinguser experience and leveraging it to remodel the video qualityevaluation processes [11] as well as when mobile videodelivery is concerned [12] little is said about the effects ofpresented content on the subjective evaluation [13]

The research presented in this paper aims to define themental response properties of video stimuli that are pertinentto its content and to propose variables that ought to be takeninto account when dealing with the subjective video qualityassessments In order to do that a standard video qualitydatabase commonly used for VQA has been compared to acustom-constructed one based on various content-specificproperties that have been shown to influence subjectiveevaluation such as userrsquos familiarity with the content and theexpectancy of the content (cognitive component) inducedemotions (affective component) and userrsquos intent to watchthe video again or to recommend it to a friend (conativecomponent) The results indicate a significant differencebetween the two databases on the observed componentsand identify the properties of videos that have the highestdiscriminative power when it comes to subjective videoquality perception In addition a relationship between videocontent and the perceived video quality was identified

The rest of the paper is organized as follows Section 2provides an overview of the relevant published work andcurrent state in the domain Section 3 describes the method-ology employed and proposed approach in detail Section 4presents experimentally obtained results and provides adiscussion Section 5 contains conclusions and an outline ofpossible future research directions

2. Related Work

2.1. Sequences Commonly Used in Video Quality Assessment Research. A recent survey by Winkler provides a fairly exhaustive list of publicly available video quality databases [14], which consist of video sequences commonly used within the research community for video quality assessment tasks. The advantage of using these databases is that the researcher gets a set of video sequences (clips) already annotated with subjective quality ratings (derived either using one of the standardized methods proposed by ITU-T [15] or by applying a novel method), which are sometimes tedious and/or time-consuming to obtain, as they require a number of human assessors to watch and grade the respective sequences. A potential disadvantage is that the content of the sequences in those databases is usually not selected in a manner that takes into account the "human" components, motivation, or induced emotions that might affect the grading outcome; that is, it seems to be selected mostly on the basis of covering different scenarios that are known to bear weight on impairments introduced by the transport stream or the coding/decoding process (e.g., complex or detailed backgrounds, high motion, camera movement, etc.). For example, the EPFL/PoliMI Video Quality Assessment Database [16], the VQEG HDTV Database [17], the Poly@NYU Packet Loss (PL) Database [18], and the LIVE Video Quality Database [19, 20] all offer compressed videos corrupted (impaired) by simulated transmission over error-prone networks. The MMSP 3D Video Quality Assessment Database [21], besides being the first public-domain database on 3D video quality, focuses on the effects of different camera distances, while the Poly@NYU Video Quality Databases [18] contain videos with different frame rates and quantization parameters. The sequences in these databases exhibit different artifacts (such as ringing, blurring, and blocking, just to name a few) that have been shown to affect human perception of video quality, and they have even been characterized and quantified using different objective parameters such as Spatial Information (an indicator of edge energy), Colorfulness (a perceptual indicator of the variety and intensity of colors in the image), and Motion Vectors (an indicator of motion energy for video) [14]. However, the level of perceived degradation seems to be a compound effect of different factors pertinent to the introduced impairments (which can be objectively measured), the properties of the Human Visual System (HVS), and possibly the emotional state of the assessor, which is sometimes influenced by the presented content.

2.2. The Relationship between Content and Video Quality Perception. Even though the majority of video quality models and objective metrics rely heavily on the low-level sensory processing of the HVS, research has shown that low-level models are not sufficient to characterize video quality judgments. For example, a study by McCarthy et al. [22] has shown that assessors are willing to accept objectively poor temporal video quality (e.g., a low frame rate) if they are thoroughly interested in the content of the sequence. In that experiment, participants who were soccer (football) fans rated low frame-rate (6 frames per second) video as acceptable 80% of the time, even though the motion was not fluid. A similar effect to low frame rate was experienced by assessors in another study [23], where the content of the videos presented to them was occasionally "frozen" due to simulated network packet loss. Participants in this experiment (who observed videos on mobile devices) were not so keen to accept this impairment and isolated it as one of the top reasons for giving low scores to videos where it was prominent. This study, however, used LIVE Video Quality Database videos (which were sometimes described as "dull" by the participants).

When information assimilation is the primary aim of a multimedia presentation, content plays an important role in the perception of multimedia video quality, as reported by Gulliver et al. [24]. They discovered that the content of the sequence has a more significant effect on a user's level of information transfer than either the frame rate or the display device type. However, when information transfer is left out of the equation, participants in the same study found frame rate and device type important for perceived video quality, demonstrating that they were able to distinguish between their subjective enjoyment of a video clip and the level of quality they perceived the clip to possess. This implies that there is a relationship between clip content and user-perceived video quality, but the components of the equation leading to a final score need to be evaluated carefully.

Finally, Jumisko et al. [13] show that recognition of the content has an effect on perceived video quality evaluation. They found that video clips which were recognized by the participants were generally given lower ratings than unrecognized clips, while interesting content collected higher ratings than content deemed uninteresting (whether it was recognized or not). It is argued that evaluators with previous knowledge about the genre are more demanding regarding the acceptability of quality. In line with similar studies in the field [25], they found that the audio component (when available in the experiment) compensated to a fair degree for impairments in the visual part of the video, while at the same time impairments in speech were found to be very distracting. This is explained by the fact that, in that particular experimental design, the audio carried the relevant information and the visual component only supported it (e.g., music videos and news with a narrating voice in the background).

2.3. Factors Influencing Visual Perception. Visual perception may be broadly defined as the mental organization and interpretation of sensory information received via an individual's sight. Human visual perception, while being relatively objective in its means (the sight) and constant in terms of the properties of the object being watched (light, surfaces, and textures), is still often significantly subjective due to various factors residing in the observer's mind. Since perception requires interpretation of any received information, and since interpretation is a complex phenomenon unique to every individual, it is evident that visual perception is heavily influenced by internal and external subjective factors that are often hard to grasp. Visual perception is still not fully understood by contemporary science; there are competing theories that concentrate on different aspects of it and offer explanations that are often in discordance with one another [26]. Furthermore, there are a number of physiological and psychological factors to be taken into account when assessing visual perception, spanning from fatigue [27] and substance intoxication [28] to the observer's expectations and motivation [29].

Also, some cognitive biases are known to distort our interpretation and assessment of what we see and evaluate [30]: attention bias (focusing only on one part of the information set), choice-supportive bias (taking past choices as templates for new ones), conservatism (underestimating high values while overestimating low ones), the contrast effect (enhancement or reduction of the perception of one object as a result of comparison with a recently contrasted object), the curse of knowledge (advanced knowledge diminishes the ability to perceive something from a common perspective), the halo effect (the tendency to observe and evaluate later aspects in the light of formerly observed aspects), negativity bias (more attention is given to unpleasant information than to pleasant information), recency bias (the tendency to weigh recent events more heavily than earlier events), and selective perception (where expectations influence perception).

Finally, bottom-up processing is known to be driven by the presented stimulus [31], and some stimuli are intrinsically conspicuous or salient (i.e., outliers) in a given context. Saliency is independent of the nature of the particular task, operates very rapidly, and, if a stimulus is sufficiently salient, it will pop out of a visual scene. This knowledge has been leveraged to incorporate saliency information into objective image quality metrics [32], as it is reasonable to assume that artifacts appearing in less salient regions are less visible, and thus less annoying, than artifacts affecting the region of interest. While this assumption holds when natural saliency is in question (i.e., when observers are not given a particular task), there are indications that a quality assessment task (i.e., when observers specifically look for impairments) modifies the natural saliency of images, so the masking effect of saliency for distortions in background regions should be treated with some reserve when modeling subjective quality experiments [33].
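As a concrete illustration, a generic formulation of saliency-weighted pooling (not necessarily the exact metric used in [32]) combines a local distortion map with a saliency map:

\[ Q = \frac{\sum_{x} s(x)\, d(x)}{\sum_{x} s(x)}, \]

where \(d(x)\) is the local distortion estimate at pixel or block \(x\) and \(s(x)\) is the corresponding saliency value, so a distortion contributes less to the overall degradation score \(Q\) when it falls in a low-saliency region. Under the task-dependence caveat above, \(s(x)\) itself may change when observers are explicitly grading quality.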

3. Methodology

3.1. Defining the Test Set. As related research reveals, assessors are by no means indifferent to the content they evaluate. This has potential implications for real-world applications, since the subjective perception of quality of some "neutral" content, selected on the basis of its suitability for introducing particular impairments to a video stream (i.e., standardized VQA databases), might differ from the quality perception of content that the assessor is familiar with or interested in and pays more attention to (i.e., content that the assessor regularly consumes, such as TV shows and movies). To test this hypothesis, we created an alternative set of videos for comparison with the commonly used ones, which ought to activate mental responses that are not activated to a greater extent by commonly used VQA databases and should thus more truthfully reflect the perceived quality of real-life video sequences as experienced by end-users when those sequences are impaired. We will refer to this set as the proposed set. As familiar video content was shown to sometimes receive a lower quality rating at the same impairment level than unrecognized content, we included a number of videos that should be fairly familiar to assessors, instead of focusing solely on content that was supposed to induce different emotions or other mental responses. First, based on the reports of the national television service [34], as well as on the results of a survey conducted by a media research group [35], we identified several categories of TV program that are most often viewed by average citizens of Serbia (as the participants were all Serbian citizens). The identified categories were:

(1) TV series,
(2) movies,
(3) informative shows,
(4) entertainment,
(5) news,
(6) music shows,
(7) sports, and
(8) other.

While most of the categories are self-explanatory, the "Informative shows" category comprises content that covers a specific topic or event (e.g., shows most often encountered on the "Discovery", "Travel", or "Animal Planet" channels). Using television program ratings as a guideline [34], we selected 4 to 6 sequences representative of each category and downloaded them from YouTube (the number varied because some of the categories were more popular with viewers than others). Second, we leveraged social networks (Facebook, Twitter, and Google+), as well as YouTube itself, to gain insight into which videos (in the public domain) are deemed popular within different communities (global when it comes to YouTube and local when social networks are in question). Videos that were circulating on social networks that the authors are a part of, and which were highly popular within the community at the time the experiment was devised (i.e., had a lot of views), were downloaded, assigned to the appropriate categories, and included in the test set. All of the videos obtained for the experiment were in the public domain, and their usage for scientific purposes was not prohibited. They were downloaded in 480p format (having already been compressed by YouTube using H.264). Finally, we opted for the LIVE Video Quality Database as a representative source of video sequences commonly used for VQA tasks, since it is widely accepted and cited within the video quality assessment scientific community, regularly updated, and devised with automatic VQA algorithms in mind [36]. All 10 video sequences available in that database were used in our test set. To make for a fair comparison, the LIVE Video Quality Database sequences were downloaded from YouTube as well, in the 480p format, rather than using the unimpaired database versions. These videos will be referred to as the LIVEVQDB or "standard" video set. All of the videos used in the experiment were scaled (and cropped when necessary) to a common resolution of 576 × 432 pixels (4:3 aspect ratio). The audio component of the sequences was left out (i.e., only the visual component remained), and all sequences were between 10 s and 11 s long. Thus, the final test set comprised 66 sequences in total.

Table 1: Distribution of videos in the test set.

Category name        Number of videos
TV series            4
Movies               4
Informative shows    8
Entertainment        6
News                 8
Music shows          6
Sports               5
Other                15
LIVEVQDB             10

Of these, 56 were classified into one of the aforementioned categories, and the additional 10 from the LIVE Video Quality Database were assigned to the "LIVEVQDB" category. The distribution of the final set of videos is provided in Table 1.
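For readers who want to assemble a comparable test set, the snippet below is a minimal sketch of the preprocessing described above, using FFmpeg from Python. The file names, the crop-free scaling, and the 10.5 s duration are illustrative assumptions rather than the authors' exact pipeline.

```python
import subprocess

def preprocess(src: str, dst: str, duration: float = 10.5) -> None:
    """Rescale a downloaded clip to 576x432, drop its audio, and trim its length."""
    cmd = [
        "ffmpeg", "-y",
        "-i", src,                # downloaded 480p source clip
        "-vf", "scale=576:432",   # common test resolution (a crop=... filter could precede this for non-4:3 sources)
        "-an",                    # remove the audio track
        "-t", str(duration),      # keep roughly the first 10-11 seconds
        "-c:v", "libx264",        # re-encode with H.264
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    preprocess("clip_480p.mp4", "clip_test.mp4")  # hypothetical file names
```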

3.2. Experimental Design and Setup. As the first research goal was to determine whether the LIVEVQDB video set is similar, in its ability to activate mental responses, to a set of real-life videos watched by media consumers, an experimental research design with two independent sets was adopted. The two sets were compared on a number of relevant variables that captured the level of mental activation. A mental response in this context was regarded as any kind of conscious mental activity of an individual that is directly induced by specific visual stimuli presented to that individual. From the various classifications of mental activities available, the widest nomenclature was used in this research: the tripartite classification of mental activities (responses) into cognition, affection, and conation. This classification has been in use since the eighteenth century and is still widely used nowadays to address various aspects of human mental states and responses [37].

Cognitive mental activities are mostly observed as the "rational" or "objective" part of the mind. These activities are thought to be responsible for processing the information that people get from their sensory systems, via attention and memory. In order to measure the cognitive response, we propose variables named "interesting" and "familiar", which measure the potential of a stimulus to attract attention and assess its novelty or the lack of it. In contrast to cognitive activities, affective activities (often called emotional reactions) are thought to be very subjective and relative to the person receiving the stimuli. As identified in Western cultures by Ekman et al. [38] and reaffirmed universally by Fridlund et al. [39], there are six basic emotions that people express independently of their culture and other external factors: anger, disgust, fear, happiness, sadness, and surprise. We adopted this classification when researching the emotional response to the video stimuli, thus obtaining six more variables (one for each of the basic emotions). Conative mental activities are the ones that drive subjects towards certain activities, which means that the conative part of the mind uses the cognitive and affective parts to fuel its role. Therefore, it is necessary to observe conative responses in order to fully assess cognitive and affective states.


Table 2: Variables used in the experiment.

Variable name   Scale
MOS             1-5 Likert scale
Seen before     Multiple choice question
Interesting     1-7 Likert scale
Familiar        1-7 Likert scale
Anger           1-7 Likert scale
Fear            1-7 Likert scale
Sadness         1-7 Likert scale
Disgust         1-7 Likert scale
Happiness       1-7 Likert scale
Surprise        1-7 Likert scale
Appealing       1-7 Likert scale
Share           Multiple choice question (recoded to a 3-point ordinal scale)
Watch again     Multiple choice question (recoded to a 3-point ordinal scale)

For this component, three variables were designed: "appealing" (denoting the subjective level of attractiveness of a particular video), "watch again" (willingness to watch the video again), and "share video" (willingness to share the video with other people).

Subjectively perceived video quality was expressed in terms of the Mean Opinion Score (MOS), which is calculated as the average score a video sequence receives over all assessors. It was measured on a 5-point quality scale ranging from 1 (bad) to 5 (excellent). Finally, to be able to keep track of content that was recognized by the assessors, a variable "seen before" was introduced. The complete set of variables and the scales used to measure them is presented in Table 2.
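In symbols (a standard formulation, not an equation given in the paper), the MOS of a sequence is simply the arithmetic mean of the individual scores it receives:

\[ \mathrm{MOS}_{j} = \frac{1}{N}\sum_{i=1}^{N} s_{ij}, \]

where \(s_{ij} \in \{1,\ldots,5\}\) is the quality score assigned to sequence \(j\) by assessor \(i\) and \(N\) is the number of assessors (here \(N = 20\)).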

The experimental setup in general, and the laboratory setup in particular, closely resembled the one proposed by the International Telecommunication Union (ITU) in its recommendations for VQA tasks [40]. Sequences were presented to assessors on a 20-inch monitor (Samsung S20B300B) operated at its native resolution of 1600 × 900 pixels. Custom in-house software was developed that plays back the sequences against a neutral gray background and collects a vote on each video (i.e., a filled-in questionnaire). For each assessor, the playback order of the video sequences was randomized to avoid any ordering effects. After presenting the assessor with a sequence, the software displays a questionnaire in which the assessor is asked to rate selected features either on a 1-7 scale (1 being the lowest/worst and 7 the highest/best score, clearly labeled in the assessor's language beside the scale) or by selecting the appropriate answer (depending on the question). The only exception was the MOS scale, which ranged from 1 to 5. Since the questionnaire comprised 13 questions, and it was thus impossible to replicate the exact ITU-proposed setup for single-stimulus experiments without conducting multiple runs, we placed the question regarding perceived video quality at the top (i.e., it was the first question), thereby complying with the standard procedure as much as possible. Questions regarding the cognitive, affective, and conative components followed.

Upon completion of the questionnaire, the software played the next video. Assessors were allowed to see each sequence only once (i.e., replay was not possible, nor were there any additional runs). The test consisted of one session of about 45 minutes, including training. Before the actual test, oral instructions were given to the subjects and a training session was run, consisting of three videos each followed by a questionnaire, during which the subjects were free to ask any questions regarding the test procedure (including clarification of the questionnaire questions). Twenty subjects (8 male and 12 female) participated in the test, their ages ranging from 20 to 64. None of them were familiar with video processing or had previously participated in similar tests. All of the subjects reported normal or corrected-to-normal vision prior to testing.
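The presentation protocol can be summarized by the short sketch below. This is an illustrative reconstruction rather than the authors' in-house software; the function names and placeholder behavior are hypothetical.

```python
import random

QUESTIONS = [
    "MOS (1-5)",                                     # perceived video quality, asked first
    "Seen before",
    "Interesting", "Familiar",                       # cognitive dimensions
    "Anger", "Fear", "Sadness", "Disgust",
    "Happiness", "Surprise",                         # affective dimensions
    "Appealing", "Watch again", "Share",             # conative dimensions
]

def play(clip):
    # Placeholder for actual video playback against a neutral gray background.
    print(f"playing {clip} ...")

def questionnaire(clip):
    # Placeholder questionnaire; a real implementation would collect the answers.
    return {q: None for q in QUESTIONS}

def run_session(clips, assessor_id):
    order = list(clips)
    random.shuffle(order)              # per-assessor randomized playback order
    results = []
    for clip in order:
        play(clip)                     # each sequence is shown exactly once
        results.append((assessor_id, clip, questionnaire(clip)))
    return results

if __name__ == "__main__":
    print(run_session(["clip01.mp4", "clip02.mp4"], assessor_id=1))
```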

4. Results

After the experiment, a number of different analyses were performed. All of them were conducted with two considerations in mind: (1) we operated with a sample size that might be considered small in comparison to established norms in the behavioral sciences, but which is considered more than adequate for VQA tasks, and (2) we operated predominantly with an ordinal level of measurement (for the dependent variables).

4.1. Mental Responses Relative to the Video Sets. The data obtained were analyzed relative to the two video sets: (1) the proposed one and (2) the LIVEVQDB set. Separate dimensions were compared first, followed by a comparison of subsequently computed variables for the three mental response categories.

Figure 1 displays the average scores over different categories for different variables. The LIVEVQDB set is presented in red (filled-in area), while the videos comprising the proposed set are divided into their respective content categories and presented by lines. Just by looking at the averages, it seems that the videos comprising the LIVEVQDB set scored noticeably lower on almost all measured variables.

To test this hypothesis against the null hypothesis that the two observed populations are the same, we ran the Mann-Whitney U test, the results of which are shown in Tables 3-5. Indeed, the test demonstrated that these sets have a very low chance of originating from the same population, suggesting that the proposed set of videos stimulates mental reactions that are significantly more intense than those stimulated by the LIVEVQDB set.
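For readers who wish to reproduce this kind of comparison, the sketch below shows how a Mann-Whitney U test could be run on per-rating scores for one dimension using SciPy. The arrays are illustrative placeholders, not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-rating "interesting" scores (1-7) for the two video sets.
proposed_scores = np.random.randint(1, 8, size=1120)   # 56 videos x 20 assessors
livevqdb_scores = np.random.randint(1, 8, size=200)    # 10 videos x 20 assessors

# Two-sided test of whether the two samples come from the same population.
u_stat, p_value = mannwhitneyu(proposed_scores, livevqdb_scores,
                               alternative="two-sided")
print(f"U = {u_stat:.2f}, p = {p_value:.4f}")
```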

As shown in Table 3, the proposed video set achieved higher ranks on both of the cognitive variables observed ("interesting" and "familiar"). The reason that the videos comprising this set were found to be more interesting probably lies in their rich and versatile content, which is why they were chosen as good candidates to arouse the attention of assessors in the first place. They were also marked as more familiar to observers than the ones from the LIVEVQDB set, most likely because none of the subjects had participated in VQA experiments before and thus had low odds of having encountered videos from the LIVEVQDB set.


Table 3: Comparison of cognitive dimensions by the two sets.

Dimension     Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. sig. (2-tailed)
Interesting   Proposed    706.91      791737.50      60022.50         .000
              LIVEVQDB    400.61      80122.50
Familiar      Proposed    684.53      766671.00      85089.00         .000
              LIVEVQDB    525.95      105189.00

Table 4: Comparison of affective dimensions by the two sets.

Dimension   Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. sig. (2-tailed)
Anger       Proposed    670.14      750557.00      101203.00        .001
            LIVEVQDB    606.52      121303.00
Disgust     Proposed    678.64      760071.50      91688.50         .000
            LIVEVQDB    558.94      111788.50
Fear        Proposed    673.29      754080.50      97679.50         .000
            LIVEVQDB    588.90      117779.50
Happiness   Proposed    678.66      760100.50      91659.50         .000
            LIVEVQDB    558.80      111759.50
Sadness     Proposed    677.47      758766.50      92993.50         .000
            LIVEVQDB    565.47      113093.50
Surprise    Proposed    691.99      775029.00      76731.00         .000
            LIVEVQDB    484.16      96831.00

The proposed set also achieved higher ranks on all six basic emotions observed, as shown in Table 4. This finding is probably a result of the original media producers' intent to activate certain affective states, depending on the nature of a video. Conversely, the LIVEVQDB set might have been produced primarily with the intention of capturing content that is known to bear weight in terms of impairments introduced when coding/decoding and video transmission are in focus, thus neglecting possible emotional responses from the assessors.

The proposed video set was also found to activate the conative dimensions of subjects' mental activity to a greater extent than the LIVEVQDB set did. These dimensions are especially important because they cast new light on both the cognitive and the affective dimensions. If the conative part of one's mind is in an idle state, the chances are that cognitive and affective activation did not happen, even if the subject states the opposite. When somebody is presented with interesting content, he or she is likely to do something after seeing it: either watch it again or share it with relevant others.

The difference in mean ranks can be observed in Table 5, but it should be noted that a portion of the videos from the proposed set was relatively unpleasant to watch, activating emotions such as sadness, fear, anger, or disgust. These dimensions of the affective response were found in this research to be either significantly negatively correlated or uncorrelated with the conative dimensions, so the ranks on the conative dimensions would be even higher if we were to select only pleasant and appealing videos, showing an even greater difference from the standard set.

After the individual analysis, the dimensions were summed into the corresponding mental components, forming three variables: the cognitive component (comprising 2 dimensions), affective activation (comprising 6 dimensions), and conative activation (comprising 3 dimensions). Again, the Mann-Whitney U test demonstrated significantly higher mental activation induced by the proposed set than by the LIVEVQDB set (Table 6).

Finally, the relationship between the mental activation dimensions and perceived video quality (MOS) was examined. Since all of the dependent variables were of ordinal type, Spearman's rho measure of correlation was used, following common guidelines of interpretation for the behavioral sciences [41, 42]. Moderate correlations, significant at the .01 level (2-tailed), were found between the video quality assessment and the following dimensions: interesting (.472), happiness (.415), surprise (.306), appealing (.497), watch again (.393), and share video (.395), while the other dimensions mostly correlated with low coefficients.

Video quality assessment was also correlated with the three calculated mental components, showing moderate correlations with the cognitive component (.412) and conative activation (.496), while a low correlation was found with affective activation (.232), again all significant at the .01 level (2-tailed).

Additionally, Spearman's rho correlations were calculated among the three mental components, suggesting a strong correlation between the cognitive and the conative components (.741), as well as moderate correlations between the affective component and each of these two (.351 with the cognitive and .462 with the conative component).
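As an illustration of how such component scores and correlations could be computed from per-rating data, the sketch below sums the dimension scores into the three components and computes Spearman's rho with SciPy. The DataFrame contents are placeholders, not the study data.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-rating scores (one row per assessor-video pair).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame(rng.integers(1, 8, size=(n, 11)), columns=[
    "interesting", "familiar",                                       # cognitive
    "anger", "disgust", "fear", "happiness", "sadness", "surprise",  # affective
    "appealing", "watch_again", "share",                             # conative
])
df["mos"] = rng.integers(1, 6, size=n)

# Sum the dimensions into the three mental components.
df["cognitive"] = df[["interesting", "familiar"]].sum(axis=1)
df["affective"] = df[["anger", "disgust", "fear",
                      "happiness", "sadness", "surprise"]].sum(axis=1)
df["conative"] = df[["appealing", "watch_again", "share"]].sum(axis=1)

# Spearman's rho between MOS and each component.
for comp in ["cognitive", "affective", "conative"]:
    rho, p = spearmanr(df["mos"], df[comp])
    print(f"MOS vs {comp}: rho = {rho:.3f}, p = {p:.4f}")
```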

4.2. Video Quality Assessment. Unknown to the assessors, all videos in the full test set were obtained from the same source and with the same quality settings (H.264, 480p) and were hence of roughly the same visual quality.


Table 5: Comparison of conative dimensions by the two sets.

Dimension     Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. sig. (2-tailed)
Appealing     Proposed    693.20      776385.50      75374.50         .000
              LIVEVQDB    477.37      95474.50
Watch again   Proposed    684.10      766196.00      85564.00         .000
              LIVEVQDB    528.32      105664.00
Share video   Proposed    682.23      764107.00      87653.00         .000
              LIVEVQDB    538.77      107753.00

Table 6: Mental components relative to the two video sets.

Component              Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. sig. (2-tailed)
Cognitive component    Proposed    703.00      787358.00      64402.00         .000
                       LIVEVQDB    422.51      84502.00
Affective activation   Proposed    701.98      786217.50      65542.50         .000
                       LIVEVQDB    428.21      85642.50
Conative activation    Proposed    695.29      778728.50      73031.50         .000
                       LIVEVQDB    465.66      93131.50

[Figure 1: Average scores for different variables (grouped by video category). The chart compares the LIVEVQDB set against the proposed-set categories (Entertainment, Informative, Movies, Music, News, Other, Series, Sports) on the dimensions Interesting, Familiar, Anger, Disgust, Fear, Happiness, Sadness, Surprise, Appeal, and MOS, on a 0.00-6.00 scale.]

Even though we could not control for the impairments and the objective video quality of the source sequences (apart from the LIVEVQDB database), close visual inspection of the obtained material assured us that nonexperts should hardly be able to tell the difference between the quality of the sequences presented to them (i.e., any differences they observed would stem from sources not pertinent to visual impairments). To further confirm this assumption, a subset of 20 sequences was played to three independent video quality experts. Among these videos, 10 were randomly selected from the proposed set, while the other 10 were LIVEVQDB sequences (all of them obtained from YouTube, to account for any codec effects). The playing order was randomized, and the experts were asked to vote only on the quality of the videos (i.e., they did not take the full experiment). Subsequently, an unpaired t-test was run on the MOS obtained, and it did not show statistically significant differences between the samples, thus confirming our initial assumption.

Since a correlation was established between the different variables and MOS, and the LIVEVQDB videos scored consistently lower than the videos from the proposed set, we tried to further investigate and explain these differences.

On average, the videos comprising the proposed set scored a MOS value of 3.47, while the LIVEVQDB videos scored 2.80. This is a large difference on a 5-point scale, but it only gets larger when individual components shown to affect perceived quality are considered. For example, when interestingness is observed, the best-ranked LIVEVQDB video was placed 39th (out of the 66 videos in the full set). A comparison between the top 10 ranked "interesting" videos (as deemed by the assessors) and the sequences from the LIVEVQDB set reveals a mean difference of 1.34 points (2.80 MOS for the LIVEVQDB set versus 4.13 MOS for the proposed set), which a t-test further showed to be statistically significant (P < 0.0001, with a 95% confidence interval between 1.04 and 1.64). What is even more interesting is that 7 out of the 8 categories (of the proposed video set) found their place in the "top 10 interesting videos"; the only category that did not make it was the "movies" category.
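Unpaired comparisons like this one (and the expert check above) can be reproduced with a standard two-sample t-test. The sketch below uses SciPy on placeholder per-video MOS values (not the study data) and also derives a 95% confidence interval for the mean difference with the pooled-variance formula.

```python
import numpy as np
from scipy import stats

# Hypothetical per-video MOS values for the two groups being compared.
livevqdb_mos = np.array([2.9, 2.6, 2.8, 3.0, 2.7, 2.9, 2.8, 2.6, 2.9, 2.8])
top10_mos    = np.array([4.2, 4.0, 4.1, 4.3, 4.1, 4.0, 4.2, 4.1, 4.3, 4.0])

# Unpaired (independent-samples) t-test.
t_stat, p_value = stats.ttest_ind(top10_mos, livevqdb_mos)

# 95% confidence interval for the difference of means.
diff = top10_mos.mean() - livevqdb_mos.mean()
n1, n2 = len(top10_mos), len(livevqdb_mos)
sp = np.sqrt(((n1 - 1) * top10_mos.var(ddof=1) + (n2 - 1) * livevqdb_mos.var(ddof=1))
             / (n1 + n2 - 2))
se = sp * np.sqrt(1 / n1 + 1 / n2)
margin = stats.t.ppf(0.975, df=n1 + n2 - 2) * se
print(f"t = {t_stat:.2f}, p = {p_value:.5f}, "
      f"95% CI for the difference: [{diff - margin:.2f}, {diff + margin:.2f}]")
```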

As far as the comparison between the top 10 MOS-rated videos for different emotions and the LIVEVQDB videos is concerned, the results comply with the previously observed correlations. Thus, when a positive emotion such as happiness was induced by a sequence, it yielded a high quality score (MOS = 3.96).


Videos that caused surprise were also ranked as being of higher quality (MOS = 3.81), while negative emotions caused videos to plummet on the quality scale (sadness: MOS = 2.86, fear: MOS = 3.00, disgust: MOS = 2.86, and anger: MOS = 2.89). Interestingly, one of the videos from the LIVEVQDB set found its place among the top 10 videos that the assessors were angry with.

5. Conclusions and Future Research

The most important conclusion we drew from the results obtained in the experiment is that the content of a video sequence has a strong impact on activating different cognitive, affective, and conative components within assessors. These components, in turn, have been shown (both by this and previous research) to play an integral role in VQA tasks, whether assessors are aware of it or not. Thus, we provide a number of recommendations that ought to be taken into account when conducting subjective video quality assessment experiments.

First, video content has to be considered very carefully when trying to measure perceived video quality, as failing to do so might introduce a bias that will be reflected in the final MOS obtained. It is thus vital to choose the correct type of content for addressing the research question at hand (at any rate, choose the type of content that is most likely to be encountered by viewers in real-world conditions). With this in mind, the "content" variable needs to be monitored whenever possible. Second, since the emotions of disgust, fear, anger, and sadness were found to have a different impact on conative mental activities than the emotions of happiness and surprise, additional attention should be paid if conative activities are to be observed as dependent variables (especially when it comes to commercially funded research).

Finally, we have shown that subjective video quality assessment might not be such a reliable measure of video quality if video content is not controlled for. This has potential consequences both for the subjective video quality models already devised that do not take the content variable into account and for automatic approaches (i.e., algorithms) that try to predict MOS based on existing (possibly biased) results.

Since a relationship between video content and perceived video quality was identified, new research directions open up for further investigation. Some of these include identifying and quantifying the strength of this relationship, investigating to what degree a video can be impaired without assessors noticing it (relative to the video content), and determining whether content plays the same role in the subjective perception of video quality when sequences are impaired with different levels and types of artifacts. Also, a larger-scale study might reveal other factors, pertinent to different populations (in terms of gender, culture, and demographics), that impact the subjective perception of video quality and should thus be taken into account in VQA tasks.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research presented in this paper was supported by the FP7 IRSES Project QoSTREAM (Video Quality Driven Multimedia Streaming in Mobile Wireless Networks) and by the Serbian Ministry of Education, Science and Technology, projects nos. III43002 and III44003.

References

[1] A. Lenhart, K. Purcell, A. Smith, and K. Zickuhr, "Social media & mobile internet use among teens and young adults," Tech. Rep., Pew Internet & American Life Project, Washington, DC, USA, 2010, http://web.pewinternet.org.
[2] ComScore, "Today's US Tablet Owner Revealed," 2012, http://www.comscore.com/Insights.
[3] K. O'Hara, A. S. Mitchell, and A. Vorbau, "Consuming video on mobile devices," in Proceedings of the 25th SIGCHI Conference on Human Factors in Computing Systems, pp. 857-866, ACM, May 2007.
[4] CISCO, "Cisco Visual Networking Index: Forecast and Methodology, 2011-2016," Tech. Rep., 2012, http://www.cisco.com.
[5] K. Seshadrinathan and A. Bovik, "An information theoretic video quality metric based on motion models," in Proceedings of the 3rd International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2007.
[6] R. J. Corsini, The Dictionary of Psychology, Psychology Press, 2002.
[7] H. Hagtvedt and V. M. Patrick, "Art infusion: the influence of visual art on the perception and evaluation of consumer products," Journal of Marketing Research, vol. 45, no. 3, pp. 379-389, 2008.
[8] A. M. Giannini, F. Ferlazzo, R. Sgalla, P. Cordellieri, F. Baralla, and S. Pepe, "The use of videos in road safety training: cognitive and emotional effects," Accident Analysis & Prevention, vol. 52, pp. 111-117, 2013.
[9] I. Blanchette and A. Richards, "The influence of affect on higher level cognition: a review of research on interpretation, judgement, decision making and reasoning," Cognition & Emotion, vol. 24, no. 4, pp. 561-595, 2010.
[10] N. Cranley, P. Perry, and L. Murphy, "User perception of adapting video quality," International Journal of Human Computer Studies, vol. 64, no. 8, pp. 637-647, 2006.
[11] R. R. Pastrana-Vidal, J. C. Gicquel, J. L. Blin, and H. Cherifi, "Predicting subjective video quality from separated spatial and temporal assessment," in Human Vision and Electronic Imaging XI, vol. 6057 of Proceedings of SPIE, pp. 276-286, January 2006.
[12] M. Kapa, L. Happe, and F. Jakab, "Prediction of quality of user experience for video streaming over IP networks," Cyber Journals, pp. 22-35, 2012.
[13] S. H. Jumisko, V. P. Ilvonen, and K. A. Vaananen-Vainio-Mattila, "Effect of TV content in subjective assessment of video quality on mobile devices," in IS&T Electronic Imaging: Multimedia on Mobile Devices, vol. 5684 of Proceedings of SPIE, pp. 243-254, January 2005.
[14] S. Winkler, "Analysis of public image and video databases for quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 616-625, 2012.
[15] ITU-T Recommendation, "Subjective video quality assessment methods for multimedia applications," Tech. Rep., 1999.
[16] F. De Simone, "EPFL-PoliMI video quality assessment database," 2009, http://vqa.como.polimi.it.
[17] VQEG, "Report on the validation of video quality models for high definition video content," Tech. Rep., June 2010, http://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx.
[18] Y. Wang, "Poly@NYU video quality databases," Quality Assessment Database, 2008, http://vision.poly.edu/index.html/index.php?n=HomePage.VideoLab.
[19] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "Study of subjective and objective quality assessment of video," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1427-1441, 2010.
[20] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "A subjective study to evaluate video quality assessment algorithms," in Human Vision and Electronic Imaging XV, Proceedings of SPIE, January 2010.
[21] L. Goldmann, F. De Simone, and T. Ebrahimi, "A comprehensive database and subjective evaluation methodology for quality of experience in stereoscopic video," in Three-Dimensional Image Processing (3DIP) and Applications, vol. 7526 of Proceedings of SPIE, January 2010.
[22] J. D. McCarthy, M. A. Sasse, and D. Miras, "Sharp or smooth? Comparing the effects of quantization vs. frame rate for streamed video," in Proceedings of the Conference on Human Factors in Computing Systems, pp. 535-542, ACM, April 2004.
[23] S. Sladojevic, D. Culibrk, M. Mirkovic, D. Ruiz Coll, and G. Borba, "Logging real packet reception patterns for end-to-end quality of experience assessment in wireless multimedia transmission," in Proceedings of the IEEE International Workshop on Emerging Multimedia Systems and Applications (EMSA '13), San Jose, Calif, USA, July 2013.
[24] S. R. Gulliver, T. Serif, and G. Ghinea, "Pervasive and standalone computing: the perceptual effects of variable multimedia quality," International Journal of Human Computer Studies, vol. 60, no. 5-6, pp. 640-665, 2004.
[25] S. Rihs, "The influence of audio on perceived picture quality and subjective audio-video delay tolerance," in MOSAIC Handbook, pp. 183-187, 1996.
[26] I. E. Gordon, Theories of Visual Perception, Taylor & Francis, 3rd edition, 2005.
[27] S. Hancock and L. McNaughton, "Effects of fatigue on ability to process visual information by experienced orienteers," Perceptual and Motor Skills, vol. 62, no. 2, pp. 491-498, 1986.
[28] C. S. Weinstein and H. J. Shaffer, "Neurocognitive aspects of substance abuse treatment: a psychotherapist's primer," Psychotherapy, vol. 30, no. 2, pp. 317-333, 1993.
[29] E. Balcetis and D. Dunning, "See what you want to see: motivational influences on visual perception," Journal of Personality and Social Psychology, vol. 91, no. 4, pp. 612-625, 2006.
[30] A. Parducci, "Response bias and contextual effects: when biased," Advances in Psychology, vol. 68, pp. 207-219, 1990.
[31] E. A. Styles, Attention, Perception and Memory: An Integrated Introduction, Taylor & Francis Routledge, New York, NY, USA, 2005.
[32] D. Culibrk, M. Mirkovic, V. Zlokolica, M. Pokric, V. Crnojevic, and D. Kukolj, "Salient motion features for video quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 948-958, 2011.
[33] J. Redi, H. Liu, R. Zunino, and I. Heynderickx, "Interactions of visual attention and quality perception," in Human Vision and Electronic Imaging XVI, Proceedings of SPIE, January 2011.
[34] RTS, Istrazivanje (izvestaji o gledanosti) [audience research and viewership reports], http://www.rts.rs/page/rts/sr/CIPA/news/171/Istra%C5%BEivanje.html.
[35] Nielsen Television Audience Measurement, Nielsen Audience Measurement Serbia, http://www.agbnielsen.net/whereweare/dynPage.asp?lang=local&country=Serbia&id=354.
[36] LIVE Video Quality Database, http://live.ece.utexas.edu/research/quality/live_video.html.
[37] E. R. Hilgard, "The trilogy of mind: cognition, affection, and conation," Journal of the History of the Behavioral Sciences, vol. 16, no. 2, pp. 107-117, 1980.
[38] P. Ekman, W. V. Friesen, and P. Ellsworth, Emotion in the Human Face: Guidelines for Research and an Integration of Findings, Pergamon Press, New York, NY, USA, 1972.
[39] J. Fridlund, P. Ekman, and H. Oster, "Facial expressions of emotion," in Nonverbal Behavior and Communication, pp. 143-223, 2nd edition, 1987.
[40] ITU-T 500-11, "Methodology for the subjective assessment of the quality of television pictures," Tech. Rep., International Telecommunication Union, Geneva, Switzerland, 2002.
[41] J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Routledge, 1988.
[42] D. S. Dunn, Statistics and Data Analysis for the Behavioral Sciences, McGraw-Hill, 2001.


Page 3: Research Article Evaluating the Role of Content in ...downloads.hindawi.com/journals/tswj/2014/625219.pdf · Research Article Evaluating the Role of Content in Subjective Video Quality

The Scientific World Journal 3

loss Participants in this experiment (who observed videos onmobile devices) were not so keen to accept this impairmentand isolated it as one of the top reasons for giving low scoresto videos where it was prominent This study however usedLIVE Video Quality Database videos (which were sometimesdescribed as ldquodullrdquo by the participants)

When information assimilation is the primary aim ofmultimedia presentation content plays an important role inthe perception of multimedia video quality as reported byGulliver et al [24] They discovered that the content of thesequence has a more significant effect on a userrsquos level ofinformation transfer than either the frame rate or displaydevice type However when information transfer is left outof the equation participants in the same study found frame-rate and device type important for perceived video qualitydemonstrating that they were able to distinguish betweentheir subjective enjoyment of a video clip and the level ofquality which they perceive the video clip to possess Thisimplies that there is a relationship between clip contents anduser-perceived video quality but components of the equationleading to a final score need to be evaluated carefully

Finally Jumisko et al [13] show that recognition of thecontent has an effect on perceived video quality evaluationThey found that video clips which were recognized by theparticipants were generally given lower ratings comparedto unrecognized clips while interesting contents collectedhigher ratings compared to ones deemed uninteresting(whether they were recognized or not) It is argued thatevaluators with previous knowledge about the genre aremoredemanding for the acceptability of quality Not in discordancewith similar studies in the field [25] they found that audiocomponent (when available in the experiment) compensatedto a fair degree the impairments in the visual part of the videoand at the same time impairments in speech were found tobe very distracting This is justified by the fact that pertinentto that particular experimental design audio contains therelevant information and the visual component only supportsit (eg music videos and news with the narrating voice in thebackground)

23 Factors Influencing Visual Perception Visual perceptionmay be broadly defined as mental organization and interpre-tation of sensory information received via individualrsquos sightHuman visual perception while being relatively objective byits means (the sight) and constant in terms of properties ofthe object being watched (light surfaces and textures) is stilloften significantly subjective due to various factors residing inthe observerrsquos mind Since perception requires interpretationof any received information and since interpretation is acomplex phenomenon that is unique to every individualit is evident that visual perception is heavily influenced byinternal and external subjective factors that are often hardto grasp Visual perception is still not fully understoodby contemporary science there are competing theories thatconcentrate on different aspects of it and offer explanationsthat are often in discordance with one other [26] Further-more there is a number of physiological and psychologicalfactors that are to be taken into account when assessingvisual perception spanning from fatigue [27] and substance

intoxication [28] to observerrsquos expectations and motivation[29]

Also some cognitive biases are known to distort ourinterpretation and assessment of what we see and evaluate[30] attention bias (focusing only on one part of the infor-mation set) choice-supportive bias (taking past choices astemplates for the new ones) conservatism (underestimatinghigh values while overestimating low ones) contrast effect(enhancement or reduction of one objectrsquos perception asresult of comparing with a recent contrasted object) curseof knowledge (advanced knowledge diminishes ability toperceive something from a common perspective) halo effect(tendency to observe and evaluate latter aspects in the lightof formerly observed aspects) negativity bias (more attentionis given to unpleasant information than to the pleasantone) recentness bias (the tendency to weigh recent eventsmore than earlier events) and selective perception (whereexpectations influence perception)

Finally bottom-up processing is known to be driven bythe stimulus presented [31] and some stimuli are intrinsicallyconspicuous or salient (ie outliers) in a given contextSaliency is independent of the nature of the particular taskoperates very rapidly and if a stimulus is sufficiently salientit will pop out of a visual scene This knowledge has beenleveraged to incorporate saliency information into objectiveimage quality metrics [32] as it is reasonable to assumethat artifacts appearing in less salient regions are less visibleand thus less annoying than artifacts affecting the region ofinterestWhile this assumption holdswhen natural saliency isin question (when observers are not given a particular task)there are indications that quality assessment task (ie whenobservers specifically look for impairments) modifies thenatural saliency of images so the masking effect of saliencyfor distortions in background regions should be taken withsome reserve when modeling subjective quality experiments[33]

3 Methodology

31 Defining the Test Set As related research reveals assessorsare by no means indifferent to the content they evaluateThis has potential implications for real-world applicationssince subjective perception of quality of some ldquoneutralrdquocontent selected on basis of its suitability for introducingparticular impairments to video stream (ie standardizedVQA databases) might differ from quality perception ofcontent that the assessor is familiar with or interested inand pays more attention to (ie content that the assessorregularly consumes such as TV shows and movies) To testthis hypothesis we created an alternative set of videos forcomparison with the commonly used ones which ought toactivate mental responses that are not activated to a greaterextent within commonly used VQA databases and shouldthus reflect more truthfully the perceived quality of real-life video sequences as experienced by the end-users whenimpaired We will refer to this set as the proposed set Asfamiliar video content was shown to sometimes get lowerquality rating at the same impairment level than the unrec-ognized one we included a number of videos that should

4 The Scientific World Journal

be fairly familiar to assessors instead of focusing solely oncontent that was supposed to induce different emotions orothermental responses First based on the reports of nationaltelevision service [34] as well as on the results of a surveyconducted by a media research group [35] we have identifiedseveral categories of TV program that are most often viewedby average citizens of Serbia (as participants were all Serbiancitizens) The identified categories were

(1) TV series(2) movies(3) informative shows(4) entertainment(5) news(6) music shows(7) sports and(8) other

While most of the categories are self-explanatory theldquoInformative showsrdquo category comprises content that coversa specific topic or event (eg such as shows most oftenencountered on the ldquoDiscoveryrdquo ldquoTravelrdquo or ldquoAnimal Planetrdquochannels) Using television program ratings as a guideline[34] we selected 4 to 6 sequences representative of eachcategory and downloaded them from YouTube (the numbervaried because some of the categories were more popularthan the other with the viewers) Second we leveragedsocial networks (Facebook Twitter and Google+) as well asYouTube itself to gain insight into what videos (in the publicdomain) are deemed popular within different communities(global when it comes to YouTube and local when socialnetworks are in question) Videos that were circulating socialnetworks that the authors are a part of andwhichwere highlypopular within the community at the time the experimentwas devised (ie had a lot of views) were downloadedassigned to appropriate categories and included in the testset All of the videos obtained for the experiment were partof public domain and their usage for scientific purposes wasnot prohibited Also they were downloaded in 480p format(while being already compressed by YouTube using H264)Finally we have opted for the LIVE Video Quality Databaseas a representative source of video sequences commonly usedfor VQA tasks since it is widely accepted and cited withinvideo quality assessment scientific community regularlyupdated and devised with automatic VQA algorithms inmind [36] All 10 video sequences available in the databasewere used in our test set To make for a fair comparison theLIVE Video Quality Database sequences were downloadedfrom YouTube as well in the 480p format rather thanusing the unimpaired database version These videos will bereferred to as the LIVEVQDB or ldquostandardrdquo video set All ofthe videos used in the experiment were scaled (and croppedwhen necessary) to a common resolution of 576 times 432 pixels(1 3 aspect ratio)The audio component of the sequences wasleft out (ie only visual component remained) All sequenceswere between 10s and 11s longThus the final test-set of videoscomprised 66 sequences in total 56 of them were classified

Table 1 Distribution of videos in the test set

Category name Number of videosTV series 4Movies 4Informative shows 8Entertainment 6News 8Music shows 6Sports 5Other 15LIVEVQDB 10

into one of the aforementioned categories and the additional10 from the LIVE Video Quality Database were assigned tothe ldquoLIVEVQDBrdquo categoryThe distribution of the final set ofvideos is provided in Table 1

32 Experimental Design and Setup As the first research goalwas to determine if the LIVEVQDB video set is similar inability to activate mental responses to a set of real-life videosobserved by media consumers an experimental researchdesign with two independent sets was adopted The twosets were compared on a number of relevant variables thatidentified level of mental activation Mental response in thiscontext was regarded as any kind of an individualrsquos consciousmental activity that is directly induced by specific visual stim-uli presented to that individual From various classificationsof mental activities available the widest nomenclature wasused in this research the tripartite classification of mentalactivities (responses) into cognition affection and conationThis classification has been in use since the eighteen centuryand is still widely used nowadays to address various aspectsof human mental state and responses [37]

Cognitive mental activities are mostly observed as aldquorationalrdquo or ldquoobjectiverdquo peace of mind These activities arethought to be responsible for processing information thatpeople get from their sensory systems via attention andmemory In order to measure cognitive response we proposevariables we named ldquointerestingrdquo and ldquofamiliarrdquo to measurethe potential of a stimulus to attract attention and to assess itsnovelty or the lack of it In contrast to the cognitive activitiesaffective activities (often called emotional reactions) arethought to be very subjective and relative to the personwho isreceiving certain stimuli As identified in the western culturesby Ekman et al [38] and reaffirmed universally by Fridlundet al [39] there are six basic emotions that people expressindependently from their culture and other external factorsanger disgust fear happiness sadness and surprise Wehave adopted this classification when researching emotionalresponse for the video stimuli thus obtaining six more vari-ables (one for each of the basic emotions) Conative mentalactivities are the ones that drive subjects towards certainactivities which means that the conative part of the minduses cognitive and affective parts to fuel its role Thereforeit is necessary to observe conative responses in order to fullyassess cognitive and affective states For this component three

The Scientific World Journal 5

Table 2 Variables used in the experiment

Variable name ScaleMOS 1ndash5 Likert scaleSeen before Multiple choice questionInteresting 1ndash7 Likert scaleFamiliar 1ndash7 Likert scaleAnger 1ndash7 Likert scaleFear 1ndash7 Likert scaleSadness 1ndash7 Likert scaleDisgust 1ndash7 Likert scaleHappiness 1ndash7 Likert scaleSurprise 1ndash7 Likert scaleAppealing 1ndash7 Likert scale

Share Multiple choice question(recoded to 3 point ordinal scales)

Watch again Multiple choice question(recoded to 3 point ordinal scales)

variables were designed ldquoappealingrdquo (denoting the subjectivelevel of attractiveness of a particular video) ldquowatch againrdquo(willingness to watch the video again) and ldquoshare videordquo(willingness to share the video with other people)

A measurement of subjectively perceived video qualitywas expressed in terms ofMeanOpinion Score (MOS) whichis calculated as the average score for a video sequence over allassessors It was measured on a 5-point quality scale rangingfrom 1 (bad) to 5 (excellent) Finally to be able to keep track ofcontent that was recognized by the assessors a variable ldquoseenbeforerdquo was introduced Complete set of variables and scalesused to measure them is presented in Table 2

Experimental setup in general and laboratory setup inparticular closely resembled the one proposed by the Inter-national Telecommunication Union (ITU) in their recom-mendations for VQA tasks [40] Sequences were presentedto assessors on a 2010158401015840 monitor (SAMSUNG S20B300B)which was operated at its native resolution of 1600 times 900pixels A custom in-house software was developed thatenables playback of the sequences against the neutral graybackground and voting on each video (ie filling in aquestionnaire) For each assessor video sequence playbackorder was randomized to avoid any ordering effects Afterpresenting the assessor with the sequence the softwaredisplays a questionnaire where the assessor is asked to rateselected features on either a 1ndash7 scale (1 being the lowestworstand 7 being the highestbest score clearly labeled in assessorrsquoslanguage besides the scale) or by selecting appropriate answer(depending on the question) The only exception to this wastheMOS scale that ranged from 1 to 5 Since the questionnairecomprised 13 questions and it was thus impossible to replicatethe exact ITU proposed setup for single stimulus type ofexperiments without conducting multiple runs we haveplaced the question regarding perceived video quality on top(ie it was the first question) therefore complying to thestandard procedure as much as possible Questions regardingcognitive affective and conative components followed Upon

completing the questionnaire software played the next videoAssessors were allowed to see sequences only once (ie replaywas not possible nor were there any additional runs) Thetest consisted of one session of about 45 minutes includingtraining Before the actual test oral instructions were givento subjects and a training session was run that consistedof three videos and a questionnaire following each onewhere the subjects were free to ask any questions regardingthe test procedure (including clarification of questionnairequestions) 20 subjectsmdash8 male and 12 femalemdashparticipatedin the test their age ranging from 20 to 64 None of themwere familiar with video processing nor had previouslyparticipated in similar tests All of the subjects reportednormal or corrected vision prior to testing

4. Results

After the experiment, a number of different analyses were performed. All of them were conducted with two considerations in mind: (1) we operated with a sample size that could be considered small in comparison to established norms in the behavioral sciences, but which is at the same time considered more than adequate for VQA tasks, and (2) we operated predominantly with ordinal levels of measurement (for the dependent variables).

4.1. Mental Responses Relative to the Video Sets. The data obtained was observed relative to the two video sets: (1) the proposed one and (2) the LIVEVQDB set. Separate dimensions were compared first, followed by a comparison of subsequently computed variables for the three mental response categories.

Figure 1 displays average scores over different categories for different variables. The LIVEVQDB set is presented in red (filled-in area), while the videos comprising the proposed set are divided into their respective content categories and presented by lines. Judging by the averages alone, it appears that the videos comprising LIVEVQDB scored considerably lower on almost all measured variables.

To test this observation, we ran the Mann-Whitney U test against the null hypothesis that the two observed samples originate from the same population; the results are shown in Tables 3–5. Indeed, the test demonstrated that the two sets have a very low chance of originating from the same population, suggesting that the proposed set of videos stimulates mental reactions that are significantly more intense than those stimulated by the LIVEVQDB set.
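A minimal sketch of such a set comparison using SciPy (the variable names and the two example rating arrays are placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder 1-7 "interesting" ratings for the two video sets (illustrative only).
proposed_ratings = np.array([6, 5, 7, 4, 6, 5, 7, 6])
livevqdb_ratings = np.array([3, 2, 4, 3, 2, 3])

# Two-sided Mann-Whitney U test, as used for the set comparisons in Tables 3-6.
u_stat, p_value = mannwhitneyu(proposed_ratings, livevqdb_ratings, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```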

As shown in Table 3, the proposed video set achieved higher ranks on both of the cognitive variables observed ("interesting" and "familiar"). The reason that the videos comprising this set were found to be more interesting probably lies in their rich and versatile content, which is why they were chosen as good candidates for arousing the attention of assessors in the first place. They were also marked as more familiar to observers than the ones from the LIVEVQDB set, most likely because none of the subjects had participated in VQA experiments before and thus had low odds of having encountered videos from the LIVEVQDB set.


Table 3: Comparison of cognitive dimensions by the two sets.

Dimension     Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Interesting   Proposed    706.91      791737.50      60022.50         .000
              LIVEVQDB    400.61      80122.50
Familiar      Proposed    684.53      766671.00      85089.00         .000
              LIVEVQDB    525.95      105189.00

Table 4: Comparison of affective dimensions by the two sets.

Dimension     Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Anger         Proposed    670.14      750557.00      101203.00        .001
              LIVEVQDB    606.52      121303.00
Disgust       Proposed    678.64      760071.50      91688.50         .000
              LIVEVQDB    558.94      111788.50
Fear          Proposed    673.29      754080.50      97679.50         .000
              LIVEVQDB    588.90      117779.50
Happiness     Proposed    678.66      760100.50      91659.50         .000
              LIVEVQDB    558.80      111759.50
Sadness       Proposed    677.47      758766.50      92993.50         .000
              LIVEVQDB    565.47      113093.50
Surprise      Proposed    691.99      775029.00      76731.00         .000
              LIVEVQDB    484.16      96831.00

The proposed set also achieved higher ranks on all six of the basic emotions observed, as shown in Table 4. This finding is probably a result of the original media producers' intent to activate certain affective states, depending on the nature of a video. Conversely, the LIVEVQDB set might have been produced primarily with the intention of capturing the different types of content that are known to bear weight in terms of introduced impairments when coding/decoding and video transmission are in focus, thus neglecting possible emotional responses from the assessors.

The proposed video set was also found to activate the conative dimensions of subjects' mental activity to a greater extent than the LIVEVQDB set did. These dimensions are especially important because they cast a new light on both the cognitive and the affective dimensions. If the conative part of one's mind is in an idle state, chances are that cognitive and affective activation did not happen, even if the subject states the opposite. When somebody is presented with interesting content, he or she is likely to do something after seeing it: either watch it again or share it with relevant others.

The mean rank differences can be observed in Table 5, but it should be noted that one part of the videos from the proposed set was relatively unpleasant to watch, activating emotions such as sadness, fear, anger, or disgust. These dimensions of affective response were found to be significantly negatively correlated, or uncorrelated, with the conative dimensions in this research, so the ranks on the conative dimensions would be even higher had we selected only pleasant and appealing videos, thus showing an even greater difference from the standard set.

After the individual analysis, the dimensions were summed into the corresponding mental components, forming three variables named cognitive component (comprising 2 dimensions), affective activation (comprising 6 dimensions), and conative activation (comprising 3 dimensions). Again, the Mann-Whitney U test demonstrated significantly higher mental activation induced by the proposed set than by the LIVEVQDB set (Table 6).
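A sketch of this aggregation step (the column names and the pandas-based layout are assumptions made for illustration):

```python
import pandas as pd

# Illustrative per-response data frame; column names are assumptions.
df = pd.DataFrame({
    "interesting": [6, 3], "familiar": [5, 2],
    "anger": [1, 1], "disgust": [1, 2], "fear": [1, 1],
    "happiness": [6, 2], "sadness": [1, 1], "surprise": [5, 2],
    "appealing": [6, 2], "watch_again": [3, 1], "share_video": [3, 1],
})

# Sum the dimensions into the three mental components described in the text.
df["cognitive_component"] = df[["interesting", "familiar"]].sum(axis=1)
df["affective_activation"] = df[["anger", "disgust", "fear", "happiness", "sadness", "surprise"]].sum(axis=1)
df["conative_activation"] = df[["appealing", "watch_again", "share_video"]].sum(axis=1)

print(df[["cognitive_component", "affective_activation", "conative_activation"]])
```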

Finally, the relationship between the mental activation dimensions and perceived video quality (MOS) was observed. Since all of the dependent variables were of ordinal type, Spearman's rho measure of correlation was used, following common guidelines of interpretation for the behavioral sciences [41, 42]. Moderate correlations, significant at the 0.01 level (2-tailed), were found between video quality assessment and the following dimensions: interesting (0.472), happiness (0.415), surprise (0.306), appealing (0.497), watch again (0.393), and share video (0.395), while the other dimensions mostly correlated with low coefficients.

Video quality assessment was also correlated with the three calculated mental components, showing moderate correlations with the cognitive component (0.412) and conative activation (0.496), while a low correlation was found with affective activation (0.232), again all significant at the 0.01 level (2-tailed).

Additionally, Spearman's rho correlations were calculated among the three mental components, suggesting a strong correlation between the cognitive and the conative components (0.741), as well as moderate correlations between these two components and the affective component (0.351 with cognitive and 0.462 with conative).
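A minimal sketch of this correlation analysis with SciPy (the two example arrays stand in for per-response scores and are not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder paired observations: MOS (1-5) and "interesting" (1-7) per response.
mos = np.array([4, 5, 3, 2, 4, 5, 3, 1, 4, 2])
interesting = np.array([6, 7, 4, 3, 5, 7, 4, 2, 6, 3])

rho, p_value = spearmanr(mos, interesting)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.4f}")
```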

4.2. Video Quality Assessment. Unknown to the assessors, all videos in the full test set were obtained from the same source and with the same quality settings (H.264, 480p) and were hence of roughly the same visual quality.


Table 5: Comparison of conative dimensions by the two sets.

Dimension     Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Appealing     Proposed    693.20      776385.50      75374.50         .000
              LIVEVQDB    477.37      95474.50
Watch again   Proposed    684.10      766196.00      85564.00         .000
              LIVEVQDB    528.32      105664.00
Share video   Proposed    682.23      764107.00      87653.00         .000
              LIVEVQDB    538.77      107753.00

Table 6: Mental components relative to the two video sets.

Component              Video set   Mean rank   Sum of ranks   Mann-Whitney U   Asymp. Sig. (2-tailed)
Cognitive component    Proposed    703.00      787358.00      64402.00         .000
                       LIVEVQDB    422.51      84502.00
Affective activation   Proposed    701.98      786217.50      65542.50         .000
                       LIVEVQDB    428.21      85642.50
Conative activation    Proposed    695.29      778728.50      73031.50         .000
                       LIVEVQDB    465.66      93131.50

Figure 1: Average scores for different variables (grouped by video categories). The chart plots the variables Interesting, Familiar, Anger, Disgust, Fear, Happiness, Sadness, Surprise, Appeal, and MOS on a 0.00–6.00 scale, with one series per category: LIVEVQDB, Entertainment, Informative, Movies, Music, News, Other, Series, and Sports.

Even though we could not control for the impairments and objective video quality of the source sequences (apart from those in the LIVEVQDB database), close visual inspection of the obtained material assured us that nonexperts would hardly be able to tell the difference in quality between the sequences presented to them (i.e., any differences they observed would stem from sources not pertinent to visual impairments). To further confirm this assumption, a subset of 20 sequences was played to three independent video quality experts. Among these videos, 10 were randomly selected from the proposed set, while the other 10 were LIVEVQDB sequences (all of them obtained from YouTube, to account for any codec effects). The playing order was randomized and the experts were asked to vote only on quality (i.e., they did not take the full experiment). Subsequently, an unpaired t-test was run on the MOS obtained, and it did not show statistically significant differences between the samples, thus confirming our initial assumption.
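A sketch of this expert check using an independent-samples t-test in SciPy (the two score arrays are placeholders, not the experts' actual votes):

```python
import numpy as np
from scipy.stats import ttest_ind

# Placeholder expert MOS values for the two 10-video subsets (illustrative only).
proposed_subset_mos = np.array([3.7, 3.3, 3.0, 3.7, 4.0, 3.3, 3.7, 3.0, 3.3, 3.7])
livevqdb_subset_mos = np.array([3.3, 3.7, 3.0, 3.3, 4.0, 3.7, 3.0, 3.3, 3.7, 3.3])

# Unpaired (independent-samples) t-test on the experts' quality votes.
t_stat, p_value = ttest_ind(proposed_subset_mos, livevqdb_subset_mos)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```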

Since correlations were established between the different variables and MOS, and the LIVEVQDB videos scored consistently lower than the videos from the proposed set, we tried to further investigate and explain these differences.

On average, the videos comprising the proposed set scored a MOS value of 3.47, while the LIVEVQDB videos scored 2.80. This is a large difference on a 5-point scale, but it only gets larger when the individual components shown to affect perceived quality are considered. For example, when interestingness is observed, the best-ranked LIVEVQDB video was placed 39th (out of the 66 videos in the full set). A comparison between the top 10 ranked "interesting" videos (as deemed by the assessors) and the sequences from the LIVEVQDB set reveals a mean difference of 1.34 points (2.80 MOS for the LIVEVQDB set versus 4.13 MOS for the proposed set), which a t-test further showed to be statistically significant (P < 0.0001, with a 95% confidence interval between 1.04 and 1.64). What is even more interesting is that 7 out of the 8 categories (for the proposed video set) found their place among the "top 10 interesting videos". The only category that did not make it was the "movies" category.
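For completeness, a sketch of how such a confidence interval for the difference in means can be obtained (the group sizes and scores below are placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import t

def mean_diff_ci(a, b, confidence=0.95):
    """Confidence interval for the difference in means of two independent samples (pooled variance)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a.mean() - b.mean()
    dof = len(a) + len(b) - 2
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / dof
    se = np.sqrt(pooled_var * (1 / len(a) + 1 / len(b)))
    margin = t.ppf((1 + confidence) / 2, dof) * se
    return diff - margin, diff + margin

if __name__ == "__main__":
    top10_interesting = [4.2, 4.0, 4.3, 4.1, 4.4, 3.9, 4.2, 4.1, 4.0, 4.1]  # placeholder MOS values
    livevqdb = [2.9, 2.7, 2.8, 3.0, 2.6, 2.9, 2.8, 2.7, 2.9, 2.7]           # placeholder MOS values
    print(mean_diff_ci(top10_interesting, livevqdb))
```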

When the top 10 MOS-rated videos pertinent to different emotions are compared against the LIVEVQDB videos, the results comply with the previously discovered correlations. Thus, when a positive emotion such as happiness was induced by a sequence, it yielded a high quality score (MOS = 3.96). Videos that caused surprise were also ranked as being of higher quality (MOS = 3.81), while negative emotions caused videos to plummet on the quality scale (sadness (MOS = 2.86), fear (MOS = 3.00), disgust (MOS = 2.86), and anger (MOS = 2.89)). Interestingly, one of the videos from the LIVEVQDB set found its place among the top 10 videos that the assessors were angry with.

5. Conclusions and Future Research

The most important conclusion we drew from the results obtained in the experiment is that the content of a video sequence has a strong impact on activating different cognitive, affective, and conative components within assessors. These components, in turn, have been shown (both by this and previous research) to play an integral role in VQA tasks, whether assessors are aware of it or not. Thus, we provide a number of recommendations that ought to be taken into account when conducting subjective video quality assessment experiments.

First, video content has to be considered very carefully when trying to measure perceived video quality, as failing to do so might introduce a bias which will be reflected in the final MOS obtained. It is thus vital to choose the correct type of content for addressing the research question at hand (at any rate, choose the type of content that is most likely to be encountered by the viewers in real-world conditions). With this in mind, the "content" variable needs to be monitored whenever possible. Second, since the emotions of disgust, fear, anger, and sadness were found to have a different impact on conative mental activities than the emotions of happiness and surprise, additional care should be exercised if conative activities are to be observed as dependent variables (especially when it comes to commercially funded research).

Finally, we have shown that subjective video quality assessment might not be such a reliable measure of video quality if video content is not controlled. This has potential consequences both for the subjective video quality models already devised that do not take the content variable into account and for automatic approaches (i.e., algorithms) that try to predict MOS based on existing (possibly biased) results.

Since a relationship between video content and perceived video quality was identified, new research directions open up for further investigation. Some of these include identifying and quantifying the strength of this relationship, investigating to what degree a video can be impaired without assessors noticing it (relative to video content), and determining whether content plays the same role in the subjective perception of video quality when sequences are impaired with different levels and types of artifacts. Also, a study at a larger scale might reveal other factors, pertinent to different populations (in terms of gender, culture, and demographics), that impact the subjective perception of video quality and should thus be taken into account when VQA tasks are in question.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research presented in this paper was supported by the FP7 IRSES Project QoSTREAM (Video Quality Driven Multimedia Streaming in Mobile Wireless Networks) and by Serbian Ministry of Education, Science and Technology projects nos. III43002 and III44003.

References

[1] A. Lenhart, K. Purcell, A. Smith, and K. Zickuhr, "Social media & mobile internet use among teens and young adults," Tech. Rep., Pew Internet & American Life Project, Washington, DC, USA, 2010, http://web.pewinternet.org/.
[2] ComScore, "Today's US Tablet Owner Revealed," 2012, http://www.comscore.com/Insights.
[3] K. O'Hara, A. S. Mitchell, and A. Vorbau, "Consuming video on mobile devices," in Proceedings of the 25th SIGCHI Conference on Human Factors in Computing Systems, pp. 857–866, ACM, May 2007.
[4] CISCO, "Cisco Visual Networking Index: Forecast and Methodology, 2011–2016," Tech. Rep., 2012, http://www.cisco.com/.
[5] K. Seshadrinathan and A. Bovik, "An information theoretic video quality metric based on motion models," in Proceedings of the 3rd International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2007.
[6] R. J. Corsini, The Dictionary of Psychology, Psychology Press, 2002.
[7] H. Hagtvedt and V. M. Patrick, "Art infusion: the influence of visual art on the perception and evaluation of consumer products," Journal of Marketing Research, vol. 45, no. 3, pp. 379–389, 2008.
[8] A. M. Giannini, F. Ferlazzo, R. Sgalla, P. Cordellieri, F. Baralla, and S. Pepe, "The use of videos in road safety training: cognitive and emotional effects," Accident Analysis & Prevention, vol. 52, pp. 111–117, 2013.
[9] I. Blanchette and A. Richards, "The influence of affect on higher level cognition: a review of research on interpretation, judgement, decision making and reasoning," Cognition & Emotion, vol. 24, no. 4, pp. 561–595, 2010.
[10] N. Cranley, P. Perry, and L. Murphy, "User perception of adapting video quality," International Journal of Human Computer Studies, vol. 64, no. 8, pp. 637–647, 2006.
[11] R. R. Pastrana-Vidal, J. C. Gicquel, J. L. Blin, and H. Cherifi, "Predicting subjective video quality from separated spatial and temporal assessment," in Human Vision and Electronic Imaging XI, vol. 6057 of Proceedings of SPIE, pp. 276–286, January 2006.
[12] M. Kapa, L. Happe, and F. Jakab, "Prediction of quality of user experience for video streaming over IP networks," Cyber Journals, pp. 22–35, 2012.
[13] S. H. Jumisko, V. P. Ilvonen, and K. A. Vaananen-Vainio-Mattila, "Effect of TV content in subjective assessment of video quality on mobile devices," in IS&T Electronic Imaging: Multimedia on Mobile Devices, vol. 5684 of Proceedings of SPIE, pp. 243–254, January 2005.
[14] S. Winkler, "Analysis of public image and video databases for quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 616–625, 2012.
[15] ITU-T Recommendation, "Subjective video quality assessment methods for multimedia applications," Tech. Rep., 1999.


[16] F. De Simone, "EPFL-PoliMI video quality assessment database," 2009, http://vqa.como.polimi.it/.
[17] VQEG, "Report on the validation of video quality models for high definition video content," Tech. Rep., June 2010, http://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx.
[18] Y. Wang, "Poly NYU video quality databases," Quality Assessment Database, 2008, http://vision.poly.edu/index.html/index.php?n=HomePage.VideoLab.
[19] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "Study of subjective and objective quality assessment of video," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1427–1441, 2010.
[20] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "A subjective study to evaluate video quality assessment algorithms," in Human Vision and Electronic Imaging XV, Proceedings of SPIE, January 2010.
[21] L. Goldmann, F. De Simone, and T. Ebrahimi, "A comprehensive database and subjective evaluation methodology for quality of experience in stereoscopic video," in Three-Dimensional Image Processing (3DIP) and Applications, vol. 7526 of Proceedings of SPIE, January 2010.
[22] J. D. McCarthy, M. A. Sasse, and D. Miras, "Sharp or smooth? Comparing the effects of quantization vs. frame rate for streamed video," in Proceedings of the Conference on Human Factors in Computing Systems, pp. 535–542, ACM, April 2004.
[23] S. Sladojevic, D. Culibrk, M. Mirkovic, D. Ruiz Coll, and G. Borba, "Logging real packet reception patterns for end-to-end quality of experience assessment in wireless multimedia transmission," in Proceedings of the IEEE International Workshop on Emerging Multimedia Systems and Applications (EMSA '13), San Jose, Calif, USA, July 2013.
[24] S. R. Gulliver, T. Serif, and G. Ghinea, "Pervasive and standalone computing: the perceptual effects of variable multimedia quality," International Journal of Human Computer Studies, vol. 60, no. 5-6, pp. 640–665, 2004.
[25] S. Rihs, "The influence of audio on perceived picture quality and subjective audio-video delay tolerance," in MOSAIC Handbook, pp. 183–187, 1996.
[26] I. E. Gordon, Theories of Visual Perception, Taylor & Francis, 3rd edition, 2005.
[27] S. Hancock and L. McNaughton, "Effects of fatigue on ability to process visual information by experienced orienteers," Perceptual and Motor Skills, vol. 62, no. 2, pp. 491–498, 1986.
[28] C. S. Weinstein and H. J. Shaffer, "Neurocognitive aspects of substance abuse treatment: a psychotherapist's primer," Psychotherapy, vol. 30, no. 2, pp. 317–333, 1993.
[29] E. Balcetis and D. Dunning, "See what you want to see: motivational influences on visual perception," Journal of Personality and Social Psychology, vol. 91, no. 4, pp. 612–625, 2006.
[30] A. Parducci, "Response bias and contextual effects: when biased," Advances in Psychology, vol. 68, pp. 207–219, 1990.
[31] E. A. Styles, Attention, Perception and Memory: An Integrated Introduction, Taylor & Francis Routledge, New York, NY, USA, 2005.
[32] D. Culibrk, M. Mirkovic, V. Zlokolica, M. Pokric, V. Crnojevic, and D. Kukolj, "Salient motion features for video quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 948–958, 2011.
[33] J. Redi, H. Liu, R. Zunino, and I. Heynderickx, "Interactions of visual attention and quality perception," in Human Vision and Electronic Imaging XVI, Proceedings of SPIE, January 2011.
[34] RTS, Istrazivanje (izvestaji o gledanosti), http://www.rts.rs/page/rts/sr/CIPA/news/171/Istra%C5%BEivanje.html.
[35] Nielsen Television Audience Measurement, Nielsen Audience Measurement Serbia, http://www.agbnielsen.net/whereweare/dynPage.asp?lang=local&country=Serbia&id=354.
[36] LIVE Video Quality Database, http://live.ece.utexas.edu/research/quality/live_video.html.
[37] E. R. Hilgard, "The trilogy of mind: cognition, affection, and conation," Journal of the History of the Behavioral Sciences, vol. 16, no. 2, pp. 107–117, 1980.
[38] P. Ekman, W. V. Friesen, and P. Ellsworth, Emotion in the Human Face: Guidelines for Research and an Integration of Findings, Pergamon Press, New York, NY, USA, 1972.
[39] J. Fridlund, P. Ekman, and H. Oster, "Facial expressions of emotion," in Nonverbal Behavior and Communication, pp. 143–223, 2nd edition, 1987.
[40] ITU-T 500-11, "Methodology for the subjective assessment of the quality of television pictures," Tech. Rep., International Telecommunication Union, Geneva, Switzerland, 2002.
[41] J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Routledge, 1988.
[42] D. S. Dunn, Statistics and Data Analysis for the Behavioral Sciences, McGraw-Hill, 2001.


Page 4: Research Article Evaluating the Role of Content in ...downloads.hindawi.com/journals/tswj/2014/625219.pdf · Research Article Evaluating the Role of Content in Subjective Video Quality

4 The Scientific World Journal

be fairly familiar to assessors instead of focusing solely oncontent that was supposed to induce different emotions orothermental responses First based on the reports of nationaltelevision service [34] as well as on the results of a surveyconducted by a media research group [35] we have identifiedseveral categories of TV program that are most often viewedby average citizens of Serbia (as participants were all Serbiancitizens) The identified categories were

(1) TV series(2) movies(3) informative shows(4) entertainment(5) news(6) music shows(7) sports and(8) other

While most of the categories are self-explanatory theldquoInformative showsrdquo category comprises content that coversa specific topic or event (eg such as shows most oftenencountered on the ldquoDiscoveryrdquo ldquoTravelrdquo or ldquoAnimal Planetrdquochannels) Using television program ratings as a guideline[34] we selected 4 to 6 sequences representative of eachcategory and downloaded them from YouTube (the numbervaried because some of the categories were more popularthan the other with the viewers) Second we leveragedsocial networks (Facebook Twitter and Google+) as well asYouTube itself to gain insight into what videos (in the publicdomain) are deemed popular within different communities(global when it comes to YouTube and local when socialnetworks are in question) Videos that were circulating socialnetworks that the authors are a part of andwhichwere highlypopular within the community at the time the experimentwas devised (ie had a lot of views) were downloadedassigned to appropriate categories and included in the testset All of the videos obtained for the experiment were partof public domain and their usage for scientific purposes wasnot prohibited Also they were downloaded in 480p format(while being already compressed by YouTube using H264)Finally we have opted for the LIVE Video Quality Databaseas a representative source of video sequences commonly usedfor VQA tasks since it is widely accepted and cited withinvideo quality assessment scientific community regularlyupdated and devised with automatic VQA algorithms inmind [36] All 10 video sequences available in the databasewere used in our test set To make for a fair comparison theLIVE Video Quality Database sequences were downloadedfrom YouTube as well in the 480p format rather thanusing the unimpaired database version These videos will bereferred to as the LIVEVQDB or ldquostandardrdquo video set All ofthe videos used in the experiment were scaled (and croppedwhen necessary) to a common resolution of 576 times 432 pixels(1 3 aspect ratio)The audio component of the sequences wasleft out (ie only visual component remained) All sequenceswere between 10s and 11s longThus the final test-set of videoscomprised 66 sequences in total 56 of them were classified

Table 1 Distribution of videos in the test set

Category name Number of videosTV series 4Movies 4Informative shows 8Entertainment 6News 8Music shows 6Sports 5Other 15LIVEVQDB 10

into one of the aforementioned categories and the additional10 from the LIVE Video Quality Database were assigned tothe ldquoLIVEVQDBrdquo categoryThe distribution of the final set ofvideos is provided in Table 1

32 Experimental Design and Setup As the first research goalwas to determine if the LIVEVQDB video set is similar inability to activate mental responses to a set of real-life videosobserved by media consumers an experimental researchdesign with two independent sets was adopted The twosets were compared on a number of relevant variables thatidentified level of mental activation Mental response in thiscontext was regarded as any kind of an individualrsquos consciousmental activity that is directly induced by specific visual stim-uli presented to that individual From various classificationsof mental activities available the widest nomenclature wasused in this research the tripartite classification of mentalactivities (responses) into cognition affection and conationThis classification has been in use since the eighteen centuryand is still widely used nowadays to address various aspectsof human mental state and responses [37]

Cognitive mental activities are mostly observed as aldquorationalrdquo or ldquoobjectiverdquo peace of mind These activities arethought to be responsible for processing information thatpeople get from their sensory systems via attention andmemory In order to measure cognitive response we proposevariables we named ldquointerestingrdquo and ldquofamiliarrdquo to measurethe potential of a stimulus to attract attention and to assess itsnovelty or the lack of it In contrast to the cognitive activitiesaffective activities (often called emotional reactions) arethought to be very subjective and relative to the personwho isreceiving certain stimuli As identified in the western culturesby Ekman et al [38] and reaffirmed universally by Fridlundet al [39] there are six basic emotions that people expressindependently from their culture and other external factorsanger disgust fear happiness sadness and surprise Wehave adopted this classification when researching emotionalresponse for the video stimuli thus obtaining six more vari-ables (one for each of the basic emotions) Conative mentalactivities are the ones that drive subjects towards certainactivities which means that the conative part of the minduses cognitive and affective parts to fuel its role Thereforeit is necessary to observe conative responses in order to fullyassess cognitive and affective states For this component three

The Scientific World Journal 5

Table 2 Variables used in the experiment

Variable name ScaleMOS 1ndash5 Likert scaleSeen before Multiple choice questionInteresting 1ndash7 Likert scaleFamiliar 1ndash7 Likert scaleAnger 1ndash7 Likert scaleFear 1ndash7 Likert scaleSadness 1ndash7 Likert scaleDisgust 1ndash7 Likert scaleHappiness 1ndash7 Likert scaleSurprise 1ndash7 Likert scaleAppealing 1ndash7 Likert scale

Share Multiple choice question(recoded to 3 point ordinal scales)

Watch again Multiple choice question(recoded to 3 point ordinal scales)

variables were designed ldquoappealingrdquo (denoting the subjectivelevel of attractiveness of a particular video) ldquowatch againrdquo(willingness to watch the video again) and ldquoshare videordquo(willingness to share the video with other people)

A measurement of subjectively perceived video qualitywas expressed in terms ofMeanOpinion Score (MOS) whichis calculated as the average score for a video sequence over allassessors It was measured on a 5-point quality scale rangingfrom 1 (bad) to 5 (excellent) Finally to be able to keep track ofcontent that was recognized by the assessors a variable ldquoseenbeforerdquo was introduced Complete set of variables and scalesused to measure them is presented in Table 2

Experimental setup in general and laboratory setup inparticular closely resembled the one proposed by the Inter-national Telecommunication Union (ITU) in their recom-mendations for VQA tasks [40] Sequences were presentedto assessors on a 2010158401015840 monitor (SAMSUNG S20B300B)which was operated at its native resolution of 1600 times 900pixels A custom in-house software was developed thatenables playback of the sequences against the neutral graybackground and voting on each video (ie filling in aquestionnaire) For each assessor video sequence playbackorder was randomized to avoid any ordering effects Afterpresenting the assessor with the sequence the softwaredisplays a questionnaire where the assessor is asked to rateselected features on either a 1ndash7 scale (1 being the lowestworstand 7 being the highestbest score clearly labeled in assessorrsquoslanguage besides the scale) or by selecting appropriate answer(depending on the question) The only exception to this wastheMOS scale that ranged from 1 to 5 Since the questionnairecomprised 13 questions and it was thus impossible to replicatethe exact ITU proposed setup for single stimulus type ofexperiments without conducting multiple runs we haveplaced the question regarding perceived video quality on top(ie it was the first question) therefore complying to thestandard procedure as much as possible Questions regardingcognitive affective and conative components followed Upon

completing the questionnaire software played the next videoAssessors were allowed to see sequences only once (ie replaywas not possible nor were there any additional runs) Thetest consisted of one session of about 45 minutes includingtraining Before the actual test oral instructions were givento subjects and a training session was run that consistedof three videos and a questionnaire following each onewhere the subjects were free to ask any questions regardingthe test procedure (including clarification of questionnairequestions) 20 subjectsmdash8 male and 12 femalemdashparticipatedin the test their age ranging from 20 to 64 None of themwere familiar with video processing nor had previouslyparticipated in similar tests All of the subjects reportednormal or corrected vision prior to testing

4 Results

After the experiment a number of different analyses wereperformed All of them were conducted with two con-siderations in mind (1) we operated with a sample sizethat could possibly be considered small in comparison toestablished norms for behavioral sciences but at the sametime is considered more than adequate for VQA tasks and (2)we operated dominantly with ordinal level of measurement(for the dependent variables)

41 Mental Responses Relative to the Video Sets The dataobtained was observed relative to the two video sets (1)the proposed one and (2) the LIVEVQDB set Separatedimensionswere compared followed by a comparison of sub-sequently computed variables for the three mental responsecategories

Figure 1 displays average scores over different categoriesfor different variables LIVEVQDB set is presented in red(filled in area) while videos comprising the proposed setare divided into respective content categories and presentedby lines Just by looking at averages it seems that videoscomprising LIVEVQDB scored significantly lower on almostall measured variables

To test this hypothesis against the alternative one (that thetwo observed populations are the same) we ran the Mann-Whitney 119880 test results of which are shown in Tables 3ndash5Indeed the test demonstrated that these sets have very lowchance of originating from the same population suggestingthat the proposed set of videos stimulates mental reactionsthat are significantly more intense than the mental reactionsstimulated by the LIVEVQDB set

As shown in Table 3 the proposed video set achievedhigher ranks in both of the cognitive variables observed(ldquointerestingrdquo and ldquofamiliarrdquo) A reason that videos compris-ing this set were found to be more interesting probably liesin their rich and versatile content which was why they werechosen as good candidates to arouse the attention of assessorsin the first place They were also marked as more familiarto observers than the ones from the LIVEVQDB set mostlikely because none of the subjects have participated in VQAexperiments before and thus had low odds of encounteringvideos from LIVEVQDB set before

6 The Scientific World Journal

Table 3 Comparison of cognitive dimensions by the two sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Interesting Proposed 70691 79173750 6002250 000LIVEVQDB 40061 8012250

Familiar Proposed 68453 76667100 8508900 000LIVEVQDB 52595 10518900

Table 4 Comparison of affective dimensions by the two sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Anger Proposed 67014 75055700 10120300 001LIVEVQDB 60652 12130300

Disgust Proposed 67864 76007150 9168850 000LIVEVQDB 55894 11178850

Fear Proposed 67329 75408050 9767950 000LIVEVQDB 58890 11777950

Happiness Proposed 67866 76010050 9165950 000LIVEVQDB 55880 11175950

Sadness Proposed 67747 75876650 9299350 000LIVEVQDB 56547 11309350

Surprise Proposed 69199 77502900 7673100 000LIVEVQDB 48416 9683100

The proposed set also achieved higher ranks in all ofthe six basic emotions observed as shown in Table 4 Thisfinding is probably a result of original media producerrsquosintent of activating certain affective states depending on thenature of a video Conversely the LIVEVQDB set might havebeen produced with intention to primarily capture differentcontent that is known to bear weight in terms of introducedimpairments when codingdecoding and video transmissionis in focus thus neglecting possible emotional response fromthe assessors

Theproposed video setwas also found to activate conativedimensions of subjectsrsquo mental activity to a greater extentthan the LIVEVQDB set didThese dimensions are especiallyimportant because they cast a new light on both the cognitiveand the affective dimensions If conative part of onersquos mind isin idle state the chances are that the cognitive and affectiveactivation did not happen even if the subject states theopposite When somebody is presented with interestingcontent he or she is likely to do something after seeing itmdasheither watch it again or share it with relevant others

The mean ranks difference can be observed in Table 5but it should be noted that one part of the videos from theproposed set was relatively unpleasant to watch activatingemotions like sadness fear anger or disgust These dimen-sions of affective responses were found to be significantlynegatively correlated or uncorrelated to the conative dimen-sions in this research so the ranks in the conative dimensionswould be even higher if we were to select only pleasant andappealing videos thus showing even greater difference fromthe standard set

After individual analysis the dimensions were summedinto the corresponding mental components forming three

variables named cognitive component (comprising 2 dimen-sions) affective activation (comprising 6 dimensions) andconative activation (comprising 3 dimensions) Again theMann-Whitney 119880 test has demonstrated significantly highermental activation induced by the proposed set than themental activation induced by the LIVEVQDB set (Table 6)

Finally relationship between the mental activationdimensions andperceived video quality (MOS)was observedSince all of the dependent variables were of ordinal typeSpearmanrsquos Rho measure of correlation was used follow-ing common guidelines of interpretation for behavioralsciences [41 42] Moderate correlations significant at the 01level (2-tailed) were found between video quality assessmentand the following dimensions interesting (472) happiness(415) surprise (306) appealing (497) watch again (393)and share video (395) while other dimensions mostlycorrelated with low coefficients

Also video quality assessment was correlated with thethree calculated mental components gaining moderate cor-relation with cognitive component (412) and conative acti-vation (496) while low correlation was found with affectiveactivation (232) again all significant at the 01 level (2-tailed)

Additionally Spearmanrsquos rho correlations were calculatedfor the three mental components suggesting a strong corre-lation between the cognitive and the conative components(741) while also suggesting a moderate correlation betweenthese two components in relation to the affective component(351 with cognitive) and (462 with conative)

42 Video Quality Assessment Unknown to assessors allvideos in the full test set were obtained from the same sourceand with the same quality settings (H264 480p) and were

The Scientific World Journal 7

Table 5 Comparison of conative dimensions by the two sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Appealing Proposed 69320 77638550 7537450 000LIVEVQDB 47737 9547450

Watch again Proposed 68410 76619600 8556400 000LIVEVQDB 52832 10566400

Share video Proposed 68223 76410700 8765300 000LIVEVQDB 53877 10775300

Table 6 Mental components relative to the two video sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Cognitive component Proposed 70300 78735800 6440200 000LIVEVQDB 42251 8450200

Affective activation Proposed 70198 78621750 6554250 000LIVEVQDB 42821 8564250

Conative activation Proposed 69529 77872850 7303150 000LIVEVQDB 46566 9313150

000

100

200

300

400

500

600Interesting

Familiar

Anger

Disgust

Fear

Happiness

SadnessSurprise

Appeal

MOS

LIVEVQDBEntertainmentInformativeMoviesMusic

NewsOtherSeriesSports

Figure 1 Average scores for different variables (grouped by videocategories)

hence of roughly the same visual quality Even though wecould not control for the impairments and objective videoquality of the source sequences (apart from the LIVEVQDBdatabase) close visual inspection of the obtained materialassured us that nonexperts should hardly be able to tell thedifference between the quality of sequences presented to them(ie any differences they observed would stem from sources

not pertinent to visual impairments) To further confirm ourassumption a subset of 20 sequences was played to threeindependent video quality experts Among these videos 10were randomly selected from the proposed set while theother 10 were LIVEVQDB sequences (all of them obtainedfrom YouTube to account for any codec effects) Playingorder was randomized and the experts were asked to voteonly on quality (ie they did not take the full experiment)of the videos Subsequently unpaired 119905-test was run onMOS obtained and it did not show statistically significantdifferences between the samples thus confirming our initialassumptions

Since a correlation was established between differentvariables andMOS and LIVEVQDB videos scored constantlylower than the videos from the proposed set we have tried tofurther investigate and explain these differences

On average videos comprising the proposed set scoreda MOS value of 347 while LIVEVQDB videos scored 280This is a large difference on a 5-point scale but it only getslarger when individual components shown to affect perceivedquality are considered For example when interestingness isobserved best ranked LIVEVQDBvideowas placed 39th (outof 66 videos in the full set) A comparison between the top 10ranked ldquointerestingrdquo videos (as deemed by assessors) and thesequences from the LIVEVQDB set reveals a mean differenceof 134 points (280MOS for the LIVEVQDB versus 413MOSfor the proposed set) which was further shown by a 119905-test tobe statistically significant (119875 lt 00001 with 95 confidenceinterval between 104 and 164)What is evenmore interestingis that 7 out of 8 categories (for the proposed video set)found their place on the ldquotop 10 interesting videosrdquo The onlycategory that did not make it was the ldquomoviesrdquo category

When comparison of top 10 MOS rated videos pertinentto different emotions versus LIVEVQDB videos is concernedresults comply with previously discovered correlationsThuswhen a positive emotion such as happiness was induced bya sequence it yielded a high quality score (MOS = 396)

8 The Scientific World Journal

Videos that caused surprise were also ranked as of higherquality (MOS = 381) while negative emotions caused videoto plummet on the quality scale (sadness (MOS = 286) fear(MOS = 300) disgust (MOS = 286) and anger (MOS =289)) Interestingly one of the videos from the LIVEVQDBfound its place among the top 10 videos that the assessorswereangry with

5 Conclusions and Future Research

The most important conclusion we drew from the resultsobtained in the experiment is that the content of videosequence has a strong impact on activating different cog-nitive affective and conative components within assessorsThese components in turn have been shown (both by thisand previous researches) to play an integral role inVQA taskswhether assessors are aware of it or not Thus we providea number of recommendations that ought to be taken intoaccountwhen conducting subjective video quality assessmentexperiments

First video content has to be considered very carefullywhen trying to measure the perceived video quality as failingto do so might introduce a bias which will reflect in finalMOS obtained It is thus vital to choose the correct typeof content for addressing the research question at hand (atany rate choose the type of content that is most likely to beencountered by the viewers in real-world conditions) Havingthis in mind the ldquocontentrdquo variable needs to be monitoredwhenever possible Second since emotions of disgust fearanger and sadness are found to have different impact onconative mental activities than emotions of happiness andsurprise additional attention should be exhibited if conativeactivities are to be observed as dependent variables (especiallywhen it comes to commercially funded research)

Finally we have shown that subjective video qualityassessment might not be such a reliable measure of videoquality if video content is not controlled This has potentialconsequences on both the subjective video quality modelsalready devised that do not take content variable into accountand automatic approaches (ie algorithms) that try to predictMOS based on existing (possibly biased) results

Since the relationship between video content and per-ceived video quality was identified new research directionsopen up for further investigation Some of those includeidentifying and quantifying the strength of this relationshipinvestigating to what degree can a video be impaired withoutassessors noticing it (relative to video content) and whethercontent plays the same role in subjective perception of videoquality when sequences are impaired with different levels andtypes of artifacts Also a study at a larger scale might revealother factors pertinent to different populations (gender-wise culture-wise and demographically wise) that impactsubjective perception of video quality and should thus betaken into account when VQA tasks are in question

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The research presented in this paper was supported by FP7IRSESProjectQoSTREAMmdashVideoQualityDrivenMultime-dia Streaming in Mobile Wireless Networks and by SerbianMinistry of Education Science and Technology projects nosIII43002 and III44003

References

[1] A Lenhart K Purcell A Smith and K Zickuhr ldquoSocial mediaamp mobile internet use among teens and young adultsrdquo TechRep Pew Internet amp American Life Project Washington DCUSA 2010 httpwebpewinternetorg

[2] ComScore ldquoTodayrsquos US Tablet Owner Revealedrdquo 2012 httpwwwcomscorecomInsights

[3] K OrsquoHara A S Mitchell and A Vorbau ldquoConsuming video onmobile devicesrdquo in Proceedings of the 25th SIGCHI Conferenceon Human Factors in Computing Systems pp 857ndash866 ACMMay 2007

[4] CISCO ldquoCiscoVisual Networking Index Forecast andMethod-ology 2011ndash2016rdquo Tech Rep 2012 httpwwwciscocom

[5] K Seshadrinathan and A Bovik ldquoAn information theoreticvideo quality metric based onmotionmodelsrdquo in Proceedings ofthe 3rd International Workshop on Video Processing and QualityMetrics for Consumer Electronics 2007

[6] R J Corsini The Dictionary of Psychology Psychology Press2002

[7] H Hagtvedt and V M Patrick ldquoArt infusion the influenceof visual art on the perception and evaluation of consumerproductsrdquo Journal of Marketing Research vol 45 no 3 pp 379ndash389 2008

[8] A M Giannini F Ferlazzo R Sgalla P Cordellieri F Barallaand S Pepe ldquoThe use of videos in road safety training cognitiveand emotional effectsrdquo Accident Analysis amp Prevention vol 52pp 111ndash117 2013

[9] I Blanchette and A Richards ldquoThe influence of affect on higherlevel cognition a review of research on interpretation judge-ment decision making and reasoningrdquo Cognition amp Emotionvol 24 no 4 pp 561ndash595 2010

[10] N Cranley P Perry and L Murphy ldquoUser perception of adapt-ing video qualityrdquo International Journal of Human ComputerStudies vol 64 no 8 pp 637ndash647 2006

[11] R R Pastrana-Vidal J C Gicquel J L Blin and H CherifildquoPredicting subjective video quality from separated spatial andtemporal assessmentrdquo in Human Vision and Electronic ImagingXI vol 6057 of Proceedings of SPIE pp 276ndash286 January 2006

[12] M Kapa L Happe and F Jakab ldquoPrediction of quality ofuser experience for video streaming over IP networksrdquo CyberJournals pp 22ndash35 2012

[13] SH JumiskoV P Ilvonen andKAVaananen-Vainio-MattilaldquoEffect of TV content in subjective assessment of video qualityon mobile devicesrdquo in IS amp T Electronic ImagingmdashMultimediaonMobile Devices vol 5684 of Proceedings of SPIE pp 243ndash254January 2005

[14] S Winkler ldquoAnalysis of public image and video databases forquality assessmentrdquo IEEE Journal of Selected Topics in SignalProcessing vol 6 no 6 pp 616ndash625 2012

[15] ITU-T Recommendation ldquoSubjective video quality assessmentmethods for multimedia applicationsrdquo Tech Rep 1999

The Scientific World Journal 9

[16] F De Simone ldquoEPFL-PoliMI video quality assessment data-baserdquo 2009 httpvqacomopolimiit

[17] VQEG ldquoReport on the validation of video quality models forhigh definition video contentrdquo Tech Rep June 2010 httpwwwitsbldrdocgovvqegvqeg-homeaspx

[18] Y Wang ldquoPoly NYU video quality databasesrdquo QualityAssess-mentDatabase 2008 httpvisionpolyeduindexhtmlindexphpn=HomePageVideoLab

[19] K Seshadrinathan R Soundararajan A C Bovik and L KCormack ldquoStudy of subjective and objective quality assessmentof videordquo IEEE Transactions on Image Processing vol 19 no 6pp 1427ndash1441 2010

[20] K Seshadrinathan R Soundararajan A C Bovik and L KCormack ldquoA subjective study to evaluate video quality assess-ment algorithmsrdquo in Human Vision and Electronic Imaging XVProceedings of SPIE January 2010

[21] LGoldmann FDe Simone andT Ebrahimi ldquoA comprehensivedatabase and subjective evaluation methodology for quality ofexperience in stereoscopic videordquo in Three-Dimensional ImageProcessing (3DIP) and Applications vol 7526 of Proceedings ofSPIE January 2010

[22] J D McCarthy M A Sasse and D Miras ldquoSharp or smoothComparing the effects of quantization vs frame rate forstreamed videordquo in Proceedings of the Conference on HumanFactors in Computing Systems pp 535ndash542 ACM April 2004

[23] S Sladojevic D Culibrk M Mirkovic D Ruiz Coll and GBorba ldquoLogging real packet reception patterns for end-to-endquality of experience assessment in wireless multimedia trans-missionrdquo in Proceedings of the IEEE International Workshop onEmerging Multimedia Systems and Applications (EMSA rsquo13) SanJose Calif USA July 2013

[24] S R Gulliver T Serif andGGhinea ldquoPervasive and standalonecomputing the perceptual effects of variable multimedia qual-ityrdquo International Journal of Human Computer Studies vol 60no 5-6 pp 640ndash665 2004

[25] S Rihs ldquoThe influence of audio on perceived picture quality andsubjective audio-video delay tolerancerdquo inMOSAIC Handbookpp 183ndash187 1996

[26] I E GordonTheories of Visual Perception Taylor amp Francis 3rdedition 2005

[27] S Hancock and L McNaughton ldquoEffects of fatigue on ability toprocess visual information by experienced orienteersrdquo Percep-tual and Motor Skills vol 62 no 2 pp 491ndash498 1986

[28] C S Weinstein and H J Shaffer ldquoNeurocognitive aspects ofsubstance abuse treatment a psychotherapistrsquos primerrdquo Psy-chotherapy vol 30 no 2 pp 317ndash333 1993

[29] E Balcetis and D Dunning ldquoSee what you want to see moti-vational influences on visual perceptionrdquo Journal of Personalityand Social Psychology vol 91 no 4 pp 612ndash625 2006

[30] A Parducci ldquoResponse bias and contextual effects whenbiasedrdquo Advances in Psychology vol 68 pp 207ndash219 1990

[31] E A Styles Attention Perception and Memory An IntegratedIntroduction Taylor amp Francis Routledge New York NY USA2005

[32] D Culibrk M Mirkovic V Zlokolica M Pokric V Crnojevicand D Kukolj ldquoSalient motion features for video qualityassessmentrdquo IEEE Transactions on Image Processing vol 20 no4 pp 948ndash958 2011

[33] J Redi H Liu R Zunino and I Heynderickx ldquoInteractions ofvisual attention and quality perceptionrdquo in Human Vision andElectronic Imaging XVI Proceedings of SPIE January 2011

[34] RTS Istrazivanje (izvestaji o gledanosti) httpwwwrtsrspagertssrCIPAnews171IstraC5BEivanjehtml

[35] Nielsen Television Audience Measurement Nielsen AudienceMeasurement Serbia httpwwwagbnielsennetwherewearedynPageasplang=localampcountry=Serbiaampid=354

[36] LIVE Video Quality Database httpliveeceutexaseduresearchqualitylive videohtml

[37] E R Hilgard ldquoThe trilogy of mind cognition affection andconationrdquo Journal of the History of the Behavioral Sciences vol16 no 2 pp 107ndash117 1980

[38] P EkmanWV Friesen andP EllsworthEmotion in theHumanFace Guidelines for Research and an Integration of FindingsPergamon Press New York NY USA 1972

[39] J Fridlund P Ekman and H Oster ldquoFacial expressions ofemotionrdquo in Nonverbal Behavior and Communication pp 143ndash223 2nd edition 1987

[40] ITU-T 500-11 ldquoMethodology for the subjective assessment ofthe quality of television picturesrdquo Tech Rep InternationalTelecommunication Union Geneva Switzerland 2002

[41] J Cohen Statistical Power Analysis for the Behavioral ScienciesRoutledge 1988

[42] D S Dunn Statistics and Data Analysis for the BehavioralSciences McGraw-Hill 2001

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 5: Research Article Evaluating the Role of Content in ...downloads.hindawi.com/journals/tswj/2014/625219.pdf · Research Article Evaluating the Role of Content in Subjective Video Quality

The Scientific World Journal 5

Table 2 Variables used in the experiment

Variable name ScaleMOS 1ndash5 Likert scaleSeen before Multiple choice questionInteresting 1ndash7 Likert scaleFamiliar 1ndash7 Likert scaleAnger 1ndash7 Likert scaleFear 1ndash7 Likert scaleSadness 1ndash7 Likert scaleDisgust 1ndash7 Likert scaleHappiness 1ndash7 Likert scaleSurprise 1ndash7 Likert scaleAppealing 1ndash7 Likert scale

Share Multiple choice question(recoded to 3 point ordinal scales)

Watch again Multiple choice question(recoded to 3 point ordinal scales)

variables were designed ldquoappealingrdquo (denoting the subjectivelevel of attractiveness of a particular video) ldquowatch againrdquo(willingness to watch the video again) and ldquoshare videordquo(willingness to share the video with other people)

A measurement of subjectively perceived video qualitywas expressed in terms ofMeanOpinion Score (MOS) whichis calculated as the average score for a video sequence over allassessors It was measured on a 5-point quality scale rangingfrom 1 (bad) to 5 (excellent) Finally to be able to keep track ofcontent that was recognized by the assessors a variable ldquoseenbeforerdquo was introduced Complete set of variables and scalesused to measure them is presented in Table 2

The experimental setup in general, and the laboratory setup in particular, closely resembled the one proposed by the International Telecommunication Union (ITU) in their recommendations for VQA tasks [40]. Sequences were presented to assessors on a 20″ monitor (Samsung S20B300B) operated at its native resolution of 1600 × 900 pixels. Custom in-house software was developed that enables playback of the sequences against a neutral gray background and voting on each video (i.e., filling in a questionnaire). For each assessor, the video sequence playback order was randomized to avoid any ordering effects. After presenting the assessor with a sequence, the software displays a questionnaire where the assessor is asked to rate selected features either on a 1–7 scale (1 being the lowest/worst and 7 being the highest/best score, clearly labeled in the assessor's language beside the scale) or by selecting the appropriate answer (depending on the question). The only exception to this was the MOS scale, which ranged from 1 to 5. Since the questionnaire comprised 13 questions, and it was thus impossible to replicate the exact ITU-proposed setup for single-stimulus experiments without conducting multiple runs, we placed the question regarding perceived video quality on top (i.e., it was the first question), thereby complying with the standard procedure as much as possible. Questions regarding the cognitive, affective, and conative components followed. Upon completing the questionnaire, the software played the next video. Assessors were allowed to see each sequence only once (i.e., replay was not possible, nor were there any additional runs). The test consisted of one session of about 45 minutes, including training. Before the actual test, oral instructions were given to subjects and a training session was run that consisted of three videos, each followed by a questionnaire, during which the subjects were free to ask any questions regarding the test procedure (including clarification of the questionnaire items). Twenty subjects (8 male and 12 female) participated in the test, their ages ranging from 20 to 64. None of them were familiar with video processing or had previously participated in similar tests. All of the subjects reported normal or corrected vision prior to testing.
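A sketch of the kind of per-assessor randomization described above is given below; the sequence identifiers, seeds, and loop bounds are placeholders standing in for the actual experimental assets.

```python
# Sketch of per-assessor playback randomization to counter ordering effects.
# Identifiers and seeds are placeholders, not the actual experimental assets.
import random

sequences = [f"video_{i:02d}" for i in range(1, 67)]  # 66 sequences in the full set

def playback_order(assessor_id):
    """Return an independent, reproducible random ordering for one assessor."""
    rng = random.Random(1000 + assessor_id)
    order = list(sequences)
    rng.shuffle(order)
    return order

for assessor_id in range(1, 21):  # 20 assessors took part
    for sequence in playback_order(assessor_id):
        pass  # play the sequence once, then show the 13-question form
```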

4. Results

After the experiment, a number of different analyses were performed. All of them were conducted with two considerations in mind: (1) we operated with a sample size that could possibly be considered small in comparison to established norms for the behavioral sciences, but that is at the same time considered more than adequate for VQA tasks, and (2) we operated predominantly with an ordinal level of measurement (for the dependent variables).

4.1. Mental Responses Relative to the Video Sets. The data obtained was observed relative to the two video sets: (1) the proposed one and (2) the LIVE VQDB set. Separate dimensions were compared, followed by a comparison of subsequently computed variables for the three mental response categories.

Figure 1 displays average scores over different categories for different variables. The LIVE VQDB set is presented in red (filled-in area), while the videos comprising the proposed set are divided into their respective content categories and presented by lines. Just by looking at the averages, it seems that the videos comprising the LIVE VQDB set scored significantly lower on almost all measured variables.

To test this hypothesis against the null hypothesis (that the two observed populations are the same), we ran the Mann-Whitney U test, the results of which are shown in Tables 3–5. Indeed, the test demonstrated that these sets have a very low chance of originating from the same population, suggesting that the proposed set of videos stimulates mental reactions that are significantly more intense than those stimulated by the LIVE VQDB set.
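As an illustration only, the comparisons reported in Tables 3–5 correspond to the standard two-sided Mann-Whitney U test; a minimal sketch with hypothetical scores (not the study's data) using SciPy:

```python
# Sketch: two-sided Mann-Whitney U test on ordinal ratings of the two sets.
# The score vectors are hypothetical placeholders, not the study's data.
from scipy.stats import mannwhitneyu

proposed = [6, 5, 7, 4, 6, 5, 7, 6, 5, 6]   # e.g. "interesting", 1-7 scale
live_vqdb = [3, 2, 4, 3, 2, 3, 4, 3, 2, 3]

u_statistic, p_value = mannwhitneyu(proposed, live_vqdb, alternative="two-sided")
print(f"U = {u_statistic:.1f}, p = {p_value:.4f}")
```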

As shown in Table 3, the proposed video set achieved higher ranks in both of the cognitive variables observed ("interesting" and "familiar"). A reason that the videos comprising this set were found to be more interesting probably lies in their rich and versatile content, which is why they were chosen as good candidates to arouse the attention of assessors in the first place. They were also marked as more familiar to observers than the ones from the LIVE VQDB set, most likely because none of the subjects had participated in VQA experiments before and thus had low odds of having encountered videos from the LIVE VQDB set previously.


Table 3: Comparison of cognitive dimensions by the two sets.

Dimension | Video set | Mean rank | Sum of ranks | Mann-Whitney U | Asymp. Sig. (2-tailed)
Interesting | Proposed | 706.91 | 791737.50 | 60022.50 | .000
Interesting | LIVE VQDB | 400.61 | 80122.50 | |
Familiar | Proposed | 684.53 | 766671.00 | 85089.00 | .000
Familiar | LIVE VQDB | 525.95 | 105189.00 | |

Table 4: Comparison of affective dimensions by the two sets.

Dimension | Video set | Mean rank | Sum of ranks | Mann-Whitney U | Asymp. Sig. (2-tailed)
Anger | Proposed | 670.14 | 750557.00 | 101203.00 | .001
Anger | LIVE VQDB | 606.52 | 121303.00 | |
Disgust | Proposed | 678.64 | 760071.50 | 91688.50 | .000
Disgust | LIVE VQDB | 558.94 | 111788.50 | |
Fear | Proposed | 673.29 | 754080.50 | 97679.50 | .000
Fear | LIVE VQDB | 588.90 | 117779.50 | |
Happiness | Proposed | 678.66 | 760100.50 | 91659.50 | .000
Happiness | LIVE VQDB | 558.80 | 111759.50 | |
Sadness | Proposed | 677.47 | 758766.50 | 92993.50 | .000
Sadness | LIVE VQDB | 565.47 | 113093.50 | |
Surprise | Proposed | 691.99 | 775029.00 | 76731.00 | .000
Surprise | LIVE VQDB | 484.16 | 96831.00 | |

The proposed set also achieved higher ranks in all of the six basic emotions observed, as shown in Table 4. This finding is probably a result of the original media producers' intent to activate certain affective states, depending on the nature of a video. Conversely, the LIVE VQDB set might have been produced primarily with the intention of capturing content that is known to bear weight in terms of introduced impairments when coding/decoding and video transmission are in focus, thus neglecting possible emotional responses from the assessors.

The proposed video set was also found to activate the conative dimensions of subjects' mental activity to a greater extent than the LIVE VQDB set did. These dimensions are especially important because they cast a new light on both the cognitive and the affective dimensions. If the conative part of one's mind is in an idle state, the chances are that cognitive and affective activation did not happen, even if the subject states the opposite. When somebody is presented with interesting content, he or she is likely to do something after seeing it: either watch it again or share it with relevant others.

The mean rank differences can be observed in Table 5, but it should be noted that one part of the videos from the proposed set was relatively unpleasant to watch, activating emotions like sadness, fear, anger, or disgust. These dimensions of affective response were found to be significantly negatively correlated or uncorrelated with the conative dimensions in this research, so the ranks in the conative dimensions would be even higher if we were to select only pleasant and appealing videos, thus showing an even greater difference from the standard set.

After the individual analysis, the dimensions were summed into the corresponding mental components, forming three variables named cognitive component (comprising 2 dimensions), affective activation (comprising 6 dimensions), and conative activation (comprising 3 dimensions). Again, the Mann-Whitney U test demonstrated significantly higher mental activation induced by the proposed set than by the LIVE VQDB set (Table 6).

Finally, the relationship between the mental activation dimensions and perceived video quality (MOS) was observed. Since all of the dependent variables were of ordinal type, Spearman's rho measure of correlation was used, following common guidelines of interpretation for the behavioral sciences [41, 42]. Moderate correlations, significant at the .01 level (2-tailed), were found between video quality assessment and the following dimensions: interesting (.472), happiness (.415), surprise (.306), appealing (.497), watch again (.393), and share video (.395), while the other dimensions mostly correlated with low coefficients.

Video quality assessment was also correlated with the three calculated mental components, showing moderate correlation with the cognitive component (.412) and conative activation (.496), while low correlation was found with affective activation (.232), again all significant at the .01 level (2-tailed).

Additionally, Spearman's rho correlations were calculated for the three mental components, suggesting a strong correlation between the cognitive and the conative components (.741), while also suggesting moderate correlations of these two components with the affective component (.351 with cognitive and .462 with conative).
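For illustration, the rank correlations reported here follow the usual Spearman procedure; a minimal sketch with hypothetical rating vectors (not the study's data):

```python
# Sketch: Spearman's rho between MOS and an ordinal response dimension.
# The vectors are hypothetical placeholders, not the study's data.
from scipy.stats import spearmanr

mos_scores  = [4, 3, 5, 2, 4, 3, 5, 4, 2, 3]  # 1-5 quality ratings per video
interesting = [6, 4, 7, 2, 5, 4, 6, 5, 3, 4]  # 1-7 "interesting" ratings

rho, p_value = spearmanr(mos_scores, interesting)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
```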

4.2. Video Quality Assessment. Unknown to the assessors, all videos in the full test set were obtained from the same source and with the same quality settings (H.264, 480p) and were hence of roughly the same visual quality.


Table 5: Comparison of conative dimensions by the two sets.

Dimension | Video set | Mean rank | Sum of ranks | Mann-Whitney U | Asymp. Sig. (2-tailed)
Appealing | Proposed | 693.20 | 776385.50 | 75374.50 | .000
Appealing | LIVE VQDB | 477.37 | 95474.50 | |
Watch again | Proposed | 684.10 | 766196.00 | 85564.00 | .000
Watch again | LIVE VQDB | 528.32 | 105664.00 | |
Share video | Proposed | 682.23 | 764107.00 | 87653.00 | .000
Share video | LIVE VQDB | 538.77 | 107753.00 | |

Table 6: Mental components relative to the two video sets.

Component | Video set | Mean rank | Sum of ranks | Mann-Whitney U | Asymp. Sig. (2-tailed)
Cognitive component | Proposed | 703.00 | 787358.00 | 64402.00 | .000
Cognitive component | LIVE VQDB | 422.51 | 84502.00 | |
Affective activation | Proposed | 701.98 | 786217.50 | 65542.50 | .000
Affective activation | LIVE VQDB | 428.21 | 85642.50 | |
Conative activation | Proposed | 695.29 | 778728.50 | 73031.50 | .000
Conative activation | LIVE VQDB | 465.66 | 93131.50 | |

[Figure 1: Average scores for different variables (grouped by video categories). Variables plotted: Interesting, Familiar, Anger, Disgust, Fear, Happiness, Sadness, Surprise, Appeal, MOS; value scale 0.00–6.00. Series: LIVE VQDB, Entertainment, Informative, Movies, Music, News, Other, Series, Sports.]

Even though we could not control for the impairments and objective video quality of the source sequences (apart from those in the LIVE VQDB database), close visual inspection of the obtained material assured us that nonexperts should hardly be able to tell the difference between the quality of the sequences presented to them (i.e., any differences they observed would stem from sources not pertinent to visual impairments). To further confirm our assumption, a subset of 20 sequences was played to three independent video quality experts. Among these videos, 10 were randomly selected from the proposed set, while the other 10 were LIVE VQDB sequences (all of them obtained from YouTube, to account for any codec effects). Playing order was randomized and the experts were asked to vote only on quality (i.e., they did not take the full experiment). Subsequently, an unpaired t-test was run on the MOS obtained, and it did not show statistically significant differences between the samples, thus confirming our initial assumptions.
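The expert check described above amounts to an unpaired (independent-samples) t-test on the two groups of expert MOS values; a minimal sketch with made-up numbers (not the experts' actual votes):

```python
# Sketch: unpaired t-test on expert MOS for the two 10-video subsets.
# The MOS values are made up, not the experts' actual votes.
from scipy.stats import ttest_ind

mos_proposed  = [3.7, 3.3, 4.0, 3.7, 3.3, 4.3, 3.0, 3.7, 4.0, 3.3]
mos_live_vqdb = [3.3, 3.7, 3.0, 4.0, 3.3, 3.7, 3.7, 3.0, 4.3, 3.3]

t_statistic, p_value = ttest_ind(mos_proposed, mos_live_vqdb)
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")  # a large p suggests no significant difference
```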

Since a correlation was established between different variables and MOS, and the LIVE VQDB videos scored consistently lower than the videos from the proposed set, we tried to further investigate and explain these differences.

On average, the videos comprising the proposed set scored a MOS value of 3.47, while the LIVE VQDB videos scored 2.80. This is a large difference on a 5-point scale, but it only gets larger when individual components shown to affect perceived quality are considered. For example, when interestingness is observed, the best-ranked LIVE VQDB video was placed 39th (out of 66 videos in the full set). A comparison between the top 10 ranked "interesting" videos (as deemed by assessors) and the sequences from the LIVE VQDB set reveals a mean difference of 1.34 points (2.80 MOS for the LIVE VQDB set versus 4.13 MOS for the proposed set), which was further shown by a t-test to be statistically significant (P < 0.0001, with a 95% confidence interval between 1.04 and 1.64). What is even more interesting is that 7 out of 8 categories (of the proposed video set) found their place on the "top 10 interesting videos" list. The only category that did not make it was the "movies" category.

Where a comparison of the top 10 MOS-rated videos pertinent to different emotions versus the LIVE VQDB videos is concerned, the results comply with the previously discovered correlations. Thus, when a positive emotion such as happiness was induced by a sequence, it yielded a high quality score (MOS = 3.96).


Videos that caused surprise were also ranked as being of higher quality (MOS = 3.81), while negative emotions caused a video to plummet on the quality scale (sadness: MOS = 2.86, fear: MOS = 3.00, disgust: MOS = 2.86, and anger: MOS = 2.89). Interestingly, one of the videos from the LIVE VQDB set found its place among the top 10 videos that the assessors were angry with.

5. Conclusions and Future Research

The most important conclusion we drew from the results obtained in the experiment is that the content of a video sequence has a strong impact on activating different cognitive, affective, and conative components within assessors. These components, in turn, have been shown (both by this and previous research) to play an integral role in VQA tasks, whether assessors are aware of it or not. Thus, we provide a number of recommendations that ought to be taken into account when conducting subjective video quality assessment experiments.

First, video content has to be considered very carefully when trying to measure perceived video quality, as failing to do so might introduce a bias which will be reflected in the final MOS obtained. It is thus vital to choose the correct type of content for addressing the research question at hand (at any rate, choose the type of content that is most likely to be encountered by the viewers in real-world conditions). Having this in mind, the "content" variable needs to be monitored whenever possible. Second, since the emotions of disgust, fear, anger, and sadness were found to have a different impact on conative mental activities than the emotions of happiness and surprise, additional attention should be exercised if conative activities are to be observed as dependent variables (especially when it comes to commercially funded research).

Finally, we have shown that subjective video quality assessment might not be such a reliable measure of video quality if video content is not controlled. This has potential consequences both for the subjective video quality models already devised that do not take the content variable into account and for automatic approaches (i.e., algorithms) that try to predict MOS based on existing (possibly biased) results.

Since a relationship between video content and perceived video quality was identified, new research directions open up for further investigation. Some of those include identifying and quantifying the strength of this relationship, investigating to what degree a video can be impaired without assessors noticing it (relative to video content), and whether content plays the same role in the subjective perception of video quality when sequences are impaired with different levels and types of artifacts. Also, a study on a larger scale might reveal other factors, pertinent to different populations (in terms of gender, culture, and demographics), that impact the subjective perception of video quality and should thus be taken into account when VQA tasks are in question.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research presented in this paper was supported by the FP7 IRSES Project QoSTREAM (Video Quality Driven Multimedia Streaming in Mobile Wireless Networks) and by Serbian Ministry of Education, Science and Technology projects nos. III43002 and III44003.

References

[1] A. Lenhart, K. Purcell, A. Smith, and K. Zickuhr, "Social media & mobile internet use among teens and young adults," Tech. Rep., Pew Internet & American Life Project, Washington, DC, USA, 2010, http://web.pewinternet.org.

[2] ComScore, "Today's US Tablet Owner Revealed," 2012, http://www.comscore.com/Insights.

[3] K. O'Hara, A. S. Mitchell, and A. Vorbau, "Consuming video on mobile devices," in Proceedings of the 25th SIGCHI Conference on Human Factors in Computing Systems, pp. 857–866, ACM, May 2007.

[4] CISCO, "Cisco Visual Networking Index: Forecast and Methodology, 2011–2016," Tech. Rep., 2012, http://www.cisco.com.

[5] K. Seshadrinathan and A. Bovik, "An information theoretic video quality metric based on motion models," in Proceedings of the 3rd International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2007.

[6] R. J. Corsini, The Dictionary of Psychology, Psychology Press, 2002.

[7] H. Hagtvedt and V. M. Patrick, "Art infusion: the influence of visual art on the perception and evaluation of consumer products," Journal of Marketing Research, vol. 45, no. 3, pp. 379–389, 2008.

[8] A. M. Giannini, F. Ferlazzo, R. Sgalla, P. Cordellieri, F. Baralla, and S. Pepe, "The use of videos in road safety training: cognitive and emotional effects," Accident Analysis & Prevention, vol. 52, pp. 111–117, 2013.

[9] I. Blanchette and A. Richards, "The influence of affect on higher level cognition: a review of research on interpretation, judgement, decision making and reasoning," Cognition & Emotion, vol. 24, no. 4, pp. 561–595, 2010.

[10] N. Cranley, P. Perry, and L. Murphy, "User perception of adapting video quality," International Journal of Human Computer Studies, vol. 64, no. 8, pp. 637–647, 2006.

[11] R. R. Pastrana-Vidal, J. C. Gicquel, J. L. Blin, and H. Cherifi, "Predicting subjective video quality from separated spatial and temporal assessment," in Human Vision and Electronic Imaging XI, vol. 6057 of Proceedings of SPIE, pp. 276–286, January 2006.

[12] M. Kapa, L. Happe, and F. Jakab, "Prediction of quality of user experience for video streaming over IP networks," Cyber Journals, pp. 22–35, 2012.

[13] S. H. Jumisko, V. P. Ilvonen, and K. A. Vaananen-Vainio-Mattila, "Effect of TV content in subjective assessment of video quality on mobile devices," in IS & T Electronic Imaging: Multimedia on Mobile Devices, vol. 5684 of Proceedings of SPIE, pp. 243–254, January 2005.

[14] S. Winkler, "Analysis of public image and video databases for quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 616–625, 2012.

[15] ITU-T Recommendation, "Subjective video quality assessment methods for multimedia applications," Tech. Rep., 1999.


[16] F. De Simone, "EPFL-PoliMI video quality assessment database," 2009, http://vqa.como.polimi.it.

[17] VQEG, "Report on the validation of video quality models for high definition video content," Tech. Rep., June 2010, http://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx.

[18] Y. Wang, "Poly NYU video quality databases," Quality Assessment Database, 2008, http://vision.poly.edu/index.html/index.php?n=HomePage.VideoLab.

[19] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "Study of subjective and objective quality assessment of video," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1427–1441, 2010.

[20] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "A subjective study to evaluate video quality assessment algorithms," in Human Vision and Electronic Imaging XV, Proceedings of SPIE, January 2010.

[21] L. Goldmann, F. De Simone, and T. Ebrahimi, "A comprehensive database and subjective evaluation methodology for quality of experience in stereoscopic video," in Three-Dimensional Image Processing (3DIP) and Applications, vol. 7526 of Proceedings of SPIE, January 2010.

[22] J. D. McCarthy, M. A. Sasse, and D. Miras, "Sharp or smooth? Comparing the effects of quantization vs. frame rate for streamed video," in Proceedings of the Conference on Human Factors in Computing Systems, pp. 535–542, ACM, April 2004.

[23] S. Sladojevic, D. Culibrk, M. Mirkovic, D. Ruiz Coll, and G. Borba, "Logging real packet reception patterns for end-to-end quality of experience assessment in wireless multimedia transmission," in Proceedings of the IEEE International Workshop on Emerging Multimedia Systems and Applications (EMSA '13), San Jose, Calif, USA, July 2013.

[24] S. R. Gulliver, T. Serif, and G. Ghinea, "Pervasive and standalone computing: the perceptual effects of variable multimedia quality," International Journal of Human Computer Studies, vol. 60, no. 5-6, pp. 640–665, 2004.

[25] S. Rihs, "The influence of audio on perceived picture quality and subjective audio-video delay tolerance," in MOSAIC Handbook, pp. 183–187, 1996.

[26] I. E. Gordon, Theories of Visual Perception, Taylor & Francis, 3rd edition, 2005.

[27] S. Hancock and L. McNaughton, "Effects of fatigue on ability to process visual information by experienced orienteers," Perceptual and Motor Skills, vol. 62, no. 2, pp. 491–498, 1986.

[28] C. S. Weinstein and H. J. Shaffer, "Neurocognitive aspects of substance abuse treatment: a psychotherapist's primer," Psychotherapy, vol. 30, no. 2, pp. 317–333, 1993.

[29] E. Balcetis and D. Dunning, "See what you want to see: motivational influences on visual perception," Journal of Personality and Social Psychology, vol. 91, no. 4, pp. 612–625, 2006.

[30] A. Parducci, "Response bias and contextual effects: when biased," Advances in Psychology, vol. 68, pp. 207–219, 1990.

[31] E. A. Styles, Attention, Perception and Memory: An Integrated Introduction, Taylor & Francis Routledge, New York, NY, USA, 2005.

[32] D. Culibrk, M. Mirkovic, V. Zlokolica, M. Pokric, V. Crnojevic, and D. Kukolj, "Salient motion features for video quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 948–958, 2011.

[33] J. Redi, H. Liu, R. Zunino, and I. Heynderickx, "Interactions of visual attention and quality perception," in Human Vision and Electronic Imaging XVI, Proceedings of SPIE, January 2011.

[34] RTS, Istrazivanje (izvestaji o gledanosti), http://www.rts.rs/page/rts/sr/CIPA/news/171/Istra%C5%BEivanje.html.

[35] Nielsen Television Audience Measurement, Nielsen Audience Measurement Serbia, http://www.agbnielsen.net/whereweare/dynPage.asp?lang=local&country=Serbia&id=354.

[36] LIVE Video Quality Database, http://live.ece.utexas.edu/research/quality/live_video.html.

[37] E. R. Hilgard, "The trilogy of mind: cognition, affection, and conation," Journal of the History of the Behavioral Sciences, vol. 16, no. 2, pp. 107–117, 1980.

[38] P. Ekman, W. V. Friesen, and P. Ellsworth, Emotion in the Human Face: Guidelines for Research and an Integration of Findings, Pergamon Press, New York, NY, USA, 1972.

[39] J. Fridlund, P. Ekman, and H. Oster, "Facial expressions of emotion," in Nonverbal Behavior and Communication, pp. 143–223, 2nd edition, 1987.

[40] ITU-T 500-11, "Methodology for the subjective assessment of the quality of television pictures," Tech. Rep., International Telecommunication Union, Geneva, Switzerland, 2002.

[41] J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Routledge, 1988.

[42] D. S. Dunn, Statistics and Data Analysis for the Behavioral Sciences, McGraw-Hill, 2001.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 6: Research Article Evaluating the Role of Content in ...downloads.hindawi.com/journals/tswj/2014/625219.pdf · Research Article Evaluating the Role of Content in Subjective Video Quality

6 The Scientific World Journal

Table 3 Comparison of cognitive dimensions by the two sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Interesting Proposed 70691 79173750 6002250 000LIVEVQDB 40061 8012250

Familiar Proposed 68453 76667100 8508900 000LIVEVQDB 52595 10518900

Table 4 Comparison of affective dimensions by the two sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Anger Proposed 67014 75055700 10120300 001LIVEVQDB 60652 12130300

Disgust Proposed 67864 76007150 9168850 000LIVEVQDB 55894 11178850

Fear Proposed 67329 75408050 9767950 000LIVEVQDB 58890 11777950

Happiness Proposed 67866 76010050 9165950 000LIVEVQDB 55880 11175950

Sadness Proposed 67747 75876650 9299350 000LIVEVQDB 56547 11309350

Surprise Proposed 69199 77502900 7673100 000LIVEVQDB 48416 9683100

The proposed set also achieved higher ranks in all ofthe six basic emotions observed as shown in Table 4 Thisfinding is probably a result of original media producerrsquosintent of activating certain affective states depending on thenature of a video Conversely the LIVEVQDB set might havebeen produced with intention to primarily capture differentcontent that is known to bear weight in terms of introducedimpairments when codingdecoding and video transmissionis in focus thus neglecting possible emotional response fromthe assessors

Theproposed video setwas also found to activate conativedimensions of subjectsrsquo mental activity to a greater extentthan the LIVEVQDB set didThese dimensions are especiallyimportant because they cast a new light on both the cognitiveand the affective dimensions If conative part of onersquos mind isin idle state the chances are that the cognitive and affectiveactivation did not happen even if the subject states theopposite When somebody is presented with interestingcontent he or she is likely to do something after seeing itmdasheither watch it again or share it with relevant others

The mean ranks difference can be observed in Table 5but it should be noted that one part of the videos from theproposed set was relatively unpleasant to watch activatingemotions like sadness fear anger or disgust These dimen-sions of affective responses were found to be significantlynegatively correlated or uncorrelated to the conative dimen-sions in this research so the ranks in the conative dimensionswould be even higher if we were to select only pleasant andappealing videos thus showing even greater difference fromthe standard set

After individual analysis the dimensions were summedinto the corresponding mental components forming three

variables named cognitive component (comprising 2 dimen-sions) affective activation (comprising 6 dimensions) andconative activation (comprising 3 dimensions) Again theMann-Whitney 119880 test has demonstrated significantly highermental activation induced by the proposed set than themental activation induced by the LIVEVQDB set (Table 6)

Finally relationship between the mental activationdimensions andperceived video quality (MOS)was observedSince all of the dependent variables were of ordinal typeSpearmanrsquos Rho measure of correlation was used follow-ing common guidelines of interpretation for behavioralsciences [41 42] Moderate correlations significant at the 01level (2-tailed) were found between video quality assessmentand the following dimensions interesting (472) happiness(415) surprise (306) appealing (497) watch again (393)and share video (395) while other dimensions mostlycorrelated with low coefficients

Also video quality assessment was correlated with thethree calculated mental components gaining moderate cor-relation with cognitive component (412) and conative acti-vation (496) while low correlation was found with affectiveactivation (232) again all significant at the 01 level (2-tailed)

Additionally Spearmanrsquos rho correlations were calculatedfor the three mental components suggesting a strong corre-lation between the cognitive and the conative components(741) while also suggesting a moderate correlation betweenthese two components in relation to the affective component(351 with cognitive) and (462 with conative)

42 Video Quality Assessment Unknown to assessors allvideos in the full test set were obtained from the same sourceand with the same quality settings (H264 480p) and were

The Scientific World Journal 7

Table 5 Comparison of conative dimensions by the two sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Appealing Proposed 69320 77638550 7537450 000LIVEVQDB 47737 9547450

Watch again Proposed 68410 76619600 8556400 000LIVEVQDB 52832 10566400

Share video Proposed 68223 76410700 8765300 000LIVEVQDB 53877 10775300

Table 6 Mental components relative to the two video sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Cognitive component Proposed 70300 78735800 6440200 000LIVEVQDB 42251 8450200

Affective activation Proposed 70198 78621750 6554250 000LIVEVQDB 42821 8564250

Conative activation Proposed 69529 77872850 7303150 000LIVEVQDB 46566 9313150

000

100

200

300

400

500

600Interesting

Familiar

Anger

Disgust

Fear

Happiness

SadnessSurprise

Appeal

MOS

LIVEVQDBEntertainmentInformativeMoviesMusic

NewsOtherSeriesSports

Figure 1 Average scores for different variables (grouped by videocategories)

hence of roughly the same visual quality Even though wecould not control for the impairments and objective videoquality of the source sequences (apart from the LIVEVQDBdatabase) close visual inspection of the obtained materialassured us that nonexperts should hardly be able to tell thedifference between the quality of sequences presented to them(ie any differences they observed would stem from sources

not pertinent to visual impairments) To further confirm ourassumption a subset of 20 sequences was played to threeindependent video quality experts Among these videos 10were randomly selected from the proposed set while theother 10 were LIVEVQDB sequences (all of them obtainedfrom YouTube to account for any codec effects) Playingorder was randomized and the experts were asked to voteonly on quality (ie they did not take the full experiment)of the videos Subsequently unpaired 119905-test was run onMOS obtained and it did not show statistically significantdifferences between the samples thus confirming our initialassumptions

Since a correlation was established between differentvariables andMOS and LIVEVQDB videos scored constantlylower than the videos from the proposed set we have tried tofurther investigate and explain these differences

On average videos comprising the proposed set scoreda MOS value of 347 while LIVEVQDB videos scored 280This is a large difference on a 5-point scale but it only getslarger when individual components shown to affect perceivedquality are considered For example when interestingness isobserved best ranked LIVEVQDBvideowas placed 39th (outof 66 videos in the full set) A comparison between the top 10ranked ldquointerestingrdquo videos (as deemed by assessors) and thesequences from the LIVEVQDB set reveals a mean differenceof 134 points (280MOS for the LIVEVQDB versus 413MOSfor the proposed set) which was further shown by a 119905-test tobe statistically significant (119875 lt 00001 with 95 confidenceinterval between 104 and 164)What is evenmore interestingis that 7 out of 8 categories (for the proposed video set)found their place on the ldquotop 10 interesting videosrdquo The onlycategory that did not make it was the ldquomoviesrdquo category

When comparison of top 10 MOS rated videos pertinentto different emotions versus LIVEVQDB videos is concernedresults comply with previously discovered correlationsThuswhen a positive emotion such as happiness was induced bya sequence it yielded a high quality score (MOS = 396)

8 The Scientific World Journal

Videos that caused surprise were also ranked as of higherquality (MOS = 381) while negative emotions caused videoto plummet on the quality scale (sadness (MOS = 286) fear(MOS = 300) disgust (MOS = 286) and anger (MOS =289)) Interestingly one of the videos from the LIVEVQDBfound its place among the top 10 videos that the assessorswereangry with

5 Conclusions and Future Research

The most important conclusion we drew from the resultsobtained in the experiment is that the content of videosequence has a strong impact on activating different cog-nitive affective and conative components within assessorsThese components in turn have been shown (both by thisand previous researches) to play an integral role inVQA taskswhether assessors are aware of it or not Thus we providea number of recommendations that ought to be taken intoaccountwhen conducting subjective video quality assessmentexperiments

First video content has to be considered very carefullywhen trying to measure the perceived video quality as failingto do so might introduce a bias which will reflect in finalMOS obtained It is thus vital to choose the correct typeof content for addressing the research question at hand (atany rate choose the type of content that is most likely to beencountered by the viewers in real-world conditions) Havingthis in mind the ldquocontentrdquo variable needs to be monitoredwhenever possible Second since emotions of disgust fearanger and sadness are found to have different impact onconative mental activities than emotions of happiness andsurprise additional attention should be exhibited if conativeactivities are to be observed as dependent variables (especiallywhen it comes to commercially funded research)

Finally we have shown that subjective video qualityassessment might not be such a reliable measure of videoquality if video content is not controlled This has potentialconsequences on both the subjective video quality modelsalready devised that do not take content variable into accountand automatic approaches (ie algorithms) that try to predictMOS based on existing (possibly biased) results

Since the relationship between video content and per-ceived video quality was identified new research directionsopen up for further investigation Some of those includeidentifying and quantifying the strength of this relationshipinvestigating to what degree can a video be impaired withoutassessors noticing it (relative to video content) and whethercontent plays the same role in subjective perception of videoquality when sequences are impaired with different levels andtypes of artifacts Also a study at a larger scale might revealother factors pertinent to different populations (gender-wise culture-wise and demographically wise) that impactsubjective perception of video quality and should thus betaken into account when VQA tasks are in question

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The research presented in this paper was supported by FP7IRSESProjectQoSTREAMmdashVideoQualityDrivenMultime-dia Streaming in Mobile Wireless Networks and by SerbianMinistry of Education Science and Technology projects nosIII43002 and III44003

References

[1] A Lenhart K Purcell A Smith and K Zickuhr ldquoSocial mediaamp mobile internet use among teens and young adultsrdquo TechRep Pew Internet amp American Life Project Washington DCUSA 2010 httpwebpewinternetorg

[2] ComScore ldquoTodayrsquos US Tablet Owner Revealedrdquo 2012 httpwwwcomscorecomInsights

[3] K OrsquoHara A S Mitchell and A Vorbau ldquoConsuming video onmobile devicesrdquo in Proceedings of the 25th SIGCHI Conferenceon Human Factors in Computing Systems pp 857ndash866 ACMMay 2007

[4] CISCO ldquoCiscoVisual Networking Index Forecast andMethod-ology 2011ndash2016rdquo Tech Rep 2012 httpwwwciscocom

[5] K Seshadrinathan and A Bovik ldquoAn information theoreticvideo quality metric based onmotionmodelsrdquo in Proceedings ofthe 3rd International Workshop on Video Processing and QualityMetrics for Consumer Electronics 2007

[6] R J Corsini The Dictionary of Psychology Psychology Press2002

[7] H Hagtvedt and V M Patrick ldquoArt infusion the influenceof visual art on the perception and evaluation of consumerproductsrdquo Journal of Marketing Research vol 45 no 3 pp 379ndash389 2008

[8] A M Giannini F Ferlazzo R Sgalla P Cordellieri F Barallaand S Pepe ldquoThe use of videos in road safety training cognitiveand emotional effectsrdquo Accident Analysis amp Prevention vol 52pp 111ndash117 2013

[9] I Blanchette and A Richards ldquoThe influence of affect on higherlevel cognition a review of research on interpretation judge-ment decision making and reasoningrdquo Cognition amp Emotionvol 24 no 4 pp 561ndash595 2010

[10] N Cranley P Perry and L Murphy ldquoUser perception of adapt-ing video qualityrdquo International Journal of Human ComputerStudies vol 64 no 8 pp 637ndash647 2006

[11] R R Pastrana-Vidal J C Gicquel J L Blin and H CherifildquoPredicting subjective video quality from separated spatial andtemporal assessmentrdquo in Human Vision and Electronic ImagingXI vol 6057 of Proceedings of SPIE pp 276ndash286 January 2006

[12] M Kapa L Happe and F Jakab ldquoPrediction of quality ofuser experience for video streaming over IP networksrdquo CyberJournals pp 22ndash35 2012

[13] SH JumiskoV P Ilvonen andKAVaananen-Vainio-MattilaldquoEffect of TV content in subjective assessment of video qualityon mobile devicesrdquo in IS amp T Electronic ImagingmdashMultimediaonMobile Devices vol 5684 of Proceedings of SPIE pp 243ndash254January 2005

[14] S Winkler ldquoAnalysis of public image and video databases forquality assessmentrdquo IEEE Journal of Selected Topics in SignalProcessing vol 6 no 6 pp 616ndash625 2012

[15] ITU-T Recommendation ldquoSubjective video quality assessmentmethods for multimedia applicationsrdquo Tech Rep 1999

The Scientific World Journal 9

[16] F De Simone ldquoEPFL-PoliMI video quality assessment data-baserdquo 2009 httpvqacomopolimiit

[17] VQEG ldquoReport on the validation of video quality models forhigh definition video contentrdquo Tech Rep June 2010 httpwwwitsbldrdocgovvqegvqeg-homeaspx

[18] Y Wang ldquoPoly NYU video quality databasesrdquo QualityAssess-mentDatabase 2008 httpvisionpolyeduindexhtmlindexphpn=HomePageVideoLab

[19] K Seshadrinathan R Soundararajan A C Bovik and L KCormack ldquoStudy of subjective and objective quality assessmentof videordquo IEEE Transactions on Image Processing vol 19 no 6pp 1427ndash1441 2010

[20] K Seshadrinathan R Soundararajan A C Bovik and L KCormack ldquoA subjective study to evaluate video quality assess-ment algorithmsrdquo in Human Vision and Electronic Imaging XVProceedings of SPIE January 2010

[21] LGoldmann FDe Simone andT Ebrahimi ldquoA comprehensivedatabase and subjective evaluation methodology for quality ofexperience in stereoscopic videordquo in Three-Dimensional ImageProcessing (3DIP) and Applications vol 7526 of Proceedings ofSPIE January 2010

[22] J D McCarthy M A Sasse and D Miras ldquoSharp or smoothComparing the effects of quantization vs frame rate forstreamed videordquo in Proceedings of the Conference on HumanFactors in Computing Systems pp 535ndash542 ACM April 2004

[23] S Sladojevic D Culibrk M Mirkovic D Ruiz Coll and GBorba ldquoLogging real packet reception patterns for end-to-endquality of experience assessment in wireless multimedia trans-missionrdquo in Proceedings of the IEEE International Workshop onEmerging Multimedia Systems and Applications (EMSA rsquo13) SanJose Calif USA July 2013

[24] S R Gulliver T Serif andGGhinea ldquoPervasive and standalonecomputing the perceptual effects of variable multimedia qual-ityrdquo International Journal of Human Computer Studies vol 60no 5-6 pp 640ndash665 2004

[25] S Rihs ldquoThe influence of audio on perceived picture quality andsubjective audio-video delay tolerancerdquo inMOSAIC Handbookpp 183ndash187 1996

[26] I E GordonTheories of Visual Perception Taylor amp Francis 3rdedition 2005

[27] S Hancock and L McNaughton ldquoEffects of fatigue on ability toprocess visual information by experienced orienteersrdquo Percep-tual and Motor Skills vol 62 no 2 pp 491ndash498 1986

[28] C S Weinstein and H J Shaffer ldquoNeurocognitive aspects ofsubstance abuse treatment a psychotherapistrsquos primerrdquo Psy-chotherapy vol 30 no 2 pp 317ndash333 1993

[29] E Balcetis and D Dunning ldquoSee what you want to see moti-vational influences on visual perceptionrdquo Journal of Personalityand Social Psychology vol 91 no 4 pp 612ndash625 2006

[30] A Parducci ldquoResponse bias and contextual effects whenbiasedrdquo Advances in Psychology vol 68 pp 207ndash219 1990

[31] E A Styles Attention Perception and Memory An IntegratedIntroduction Taylor amp Francis Routledge New York NY USA2005

[32] D Culibrk M Mirkovic V Zlokolica M Pokric V Crnojevicand D Kukolj ldquoSalient motion features for video qualityassessmentrdquo IEEE Transactions on Image Processing vol 20 no4 pp 948ndash958 2011

[33] J Redi H Liu R Zunino and I Heynderickx ldquoInteractions ofvisual attention and quality perceptionrdquo in Human Vision andElectronic Imaging XVI Proceedings of SPIE January 2011

[34] RTS Istrazivanje (izvestaji o gledanosti) httpwwwrtsrspagertssrCIPAnews171IstraC5BEivanjehtml

[35] Nielsen Television Audience Measurement Nielsen AudienceMeasurement Serbia httpwwwagbnielsennetwherewearedynPageasplang=localampcountry=Serbiaampid=354

[36] LIVE Video Quality Database httpliveeceutexaseduresearchqualitylive videohtml

[37] E R Hilgard ldquoThe trilogy of mind cognition affection andconationrdquo Journal of the History of the Behavioral Sciences vol16 no 2 pp 107ndash117 1980

[38] P EkmanWV Friesen andP EllsworthEmotion in theHumanFace Guidelines for Research and an Integration of FindingsPergamon Press New York NY USA 1972

[39] J Fridlund P Ekman and H Oster ldquoFacial expressions ofemotionrdquo in Nonverbal Behavior and Communication pp 143ndash223 2nd edition 1987

[40] ITU-T 500-11 ldquoMethodology for the subjective assessment ofthe quality of television picturesrdquo Tech Rep InternationalTelecommunication Union Geneva Switzerland 2002

[41] J Cohen Statistical Power Analysis for the Behavioral ScienciesRoutledge 1988

[42] D S Dunn Statistics and Data Analysis for the BehavioralSciences McGraw-Hill 2001

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 7: Research Article Evaluating the Role of Content in ...downloads.hindawi.com/journals/tswj/2014/625219.pdf · Research Article Evaluating the Role of Content in Subjective Video Quality

The Scientific World Journal 7

Table 5 Comparison of conative dimensions by the two sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Appealing Proposed 69320 77638550 7537450 000LIVEVQDB 47737 9547450

Watch again Proposed 68410 76619600 8556400 000LIVEVQDB 52832 10566400

Share video Proposed 68223 76410700 8765300 000LIVEVQDB 53877 10775300

Table 6 Mental components relative to the two video sets

Video set Mean rank Sum of ranks Mann-Whitney 119880 Asymp Sig (2-tailed)

Cognitive component Proposed 70300 78735800 6440200 000LIVEVQDB 42251 8450200

Affective activation Proposed 70198 78621750 6554250 000LIVEVQDB 42821 8564250

Conative activation Proposed 69529 77872850 7303150 000LIVEVQDB 46566 9313150

000

100

200

300

400

500

600Interesting

Familiar

Anger

Disgust

Fear

Happiness

SadnessSurprise

Appeal

MOS

LIVEVQDBEntertainmentInformativeMoviesMusic

NewsOtherSeriesSports

Figure 1 Average scores for different variables (grouped by videocategories)

hence of roughly the same visual quality Even though wecould not control for the impairments and objective videoquality of the source sequences (apart from the LIVEVQDBdatabase) close visual inspection of the obtained materialassured us that nonexperts should hardly be able to tell thedifference between the quality of sequences presented to them(ie any differences they observed would stem from sources

not pertinent to visual impairments) To further confirm ourassumption a subset of 20 sequences was played to threeindependent video quality experts Among these videos 10were randomly selected from the proposed set while theother 10 were LIVEVQDB sequences (all of them obtainedfrom YouTube to account for any codec effects) Playingorder was randomized and the experts were asked to voteonly on quality (ie they did not take the full experiment)of the videos Subsequently unpaired 119905-test was run onMOS obtained and it did not show statistically significantdifferences between the samples thus confirming our initialassumptions

Since a correlation was established between differentvariables andMOS and LIVEVQDB videos scored constantlylower than the videos from the proposed set we have tried tofurther investigate and explain these differences

On average videos comprising the proposed set scoreda MOS value of 347 while LIVEVQDB videos scored 280This is a large difference on a 5-point scale but it only getslarger when individual components shown to affect perceivedquality are considered For example when interestingness isobserved best ranked LIVEVQDBvideowas placed 39th (outof 66 videos in the full set) A comparison between the top 10ranked ldquointerestingrdquo videos (as deemed by assessors) and thesequences from the LIVEVQDB set reveals a mean differenceof 134 points (280MOS for the LIVEVQDB versus 413MOSfor the proposed set) which was further shown by a 119905-test tobe statistically significant (119875 lt 00001 with 95 confidenceinterval between 104 and 164)What is evenmore interestingis that 7 out of 8 categories (for the proposed video set)found their place on the ldquotop 10 interesting videosrdquo The onlycategory that did not make it was the ldquomoviesrdquo category

When comparison of top 10 MOS rated videos pertinentto different emotions versus LIVEVQDB videos is concernedresults comply with previously discovered correlationsThuswhen a positive emotion such as happiness was induced bya sequence it yielded a high quality score (MOS = 396)

8 The Scientific World Journal

Videos that caused surprise were also ranked as of higherquality (MOS = 381) while negative emotions caused videoto plummet on the quality scale (sadness (MOS = 286) fear(MOS = 300) disgust (MOS = 286) and anger (MOS =289)) Interestingly one of the videos from the LIVEVQDBfound its place among the top 10 videos that the assessorswereangry with

5 Conclusions and Future Research

The most important conclusion we drew from the resultsobtained in the experiment is that the content of videosequence has a strong impact on activating different cog-nitive affective and conative components within assessorsThese components in turn have been shown (both by thisand previous researches) to play an integral role inVQA taskswhether assessors are aware of it or not Thus we providea number of recommendations that ought to be taken intoaccountwhen conducting subjective video quality assessmentexperiments

First video content has to be considered very carefullywhen trying to measure the perceived video quality as failingto do so might introduce a bias which will reflect in finalMOS obtained It is thus vital to choose the correct typeof content for addressing the research question at hand (atany rate choose the type of content that is most likely to beencountered by the viewers in real-world conditions) Havingthis in mind the ldquocontentrdquo variable needs to be monitoredwhenever possible Second since emotions of disgust fearanger and sadness are found to have different impact onconative mental activities than emotions of happiness andsurprise additional attention should be exhibited if conativeactivities are to be observed as dependent variables (especiallywhen it comes to commercially funded research)

Finally we have shown that subjective video qualityassessment might not be such a reliable measure of videoquality if video content is not controlled This has potentialconsequences on both the subjective video quality modelsalready devised that do not take content variable into accountand automatic approaches (ie algorithms) that try to predictMOS based on existing (possibly biased) results

Since the relationship between video content and per-ceived video quality was identified new research directionsopen up for further investigation Some of those includeidentifying and quantifying the strength of this relationshipinvestigating to what degree can a video be impaired withoutassessors noticing it (relative to video content) and whethercontent plays the same role in subjective perception of videoquality when sequences are impaired with different levels andtypes of artifacts Also a study at a larger scale might revealother factors pertinent to different populations (gender-wise culture-wise and demographically wise) that impactsubjective perception of video quality and should thus betaken into account when VQA tasks are in question

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The research presented in this paper was supported by FP7IRSESProjectQoSTREAMmdashVideoQualityDrivenMultime-dia Streaming in Mobile Wireless Networks and by SerbianMinistry of Education Science and Technology projects nosIII43002 and III44003

References

[1] A Lenhart K Purcell A Smith and K Zickuhr ldquoSocial mediaamp mobile internet use among teens and young adultsrdquo TechRep Pew Internet amp American Life Project Washington DCUSA 2010 httpwebpewinternetorg

[2] ComScore ldquoTodayrsquos US Tablet Owner Revealedrdquo 2012 httpwwwcomscorecomInsights

[3] K OrsquoHara A S Mitchell and A Vorbau ldquoConsuming video onmobile devicesrdquo in Proceedings of the 25th SIGCHI Conferenceon Human Factors in Computing Systems pp 857ndash866 ACMMay 2007

[4] CISCO ldquoCiscoVisual Networking Index Forecast andMethod-ology 2011ndash2016rdquo Tech Rep 2012 httpwwwciscocom

[5] K Seshadrinathan and A Bovik ldquoAn information theoreticvideo quality metric based onmotionmodelsrdquo in Proceedings ofthe 3rd International Workshop on Video Processing and QualityMetrics for Consumer Electronics 2007

[6] R J Corsini The Dictionary of Psychology Psychology Press2002

[7] H Hagtvedt and V M Patrick ldquoArt infusion the influenceof visual art on the perception and evaluation of consumerproductsrdquo Journal of Marketing Research vol 45 no 3 pp 379ndash389 2008

[8] A M Giannini F Ferlazzo R Sgalla P Cordellieri F Barallaand S Pepe ldquoThe use of videos in road safety training cognitiveand emotional effectsrdquo Accident Analysis amp Prevention vol 52pp 111ndash117 2013

[9] I Blanchette and A Richards ldquoThe influence of affect on higherlevel cognition a review of research on interpretation judge-ment decision making and reasoningrdquo Cognition amp Emotionvol 24 no 4 pp 561ndash595 2010

[10] N Cranley P Perry and L Murphy ldquoUser perception of adapt-ing video qualityrdquo International Journal of Human ComputerStudies vol 64 no 8 pp 637ndash647 2006

[11] R R Pastrana-Vidal J C Gicquel J L Blin and H CherifildquoPredicting subjective video quality from separated spatial andtemporal assessmentrdquo in Human Vision and Electronic ImagingXI vol 6057 of Proceedings of SPIE pp 276ndash286 January 2006

[12] M Kapa L Happe and F Jakab ldquoPrediction of quality ofuser experience for video streaming over IP networksrdquo CyberJournals pp 22ndash35 2012

[13] SH JumiskoV P Ilvonen andKAVaananen-Vainio-MattilaldquoEffect of TV content in subjective assessment of video qualityon mobile devicesrdquo in IS amp T Electronic ImagingmdashMultimediaonMobile Devices vol 5684 of Proceedings of SPIE pp 243ndash254January 2005

[14] S Winkler ldquoAnalysis of public image and video databases forquality assessmentrdquo IEEE Journal of Selected Topics in SignalProcessing vol 6 no 6 pp 616ndash625 2012

[15] ITU-T Recommendation ldquoSubjective video quality assessmentmethods for multimedia applicationsrdquo Tech Rep 1999

The Scientific World Journal 9


Videos that caused surprise were also ranked as being of higher quality (MOS = 3.81), while negative emotions caused videos to plummet on the quality scale (sadness (MOS = 2.86), fear (MOS = 3.00), disgust (MOS = 2.86), and anger (MOS = 2.89)). Interestingly, one of the videos from the LIVE VQDB found its place among the top 10 videos that the assessors were angry with.
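As an aside on how such per-emotion figures are obtained, the minimal Python sketch below averages raw opinion scores grouped by the dominant emotion reported for each sequence. The ratings and the helper name mos_by_emotion are purely illustrative assumptions, not data or code from the study, but the averaging is exactly what a per-category MOS amounts to.

```python
from collections import defaultdict

# Hypothetical ratings: (dominant emotion evoked by the sequence, raw opinion
# score on the usual 1-5 scale). These values are illustrative only.
ratings = [
    ("surprise", 4), ("surprise", 4), ("surprise", 3),
    ("sadness", 3), ("sadness", 3), ("sadness", 2),
    ("fear", 3), ("fear", 3), ("fear", 3),
    ("disgust", 3), ("disgust", 2), ("disgust", 3),
    ("anger", 3), ("anger", 3), ("anger", 2),
]

def mos_by_emotion(scores):
    """Mean Opinion Score per emotion: the arithmetic mean of all raw
    opinion scores collected for sequences in that emotion category."""
    buckets = defaultdict(list)
    for emotion, score in scores:
        buckets[emotion].append(score)
    return {emotion: sum(vals) / len(vals) for emotion, vals in buckets.items()}

for emotion, mos in sorted(mos_by_emotion(ratings).items()):
    print(f"{emotion:>8}: MOS = {mos:.2f}")
```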

5. Conclusions and Future Research

The most important conclusion we drew from the results obtained in the experiment is that the content of a video sequence has a strong impact on activating different cognitive, affective, and conative components within assessors. These components, in turn, have been shown (both by this and previous research) to play an integral role in VQA tasks, whether assessors are aware of it or not. We therefore provide a number of recommendations that ought to be taken into account when conducting subjective video quality assessment experiments.

First, video content has to be considered very carefully when trying to measure perceived video quality, as failing to do so might introduce a bias that will be reflected in the final MOS obtained. It is thus vital to choose the right type of content for the research question at hand (at any rate, to choose the type of content that viewers are most likely to encounter in real-world conditions). With this in mind, the "content" variable needs to be monitored whenever possible. Second, since the emotions of disgust, fear, anger, and sadness were found to have a different impact on conative mental activities than the emotions of happiness and surprise, additional care should be taken if conative activities are to be observed as dependent variables (especially in commercially funded research).

Finally, we have shown that subjective video quality assessment might not be a reliable measure of video quality if video content is not controlled. This has potential consequences both for the subjective video quality models already devised that do not take the content variable into account and for automatic approaches (i.e., algorithms) that try to predict MOS from existing (possibly biased) results.
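To illustrate how such a bias could show up for MOS-predicting algorithms, the sketch below computes the Pearson correlation between an objective metric and MOS separately for two content categories; all metric scores and MOS values are hypothetical, not results from this study. The point is only that a metric can agree well with MOS on content-neutral sequences yet poorly on emotionally charged ones, which an overall correlation would mask.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-sequence data: objective metric score paired with the MOS
# the sequence received, split by the type of content it contains.
data = {
    "neutral":   ([0.62, 0.70, 0.75, 0.81, 0.90], [2.9, 3.2, 3.5, 3.9, 4.3]),
    "emotional": ([0.60, 0.68, 0.74, 0.82, 0.91], [3.4, 2.8, 3.9, 3.0, 3.6]),
}

for content, (metric, mos) in data.items():
    print(f"{content:>9}: Pearson r = {pearson(metric, mos):+.2f}")
```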

Since a relationship between video content and perceived video quality has been identified, new research directions open up for further investigation. These include identifying and quantifying the strength of this relationship, investigating to what degree a video can be impaired without assessors noticing it (relative to video content), and examining whether content plays the same role in the subjective perception of video quality when sequences are impaired with different levels and types of artifacts. A larger-scale study might also reveal other factors pertinent to different populations (in terms of gender, culture, and demographics) that impact the subjective perception of video quality and should thus be taken into account in VQA tasks.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research presented in this paper was supported by the FP7 IRSES project QoSTREAM (Video Quality Driven Multimedia Streaming in Mobile Wireless Networks) and by the Serbian Ministry of Education, Science and Technology, projects nos. III43002 and III44003.

References

[1] A. Lenhart, K. Purcell, A. Smith, and K. Zickuhr, "Social media & mobile internet use among teens and young adults," Tech. Rep., Pew Internet & American Life Project, Washington, DC, USA, 2010, http://web.pewinternet.org.

[2] ComScore, "Today's US Tablet Owner Revealed," 2012, http://www.comscore.com/Insights.

[3] K. O'Hara, A. S. Mitchell, and A. Vorbau, "Consuming video on mobile devices," in Proceedings of the 25th SIGCHI Conference on Human Factors in Computing Systems, pp. 857-866, ACM, May 2007.

[4] CISCO, "Cisco Visual Networking Index: Forecast and Methodology, 2011-2016," Tech. Rep., 2012, http://www.cisco.com.

[5] K. Seshadrinathan and A. Bovik, "An information theoretic video quality metric based on motion models," in Proceedings of the 3rd International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2007.

[6] R. J. Corsini, The Dictionary of Psychology, Psychology Press, 2002.

[7] H. Hagtvedt and V. M. Patrick, "Art infusion: the influence of visual art on the perception and evaluation of consumer products," Journal of Marketing Research, vol. 45, no. 3, pp. 379-389, 2008.

[8] A. M. Giannini, F. Ferlazzo, R. Sgalla, P. Cordellieri, F. Baralla, and S. Pepe, "The use of videos in road safety training: cognitive and emotional effects," Accident Analysis & Prevention, vol. 52, pp. 111-117, 2013.

[9] I. Blanchette and A. Richards, "The influence of affect on higher level cognition: a review of research on interpretation, judgement, decision making and reasoning," Cognition & Emotion, vol. 24, no. 4, pp. 561-595, 2010.

[10] N. Cranley, P. Perry, and L. Murphy, "User perception of adapting video quality," International Journal of Human Computer Studies, vol. 64, no. 8, pp. 637-647, 2006.

[11] R. R. Pastrana-Vidal, J. C. Gicquel, J. L. Blin, and H. Cherifi, "Predicting subjective video quality from separated spatial and temporal assessment," in Human Vision and Electronic Imaging XI, vol. 6057 of Proceedings of SPIE, pp. 276-286, January 2006.

[12] M. Kapa, L. Happe, and F. Jakab, "Prediction of quality of user experience for video streaming over IP networks," Cyber Journals, pp. 22-35, 2012.

[13] S. H. Jumisko, V. P. Ilvonen, and K. A. Vaananen-Vainio-Mattila, "Effect of TV content in subjective assessment of video quality on mobile devices," in IS&T Electronic Imaging: Multimedia on Mobile Devices, vol. 5684 of Proceedings of SPIE, pp. 243-254, January 2005.

[14] S. Winkler, "Analysis of public image and video databases for quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 616-625, 2012.

[15] ITU-T Recommendation, "Subjective video quality assessment methods for multimedia applications," Tech. Rep., 1999.


[16] F. De Simone, "EPFL-PoliMI video quality assessment database," 2009, http://vqa.como.polimi.it.

[17] VQEG, "Report on the validation of video quality models for high definition video content," Tech. Rep., June 2010, http://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx.

[18] Y. Wang, "Poly NYU video quality databases," Quality Assessment Database, 2008, http://vision.poly.edu/index.html/index.php?n=HomePage.VideoLab.

[19] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "Study of subjective and objective quality assessment of video," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1427-1441, 2010.

[20] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "A subjective study to evaluate video quality assessment algorithms," in Human Vision and Electronic Imaging XV, Proceedings of SPIE, January 2010.

[21] L. Goldmann, F. De Simone, and T. Ebrahimi, "A comprehensive database and subjective evaluation methodology for quality of experience in stereoscopic video," in Three-Dimensional Image Processing (3DIP) and Applications, vol. 7526 of Proceedings of SPIE, January 2010.

[22] J. D. McCarthy, M. A. Sasse, and D. Miras, "Sharp or smooth? Comparing the effects of quantization vs. frame rate for streamed video," in Proceedings of the Conference on Human Factors in Computing Systems, pp. 535-542, ACM, April 2004.

[23] S. Sladojevic, D. Culibrk, M. Mirkovic, D. Ruiz Coll, and G. Borba, "Logging real packet reception patterns for end-to-end quality of experience assessment in wireless multimedia transmission," in Proceedings of the IEEE International Workshop on Emerging Multimedia Systems and Applications (EMSA '13), San Jose, Calif, USA, July 2013.

[24] S. R. Gulliver, T. Serif, and G. Ghinea, "Pervasive and standalone computing: the perceptual effects of variable multimedia quality," International Journal of Human Computer Studies, vol. 60, no. 5-6, pp. 640-665, 2004.

[25] S. Rihs, "The influence of audio on perceived picture quality and subjective audio-video delay tolerance," in MOSAIC Handbook, pp. 183-187, 1996.

[26] I. E. Gordon, Theories of Visual Perception, Taylor & Francis, 3rd edition, 2005.

[27] S. Hancock and L. McNaughton, "Effects of fatigue on ability to process visual information by experienced orienteers," Perceptual and Motor Skills, vol. 62, no. 2, pp. 491-498, 1986.

[28] C. S. Weinstein and H. J. Shaffer, "Neurocognitive aspects of substance abuse treatment: a psychotherapist's primer," Psychotherapy, vol. 30, no. 2, pp. 317-333, 1993.

[29] E. Balcetis and D. Dunning, "See what you want to see: motivational influences on visual perception," Journal of Personality and Social Psychology, vol. 91, no. 4, pp. 612-625, 2006.

[30] A. Parducci, "Response bias and contextual effects: when biased," Advances in Psychology, vol. 68, pp. 207-219, 1990.

[31] E. A. Styles, Attention, Perception and Memory: An Integrated Introduction, Taylor & Francis Routledge, New York, NY, USA, 2005.

[32] D. Culibrk, M. Mirkovic, V. Zlokolica, M. Pokric, V. Crnojevic, and D. Kukolj, "Salient motion features for video quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 4, pp. 948-958, 2011.

[33] J. Redi, H. Liu, R. Zunino, and I. Heynderickx, "Interactions of visual attention and quality perception," in Human Vision and Electronic Imaging XVI, Proceedings of SPIE, January 2011.

[34] RTS, Istraživanje (izveštaji o gledanosti) [Research (viewership reports)], http://www.rts.rs/page/rts/sr/CIPA/news/171/Istra%C5%BEivanje.html.

[35] Nielsen Television Audience Measurement, Nielsen Audience Measurement Serbia, http://www.agbnielsen.net/whereweare/dynPage.asp?lang=local&country=Serbia&id=354.

[36] LIVE Video Quality Database, http://live.ece.utexas.edu/research/quality/live_video.html.

[37] E. R. Hilgard, "The trilogy of mind: cognition, affection, and conation," Journal of the History of the Behavioral Sciences, vol. 16, no. 2, pp. 107-117, 1980.

[38] P. Ekman, W. V. Friesen, and P. Ellsworth, Emotion in the Human Face: Guidelines for Research and an Integration of Findings, Pergamon Press, New York, NY, USA, 1972.

[39] J. Fridlund, P. Ekman, and H. Oster, "Facial expressions of emotion," in Nonverbal Behavior and Communication, pp. 143-223, 2nd edition, 1987.

[40] ITU-T 500-11, "Methodology for the subjective assessment of the quality of television pictures," Tech. Rep., International Telecommunication Union, Geneva, Switzerland, 2002.

[41] J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Routledge, 1988.

[42] D. S. Dunn, Statistics and Data Analysis for the Behavioral Sciences, McGraw-Hill, 2001.
