
METHODOLOGICAL APPENDICES

Due to space constraints, we have been unable to offer full discussions of every aspect of our methodological approach in our previously published works. In the interests of transparency, and for the benefit of researchers interested in these finer details, we expand here on a few important aspects of our project. For further insight, please contact Carolyn Michelle or Charles Davis directly.

CONDUCTING THE CULTURAL TRAWLS AND Q ITEM SELECTION:

For the Hobbit Audience project, the statements that respondents were asked to sort were derived from extensive ‘cultural trawls’ (Stenner & Marshall, 1995, p. 626) of the wider discursive terrain or concourse around each film. This wide-ranging trawl aimed to identify the major issues, themes, and concerns being expressed in public discussions of these films, and to capture a range of perspectives on them. Our four cultural trawls focused on print and online news coverage of the production, media and film commentary, early professional and amateur film reviews in the case of the post-viewing surveys, commentary on social media and in key fan websites such as Theonering.net, film blogs, discussion board comments, and comments on Peter Jackson’s production videos and the Hobbit trailers on YouTube.

For the prefiguration survey, the cultural trawl was conducted over three weeks prior to the first film’s release, and focused on print and online news items relating to the production, media and film commentary, the public commentary of fans of the Lord of the Rings films and Tolkien followers on Twitter, Facebook and in key fan forums, film blogs, film discussion board comments, and comments on Peter Jackson’s production videos and The Hobbit trailers on YouTube. For the reception surveys, our focus was on news items covering the AUJ premiere and its reported reception by attendees, early professional and amateur film reviews, early commentary via social media, discussions on key fan websites such as Theonering.net, and many of the same sites trawled for the prefiguration survey. As outlined below, the cultural trawl for the AUJ post-viewing survey was conducted in three major world languages and checked for relevance by speakers of a few other languages also.

An important methodological principle of Q is that the statements considered for inclusion in a Q sample take the form of subjective opinions rather than facts, and all opinions are considered valid regardless of their origins. Thus, the primary consideration in selecting a statement was whether it clearly reflected a particular sentiment expressed in a succinct way, and we aimed to obtain a very broad and inclusive impression of the range of things being said about different aspects of each film within the wider public sphere.

Sampling of the concourse and selection of the Q sample is one area where subjective and cultural biases may be apparent in Q research. We mitigated these potential biases by using research assistants who were not involved in designing the research to undertake the cultural trawl and preliminary sampling of representative statements. In each phase of the project, hundreds of ‘raw’ statements were collected from the above sources, with a conscious effort made to locate comments across a very wide range of themes. For the cultural trawl conducted for our AUJ post-viewing survey, we used the following categories:

Story/narrative structure, director/directing, decision to make three films, editing, aspects of film craft, issues relating to adaptation, inclusion of additional materials, continuity with the LotR films, responses to stereoscopic 3D, CGI/visual effects, HFR 48fps, music/score/songs, narrative transportation, suspension of disbelief, excitement/enthusiasm, emotional affects, characters/casting, character identification, meanings/themes, real world/personal relevance, feelings of nostalgia, social experience of viewing, opposition/dislike, disengagement, disappointment, other.

The selection of relevant statements continued until a degree of redundancy began to emerge among the themes expressed, although it quickly became apparent that certain issues were generating far more discussion and debate than others, which would need to be reflected in the final Q sample. Since it is not practical to include very large numbers of statements in online Q surveys (due to the risks of participant fatigue and the small size of many computer screens, which constrains the size of the grid and the readability of Q items), it was then necessary to whittle these statements down progressively, eliminating repetition and redundancy across the categories and consolidating related ideas and themes. This process reduced the number of potential statements to a shortlist of 160-200, which were then coded for reception mode, content and valence (negative or positive) before further reduction, condensation and elimination was undertaken. Along the way, a few additional statements were created to capture perspectives that were alluded to in several statements but inadequately expressed by any single one. In other cases, revisions were made to original statements to enhance clarity and correct the syntax.

In our AUJ post-viewing survey, we initially sought to construct a structured Q set reflecting a balanced 4 x 8 design, in which eight statements would loosely represent each of the four categories outlined in the composite model. However, since so much of the discourse around the film addressed issues relating to adaptation, the film’s unusual visual aesthetic, and aspects of filmcraft, we realised an unbalanced design would be needed to represent the concourse adequately. In the latter two reception surveys, we adopted a more flexible, unstructured approach, adding one or two categories as needed. In each case, we eventually settled on a final Q sample that in our view best reflected the range of sentiments expressed in the wider concourse. These samples ranged in size from 36 to 42 statements, and while they could not represent every single issue expressed in the concourse from every perspective, we believed they would be adequate to allow our respondents to express their general viewpoint on the films.

ENSURING CROSS-CULTURAL RELEVANCE OF THE Q ITEMS:

For the multilingual post-viewing study of AUJ, a polylingual research assistant surveyed the concourse around the film in three major world languages (English, Spanish, and German) and derived a large sample, with the selected Spanish and German statements translated into English. Once the Q sample had been provisionally determined, research assistants fluent in French, Russian, Spanish and Dutch were asked to read early film reviews, media commentary, Facebook and fan discussion board comments in these languages to see whether similar themes and issues were relevant within other language communities. While not a systematic review, all reported that the same kinds of issues and concerns were emerging within these language communities, though with some slightly different emphases.

By translating the same Q sample into all the languages used for the AUJ post-viewing survey, we believed any major cross-cultural differences in emphasis would become apparent and, more importantly, measurable: using unique Q samples in each cultural context would have made it difficult to compare findings across the surveys. However, we acknowledge the limitations of this approach, including the possibility that some Q statements may have had little resonance in some contexts. When our Dutch, Danish and Flemish research collaborators checked the suitability of the statements for their own contexts, they found just one (referring to nationalistic pride in association with the film’s production) to be contentious, in the sense that it was seen as locally irrelevant and thus likely to present difficulties in the ranking process. As a result, we revised the survey instructions to encourage respondents to rank a statement as ‘neutral’ if they felt it did not apply to them.

A few other issues relating to the multilingual surveys are also worth acknowledging here. Along with the Q sample, the survey instructions and accompanying questionnaire were originally written in English before being translated into each of the other languages used. These translations were undertaken and checked by highly proficient and/or native speakers of each language, and we encountered just a few minor issues relating to words and phrases that were ambiguous or difficult to translate precisely, along with a few regional differences between European and South American Spanish. The potentially deeper significance of such variations became apparent during pilot testing of the online Spanish survey, where one of the testers said that the initial introduction to the survey containing the ethics information was clearly written by a Spanish person from Spain, rather than a South American. While she could understand the information, she noted that Latinos can find European Spanish overly formal and off-putting. In hindsight, we wonder whether more should have been done to address this concern, as we later struggled to recruit large numbers from the South American region. But given our resource constraints, and since most Spanish speakers would still be able to understand the instructions and Q statements, we decided that parallel Spanish versions of the survey were probably unnecessary.

ANALYSIS OF Q SORTS:

In each stage of the project the raw data was ‘cleaned’ prior to analysis by reviewing the survey responses and discarding any that were obviously frivolous, along with those where the respondent did not answer any questions about their socio-demographic background. We also discarded responses that did not include any qualitative comments to help explain the respondents’ ranking of the statements, since in Q terms, these qualitative remarks are seen as important data sources that inform the interpretation of the identified viewpoints and offer independent confirmation of their validity.
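The cleaning rules described above can be sketched as a simple filter. This is purely illustrative: field names such as `flagged_frivolous`, `demographics` and `comments` are hypothetical stand-ins, not the actual survey export's column labels.

```python
# Illustrative sketch of the data-cleaning pass described above.
# Field names are hypothetical; the real survey export differed.
def clean_responses(responses):
    """Keep only responses that are not frivolous, include some
    socio-demographic answers, and include at least one qualitative
    comment explaining the ranking of the statements."""
    kept = []
    for r in responses:
        if r.get("flagged_frivolous"):         # obviously frivolous sort
            continue
        if not r.get("demographics"):          # no socio-demographic answers
            continue
        if not r.get("comments", "").strip():  # no qualitative comments
            continue
        kept.append(r)
    return kept
```

In practice each rule involved a judgement call (what counts as 'obviously frivolous'), so the flag here stands in for a manual review decision rather than an automated test.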

We performed by-person principal components analysis on these Q sorts using SPSS and rotated the components using the Varimax procedure, investigating several possible solutions. With 38 items, a loading of .42 is the threshold for significance at the 1 percent level. We sought a solution that accounted for the largest overall number of representative sorts (i.e. non-cross-loaded sorts with a loading of .42 or greater), and which furthermore included no components characterized by only a small number of representative sorts (Brown, 1980; McKeown & Thomas, 2013). For the prefiguration survey Q sorts, a four-component solution best satisfied these criteria, accounting for 555 of the sorts and 65.2 percent of the variance.
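The .42 figure follows from the conventional rule in Q methodology that the standard error of a zero-order factor loading is approximately 1/√N for an N-item Q sample, multiplied by the 2.58 z-value for significance at the 1 percent level. A quick arithmetic check (our reconstruction, not the authors' code):

```python
import math

def significance_threshold(n_items, z=2.58):
    """Loading needed for significance at p < .01: z times the standard
    error of a loading, which is approximately 1 / sqrt(n_items)."""
    return z / math.sqrt(n_items)

# With the 38-item Q sample used here:
print(round(significance_threshold(38), 2))  # → 0.42
```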

To characterize the four factors (components) and produce model or 'typal' arrays for each factor, 120 of the most representative sorts (i.e. those with the highest positive or negative loadings) from each factor were entered into PCQ, a dedicated commercial software package for the analysis of Q sorts, to produce a solution with the same number of factors as in the whole-population analysis.

Each survey data set contained several hundred cross-loaded sorts (sorts that load significantly on more than one factor); the prefiguration survey, for example, contains 363. We examined the common variance of each sort's factor loadings to identify those in which a single loading accounted for more than half the variance among the four factors. This allowed us to allocate many of the cross-loaded sorts to one of the four identified audience segments. Our four-factor solution accounts for 84.2% of the sorts. We discarded from further analysis the 156 sorts that were either cross-loaded without any single factor loading accounting for more than 50% of the variance among the four factors, or did not load significantly on any of the four factors.
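The allocation rule can be illustrated as follows. This is a sketch of the logic described above, not the authors' actual SPSS/PCQ workflow: a sort is assigned to a factor only when it loads significantly on that factor and the factor's squared loading exceeds half of the sort's common variance (the sum of its squared loadings across the four factors).

```python
THRESHOLD = 0.42  # 1 percent significance level for a 38-item Q sample

def allocate_sort(loadings):
    """Return the index of the factor a sort is allocated to, or None.

    A sort is allocated to a factor if it loads significantly on it and
    that factor's squared loading accounts for more than half of the
    sort's common variance (the sum of its squared loadings).
    """
    common_variance = sum(l * l for l in loadings)
    best = max(range(len(loadings)), key=lambda i: abs(loadings[i]))
    if abs(loadings[best]) < THRESHOLD:
        return None                      # no significant loading at all
    if loadings[best] ** 2 <= 0.5 * common_variance:
        return None                      # cross-loaded, no dominant factor
    return best
```

For example, a sort loading .55 and .48 on two factors is cross-loaded, but since .55² accounts for about 55% of its common variance it is allocated to the first factor; a sort loading .50 on each of two factors has no dominant factor and is discarded.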

At various points in the book, we present figures relating to the average ranking of particular Q items or sets of Q items that are believed to be theoretically significant. This kind of analytical process is viewed by some as contrary to Q methodology conventions, because the selection of statements to represent a position is in some respects an imposition of the researcher’s own external viewpoint on the data. In Q, the placement of particular statements in the factor array must always be evaluated holistically, in a way that recognises and preserves the relative significance ascribed to each of the statements and how they contribute to a particular point of view, and the same statement can be given slightly different meaning within that holistic context. By taking and comparing only one or a chosen subset of statements, it could be argued that we are treating those statements as if they are scale-items that have been pre-determined to mean the same thing to every sorter, which in Q terms they often do not. In our defence, the statements we have selected are ones that have already been identified as markers of significant difference in value (but not necessarily divergent interpretation) through factor analysis. We acknowledge that there is a danger that, in some cases, the same statements may mean somewhat different things to different respondents, and hence the findings must be interpreted with care.

INDUCTIVE CODING OF OPEN-RESPONSE QUESTIONS:

We used an inductive coding process to analyse 25 open-ended questions used across the four different surveys. In inductive coding, raw qualitative data is closely read before being systematically condensed and summarised in a way that allows significant or dominant themes already within the data to emerge in a relatively unmediated fashion (Thomas, 2006). In terms of our process, the full data set relating to each question was read by the lead researcher and a postgraduate research assistant before a random sample of 150-200 responses was provisionally coded and category labels and descriptions were developed, frequently drawing on words and phrases used by respondents themselves. Each main idea expressed in a statement was coded to one category, with new columns added as needed to accommodate additional codes. Following discussion and evaluation, these provisional categories were compared and examples discussed to reach agreement on the codes/terminology to be used and their meaning. This revised codebook was then used to re-code that data and a larger sample of 400-500 statements (depending on the question and number of responses). New categories of meaning that emerged were given provisional labels, and existing codes underwent further revision to eliminate redundancy and consolidate smaller categories. The revised codebook was then reapplied to all the data before being independently checked by the lead researcher for consistency of coding. Finalised codebooks varied significantly in the number of categories, depending on the complexity and diversity of the responses. While some contained only 5-10 distinct codes, several others contained in the order of 20-30 categories (and in one particular case, closer to 50, including a number of related sub-categories). While inductive coding normally involves a further process of refinement to generate a smaller number of summary themes or categories, we felt any further reduction of the data risked conflating potentially important distinctions and associations which we hoped to explore later.

ISSUES WITH DEFINING POLITICAL AFFILIATION:

For the prefiguration and English-language AUJ surveys, the core research team used the same kinds of political affiliation categories as large-scale studies of cultural difference, having reviewed such studies to develop what we hoped would be a ‘catch-all’ list. We knew certain categories would be problematic in certain contexts, and were also aware that commonly used labels such as ‘liberal’ and ‘conservative’, ‘left wing’ and ‘right wing’ take on different inflections in different parts of the world (indeed, a few respondents objected to some of the categories used, while others considered these questions irrelevant or intrusive). Our Danish colleagues believed that several of the political categories did not adequately reflect the way in which political affiliations are conceptualised in Denmark, and possibly other European countries. We decided that we should allow some flexibility on this issue, even if it later presented problems when it came to comparing our cross-cultural data. The Danish team elected to use a smaller 5-point scale for this question, ranging from højreorienteret to venstreorienteret (rightist to leftist).

In the DoS post-viewing survey, we presented this as an open response question. While this elicited some less helpful responses (such as ‘I agree with Billy Connolly: Don't vote, it only encourages them’ and ‘Orcish Red Brigades. Shagrat for President!’), it provided useful insight into some rather more meaningful ways of categorising political affiliation, based on how people think and act in their own different contexts and their self-ascribed political orientations. Once these open-ended responses were coded, we were able to employ the following revised set of codes in the BotFA survey:

1. Progressive/Social Democrat/Liberal Left
2. Conservative/Republican/Liberal Right
3. Socially liberal/fiscally conservative
4. Centrist
5. Libertarian/Anarchist left
6. Libertarian/Anarchist right
7. Communist/Socialist
8. Nationalist
9. Monarchist
10. Communal – e.g. ethnic/tribal
11. Faith-based
12. Green/environmentally conscious
13. None, neutral, apolitical
14. Decline to answer