Social Science Information Studies (1981), 1, 139-151

© 1981 Butterworths

PROBLEMS IN COLLECTING SOCIAL DATA: A REVIEW FOR THE INFORMATION RESEARCHER

MICHAEL BRENNER*

Oxford Polytechnic, England

ABSTRACT

Discusses some of the problems which the information scientist faces when having to select a particular social science research strategy. The established measurement approach is disfavoured as it is incompatible with the actual social and psychological conditions under which data are collected. Qualitative methods may also be biased by social and psychological constraints inherent in their use. It is recommended to base data collection practices on a realistic consideration of data collection settings as social situations in their own right. This is illustrated by reviewing the case of the research interview. Finally, two possibilities are outlined which help to increase the adequacy of data collection. One is to use more than one method so that a more complete picture of phenomena can be obtained. The other involves an action-psychological approach which is useful in the scrutiny of the effects of social and psychological processes operating on data in research settings.

THE SOCIAL AND PSYCHOLOGICAL NATURE OF DATA COLLECTION

Investigations of information needs and people’s uses of information require inevitably the employment of particular social research methodologies. These may involve quantitative or qualitative enquiry approaches based on observation and interviewing as the primary means of data collection. Whether it is the aim of information scientists to accomplish predictive, causal explanations of information needs and uses or to provide generalized or qualitatively more detailed descriptions of information-related behaviour, ultimately these aims can only be realized if maximally valid data about the phenomena of interest can be collected.

The established measurement theory in the social sciences emphasizes as the primary function of methods ‘to institutionalize the demand for objectivity’ (Nachmias and Nachmias, 1976: 13) through valid, reliable and precise measurement. This presupposes the practical availability of ‘transparent’ methods of data collection. ‘This is the assumption that the phenomenon being recorded is shown more or less as it is, by the instrument. We “see” the property we are studying “through” the instrument.’ (Harré, 1979: 113.) While there is no difficulty in outlining the ideal measuring properties of methods (see, for example, Selltiz et al., 1976), it is also apparent that social scientists have experienced considerable difficulty in actually achieving the unbiased measurement of phenomena. As Phillips (1973: 74) has commented in this context:

‘I would hazard a guess that if we examined the combined effects of the numerous sources of bias that operate in various sociological and psychological studies we would find that they account for considerably more of the variance in the dependent variables of interest than do the major independent variables.’

* Dr. Michael Brenner is Lecturer in Sociology at the Department of Social Studies, Oxford Polytechnic. His research interests are in the social psychology of data collection methods.

The problems of the practical realization of the established measurement theory in the social sciences arise primarily from the fact that its approach to data collection is, by and large, incompatible with the real social and psychological conditions under which data are collected. For example, ideally, in the established measurement theory, data collection is thought of in stimulus-response contingencies, as in survey research where the questions presented to respondents are often uncritically conceived of as stimuli which elicit the respondent’s answers. In practice, however, it is impossible, with few exceptions, to think of data collection in stimulus-response terms, as the majority of data collection situations are socially and psychologically much more complex.

As regards social complexity in the interview, interviewer and respondent must engage in mutual social interaction before any data can be generated. That is, measurement results are necessarily and inevitably the product of both the questions presented to respondents and the social relationship within which that presentation occurs. As regards psychological complexity, there are individual differences between people under study. ‘Such general variables as intelligence, education, information, social status, and various personality characteristics frequently “contaminate” the results of an attitude questionnaire or of an observer’s ratings. Hence the scores of the objects under study will reflect not only differences in the characteristic the instrument is intended to measure but also differences in other characteristics.’ (Selltiz et al., 1976: 165.) While such individual differences in a study population are likely to cause systematic measurement error, there are more transient psychological factors, such as mood, memory, fatigue, motivation and attention, which may affect the adequacy of measurement more variably.

Given the social and psychological nature of social science data collection, it is clear that a naive stimulus-response model of measurement, though widely assumed in social research, is unwarranted. Instead, it is always more reasonable to assume that social science data are complexly determined; they are the result of social and psychological processes which are wider and more intensive than is recognized in the established measurement theory. The practical inadequacy of a stimulus-response conception of measurement has inspired developments towards the construction of more appropriate approaches to data collection. Roughly, there have been three lines of development, which will now be considered in turn.

One development has been to conceptualize, and to research, the social and psychological processes in data collection in terms of measurement error. This approach, exemplified by much of the artefact research movement (see Phillips, 1971, 1973), emphasizes that ‘there is no such thing as a completely valid instrument, in the sense of reflecting only differences in the characteristic we are attempting to measure’ (Selltiz et al., 1976: 169). The ideal of a stimulus-response model of measurement, however, is maintained, and it is suggested that we increase the adequacy of measurement either by reducing the susceptibility of measurement situations to social and psychological sources of ‘data contamination’, or by measuring them, in terms of measurement error, in addition to our substantive measures of interest.

There are a number of problems with this approach, some of which I will mention by considering the concept of response error in survey research. This has been defined in the following terms:

‘We speak of response error when the respondent’s answer differs from the true answer; the direction and magnitude of a response error is measured by the direction and distance of the obtained answer from the true answer.’ (Sudman and Bradburn, 1974: 2.)

The practical utility of this concept of response error stands or falls with the actual availability of true answers which can serve as standards when assessing the direction and magnitude of response errors, or, as Moser and Kalton (1971: 378) have put it, it must be assumed ‘that for each individual covered by the survey there is an individual true value (ITV). This value is quite independent of the survey, of the way the question is asked and by whom’. Unfortunately, however, even when verification data, the true answers, are available it is unlikely that they will be free of error (see Mathews et al., 1974). Thus, the researcher must assume, in his comparison, that the verification sources used are more accurate than the data obtained from respondents without being in a position to demonstrate, in most instances, that his assumption is correct. (Such a demonstration would involve a validation of the verification data, which is often practically impossible as no other sources of information, besides the verification data and the responses, can be accessed.)
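The arithmetic behind such a comparison can be sketched briefly. In the following illustration, the respondent identifiers, answers and verification figures are all invented for the purpose; they are not drawn from any of the studies cited here.

```python
# Sketch of response error as the signed distance between an obtained answer
# and an assumed individual true value (ITV). Respondents and figures below
# are invented for illustration only.

def response_error(obtained, true_value):
    """Signed error: positive means overreporting, negative means underreporting."""
    return obtained - true_value

# Suppose a survey asks how many library visits were made last month, and
# (unusually) verification records exist to serve as ITVs.
obtained_answers = {"r1": 6, "r2": 2, "r3": 10}
verification = {"r1": 4, "r2": 2, "r3": 7}

errors = {r: response_error(obtained_answers[r], verification[r])
          for r in obtained_answers}
mean_error = sum(errors.values()) / len(errors)  # net direction of bias
```

The sketch makes the dependence visible: every error figure is computed relative to the verification record, so the whole exercise inherits whatever error that record itself contains, which is precisely the difficulty discussed above.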

More typical is the situation that true answers cannot be found independently of the respondent, which is, of course, particularly in the area of research into information needs, the reason why a survey is conducted instead of collecting the information sought by other means. (In the area of information use, however, verification data for some aspects of behaviour may well exist, in the form of records of library or information system use, photocopy demands, the citation characteristics of information users who are authors, and so on.) In the case of attitude questions it may be totally impossible to request a single true response, as it can be that different attitudes, which may all be honest attitudes on the same issue, can be elicited from the same person in different social situations. That is, the attitudinal response obtained in the interview may be perfectly truthful, but may have little, or no, external or predictive validity when it comes to other social situations. Thus, when verification sources are absent or when it is impossible to determine true answers, the only way of assessing measurement error attributable to the interview situation is to scrutinize interviewer-respondent interaction. This amounts to the admission that the practical possibilities for improving measurement adequacy in survey research through the scrutiny of measurement error are very limited in most surveys; responses have to be taken as face-valid, when it is impossible to detect any overt sources of error in the interviews such as interviewer incompetence.

There are some ultimate limits of the concept of response error which are worth noting. To make the concept generally useful would require detailed knowledge of the social and psychological processes that can affect measurement adequacy in the interview. Such a comprehensive social psychology of measurement which might be applied for the purposes of error control as well as to avoid sources of error in the first place does not exist. Furthermore, the effective use of the concept of response error requires the adequate measurement of biasing social and psychological processes. However, the measurement of sources of error is as prone to biasing influences as any measurement which is socially reactive. This necessitates, in principle, an infinite regress in conducting enquiries into sources of error ‘in which a study is designed to take systematic bias out of a study that was designed to take systematic bias out of a study that was . . .’ (Rosenberg, 1969: 291).

The second development concerned with the construction of more appropriate data collection practices has been a widespread rejection of the established measurement theory. Deutscher has outlined one of the reasons for this rejection, namely,

‘that the adoption of the scientific model in the social sciences has resulted in an uncommon concern for methodological problems centering on issues of reliability and to a concomitant neglect of the problem of validity. . . We concentrate on consistency without much concern with what it is we are being consistent about or whether we are consistently right or wrong. As a consequence we may have been learning a great deal about how to pursue an incorrect course with a maximum of precision.’ (Deutscher, 1966: 241.)

Deutscher’s remarks imply that social scientists, while engaged in apparently rigorous scientific activity, have actually failed to accomplish the ultimate goal of research, valid knowledge. Moreover, social scientists ‘have tended to bend, re-shape, and distort the empirical social world to fit the model they use to investigate it. Wherever possible, social reality is ignored’ (Filstead, 1970: 3).

While rejecting the measurement approach to social science data collection, it has been suggested that we adopt methods which are partly grounded in minority paradigms, such as phenomenological sociology and symbolic interactionism (see, for example, Filstead, 1970; Lofland, 1976), and which may be regarded as enabling more intimate familiarity with social life. Such methods (participant observation, in-depth interviewing, field work, total participation in the social life under study) are frequently characterized, in contrast to the measurement position, in terms of ‘qualitative methodology’:

‘Qualitative methodology allows the researcher to “get close to the data”, thereby developing the analytical, conceptual, and categorical components of explanation from the data itself-rather than from the preconceived, rigidly structured, and highly quantified techniques that pigeonhole the empirical social world into the operational definitions that the researcher has constructed.’ (Filstead, 1970: 6.)

There is no reason why we should not accept Filstead’s methodological advice. Notice, however, that the adoption of a qualitative methodology involves, from the point of view of the social and psychological processes inherent in data collection, problems of the same kind that may arise when using conventional methods, such as schedule-based interviewing. Thus, the rejection of one research strategy and its replacement by another does not, of itself, solve problems of invalidity and bias in data collection, and, indeed, may even create new problems. This issue can be illustrated by considering the case of ‘account analysis’, which refers to the unstructured, in-depth gathering of people’s explanatory speech material and its subsequent content analysis:

‘In account analysis we try to discover both the social force and explanatory content of the explanatory speech produced by social actors. This then serves as a guide to the structure of the cognitive resources required for the genesis of intelligible and warrantable social action by those actors.’ (Harré, 1979: 127.)

Obviously, the gathering of accounts involves an interview situation. However, as far as this writer is aware, no research has been conducted into the possibly biasing effects of the social and psychological processes operating in the accounting interview on the accounts elicited, while a great number of studies are available to demonstrate equivalent effects for conventional research interviews (see, for example, Cannell and Kahn, 1968). In particular, we do not know which interviewing techniques should be used to accomplish accounts with the least bias. Some followers of an accounts methodology have recommended the use of interviewing techniques employed in research interviewing (see Brown and Sime, 1981). This, interestingly, brings the interviewing aspect of account analysis in line with the ‘old’, disfavoured approach.

Furthermore, the accomplishment of accounts is the result of interpretational work; that is, informants have to select from their past experience those aspects which seem significant and, hence, worth reporting. This implies that informants have to make a choice about which frame of reference to use in their making-sense of some piece of experience. Marsh (1979) has noted in this context that there is not necessarily a one-to-one relationship, in accounts, between past experience and the frame of reference employed in ordering and reporting that experience. He observed a considerable overreporting of incidents of physical violence by football fans as the result of the fact that the frame of reference used by the youngsters was actually decontextualized. That is, its function was not so much to reflect their actual experience of violent incidents, but to display the phenomenon of violence rhetorically, for the purpose of symbolic exhibition, in its own right. Marsh argues that taking accounts literally, as Harré and Secord (1972) seem to suggest, would be mistaken, as one needs social and psychological knowledge of the accounting practices employed by informants before one can comprehend with reasonable confidence what it is that is meant in accounts.

The third line of development towards the construction of more appropriate data collection practices in the social sciences relates to the sociology and psychology of methodology. This area of work is concerned with a constructive and realistic consideration of the social and psychological processes that are, more often than not, necessarily and inevitably involved in the employment of methods, be they measurement-oriented or qualitative (see Brenner et al., 1978; Brenner, 1981). The approach to data collection in strictly social and psychological terms contrasts with the two other methodological developments outlined above in a number of ways.

One position, which maintains the ideal of objective, unbiased measurement and suggests that we should attempt to maximally control measurement error, is essentially normative; it proposes standards of data collection independently and irrespective of an assessment of the real possibilities for methodological practices. As the social and psychological constraints inherent in the use of methods are considered mainly in terms of their propensity to biasing effects, the normative basis of the position, the ideal of unbiased measurement, is never seriously questioned. Given this situation, it is reasonable to hypothesize that the overall package of methodological prescriptions rests on an illusion, namely, that objective measurement is possible all the time, at least in principle. In contrast, the approach in terms of a sociology and psychology of methodology accepts that measurement has to live with the particular forms of social life that make up the real nature of data collection settings. As these social life forms need to be considered as fundamentally enabling (but also in peculiar ways restricting) methodological practices, it follows that the quest for total validity of measurement is unrealistic.

The other position, that of qualitative methodology, maintains that we should replace any measurement approach with a new methodological paradigm. From the point of view of a sociology and psychology of methodology, this position is as unsatisfactory as the disfavoured approach, because the foundation of qualitative methodology is similarly normative. That is, it is claimed that qualitative methods, unlike measurement, will allow the valid investigation of social phenomena. However, as already indicated, there is reason to believe that qualitative methods, by virtue of their particular social and psychological organization, may involve sources of bias which are similar to those of conventional methods. Thus, it seems more appropriate, before making any claims regarding the quality of the data collected by means of qualitative methods, to treat such methods like any other method, in terms of the particular social life forms that enable, but also limit, their employment.

In essence, the argument has been that the use of methods in information research should not be guided by any normative conception of measurement or any other form of data collection. Instead, data collection situations should be viewed as social realities in their own right, as it is from these realities that our data, with variable degrees of validity, must emerge. The issue at hand will be illustrated by considering the case of the research interview, a method often used when information needs and people’s uses of information are researched.

THE SOCIAL REALITY OF THE RESEARCH INTERVIEW

Whatever kind of research interview is used in information research (unstructured, in-depth interviewing or structured, schedule-based interviewing, to name two extremes), the purpose will be to obtain valid information from respondents. That is, ideally, respondents should answer questions truthfully, while also meeting with precision the particular response requirements posed by the various kinds of questions used. However, the reporting of information is necessarily and inevitably embedded in a social situation, the interview, with its own peculiar social interactional organization. Thus, in reality, it cannot be assumed that responses are simply answers to questions; they are the joint product of the questions and the social situational circumstances within which the questions were put to the respondent.

The most crucial component of the social situational circumstances affecting the answering process is the interviewer, or, more precisely, interviewing technique. Interviewing technique must meet, ideally, two requirements: it must not bias the answers, and it must ensure a socially effective interaction which helps the respondent to report adequately. Only one requirement will be considered here: the avoidance of bias.


To avoid bias, the interviewing must be done nondirectively. That is, the interviewer, in his use of questioning techniques, must leave it entirely to the respondent to provide answers to questions. For example, questions must never be asked in a leading manner or directively, as this exerts pressure on respondents to answer in particular ways. (‘You don’t . . . you haven’t experienced any problems in getting information from these staff?’, which implies as the ‘right’ answer ‘No’, as against the nondirective ‘Do you experience any problems in getting information from these staff?’) In more general terms, the interviewer must maintain a neutral stance; whatever the topic of the conversation, he must not express his personal views about the issues under consideration, as this amounts to an explicit interviewer effect on the respondent which might endanger, as in the case of leading or directive questioning, the validity of the information reported.

While unstructured interviewing requires, for the avoidance of interviewer bias, the consistent use of nondirective interviewing technique, structured, schedule-based interviewing necessitates the command of a much more comprehensive set of skills. In addition to nondirective questioning, the interviewer must take care that the schedule is used in the interview situation as intended by the researcher. Most importantly, the interviewer must not alter the wording of the questions or omit them by mistake. The former mistake has the consequence that respondents are actually exposed to questions different from those printed in the schedule, which may lead to invalid responses; the latter means that possible answers are simply lost. Furthermore, the interviewer must use the various questioning procedures prescribed by the schedule, that is, routing instructions must be obeyed and prompt cards, where necessary, must be used, so that relevant information as well as information meeting particular forms of response is obtained.

Throughout the interview, the interviewer has the difficult task of monitoring whether the respondent’s information is adequate, that is, provides acceptable and complete answers to the questions. Thus, clearly, the interviewer’s role in structured interviewing requires a highly skilled performance. This means, in turn, that the researcher, considering both the particulars of the interview schedule and the characteristics of the respondent population, must design the repertoire of skills needed in a data collection programme in considerable detail (see Brenner, 1980). Also, intensive interviewer training must be given (see Wilson et al., 1978).

In the practice of schedule-based interviewing, however, both these prerequisites for adequate interviewing are often not recognized. This is because the current methodological approaches to interviewing technique and interviewer training are too broad to be implemented straightforwardly, and with success (see, for example, Hoinville, Jowell et al., 1978). It is also because the common attitude of researchers is to underplay, if not totally ignore, the relevance of a skilled approach to interviewing. In most instances of published survey research, for example, little or no attention is paid to reporting the interviewing methods used. (Thus, the use of schedule-based interviewing in information research requires from the researcher a particular sensitivity when it comes to the design and implementation of his data collection practices.)

Ignorance of the prerequisites of effective interviewing, however, is unwarranted, as there exist many studies which have demonstrated the biasing effects of inadequate interviewer performance (see Hyman et al., 1954: 225-274; Cannell and Kahn, 1968). More recently, Bradburn, Sudman et al. (1979, Ch. 3) investigated interviewer variations in asking questions. Their analysis was based on interviews collected by 53 interviewers in a nationwide U.S. survey dealing with topics which were regarded as threatening to respondents. In all, 372 tape-recorded interviews were coded, and, given that the interviewers were ‘very good’, ‘the best available in the areas surveyed’ (p. 27), the extent of mistakes in asking the questions was surprisingly high. Unplanned questioning actions happened in over half of the question-answer sequences. More than one-third of the questions were altered. ‘This confirms many researchers’ belief that the survey presents a nonstandardized stimulus to the respondent’ (p. 49). Brenner (in press), having examined 60 tape-recorded interviews by six professional market and social research interviewers participating in a mobility survey conducted by an experienced social scientist at the University of Wales, found a considerable degree of interviewer bias due to interviewer incompetence. For example, it mattered a great deal how the questions were asked. When questions were significantly altered, this led to an increase in inadequate responses, while asking them directively gave rise to the opposite effect, that is, significantly more adequate answers were obtained when questions were asked directively rather than nondirectively, as worded in the questionnaire. Similarly, omitting the prompt cards to be used increased dramatically the extent of inadequate answering.
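The kind of tally that lies behind figures such as ‘more than one-third of the questions were altered’ can be sketched in a few lines. The coded question-answer sequences below are invented for illustration; real coding schemes, such as that of Bradburn and Sudman, are far more elaborate.

```python
# Toy tally over coded question-answer sequences. Each record notes whether
# the question was asked as worded and whether any unplanned interviewer
# action occurred. The data are invented for illustration only.

coded_sequences = [
    {"asked_as_worded": True, "unplanned_action": False},
    {"asked_as_worded": False, "unplanned_action": True},  # wording altered
    {"asked_as_worded": True, "unplanned_action": True},   # e.g. unscripted probe
    {"asked_as_worded": False, "unplanned_action": True},
]

n = len(coded_sequences)
altered = sum(not s["asked_as_worded"] for s in coded_sequences)
unplanned = sum(s["unplanned_action"] for s in coded_sequences)

proportion_altered = altered / n
proportion_unplanned = unplanned / n
```

Even this toy version shows why such coding is itself a measurement: someone must judge, for each sequence, what counts as ‘altered’ or ‘unplanned’, and that judgement is open to the same biasing influences discussed earlier.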

While the interviewer’s participation in the interview is probably most crucial, as regards the quality of the data collected, the contribution of the respondent to the adequacy of information reporting is also important. Ideally, research interviewing requires respondents who are motivated and competent to perform a faithful respondent role. Such a role implies ‘that if one is asked information, one is expected to give responses that are, to the best of one’s knowledge, in accordance with reality’ (Dijkstra and van der Zouwen, 1978: 63). To characterize the respondent’s participation in the interview solely in such terms, however, would be misleading. For example, as regards the biasing effects of respondent motivation, it must be noted that respondents may use any means of action to damage, or destroy, the interview. ‘If the respondent is not sufficiently motivated to perform his role, the whole enterprise falls apart.’ (Sudman and Bradburn, 1974: 16.) Thus, the respondent’s role is open-ended, for it is up to him to cooperate, or not to cooperate, along the lines of action suggested by the interviewer.

The respondent’s degrees of freedom for action mean, first of all, that the interviewer must be prepared for the respondent to raise various problems encountered with the questions. Basically, four kinds of problems can occur. The respondent may require the interviewer to repeat an action (a question, a probe, an instruction); he may request clarification of what is meant by a question or of what is involved in following an instruction; he may give inadequate information, that is, information which does not meet the question objective; finally, he may refuse to answer. When such problems arise, the interviewer must try to deal with them in such a way that, in the end, adequate answers will be obtained. While requests for repetition are unproblematic, as the interviewer only needs to repeat the appropriate action, handling the other problems can be difficult and is sometimes impossible. When meeting requests for clarification, inadequate answers and refusals, it is most essential, to avoid interviewer bias, that the interviewer remains nondirective in his actions. For example, he may use nondirective probes (such as ‘Can you tell me more about it?’, ‘How do you mean?’) to clarify information which he cannot understand, or he may emphasize the confidentiality of the data collection with the aim of improving a refusal (for details see Brenner, 1980, in press). In all, it is important to notice that, despite adequate interviewer performance, there is no guarantee of ultimate success for the interviewer attempting to secure adequate answers, as it is always up to the respondent to answer questions.
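The four problem types and their nondirective handling can be summarized schematically. The pairings below are illustrative paraphrases of the discussion above, not prescriptions from any interviewing manual.

```python
# Schematic pairing of the four respondent problems distinguished in the text
# with a nondirective interviewer action for each. Phrasings are illustrative
# paraphrases only.

NONDIRECTIVE_ACTIONS = {
    "repetition": "repeat the previous question, probe or instruction verbatim",
    "clarification": "re-read the question without interpreting it for the respondent",
    "inadequate_answer": "probe nondirectively, e.g. 'Can you tell me more about it?'",
    "refusal": "stress the confidentiality of the data; accept the refusal if maintained",
}

def handle_problem(kind):
    """Return a nondirective action for a respondent problem, or a neutral default."""
    return NONDIRECTIVE_ACTIONS.get(kind, "record the problem and continue the schedule")
```

The default branch mirrors the point made above: for conduct outside the interview’s content or procedure, the interviewer can only record and carry on, since attempting to control it would itself constitute an interviewer effect.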

Another aspect of the open-endedness of the respondent’s role relates to the fact that the definition of the situation as research interview is only compelling for the interviewer. As the respondent cannot be forced to share the interviewer’s perspective of the situation, he may view the encounter in other terms, for example, as an opportunity to express personal issues beyond the interview. He may also, among other things, challenge the interviewer as a person on a number of grounds, for example, by doubting the interviewer’s competence (see Brenner, 1978). Such alterations of the definition of the situation may be subtle, that is, difficult, or impossible, to detect. For example, respondents may display ‘acquiescence’, that is, a tendency to agree, or disagree, with items independent of their content (see Couch and Keniston, 1960); they may feel ‘need for social approval’, that is, need ‘to respond in culturally sanctioned ways’ (Crowne and Marlowe, 1964: 354); they may be influenced by considerations concerning the social desirability of answers (see Phillips, 1973).

Due to the peculiar social and psychological character of the respondent’s role the interviewer must be prepared, from time to time, to encounter problems and instability in the relationship with the respondent. While many kinds of problems and instability can be dealt with successfully by means of the appropriate interviewing techniques, it must be acknowledged that the interviewer can only attempt to resolve problems that relate to the interview in terms of content or procedure; he cannot socially control the respondent’s overall conduct, as this would amount to a severe interviewer effect on the respondent. Thus, ultimately, adequacy of data collection in the research interview relies on the respondent’s goodwill to maintain the necessary working consensus, which the interviewer can try to reinforce positively, but cannot entirely create on his own.

DATA COLLECTION AS SOCIAL ACTION

Given the social and psychological nature of social science data-collection, and having considered the case of the research interview, it is now clear that empirical social science research involves, besides the substantive phenomena of interest, particular methodological social realities, that is, the social situations within which data are gathered. Thus, whatever the substantive aims of social science enquiry, researchers will have to deal, in one way or another, with the particular social life forms within which data are generated. How do social scientists generally cope with this issue?

The predominant strategy of social scientists has been to ignore the issue, and to assume that data collection situations are socially and psychologically unproblematic. There seems to be a kind of ‘gentlemen’s agreement’ among many social scientists to employ methods uncritically, that is, unscientifically. Data collection instruments are not developed methodologically to the level at which they have to be practised, in terms of the social interactions needed to collect data. For example, the common practice of stressing the statistics of data analysis, while, more often than not, completely ignoring the actual conditions under which the data were generated in the first place, serves to emphasize the fact that scientific activity in much of empirical social science is regulated by fiat, just normatively. There is an arbitrary consensus, among many social scientists, about what is acceptable scientific procedure, and this consensus, not the quest for objective knowledge sui generis, determines the criteria for adequate enquiry.

It is possible to approach the social life forms inherent in data collection methods more positively, to obtain more valid insight into social life. The possibilities involve, primarily, increasing the degrees of social control over data collection processes. That is, instead of assuming that gathering data is, by and large, unproblematic, it is more useful to conceive of data collection situations strictly in terms of the particular social and psychological characteristics and constraints that are likely to arise in the research contacts between the investigator and people. In order to control data collection effectively in these terms, it is necessary to accomplish, for each particular research programme, detailed preparation for, as well as insight into, data collection processes qua social situations. One way of dealing with this approach to controlling the adequacy of data collection has been to recognize the particular social and psychological limits of data collection situations and to suggest methods by which such limits can be removed, at least to some degree (see, for example, Lofland, 1976). The spirit of this work has been aptly expressed by Webb et al. who lamented the overdependence of social science research on interviews and questionnaires:

‘Interviews and questionnaires intrude as a foreign element into the social setting they would describe, they create as well as measure attitudes, they elicit atypical roles and responses. . . . But the principal objection is that they are used alone. No research method is without bias. Interviews and questionnaires must be supplemented by methods testing the same social science variables but having different methodological weaknesses.’ (Webb et al., 1966: 1.)

A good example of the approach to social science research as recommended by Webb et al. is Wilson et al.’s (1978) research into information needs and information services in local authority social services departments. Essentially, they employed a three-tiered approach to data collection, using participant observation as the first means of obtaining detailed insight into the social reality they were investigating. This was followed by writing narrative accounts of observation weeks and seeking subjects’ responses to these accounts. Once an intimate familiarity with the social life under study had developed, they were able to construct a salient, problem-relevant interview schedule. After intensive pre-testing and interviewer training, a third set of data was obtained by means of structured interviewing. Notice the two advantages of this research strategy. First, participant observation made it possible to develop a salient interview approach at all. Secondly, given the relative weaknesses of all methods used, the researchers were able to compare and, in a sense, cross-validate the sets of information obtained. Thus, in the end, a maximally (but surely not totally) valid insight into information needs and the uses of information in the social services departments studied could be provided.

Another way of approaching data collection control in social and psychological terms has been to conceive of the social interaction between participants in research settings as a problem in the psychology of action (see Brenner, 1981). An action-psychological approach to data collection has two advantages. One is that the researcher can investigate how the social-interactional organization of data collection situations has affected his data base. The other advantage is that, using an action-psychological approach, detailed experience of what participants do, and of what they can do, in research settings is acquired over time. Such knowledge assists the social control of data collection, for example, by means of a socially and psychologically detailed interviewer training procedure (see Brenner, 1980).

To conclude, a summary of the enquiry strategies that may be employed to investigate action-psychological phenomena in interviews will be given. Among other things, it has been suggested that existing general psychological theories should be used for an understanding of action processes in data collection methods (see, for example, Mertens, 1975). There are, however, two problems with this suggestion. First, psychologists are uncertain about how to explain the characteristics of human action (see Shaw and Costanzo, 1970). Thus, there is the problem of which theory one should favour. Secondly, whatever general psychological theories of action there are, these are both too wide and too narrow to serve the task well. They are too wide in that they refer to general and complex psychological phenomena which are difficult to operationalize in data collection settings. They are too narrow in that theorists’ overall concern with the human condition per se does not permit a theoretical accommodation of the social and psychological nuances that will be found in research settings. Thus, for a start, a theoretically unambitious approach seems to be more feasible. Such an approach, in the case of the interview, can involve two strategies. One entails the systematic observation and social categorization of the stream of happenings in interviews. The other, useful in the pre-testing of interview methods, is cognitive in nature in that, by means of post-interview interviews, participants can be requested to recall various aspects of their subjective experience of the data collection. (For reasons of space, the cognitive approach is not dealt with here; see Brenner, in press for details.)

One of the ways of grounding the overt analysis of action in interviews is to conceive of them as rule-structured. This is legitimate, as interviews are designed to serve enquiry purposes with a maximum of precision, which implies that the researcher should try to exclude in advance the occurrence of unwanted happenings. Rules in research interviewing refer in particular to interviewing technique; there are also rules to cover the introductory phase of the interview as well as aspects of social etiquette. Assuming that researchers are willing and able to state the rules that should structure the interviewers’ performances, one can ask whether the interviews actually occurred within the rules. One of the ways of tackling this issue is to audio-tape a representative sample of interviews and to conduct, on the basis of the tapes, a systematic action analysis of the happenings, in terms of rule-following, rule-breaching and any other actions displayed by participants (see Marquis and Cannell, 1969; Cannell et al., 1975; Brenner, in press). The result of such work is that detailed insight, from an observer’s point of view, is obtained into the social interactional organization of interviews. Such insight helps to detect any overt sources of failure in data collection: for example, failures due to incorrect question-asking and recording errors.

While the overt analysis of action processes in interviews allows the close scrutiny of the conditions under which the data were collected, there are some problems in its application. For a start, such an analysis is cumbersome to operate, and it requires a degree of effort and methodological sophistication which is diametrically opposed to the quick-and-easy attitude of many users of the interview. Furthermore, any such analysis is likely to be defective in itself. The categorization of action processes, being observer-based, usually involves coding problems, as not all discernible events in the stream of happenings are easily identifiable in terms of action. Thus, the overt analysis of interviewing interaction is likely to lead to an under-reporting, and to some misrepresentation, of the true structure of events in data collection. This amounts to the admission that the total control of overt sources of failure is impossible. Nevertheless, the approach is useful when the adequacy of data collection by means of research interviews is to be assessed and improved.

CONCLUSION

One of the ways in which the quality of data collection in social research and, hence, in information research might be improved has been suggested: that is, adopting a realistic view of the social and psychological processes that constitute and constrain data collection settings. In essence, the approach is optimistic as well as conservative. It is optimistic in the sense that the necessity for abandoning traditional methods, for example, schedule-based interviewing and laboratory experimentation in psychology, is rejected (see Brenner and Bungard, 1981), as it is possible to obtain adequate data by means of these methods when they are conceived of in social and psychological terms and when sufficient social control is exerted over their operation. It is conceded, and emphasized, however, that the established measurement theory of these methods is mistaken, as the use of social science methods cannot involve ‘transparent’ measurement; it will always yield imperfect results, due to the social and psychological constraints that invariably and inevitably operate in all data-collection settings. The approach is conservative in that the usefulness, or propriety, of any method is acknowledged. In this view, the quantitative vs. qualitative methodology debate is somewhat mistaken, not only because any method will lead to bias, but in particular as it is clearly the abuse of traditional methods, not their character as forms of data collection per se, which has discredited the employment of these methods.

REFERENCES

BRENNER, M. (1978). Interviewing: the social phenomenology of a research instrument. In: The Social Contexts of Method (M. Brenner, P. Marsh and M. Brenner, eds.), 122-139. London: Croom Helm.

BRENNER, M. (1980). Skills in the research interview. In: Manual of Social Skills (M. Argyle, ed.). London: Methuen (in press).

BRENNER, M. (ed.) (1981). Social Method and Social Life. London: Academic Press.

BRENNER, M., MARSH, P. and BRENNER, M. (eds) (1978). The Social Contexts of Method. London: Croom Helm.

BRENNER, M. and BUNGARD, W. (1981). What to do with social reactivity in psychological experimentation? In: Social Method and Social Life (M. Brenner, ed.), 89-113. London: Academic Press.

BRENNER, M. (in press). The Social Structure of the Research Interview. London: Academic Press.

BROWN, J. and SIME, J. (1981). A methodology of accounts. In: Social Method and Social Life (M. Brenner, ed.), 159-187. London: Academic Press.

CANNELL, CH. F. and KAHN, R. L. (1968). Interviewing. In: The Handbook of Social Psychology, Vol. 2 (G. Lindzey and E. Aronson, eds.), 526-595. Reading, Mass.: Addison-Wesley.

CANNELL, CH. F., LAWSON, S. A. and HAUSSER, D. L. (1975). A Technique for Evaluating Interviewer Performance. Ann Arbor, Mich.: Institute for Social Research.


COUCH, A. and KENISTON, K. (1960). Yeasayers and naysayers: agreeing response set as a personality variable. Journal of Abnormal and Social Psychology, 60, 151-174.

CROWNE, D. and MARLOWE, D. (1964). The Approval Motive. New York: Wiley.

DEUTSCHER, I. (1966). Words and deeds: social science and social policy. Social Problems, 13, 233-254.

DIJKSTRA, W. and VAN DER ZOUWEN, J. (1978). Role playing in the interview: towards a theory of artifacts in the survey-interview. In: Sociocybernetics, Vol. 2 (R. F. Geyer and J. van der Zouwen, eds.), 59-83. Leiden: Martinus Nijhoff.

FILSTEAD, W. J. (ed.) (1970). Qualitative Methodology. Chicago: Markham.

HARRÉ, R. (1979). Social Being. Oxford: Blackwell.

HARRÉ, R. and SECORD, P. F. (1972). The Explanation of Social Behaviour. Oxford: Blackwell.

HOINVILLE, G. and JOWELL, R. (1978). Survey Research Practice. London: Heinemann.

HYMAN, H. H. (1954). Interviewing in Social Research. Chicago, Ill.: University of Chicago Press.

LOFLAND, J. (1976). Doing Social Life. New York: Wiley.

MARSH, P. (1979). Problems Encountered in What People Say and What They Do. Paper presented at the symposium ‘Analysis of open-ended data’, University of Surrey, 12 December.

MARQUIS, K. H. and CANNELL, CH. F. (1969). A Study of Interviewer-Respondent Interaction in the Urban Employment Survey. Ann Arbor, Mich.: Institute for Social Research.

MATHEWS, V. L., FEATHER, J. and CRAWFORD, J. (1974). A Response/Record Discrepancy Study. University of Saskatchewan, Department of Social and Preventive Medicine.

MERTENS, W. (1975). Sozialpsychologie des Experiments. Hamburg: Hoffmann und Campe.

MOSER, C. A. and KALTON, G. (1971). Survey Methods in Social Investigation. London: Heinemann.

NACHMIAS, D. and NACHMIAS, CH. (1976). Research Methods in the Social Sciences. London: Arnold.

PHILLIPS, D. L. (1971). Knowledge from What? Chicago: Rand McNally.

PHILLIPS, D. L. (1973). Abandoning Method. London: Jossey-Bass.

ROSENBERG, M. J. (1969). The conditions and consequences of evaluation apprehension. In: Artifact in Behavioural Research (R. Rosenthal and R. Rosnow, eds.), 280-350. New York: Academic Press.

SELLTIZ, C., WRIGHTSMAN, L. S. and COOK, S. W. (1976). Research Methods in Social Relations. New York: Holt, Rinehart and Winston.

SHAW, M. E. and COSTANZO, P. R. (1970). Theories of Social Psychology. New York: McGraw-Hill.

SUDMAN, S. and BRADBURN, N. M. (1974). Response Effects in Surveys. Chicago: Aldine.

WEBB, E. J., CAMPBELL, D. T., SCHWARTZ, R. D. and SECHREST, L. (1966). Unobtrusive Measures: Nonreactive Research in the Social Sciences. Chicago: Rand McNally.

WILSON, T. D., STREATFIELD, D. R., MULLINGS, C., LOWNDES SMITH, V. and PENDLETON, B. (1978). Information Needs and Information Services in Local Authority Social Services Departments. Sheffield: University of Sheffield, Postgraduate School of Librarianship and Information Science.