
Research in Nursing & Health, 2003, 26, 256–259

Focus on Research Methods: Commentary

An Alternative View on "An Alternative Paradigm"

Sandra Ward,* Heidi Scharf Donovan,† Ronald C. Serlin*

University of Wisconsin–Madison, School of Nursing, Madison, WI

Received 29 March 2002; accepted 2 April 2003

The strengths in the Sidani, Epstein, and Moritz (2003) article lie in the reinforcement of important procedures for enhancing the internal and external validity of intervention studies. For example, the authors highlight the importance of using theory to guide study design, operationalization of key variables, and data analysis. They emphasize that an important goal of research is to identify which participants under what circumstances respond in what ways to interventions. They stress that to do this, investigators must use theory to identify factors that may affect the implementation of interventions and/or influence outcomes. They encourage researchers to keep descriptive reports of reasons why potential participants decline to join a study, information that is useful for modifying current study procedures or future study design. Finally, they emphasize the importance of developing protocols for intervention delivery, training study personnel in the theory underlying the intervention, and explaining the principles guiding individualization of the intervention for each participant. These are all excellent recommendations for intervention research.

Unfortunately, the authors suggest that the recommendations represent an "alternative method for clinical research." In reality, most of their recommendations are part of the current, standard approach to conducting randomized controlled trials (RCTs). In fact, the narrow, and often erroneous, characterization by the authors of the assumptions and goals of RCTs sets up a straw dog argument that is then used to frame several proposed solutions.

Perhaps most important, critiques of the randomized trial are presented, with nothing new to offer as a substitute for randomization. Although it has long been known that conducting randomized trials in clinical settings is challenging, there is no substitute for randomization when it comes to controlling selection bias and allowing causal inference that the intervention being tested is responsible for the outcomes observed (Shadish, Cook, & Campbell, 2002). The article moves the field no further in solving this dilemma. In the following paragraphs we amplify this point and address selected other problems.

THE TERM THEORY-DRIVEN RESEARCH

The term theory-driven is problematic on at least two fronts. First, the idea that the theoretical foundations of a study should be clearly laid out is not new. It is commonly accepted that reports of intervention research should contain an explicit description of the theoretical underpinnings of the study, including an explication of the content and process of the intervention and the linkages between the intervention and the outcome variables. Similarly, it is accepted that theory should address such issues as for whom the intervention is likely to work (moderation) and how/why (mediation).

Second, when Chen and Rossi (1983, 1987) introduced the phrase theory-driven approach, they did so to address a particular problem, and that problem was not a desire to replace the randomized trial. The theory-driven approach was developed to address the atheoretical nature of a particular type of research: program evaluation research (Chen & Rossi, 1983). The theory-driven approach was never intended to preclude or replace the need to design studies that have high internal validity. In fact, Chen pointed out that the theory-driven approach to program evaluation should be used ". . . in addition to traditional design or statistical techniques to rule out threats to validity . . ." (Chen, 1989, p. 394).

Contract grant sponsor: National Institute of Nursing Research; Contract grant numbers: R01 NR03126 (to Ward) and F31 NR07556 (to Donovan).

Correspondence to Sandra Ward, University of Wisconsin–Madison, School of Nursing, K6/348 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792-2455.

*Professor. †Doctoral Candidate.

Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nur.10088

© 2003 Wiley Periodicals, Inc.

LACK OF PRECISION IN LANGUAGE AND ARGUMENT

Lack of precision in language primarily concerns the use of the terms effectiveness, efficacy, clinical nursing research, and randomized controlled trial. The authors' initial definitions of effectiveness and efficacy research are accepted ones (although different conceptualizations do exist). Effectiveness studies are "undertaken to evaluate the effects of interventions in achieving desired outcomes when tested under the conditions of the real world of everyday practice" (Sidani et al., 2003, p. 244). Efficacy research is defined as "testing the interventions' effects under ideal conditions" (Sidani et al., 2003, p. 244). Problems arise in the representation of randomized controlled trials (RCTs) and clinical nursing research. RCTs as represented in the article are of the narrowest "black box" variety, in which there is no attention to subject characteristics and no concern for identifying mechanisms through which the intervention has its hypothesized effect. Clinical nursing research is never defined, but it is implied by arguments in the article that clinical nursing research is effectiveness research. These erroneous representations form the basis for an argument in which RCTs (the black box variety) cannot serve the purposes of nursing research (e.g., effectiveness research), and therefore an "alternative paradigm" is required.

The focus on clinical research as effectiveness research is problematic. Clearly, nursing research is undertaken for a variety of reasons, not just to evaluate effectiveness. Some research is conducted to describe a phenomenon, some is done to generate a theory about a phenomenon, some is undertaken to test the initial feasibility of a new intervention, some is undertaken to test the efficacy of an intervention, and some is performed to evaluate effectiveness. Given this range of purposes, it is wise to select a design that is suited to achieve a particular purpose. When testing the efficacy of an intervention, the primary question is one of cause and effect. For such a purpose, the RCT is indeed the "gold standard" because no other technique yet discovered has the same power to control for selection as a threat to internal validity (Shadish et al., 2002). However, after establishing the efficacy of an intervention, one might go on to determine effectiveness, that is, to determine if the intervention is beneficial when included in everyday practice. For such studies the RCT is not always feasible. It is recognized that effectiveness research, outcomes research, program evaluation, and quality assurance studies can be conducted with designs other than the RCT (for excellent examples see Aiken's work, e.g., Aiken, Smith, & Lake, 1994). A great deal of attention has been given to maintaining validity in such studies, and the present article does not offer new solutions for these challenges.

Nursing interventions vary widely in scope and complexity, from rather focal interventions that address a single problem in a relatively short period of time to rather global interventions that could best be described as nursing care. Although many focal interventions eventually become part of nursing care, this ideally should happen only after their efficacy has been established. Then they can be incorporated into nursing care, and effectiveness can be studied. Perhaps the best example of such a progression of studies can be seen in the work of Johnson, who studied a focal intervention (sensory preparation for stressful medical procedures) first in the laboratory, then in a series of efficacy studies, and finally in an effectiveness study wherein nurses were taught to incorporate the intervention into multiple facets of their practice (Johnson, 1999). In a related vein, much of the discussion of patient-centered care is misleading because the terms patient-centered care and patient-centered research are used as synonyms, but they are not. Care and research are not the same, and although it may be difficult to test "care" in an RCT, it is certainly possible to use an RCT to test interventions that may later become part of practice/care.

As another example of lack of precision in argument, in the "call for alternative methods for conducting clinical nursing research," it is implied that alternatives to the RCT are being sought, specifically, that alternatives to randomization are sought. Some of the cited authors were making such a call (e.g., Gross & Fogg, 2001), but many were not. Instead, they were asking for consideration of ways to improve how RCTs are conducted. For example, McGuire et al. (2000) called for attention to increasing study validity; they did not call for a halt to using randomization.

Finally, consider the oversimplified statement "Treatment manipulation implies that the intervention is given in the same way and at the same dose to all participants assigned to the experimental group." This statement ignores the large body of knowledge emerging (from randomized trials) regarding the efficacy of tailored and individualized interventions, interventions in which content varies depending on characteristics of the participants (Lauver et al., 2002). It is not necessary to reject the RCT in order to test tailored and individualized interventions. On the contrary, it is critical that randomized trials of such interventions be conducted.

THE ERRONEOUS CHARACTERIZATION OF THE ASSUMPTIONS UNDERLYING THE RCT

The control of extraneous factors in an RCT is highlighted as evidence that those conducting the RCT believe the influence of extraneous factors to be irrelevant. A stated strength of the theory-driven approach is that it advocates empirically testing the impact of important factors, as opposed to the approach in RCTs of experimentally or statistically controlling the effects of such factors. Is not the standard RCT practice of identifying and evaluating potential mediators and moderators the same as empirically testing the impact of those factors?
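The point can be made concrete with a small simulation. This sketch is ours, not the commentary's, and the moderator ("baseline self-efficacy") and effect sizes are invented for illustration: comparing treatment-control differences within subgroups defined by a baseline characteristic is exactly an empirical test of moderation, conducted inside an ordinary RCT.

```python
import random
import statistics

random.seed(4)
n = 4000

# Simulated RCT in which treatment benefit depends on a baseline
# characteristic (a hypothetical moderator, e.g., baseline self-efficacy).
rows = []
for _ in range(n):
    treat = random.randint(0, 1)      # randomized assignment
    efficacy = random.gauss(0, 1)     # baseline moderator, measured pre-treatment
    # Treatment helps mainly when baseline efficacy is high.
    effect = 0.2 if efficacy < 0 else 0.8
    outcome = effect * treat + random.gauss(0, 1)
    rows.append((treat, efficacy, outcome))

def arm_difference(subgroup):
    """Mean outcome difference (treated minus control) within a subgroup."""
    treated = [o for t, _, o in subgroup if t == 1]
    control = [o for t, _, o in subgroup if t == 0]
    return statistics.mean(treated) - statistics.mean(control)

low = [r for r in rows if r[1] < 0]
high = [r for r in rows if r[1] >= 0]
print(f"effect, low-efficacy subgroup:  {arm_difference(low):+.2f}")
print(f"effect, high-efficacy subgroup: {arm_difference(high):+.2f}")
```

The diverging subgroup estimates recover the built-in moderation; in practice the same test is usually run as a treatment-by-moderator interaction term in a regression model.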

Several claims about RCTs and the nomothetic perspective of science are misleading. It should be emphasized that a nomothetic perspective of science imposes no particular research design or statistical analysis. Indeed, astronomers do not perform randomized experiments at all in their search for laws. When Windelband (1894/1998) first coined the terms nomothetic and idiographic to distinguish between the goals and methods of the natural and historical sciences, he observed of the former that they seek general laws of occurrences. "For the student of nature," he said, ". . . reflects only on those features which lend insight into a lawful generalization" (p. 15). One reason for this focus, he went on to say, is that "the knowledge of general laws has everywhere the practical value of making possible the anticipation of future circumstances and the goal-appropriate intercession of the human in the course of things" (Windelband, p. 17). In seeking to understand "which factors and to what extent they affect outcomes," both the theory-driven approach and the RCT clearly fall within the nomothetic framework.

Furthermore, it is misleading to state that the nomothetic perspective of science is represented by two main assumptions: (a) that the treatment groups are equal prior to intervention, and (b) that participants' response to treatment is homogeneous. In any experiment in which participants are randomized to condition, there is no expectation that the groups are identical. Instead, it is only known that in the long run (e.g., over a series of experiments), the various patient characteristics will be balanced between the two groups.
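A brief simulation (our illustration, with invented numbers) shows what "balanced in the long run" means: any single randomization may leave the arms noticeably unequal on a baseline characteristic, but the expected imbalance across repeated randomizations is zero.

```python
import random
import statistics

random.seed(1)

# A baseline characteristic (e.g., age) for 40 hypothetical participants.
ages = [random.gauss(50, 10) for _ in range(40)]

def randomize(values):
    """Randomly split participants into two equal arms."""
    shuffled = values[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Between-arm difference in mean age, for one trial and averaged over many.
diffs = []
for _ in range(2000):
    treatment, control = randomize(ages)
    diffs.append(statistics.mean(treatment) - statistics.mean(control))

print(f"one randomization:      {diffs[0]:+.2f} years")
print(f"average of 2000 trials: {statistics.mean(diffs):+.2f} years")
```

No single trial is guaranteed balanced; it is the randomization procedure, not any one allocation, that is unbiased.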

In addition, the nomothetic perspective would require only that some relationship exist between the treatment and the outcome, not that all participants, regardless of individual characteristics, respond the same. Certainly, the search for moderator variables in RCTs indicates that possibly heterogeneous treatment effects are expected and accommodated. The relationships sought could be additive, multiplicative, linear, log-linear, exponential, ordinal, or none of these; the imposed statistical model and test will be selected accordingly.

Finally, it is interesting that although the statistical tests used in RCTs are stated to be a reflection of the nomothetic perspective (and its associated assumption of homogeneity of treatment effect), the same general linear model procedures, such as regression analysis, are suggested for use in the described "alternative paradigm." It should be mentioned in this context that the supposed power advantage of the theory-driven approach depends on the truth of the theory underpinning its procedures. (Indeed, virtually all the advantages of the theory-driven approach depend on the truth of the theories.) If various aspects of the theory are not true, then the inclusion of the associated variables as predictors in the statistical model can actually result in an increased mean square residual and correspondingly reduced power.
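This power argument can be illustrated with a hypothetical simulation (the sample size, effect size, and number of irrelevant predictors are our inventions): loading the outcome model with theory-irrelevant predictors costs degrees of freedom and, through chance collinearity with the treatment indicator, inflates the standard error of the treatment effect, reducing the chance of detecting a real effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate two-sided 5% critical values of t for the two model sizes.
T_CRIT = {28: 2.048, 13: 2.160}

def detection_rate(n_noise, reps=500, n=30, effect=0.8):
    """Fraction of simulated trials in which a real treatment effect is
    declared significant when n_noise theory-irrelevant predictors are
    included in the outcome model."""
    hits = 0
    for _ in range(reps):
        x = np.repeat([0.0, 1.0], n // 2)        # randomized treatment
        Z = rng.standard_normal((n, n_noise))    # irrelevant predictors
        y = effect * x + rng.standard_normal(n)  # true model ignores Z
        X = np.column_stack([np.ones(n), x, Z])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        df = n - X.shape[1]
        s2 = resid @ resid / df                  # mean square residual
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        if abs(beta[1] / se) > T_CRIT[df]:
            hits += 1
    return hits / reps

p_simple = detection_rate(0)
p_noisy = detection_rate(15)
print(f"power, treatment only:           {p_simple:.2f}")
print(f"power, plus 15 noise predictors: {p_noisy:.2f}")
```

The gap grows as the number of untrue-theory predictors approaches the sample size; when the predictors genuinely explain outcome variance, the comparison reverses, which is precisely the sense in which the claimed power advantage depends on the theory being true.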



PROPOSED METHODOLOGICAL MODIFICATIONS

Four methodological modifications are proposed in the theory-driven approach: more inclusive participant selection; assignment to treatment options by patient preference; creation of a continuous treatment variable based on intervention delivery and participant adherence; and selection of valid, reliable, sensitive, and clinically useful outcomes. As described at the beginning of this commentary, many of these suggestions are excellent reinforcements of current best practice in the conduct of intervention research. They do not, however, represent an "alternative paradigm."

Two methodological modifications are problematic. Replacing (partially or completely) random assignment to treatment with participant selection of treatment condition, and coding the intervention into a continuous variable, are techniques that are highly vulnerable to the effects of self-selection. It is impossible to disentangle the effects of the intervention from the characteristics of the participants that may have (a) led them to select a particular intervention and (b) resulted in a differential use (dose) of the intervention. Efficacy is not replaced by dose–response, particularly when the participant is in control of the dose received. These suggested modifications are tantamount to taking a randomized trial and turning it into one big correlational study, a study in which internal validity is completely compromised.
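The self-selection problem can be made vivid with a small simulation (entirely hypothetical; the latent "motivation" variable and all numbers are ours): when participants choose their own dose, a strong dose-response relationship can appear even for an intervention with no effect at all, whereas random assignment of dose removes the artifact.

```python
import random
import statistics

random.seed(2)
n = 2000

# Latent "motivation" influences both the dose a participant chooses
# and the outcome; the intervention itself does nothing in this world.
motivation = [random.gauss(0, 1) for _ in range(n)]
chosen_dose = [max(0.0, m + random.gauss(0, 0.5)) for m in motivation]
outcome = [m + random.gauss(0, 1) for m in motivation]

def slope(x, y):
    """Ordinary least squares slope of y regressed on x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Self-selected dose: a "dose-response" effect appears out of nowhere.
naive = slope(chosen_dose, outcome)

# Randomly assigned dose, same outcomes: the spurious effect vanishes.
assigned_dose = [random.uniform(0, 2) for _ in range(n)]
unbiased = slope(assigned_dose, outcome)

print(f"slope with self-selected dose: {naive:+.2f}")
print(f"slope with randomized dose:    {unbiased:+.2f}")
```

The naive analysis attributes to the intervention an association that belongs entirely to the participants who sought it out, which is the confound randomization exists to break.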

In conclusion, debates on methodology are critical to thriving, developing sciences. We appreciate the opportunity to engage in such debate with accomplished investigators who have raised significant questions about the conduct of intervention research in nursing. While lauding their focus on the importance of theory in such research, we respectfully submit that they have provided neither an alternative paradigm nor sufficient support for their methodological modifications.

REFERENCES

Aiken, L., Smith, H., & Lake, E. (1994). Lower Medicare mortality among a set of hospitals known for good nursing care. Medical Care, 32, 771–787.

Chen, H. (1989). The conceptual framework of the theory-driven perspective. Evaluation and Program Planning, 12, 391–396.

Chen, H., & Rossi, P. (1983). Evaluating with sense: The theory-driven approach. Evaluation Review, 7, 283–302.

Chen, H., & Rossi, P. (1987). The theory-driven approach to validity. Evaluation and Program Planning, 10, 95–103.

Gross, D., & Fogg, L. (2001). Clinical trials in the 21st century: The case for participant-centered research. Research in Nursing & Health, 24, 530–539.

Johnson, J. (1999). Self-regulation theory and coping with physical illness. Research in Nursing & Health, 22, 435–448.

Lauver, D., Ward, S., Heidrich, S., Keller, M., Bowers, B., Brennan, P., Kirchhoff, K., & Wells, T. (2002). Patient-centered interventions. Research in Nursing & Health, 25, 246–255.

Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Sidani, S., Epstein, D.R., & Moritz, P. (2003). An alternative paradigm for clinical nursing research: An exemplar. Research in Nursing & Health, 26, 244–255.

Windelband, W. (1894/1998). History and natural science. Theory & Psychology, 8, 5–22. (Original work delivered 1894.)
