
Editorial: How to Write an Effective Results and Discussion for the Journal of Pediatric Psychology

Dennis Drotar, PhD

Department of Pediatrics, Cincinnati Children’s Hospital Medical Center

Journal of Pediatric Psychology 34(4), pp. 339–343, 2009. doi:10.1093/jpepsy/jsp014. Advance Access publication March 10, 2009. © The Author 2009. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: [email protected]

All correspondence concerning this article should be addressed to Dennis Drotar, PhD, MLC 7039, 3333 Burnett Avenue, Cincinnati, OH 45229-3936, USA. E-mail: [email protected]

Presenting Results

Authors face the significant challenge of presenting their results in the Journal of Pediatric Psychology (JPP) completely yet succinctly, and of writing a convincing discussion section that highlights the importance of their research. This article, the third and final in a series of editorials (Drotar, 2009a, 2009b), provides guidance for authors in preparing effective results and discussion sections. Authors should also review the JPP website (http://www.jpepsy.oxfordjournals.org/) and consider other relevant sources (American Psychological Association, 2001; APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008; Bem, 2004; Brown, 2003; Wilkinson & the Task Force on Statistical Inference, 1999).

Follow APA and JPP Standards for Presentation of Data and Statistical Analysis

Authors’ presentations of data and statistical analyses should be consistent with publication manual guidelines (American Psychological Association, 2001). For example, authors should present the sample sizes, means, and standard deviations for all dependent measures and the direction, magnitude, degrees of freedom, and exact p levels for inferential statistics. In addition, JPP editorial policy requires that authors include effect sizes and confidence intervals for major findings (Cumming & Finch, 2005, 2008; Durlak, 2009; Vacha-Haase & Thompson, 2004; Wilkinson & the Task Force on Statistical Inference, 1999).
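
To make these reporting elements concrete, the following minimal sketch shows one way to compute them for a simple two-group comparison. It is offered only as an illustration: Python is not prescribed by JPP, and the variable names and simulated scores are placeholders rather than real study data.

import numpy as np
from scipy import stats

# Simulated placeholder scores for two independent groups (not real data)
rng = np.random.default_rng(0)
group_a = rng.normal(loc=50.0, scale=10.0, size=40)  # e.g., intervention
group_b = rng.normal(loc=45.0, scale=10.0, size=40)  # e.g., control
n_a, n_b = len(group_a), len(group_b)

# Descriptive statistics to report for each dependent measure
m_a, m_b = group_a.mean(), group_b.mean()
sd_a, sd_b = group_a.std(ddof=1), group_b.std(ddof=1)

# Inferential test with exact p value and degrees of freedom
t, p = stats.ttest_ind(group_a, group_b)
df = n_a + n_b - 2

# Effect size (Cohen's d with pooled SD) and 95% CI on the mean difference
sd_pooled = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / df)
d = (m_a - m_b) / sd_pooled
se_diff = sd_pooled * np.sqrt(1 / n_a + 1 / n_b)
half_width = stats.t.ppf(0.975, df) * se_diff
ci_low, ci_high = (m_a - m_b) - half_width, (m_a - m_b) + half_width

print(f"t({df}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}, "
      f"95% CI [{ci_low:.2f}, {ci_high:.2f}]")

The printed line maps directly onto the APA-style statistical sentence that would appear in a results section.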

Authors should follow the Consolidated Standards of Reporting Trials (CONSORT) when reporting the results of randomized clinical trials (RCTs) in JPP (Moher, Schulz, & Altman, 2001; Stinson, McGrath, & Yamada, 2003). Guidelines have also been developed for nonrandomized designs, referred to as the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement (Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004) (available from http://www.trend-statement.org/asp/statement.asp). Studies of diagnostic accuracy, including the sensitivity and specificity of tests, should be reported in accord with the Standards for Reporting of Diagnostic Accuracy (STARD) (Bossuyt et al., 2003) (http://www.annals.org/cgi/content/full/138/1/W1).

Finally, authors may also wish to consult a recent publication (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008) that contains useful guidelines for various types of manuscripts, including reports of new data collection and meta-analyses. Guidance is also available for manuscripts that report observational longitudinal research (Tooth, Ware, Bain, Purdie, & Dobson, 2005) and qualitative studies involving interviews and focus groups (Tong, Sainsbury, & Craig, 2007).

Provide an Overview and Focus Results on Primary Study Questions and Hypotheses

Readers and reviewers often have difficulty following authors’ presentation of their results, especially for complex data analyses. For this reason, it is helpful for authors to provide an overview of the primary sections of their results and to take readers through their findings in a step-by-step fashion. This overview should follow directly from the data analysis plan stated in the method (Drotar, 2009b).

Readers appreciate the clarity of results that are consistent with and focused on the major questions and/or specific hypotheses described in the introduction. Readers and reviewers should be able to identify which specific hypotheses were supported, which received partial support, and which were not supported. Nonsignificant findings should not be ignored. Hypothesis-driven analyses should be presented first, prior to secondary and/or more exploratory analyses (Bem, 2004). The rationale for the choice of statistics and for relevant decisions within specific analyses should be described (e.g., the rationale for the order of entry of multiple variables in a regression analysis).

Report Data That Are Relevant to Statistical Assumptions

Authors should provide appropriate evidence, including quantitative results where necessary, to affirm that their data fit the assumptions required by the statistical analyses that are reported. When assumptions underlying statistical tests are violated, authors may use transformations of the data and/or alternative statistical methods, and they should describe the rationale for these choices.
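
As an illustration only (the tests shown are common choices, not JPP requirements, and the variable names and simulated scores are placeholders), an author might document assumption checks and a remedial transformation as follows:

import numpy as np
from scipy import stats

# Simulated, right-skewed placeholder scores (not real data)
rng = np.random.default_rng(1)
scores_a = rng.lognormal(mean=3.0, sigma=0.6, size=35)
scores_b = rng.lognormal(mean=3.2, sigma=0.6, size=35)

# Check normality (Shapiro-Wilk) and homogeneity of variance (Levene)
_, p_norm_a = stats.shapiro(scores_a)
_, p_norm_b = stats.shapiro(scores_b)
_, p_var = stats.levene(scores_a, scores_b)
print(f"Shapiro-Wilk p = {p_norm_a:.3f}, {p_norm_b:.3f}; Levene p = {p_var:.3f}")

# If normality is untenable, report both the violation and the remedy,
# here a log transformation of the raw scores
if min(p_norm_a, p_norm_b) < 0.05:
    scores_a, scores_b = np.log(scores_a), np.log(scores_b)
t, p = stats.ttest_ind(scores_a, scores_b)

Whatever checks are run, the point is to report them and the rationale for any remedy, not to apply the remedy silently.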

Integrate the Text of Results with Tables and/or Figures

Tables and figures provide effective, reader-friendly ways to highlight key findings (Wallgren, Wallgren, Persson, Jorner, & Haaland, 1996). However, authors face the challenge of describing their results in the text in a way that is not highly redundant with the information presented in tables and/or figures. Figures are especially useful for reporting the results of complex statistics, such as structural equation modeling and path analyses, that describe interrelationships among multiple variables and constructs. Given constraints on published text in JPP, tables and figures should always be used selectively and strategically.

Describe Missing Data

Reviewers are very interested in understanding the nature and impact of missing data. For this reason, it is important to include information concerning the total number of participants, the flow of participants through each stage of the study (e.g., in prospective studies), the frequency and/or percentages of missing data at different time points, and the analytic methods used to address missing data. A summary of cases that are missing from analyses of primary and secondary outcomes for each group, the nature of the missing data (e.g., missing at random or missing not at random), and, if applicable, the statistical methods used to replace missing data and/or to understand its impact (Schafer & Graham, 2002) are useful for readers.
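
A sketch of how such a missingness summary might be produced (the column names, toy values, and three-wave design below are hypothetical):

import numpy as np
import pandas as pd

# Hypothetical three-wave dataset; NaN marks a missed assessment
df = pd.DataFrame({
    "baseline": [24.0, 31.0, 18.0, 27.0, 22.0, 35.0],
    "month_6":  [22.0, np.nan, 17.0, 25.0, np.nan, 33.0],
    "month_12": [21.0, np.nan, np.nan, 24.0, np.nan, 30.0],
})

waves = ["baseline", "month_6", "month_12"]
print(df[waves].isna().mean().mul(100).round(1))  # % missing per wave

# Compare completers with non-completers on baseline scores to probe
# whether missingness is related to observed characteristics
df["dropout"] = df["month_12"].isna()
print(df.groupby("dropout")["baseline"].describe())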

Consider Statistical Analyses That Document Clinical Significance of Results

Improving the clinical significance of research findings remains an important but elusive goal for the field of pediatric psychology (Drotar & Lemanek, 2001). Reviewers and readers are very interested in the question: What do the findings mean for clinical care? For this reason, I strongly encourage authors to conduct statistical evaluations of the clinical significance of their results whenever it is applicable and feasible. To describe and document clinical significance, authors are encouraged to use one of several recommended approaches, including (but not limited to) the Reliable Change Index (Jacobson, Roberts, Berns, & McGlinchey, 1999; Jacobson & Truax, 1991; Ogles, Lambert, & Sawyer, 1995), normative comparisons (Kendall, Marrs-Garcia, Nath, & Sheldrick, 1999), or analyses of the functional impact of change (Kazdin, 1999, 2000). Statistical analyses of the cost-effectiveness of interventions can also add to clinical significance (Gold, Siegel, Russell, & Weinstein, 1996). Authors who report data from quality of life measures should consider analyses of responsiveness and clinical significance that are appropriate for such measures (Revicki, Hays, Cella, & Sloan, 2008; Wyrwich et al., 2005).
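
As one example, the Reliable Change Index (Jacobson & Truax, 1991) compares an individual's pretest-posttest difference with the change that would be expected from measurement error alone:

$$\mathrm{RCI} = \frac{x_2 - x_1}{S_{\mathrm{diff}}}, \qquad S_{\mathrm{diff}} = \sqrt{2\,S_E^2}, \qquad S_E = s_1\sqrt{1 - r_{xx}},$$

where $x_1$ and $x_2$ are a participant's pretest and posttest scores, $s_1$ is the standard deviation of the pretest scores, and $r_{xx}$ is the reliability of the measure. An |RCI| greater than 1.96 indicates change that is unlikely (p < .05) to be attributable to measurement error.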

Include Supplementary Information Concerning Tables, Figures, and Other Relevant Data on the JPP Website

The managing editors of JPP appreciate the increasing challenges that authors face in presenting the results of complicated study designs and data analytic procedures within the constraints of JPP policy for manuscript length. For this reason, our managing editors will work with authors to determine which tables, analyses, and figures are absolutely essential to include in the printed text version of the article versus those that are less critical but nonetheless of interest and can be posted on the JPP website in order to save text space. Specific guidelines for submitting supplementary material are available on the JPP website. We believe that increased use of the website to post supplementary data will not only save text space but will also facilitate the communication among scientists that is so important to our field and encouraged by the National Institutes of Health.

Writing the Discussion Section

The purpose of the discussion is to give readers specific guidance about what was accomplished in the study, its scientific significance, and what research needs to be done next.

The discussion section is very important to readers but extremely challenging for authors, given the need for a focused synthesis and interpretation of findings and the presentation of relevant take-home messages that highlight the significance and implications of their research.

Organize and Focus the Discussion

Authors are encouraged to ensure that their discussion section is consistent with and integrated with all previous sections of their manuscript. In crafting the discussion, authors may wish to review their introduction to make sure that the points most relevant to their previously articulated study aims, framework, and hypotheses are identified and elaborated.

A discussion section is typically organized around several key components presented in a logical sequence, including synthesis and interpretation of findings, description of study limitations, and implications, including recommendations for future research and clinical care. Moreover, in order to maximize the impact of the discussion, it is helpful to discuss the most important or significant findings first, followed by secondary findings.

One of the most common mistakes that authors make is to discuss each and every finding (Bem, 2004). This strategy can result in an uninteresting and unwieldy presentation. A highly focused, lively presentation that calls the reader’s attention to the most salient and interesting findings is most effective (Bem, 2004). A related problematic strategy is to repeat findings in the discussion that have already been presented without interpreting or synthesizing them. This adds length to the manuscript, reduces reader interest, and detracts from the significance of the research. Finally, it is also problematic to introduce new findings in the discussion that have not been described in the results.

Describe the Novel Contribution of Findings Relative to Previous Research

Readers and reviewers need to receive specific guidance from authors in order to identify and appreciate the most important new scientific contribution of the theory, methods, and/or findings of their research (Drotar, 2008; Sternberg & Gordeeva, 1996). Readers need to understand how authors’ primary and secondary findings fit with what is already known as well as challenge and/or extend scientific knowledge. For example, how do the findings shed light on important theoretical or empirical issues and resolve controversies in the field? How do the findings extend knowledge of methods and theory? What is the most important new scientific contribution of the work (Sternberg & Gordeeva, 1996)? What are the most important implications for clinical care and policy?

Discuss Study Limitations and Relevant Implications

Authors can engage their readers most effectively with a balanced presentation that emphasizes the strengths yet also critically evaluates the limitations of their research. Every study has limitations that readers need to consider in interpreting its findings. For this reason, it is advantageous for authors to address the major limitations of their research and their implications rather than leaving it to readers or reviewers to identify them. An open discussion of study limitations is not only critical to scientific integrity (Drotar, 2008) but is also an effective strategy for authors: reviewers may assume that if authors do not identify key limitations of their studies, they are not aware of them.

Descriptions of study limitations should address specific implications for the validity of the inferences and conclusions that can be drawn from the findings (Campbell & Stanley, 1963). Commonly identified threats to internal validity include issues related to study design, measurement, and statistical power. The most relevant threats to external validity include sample bias and specific characteristics of the sample that limit generalization of findings (Drotar, 2009b).

Although authors’ disclosure of relevant study limitations is important, it should be selective and focus on the most salient limitations (i.e., those that pose the greatest threats to internal or external validity). If applicable, authors may also wish to present counterarguments that temper the primary threats to validity they discuss. For example, if a study was limited by a small sample but nonetheless demonstrated statistically significant findings with a robust effect size, this should be considered by reviewers.

Study limitations often suggest important new research agendas that can shape the next generation of research. For this reason, it is also very helpful for authors to inform reviewers about the limitations of their research that should be addressed in future studies and to offer specific recommendations to accomplish this.

Describe Implications of Findings for New Research

One of the most important features of a discussion section is the clear articulation of the implications of study findings for research that extends the scientific knowledge base of the field of pediatric psychology. Research findings can have several kinds of implications, such as the development of theory, methods, study designs, and data analytic approaches, or the identification of understudied and important content areas that require new research (Drotar, 2008). Providing a specific agenda for future research based on the current findings is much more helpful than general suggestions. Reviewers also appreciate being informed about how specific research recommendations can advance the field.

Describe Implications of Findings for Clinical Care and/or Policy

I encourage authors to describe the potential clinical implications of their research and/or suggestions to improve the clinical relevance of future research (Drotar & Lemanek, 2001). Research findings may have widely varied clinical implications. For example, studies that develop a new measure or test an intervention have greater potential clinical application than a descriptive study that is not directly focused on a clinical application. Nevertheless, descriptive research, such as the identification of factors that predict clinically relevant outcomes, may have implications for targeting clinical assessment or interventions concerning such outcomes (Drotar, 2006). However, authors should be careful not to overstate the implications of descriptive research.

As is the case with recommendations for future research, recommendations for clinical care should be as specific as possible. For example, in measure development studies it may be useful to inform readers about the next steps in research that are needed to enhance the clinical application of a measure.

This is the final editorial in a series intended to be helpful to authors and reviewers and to improve the quality of the science in the field of pediatric psychology. I encourage your submissions to JPP and welcome our collective opportunity to advance scientific knowledge.

Acknowledgments

The hard work of Meggie Bonner in typing this manuscript and the helpful critique of the associate editors of the Journal of Pediatric Psychology and Rick Ittenbach are gratefully acknowledged.

Conflict of interest: None declared.

Received February 4, 2009; revisions received and accepted February 4, 2009

References

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.

APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What do they need to be? American Psychologist, 63, 839–851.

Bem, D. (2004). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. Roediger III (Eds.), The compleat academic: A career guide (2nd ed., pp. 105–219). Washington, DC: American Psychological Association.

Bossuyt, P., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., et al. (2003). The STARD statement for reporting studies of diagnostic accuracy: Explanation and elaboration. Annals of Internal Medicine, 138, W1–W12.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Cumming, G., & Finch, S. (2005). Inference by eye: Confidence intervals and how to read pictures of data. American Psychologist, 60, 170–180.

Cumming, G., & Finch, S. (2008). Putting research in context: Understanding confidence intervals from one or more studies. Journal of Pediatric Psychology. Advance Access published December 18, 2008; doi:10.1093/jpepsy/jsn118.

Des Jarlais, D. C., Lyles, C., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366. Retrieved September 15, 2004, from http://www.trend-statement.org/asp/statement.asp

Drotar, D. (2000). Writing research articles for publication. In D. Drotar (Ed.), Handbook of research methods in clinical child and pediatric psychology (pp. 347–374). New York: Kluwer Academic/Plenum Publishers.

Drotar, D. (2006). Psychological interventions in childhood chronic illness. Washington, DC: American Psychological Association.

Drotar, D. (2008). Thoughts on establishing research significance and preserving scientific integrity. Journal of Pediatric Psychology, 33, 1–3.

Drotar, D. (2009a). Editorial: Thoughts on improving the quality of manuscripts submitted to the Journal of Pediatric Psychology: Writing a convincing introduction. Journal of Pediatric Psychology, 34, 1–3.

Drotar, D. (2009b). Editorial: How to report methods in the Journal of Pediatric Psychology. Journal of Pediatric Psychology. Advance Access published February 10, 2009; doi:10.1093/jpepsy/jsp002.

Drotar, D., & Lemanek, K. (2001). Steps toward a clinically relevant science of interventions in pediatric settings. Journal of Pediatric Psychology, 26, 385–394.

Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology. Advance Access published February 16, 2009; doi:10.1093/jpepsy/jsp004.

Gold, M. R., Siegel, J. E., Russell, L. B., & Weinstein, M. C. (1996). Cost-effectiveness in health and medicine. New York: Oxford University Press.

Jacobson, N. S., Roberts, L. J., Berns, S. B., & McGlinchey, J. B. (1999). Methods for defining and determining clinical significance of treatment effects: Description, application, and alternatives. Journal of Consulting and Clinical Psychology, 67, 300–307.

Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.

Kazdin, A. E. (1999). The meanings and measurement of clinical significance. Journal of Consulting and Clinical Psychology, 67, 332–339.

Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. New York: Oxford University Press.

Kendall, P. C., Marrs-Garcia, A., Nath, S. R., & Sheldrick, R. C. (1999). Normative comparisons for the evaluation of clinical significance. Journal of Consulting and Clinical Psychology, 67, 285–299.

Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association, 285, 1987–1991.

Ogles, B. M., Lambert, M. J., & Sawyer, J. D. (1995). Clinical significance of the National Institute of Mental Health Treatment of Depression Collaborative Research Program data. Journal of Consulting and Clinical Psychology, 63, 321–326.

Revicki, D., Hays, R. D., Cella, D., & Sloan, J. (2008). Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes. Journal of Clinical Epidemiology, 61, 102–109.

Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147–177.

Sternberg, R. J., & Gordeeva, T. (1996). The anatomy of impact: What makes an article influential? Psychological Science, 7, 69–75.

Stinson, J. N., McGrath, P. J., & Yamada, J. T. (2003). Clinical trials in the Journal of Pediatric Psychology: Applying the CONSORT statement. Journal of Pediatric Psychology, 28, 159–167.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19, 349–357.

Tooth, L., Ware, R., Bain, C., Purdie, D. M., & Dobson, A. (2005). Quality of reporting on observational longitudinal research. American Journal of Epidemiology, 161, 280–288.

Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect sizes. Journal of Counseling Psychology, 51, 473–481.

Wallgren, A., Wallgren, B., Persson, R., Jorner, V., & Haaland, F. A. (1996). Graphing statistics and data: Creating better charts. Thousand Oaks, CA: Sage.

Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.

Wyrwich, K. W., Bullinger, M., Aaronson, N., Hays, R. D., Patrick, D. L., Symonds, T., & the Clinical Significance Consensus Meeting Group. (2005). Estimating clinically significant differences in quality of life outcomes. Quality of Life Research, 14, 285–295.
