Research Fraud, Misconduct, and the IRB



  • Research Fraud, Misconduct, and the IRB
    Author: Stephen Hilgartner
    Source: IRB: Ethics and Human Research, Vol. 12, No. 1 (Jan.-Feb., 1990), pp. 1-4
    Published by: The Hastings Center


  • IRB: A Review of Human Subjects Research

    Volume 12 Number 1 January/February 1990

    Research Fraud, Misconduct, and the IRB by Stephen Hilgartner 1

    Cohort-Specific Consent: An Honest Approach to Phase I Clinical Cancer Studies by Benjamin Freedman 5

    Protecting Human Subjects from Harm Through Improved Risk Judgments by Eric M. Meslin 7

    UPDATE 11

    ANNOTATIONS 12

    Research Fraud, Misconduct, and the IRB by Stephen Hilgartner

    Research fraud and misconduct have ascended to the top of the science policy agenda. Major scientific organizations, such as the American Association for the Advancement of Science and the Institute of Medicine, have recently issued reports on the topic. Prestigious journals, such as Science, Nature, and the New England Journal of Medicine, have run numerous articles or editorials on misconduct. Several Congressional committees have held controversial hearings. The Department of Health and Human Services recently issued final regulations on investigations of alleged misconduct.1 And a wide range of policy

    Stephen Hilgartner, a sociologist, is assistant professor of social medicine at the Center for the Study of Society and Medicine, College of Physicians and Surgeons, Columbia University.

    proposals are being debated at conferences and scientific meetings.2 Given all this attention, it is natural that members of Institutional Review Boards (IRBs) and others would begin to ask whether, when, and how IRBs should become involved in this issue.

    Amid the current heated debate, it is useful to recall that ten or twelve years ago scientific fraud and misconduct received almost no attention. The recent rise of the misconduct problem on the science policy agenda has been accompanied by major shifts in beliefs and perceptions. In my overview of the misconduct issue, I will not be evaluating the validity of these beliefs, but instead will concentrate on changes in people's notions about misconduct and on the emergence of a new definition of the

    problem.3 How have ideas about scientific fraud and misconduct changed? How have conceptions of the causes of dishonesty in research shifted? And how have notions of appropriate solutions evolved? These questions serve as background to the main focus of this article: the role of IRBs in problems of fraud and misconduct.

    Changing Conceptions of the Misconduct Problem

    Only a decade or so ago most scientists and science watchers believed that serious dishonesty in science was not a major problem. This "traditional" view emphasizes the most extreme offenses, such as fabricating or plagiarizing data. It also narrowly focuses on the deviant individual, the perpetrator. Misconduct, in the traditional view, is seen as a problem of individual pathology: the compulsive liar, the cheat, the psychopath. It is a problem of the psychology or moral failure of individuals.4

    The traditional view of misconduct also holds that scientific fraud is extremely rare and attributes this low incidence to the institutional norms of science,5 which require honest reporting of research findings. Further, the traditional view holds that in those cases where someone actually does fake data, detection is probable, especially if the results are important. The scientific community's control mechanisms, such as peer review and replication, are deemed capable of detecting fraud. Finally, the traditional view holds that once detected, cases of gross dishonesty will be communicated to other scientists rapidly; punishment is, if not sure and swift, certainly likely and severe.

    The traditional view of scientific misconduct paints a comforting picture of a problem of very narrow scope. The harm caused by the few cases that do occur is minimal. Also, because responsibility is focused on the individual perpetrator, the wider scientific community is absolved of blame.

    During the 1980s the traditional view has come under increasingly severe attack. First, many observers have lost confidence in the claim that scientific misconduct is extremely rare. One count of cases of serious misconduct found 14 cases between the years 1950 and 1979, and 26 cases between 1980 and 1987.6 If most of the cases that occurred are included in these figures, then the incidence is indeed low in comparison with the number of practicing scientists. But because of nagging doubts about the efficiency of detection, one cannot rule out the possibility that many instances of fraud have not been detected.7 In order to be counted in such

    A publication of The Hastings Center, 255 Elm Road, Briarcliff Manor, NY 10510 © 1990





    totals, a case must not only occur, but it also has to be noticed, reported, investigated, confirmed, and finally, disclosed in the media or through some other kind of public announcement. It is therefore possible that large numbers of cases are missed. Consequently, there has been ongoing debate about whether the known cases represent a few "bad apples," on the one hand, or the "tip of the iceberg," on the other.8

    Along a second dimension, a broader range of unacceptable practices is receiving attention. The most egregious and extreme offenses, e.g., fabrication of data and plagiarism, still dominate discussion of scientific misconduct, but there is also increasing concern about a variety of unacceptable or dubious practices that, while less serious, are almost certainly more widespread. These include listing "honorary authors" who contributed nothing substantive to publications that bear their names, failing to mention that a paper relied on historical controls, neglecting to disclose conflicts of interest when reviewing manuscripts, giving the work of co-authors only a cursory review before "signing off" on it, publishing the same data repeatedly without notifying journal editors, or using misleading statistical techniques. There is also concern about researchers nonrandomly assigning patients in supposedly random clinical trials, and even about researchers handling data negligently.9 Clearly, the extent to which some of these practices should be labeled misconduct is the subject of intense debate, and many observers have pointed out the importance of distinguishing between fraud, which is never acceptable, and error, which is inevitable in science. Nevertheless, recent discussions have moved away from a black-and-white distinction between deception and truth and toward a spectrum of practices of varying shades of gray.

    At the same time, notions of who and what is responsible for the problem of scientific misconduct have also grown broader. Here it is useful to distinguish between three kinds of responsibility: causal responsibility, moral responsibility, and political responsibility.10 Causal responsibility refers to beliefs and assertions about the etiology of a problem. Moral responsibility accrues to those who are blamed for the problem. Political responsibility concerns who is responsible for "doing something" about the problem.

    Turning first to causal responsibility, the causes of misconduct are now often seen as extending beyond the pathological individual to include many features of the way scientific research is organized. Misconduct has been attributed to pressure to publish, competition for funding, inadequate supervision of trainees, deficiencies in the peer review system, infrequent replication, inadequate procedures for keeping records or retaining data, and the diffusion of responsibility for jointly authored works.11 This is not a complete list, but even so, it includes mechanisms that operate at several organizational levels in the scientific community, from particular laboratories to the undergraduate and graduate training programs, to the structure of the grant system, to journal peer review. One can lay these levels out in a spectrum with the causal mechanisms ranging from being narrowly focused on the individual to pointing to increasingly large and broadly based scientific institutions. In recent years, causal responsibility for misconduct has been attributed to a widening spectrum of scientific actors.

    Sometimes one hears still broader explanations that attribute scientific misconduct to things happening outside the scientific community. It is perhaps inevitable in an era of insider trading scandals and criminal indictments of top presidential aides that people would sometimes argue that scientific misconduct is merely a symptom of a broader moral decay in society, rather than a phenomenon peculiar to science.

    Theories about causality are obviously closely related to ideas about moral and political responsibility, and notions of moral responsibility have also broadened. The individual perpetrator still retains most of the blame, but increasingly the blame is shared with a number of others: the co-author who fails to check carefully the data underlying a paper, the referee who gives a manuscript a cursory review, the "research czar" who neglects his or her lab. In addition, universities and medical schools have sometimes reluctantly investigated allegations, and some observers have extended the blame to those institutions. In the moral drama of scientific misconduct, the pathological cheat still plays the lead role, but there is a large supporting cast.

    Conceptions of political responsibility for the problem have also broadened. In the traditional view of misconduct, no one needed to "do something" about the problem because a problem barely existed; the control mechanisms already in place seemed adequate. Today, in contrast, participants in the misconduct controversy are intensely debating a wide range of possible solutions. The overall trend in these proposals is toward requiring more extensive changes. People have put forward numerous ideas, but there are four general, although sometimes overlapping, orientations.

    The first of these reflects what might be called a "law enforcement" perspective, focusing on detection, deterrence, and punishment. This approach emphasizes the efficient and just investigation of allegations,12 swift and severe punishment, and what amounts to a witness protection program: safeguards to prevent retaliation against whistleblowers.13

    A second perspective takes an "oversight" approach. It emphasizes improving routine quality assurance in science by intensifying routine scrutiny of research results, data, and laboratory practices. Here one finds proposals that institutions adopt policies concerning the recording and retention of data.14 Another oversight approach would require routine data audits to determine whether the data underlying a paper are verifiable and reproducible.15

    A third orientation takes an "educational" approach, emphasizing the training and professional socialization of researchers. Here one finds concern that research ethics need to be articulated more clearly during graduate education. The proposed reforms include providing for more intensive interaction between senior scientists and their students, and emphasizing education in good research practices and the ethics, as well as the methods, of research.16

    Finally, a fourth perspective stresses the "reward system," seeking to change the rules of the game regarding academic appointments, promotions, and grants in ways that will reduce the incentives for people to cheat or to cut corners in research. One well-publicized example is that of the Harvard Medical School, which has issued guidelines suggesting that departments base promotion decisions on the quality of publications rather than on their sheer number. The guidelines note, for example, that some have suggested that no more than five papers be reviewed for appointment as assistant professor, no more than seven for associate professor, and no more than ten for full professor.17

    Needless to say, such schemes have engendered considerable controversy, since many of them would involve major changes in the way research is conducted in the United States. The proponents of such changes believe them necessary to ensure the integrity of research and to preserve public confidence in science. Others fear that such proposals will intensify bureaucratic regulation of science, stifle creativity,



    and threaten the autonomy of the scientific community. They worry that government regulation of misconduct will pave the way for increased political control over science.18

    The Limited Role of the IRB

    In the context of this debate about misconduct, let me turn to a second set of questions: What is the appropriate role of the IRB in addressing research fraud and misconduct? Under what circumstances, if any, ought the IRB to concern itself with research misconduct? These questions must be considered in light of the fundamental missions of the IRB, which are, first, to protect the rights and interests of human subjects, and, second, to assure that research involving human subjects is conducted in an ethically legitimate way.

    I propose the following guiding principle for considering whether and when the IRB should address misconduct: research misconduct is an IRB issue only when it is related to its fundamental missions; the IRB should not expand its missions to include misconduct as an area of additional concern.*

    The reasons the IRB should not expand its responsibilities are fairly straightforward. First, the missions of the IRB are of the utmost importance, and adding misconduct to the IRB's responsibilities could divert its attention from its basic tasks, dilute its focus, distract and demoralize its members, diffuse its energy, and confuse others about its missions.19 Second, the IRB's make-up, the federal regulations that govern its operation, and its organizational style are all well adapted to its missions. In contrast, the IRB is not particularly well suited to the job of addressing scientific misconduct, a subject that, in many ways, differs from traditional IRB issues.

    Consider what I have called the "law enforcement" approach to the misconduct problem. Conducting formal investigations of allegations of research misconduct differs greatly from IRB review, both in style and in substance. Typically, such procedures involve two stages: an initial inquiry to determine if an allegation has any merit and a full investigation if one is warranted. Rather than the kind of cooperative, collegial discussions that take place between IRBs and researchers submitting protocols, these inquiries and investigations usually become very contentious. While IRBs are accustomed to assuming good faith on the part of the researchers coming before them, this can no longer be assumed in investigations of misconduct. The proceedings usually take an adversarial tone, and in most cases lawyers, for both the accused scientist and the institution, become involved. Sensitive questions of due process for the accused researcher immediately emerge, and problems such as the protection of whistleblowers raise complex dilemmas.20 Complicated technical issues may also arise. Consequently, misconduct investigations are unpleasant and time-consuming. They are also easy to mishandle.21 Ideally, such investigations should be conducted by a committee that includes, on the one hand, some members who have appropriate technical expertise, and, on the other hand, some members who are familiar with previous cases of misconduct. The IRB does not meet these criteria, so it should leave the law enforcement end of the misconduct problem to specially constituted committees.

    *This principle implies, for example, that misconduct that does not involve research on human subjects is not an issue for the IRB. Thus, misconduct (should it occur) involving physics or astronomy should not be of concern to the IRB.

    The IRB is also not the right forum for discussing changes in the reward system for scientists, or for conducting education aimed at preventing misconduct, since these issues need to be addressed in forums with a broad base among the scientific community and the public. Nor is the IRB the right institution to conduct data audits or implement other oversight procedures. The bottom line, then, is that the IRB should play a rather limited role in the area of scientific misconduct.

    Nevertheless, there are certain very specific situations that may force the IRB to address misconduct, and consequently, the IRB cannot simply ignore the issue. Let me describe some situations when misconduct may become a concern for the IRB.

    Discovery of Evidence of Misconduct

    First, if, during the course of conducting its normal business, the IRB uncovers evidence of misconduct, it may be obligated to present this evidence to the person or office in its institution responsible for handling such cases. By "evidence" I do not mean conclusive, beyond-a-reasonable-doubt proof, but something resembling "probable cause": enough to trigger an inquiry but not necessarily enough to constitute evidence adequate for conviction. In this situation, the IRB's role is merely to notify the appropriate body (the standing misconduct committee, the dean, the department chair, etc., depending on institutional policy) and let it decide how to proceed.

    Risks to Human Subjects

    Second, it is possible that scientific misconduct may pose a risk to human subjects, and institutions should have mechanisms for addressing such circumstances. Ideally, institutions should have a formal policy that states explicitly that in cases of suspected misconduct involving human subjects research, a designated person or agency (e.g., the dean or the standing misconduct committee, possibly in consultation with the IRB) must determine whether allowing the researcher to continue his or her work during the inquiry would endanger the subjects. It is essential that the research be terminated or passed into the hands of someone else in cases where the well-being of human subjects would be put in jeopardy.

    Data Retention and Record Keeping

    Third, the IRB may want to inquire, and I stress both the word "may" (as opposed to "should") and the word "inquire" (as opposed to "require"), about data-retention and record-keeping practices associated with protocols under review. Because of its connection with confidentiality, procedures for the storage and disposition of data raise issues that are not at all unfamiliar to IRBs, but I refer to record keeping in a different connection. Poor record keeping and the premature destruction of data have seemed to accompany fraud in a number of cases, and clearly, sloppy records can also cause unintentional scientific errors. So there has been some discussion of establishing guidelines regarding the handling and retention of data.22 Arguably, procedures for keeping records and retaining data should be even more stringent in human subjects research than in other areas, because if the records are inadequate, emerging risks to the research subjects may go undetected. One might also argue that in granting researchers access to their bodies, minds, or biological materials, human subjects are giving a gift to society and that they have an interest in having their gift properly cared for. The IRB may want to ensure that researchers are thinking about these issues.

    But I want to sound a cautionary note, and a loud one, about attempts to develop formal rules about data retention and record keeping, since appropriate regulations are extremely difficult to draft. Simple rules ("Researchers must preserve all raw data for seven years.") may initially seem straightforward, when, in fact, they beg





    complicated questions, such as what constitutes "raw data." For example, should university scientists be required to keep each piece of tissue that they extract, every slide made from that tissue, or only the statistical tabulations that are later made, based on analyses of those slides? Obviously, the burden associated with storing every piece of tissue in a freezer is much greater than that of requiring people to keep statistical summaries on paper. Clearly, if data retention rules were poorly conceived, they would be ambiguous, burdensome, and generally useless.

    Financial Conflicts of Interest

    Finally, the IRB may need to consider a fourth issue of proper scientific conduct: financial conflicts of interest. This issue differs substantially from concerns about deceptive practices such as the fabrication of data and plagiarism, but addressing it appropriately is no less important. Complicated dilemmas flow from the fact that biomedical researchers now often stand to make significant amounts of money from entrepreneurial activities, consulting, and stock ownership plans. Although the development of marketable products is one of the legitimate goals of research, it would be foolish to neglect the potential negative effects of such entrepreneurship. For example, the promise of large profits could affect scientists' interpretation of their data in subtle ways. Or it could affect the way they present findings publicly, changing the "spin" they put on the data.23 The promise of personal gain could also influence scientists' view of the ethical issues, or the risks and benefits, involved in particular human experiments. Moreover, even in cases where the potential for profit did not affect any of these things, there is the danger that it could be perceived as having done so.

    Two general ways of addressing the issue of conflicts of interest are disclosure and regulation. Some journals, such as the New England Journal of Medicine, require, as a condition of publication, disclosure of all financial ties between researchers and the products and procedures that they study.24 Regarding human subjects research, the IRB, in my view, should insist that financial ties be disclosed on the consent form that is given to potential subjects.

    In principle, regulation of financial ties could be accomplished through self-regulation or through legal requirements. One research group engaged in a clinical trial recently agreed voluntarily not to own stock in, or to be paid as consultants to, companies that stood to

    gain from products being tested.25 IRBs may want to encourage voluntary agreements of this type. At a minimum, when investigators stand to gain financially from research, IRBs may want to examine protocols with especially keen scrutiny to make sure that human subjects are not being exposed to questionable risks.

    It is possible that other situations might arise in which the IRB would have to involve itself in the issue of research misconduct, but these situations will probably be rare and the IRB's involvement in the misconduct issue will remain quite limited. At all times, the IRB should be guided by the principle that it should only get involved when scientific misconduct impinges in some way on its fundamental mission of protecting the rights and interests of human subjects.

    REFERENCES

    1. Federal Register, 8 August 1989, 32246-51.

    2. For a useful bibliography on misconduct, see LaFollette, M.C.: Ethical Misconduct in Research Publication. Cambridge, MA, Massachusetts Institute of Technology, August 1988, photocopy.

    3. For a more detailed treatment of this topic, along with a proposed explanation for the rise of misconduct on the policy agenda, see Hilgartner, S.: "Fraud and Misconduct in Science: The Emergence of a Social Problem," unpublished manuscript.

    4. For statements by scientists that express the themes of the traditional view, see: Luria, S.E.: What makes a scientist cheat? Prism 1975; (May): 15-18, 44; Handler, P.: Statement before the U.S. Congress, House Committee on Science and Technology, hearings held March 31-April 1, 1981; Kennedy, D.: The regulation of science: How much can we afford? Monsanto Lecture presented at the Marine Biological Laboratory Centennial, 6 August 1988, photocopy; Koshland, D.E.: Fraud in science. Science 1987; 235:141.

    5. Merton, R.K.: The normative structure of science, in Merton, R.K.: The Sociology of Science. Chicago, University of Chicago Press, 1973, 267-78.

    6. Woolf, P.K.: Deception in science, in American Association for the Advancement of Science and American Bar Association Conference of Lawyers and Scientists, Project on Scientific Fraud and Misconduct: Report on Workshop Number One. Washington, DC: AAAS, 1988, 37-86.

    7. Ibid.

    8. See ibid. and Woolf, P.K.: Fraud in science: How much, how serious? Hastings Center Report 1981; 11(5): 9-14. See also Zuckerman, H.: Deviant behavior and social control in science, in Sagarin, E., ed., Deviance and Social Change. Beverly Hills, Sage, 1977, 87-138.

    9. Discussions of these practices are found in Stewart, W.W. and Feder, N.: The integrity of the scientific literature. Nature 1987; 325:207-14; Croll, R.P.: The noncontributing author: An issue of credit and responsibility. Perspectives in Biology and Medicine 1984; 27:401-07; Bailar, J.C.: Science, statistics, and deception. Annals of Internal Medicine 1986; 104:259-60; Altman, L. and Melcher, L.: Fraud in science. British Medical Journal 1983; 286:2003-06; Schmaus, W.: An analysis of fraud and misconduct in science, in AAAS-ABA, Project on Scientific Fraud and Misconduct: Report on Workshop Number One, pp. 87-115.

    10. Gusfield, J.R.: The Culture of Public Problems. Chicago, University of Chicago Press, 1981. See also Hilgartner, S. and Bosk, C.L.: The rise and fall of social problems: A public arenas model. American Journal of Sociology 1988; 94:53-78.

    11. See, e.g., Chubin, D.E.: Misconduct in research: An issue of science policy and practice. Minerva 1985; 23:175-202; Broad, W. and Wade, N.: Betrayers of the Truth. New York, Simon and Schuster, 1982; Relman, A.S.: Lessons from the Darsee affair. New England Journal of Medicine 1983; 308(23): 1415-17.

    12. Mishkin, B.: Responding to scientific misconduct: Due process and prevention. Journal of the American Medical Association 1988; 260:1732-36; Association of American Medical Colleges, Framework for Institutional Policies and Procedures to Deal with Misconduct in Research. Washington: AAMC, 1989.

    13. Chalk, R.: Workshop summary, pp. 1-36 in AAAS-ABA, Project on Scientific Fraud and Misconduct: Report on Workshop Number One, 1988, pp. 19-23; Swazey, J.P. and Scher, S.R.: The whistleblower as deviant professional: Professional norms and responses to fraud in clinical research, pp. 173-92 in Swazey and Scher, eds., Whistleblowing in Biomedical Research. President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Washington, 1981.

    14. For a discussion of this issue, see Institute of Medicine: The Responsible Conduct of Research in the Health Sciences. Washington, National Academy Press, 1989.

    15. Shamoo, A.E.: We need data audit. The AAAS Observer, 4 November 1988, p. 4. A deputy editor of the Journal of the American Medical Association has also proposed conducting a small-scale, "experimental" audit in order to help answer questions about the incidence of misconduct. See Rennie, D.: Editors and auditors. Journal of the American Medical Association 1989; 261(17): 2543-45.

    16. See, e.g., Institute of Medicine, The Responsible Conduct of Research in the Health Sciences, 1989.

    17. Harvard Medical School, "Guidelines for Investigators in Scientific Research," Office of the Dean, 16 February 1988, photocopy.

    18. For example, Wigodsky, H.S.: Fraud and misrepresentation in research: Whose responsibility? IRB: A Review of Human Subjects Research 1984; 5(2): 1-5.

    19. For a more detailed discussion of the need to maintain the credibility of the IRB and keep it focused on its basic mission, see Levine, R.J.: Ethics and Regulation of Clinical Research. 2nd ed., Baltimore, Urban and Schwarzenberg, 1986, pp. 341-50.

    20. AAMC, Framework for Institutional Policies. See also Friedman, P.J.: Responding to allegations of research misconduct in the university, pp. 29-65, and Vaughn, R.G.: Whistleblowing in academic research, pp. 95-136 in AAAS-ABA National Conference of Lawyers and Scientists, Project on Scientific Fraud and Misconduct: Report on Workshop Number Two. Washington: AAAS, 1989.

    21. Mazur, A.: The experience of universities in handling allegations of fraud or misconduct in research, in AAAS-ABA National Conference of Lawyers and Scientists, Project on Scientific Fraud and Misconduct: Report on Workshop Number Two, pp. 67-94.

    22. See, e.g., Institute of Medicine: The Responsible Conduct of Research in the Health Sciences.

    23. See the discussion of public relations in science in Nelkin, D.: Selling Science: How the Press Covers Science and Technology. New York, Freeman, 1987.

    24. Relman, A.S.: Economic incentives in clinical investigation. New England Journal of Medicine 1989; 320(14): 933-34.

    25. Healy, B., et al.: Conflict-of-interest guidelines for a multicenter clinical trial of treatment after coronary-artery-bypass-graft surgery. New England Journal of Medicine 1989; 320(14): 949-51.

