UNDERMINING PEER REVIEW

Albert Henderson

Peer review provides an assurance of quality in science. When it threatens government largess, it can be an annoyance. During the 2000 primaries, for instance, John McCain's opponent denounced him for voting against breast cancer research. In fact, he admitted that Senator McCain had a strong record of support for qualified research. The objection was that McCain had disagreed with proposals that bypassed peer review--"science projects" reeking of pork-barrel politics. Backers of such congressionally "earmarked" funding argue that normal application processes are not as effective as political advocacy and action by Congress. In addition, the argument suggests that well-funded institutions leverage past laurels to compete for funds; newcomers need the help of political friends to level the playing field. They maintain that formal review may be unfair to projects that are difficult to explain.

The Achilles' heel of peer review is that referees usually know less than the author about the work under examination. A few reviewers with expertise matching the author may labor under any number of biases ranging from loyalty to rivalry. Armed with the glib authority of a star-quality scientist, lobbyists claim urgency, promote swift action, and promise that good science will naturally follow. The present director of the National Science Foundation (NSF), Rita R. Colwell, once touted the controversial University of Maryland Christopher Columbus Center of Marine Research and Exploration. With her help, the university squeezed $130 million out of a 1992 appropriations bill. She was quoted at the time saying, "This is a new concept.... If you think of it as a standard laboratory, you're totally missing the point."

Government-sponsored academic R&D, totaling $15 billion annually, is a major target for lobbying. A third of it goes to support "overhead" costs of facilities, administration, etc. at 125 universities. Although R&D is a tiny fraction of the total federal budget, it controls 14 percent of discretionary spending. A handful of states--Maryland, New Mexico, Massachusetts, California, Virginia and Texas--receive disproportionate amounts of non-entitlement spending from R&D. Jack Lew, director of the Office of Management and Budget, recently indicated that the escalation of earmarking is dramatic, appearing where it has never been seen before. Universities' success in avoiding peer review and competition in the budget pools left to agencies would account for much of the increase. By fiscal 2000, Congress had more than tripled funding for university projects since 1996--two years after a new majority took over Congress vowing to control pork-barrel politics. University managers have also been learning the value of "grease": investments that ease the insertion of favorable riders into appropriations bills. Boston University, for instance, spent $760,000 lobbying in 1999 and contributed $65,000 to 1997-98 political campaigns. It received $97.5 million to spend on federal research in 1997, up from $83 million in 1995.

Riding the coattails of every breakthrough in science and technology, no matter how remote, university managers, scientists, technocrats, and politicians have argued for and won increased federal investments in R&D. How do they address the challenge of the growth they have fostered? Simply put, they say "trust me" and then look the other way. No one in authority seems to be concerned about improving the processes that generate social return and scientific productivity. The return on investments in science is notoriously difficult to trace. University managers only complain about costs. They bitterly protest a litany of burdens: the "explosion" of information, the expense of libraries, the strictures of copyright and tenure, and the dependence on peer-reviewed publications.

Universities' complaints are unfair, of course, although containing a grain of truth about weaknesses of the present practice. The major cost of information is readers' time--more than double total spending on libraries, authorship, and publishing combined. Denials of responsibility for readers' costs put the onus for being well informed on each individual. Over 20 years ago, Johns Hopkins psychologist William D. Garvey described the scientists' problem concisely: "Even if they had perfect retrieval systems they would be presented with so many items that they could not assimilate and process them." As I shall explain below, the seemingly irresistible advance of progress must struggle against management antipathy--deliberately immovable policies directed by cash flow and profit considerations. We should doubly appreciate every scientific breakthrough. Academic researchers are the underdogs in a game taken over and umpired by their opponents.

The dilution of scientific quality as a product of growth is at issue. Consider the inputs and outputs of science, which mark its unabated exponential growth over the last half-century. The number of journal articles published worldwide has doubled every 15 years, a pattern often cited and traced back to the 17th century. The money spent on the research that produces the journal articles has increased at nearly the same rate, when adjusted for inflation, since 1960. International manpower also increased, bolstered significantly by the entry of women and Third World researchers. The late Yale historian Derek de Solla Price, incidentally, once noted that the number of journal articles, according to Lotka's Law of Productivity, has a constant relationship to the number of scientific authors. U.S. patents doubled between 1986 and 1998. In contrast to these robust growth indicators, the National Science Board (NSB) recorded that the number of journal articles qualifying for its high-citation database increased only 12 percent between 1986-88 and 1995-97; moreover, it indicated that the U.S. contribution to the world literature declined from 38 percent to 34 percent. The NSB makes little comment about these facts. The difference between the growth of investment and qualified output suggests a degradation of quality that the science bureaucracy cannot defend. It is obvious enough for then-Speaker of the House Newt Gingrich and others to have noticed it. His 1997 order to address the incoherence of science (discussed in Society 35:6, Sep./Oct. 1998, pp. 38-43) was unfortunately ignored by the House Science Committee as soon as the Speaker resigned.

Measuring Performance and Results

Recent legislation provided an opportunity to address flaws in the operations of taxpayer-sponsored science. The Government Performance and Results Act of 1993 [GPRA] aimed to encourage efficiency and effectiveness. Under this law, agencies must produce strategic performance plans with targets. They must then explain whether those targets have been met. Because of the nature of scientific research results, the task for science agencies is doubly difficult. R&D has long eluded accountability because the use and specific impact of scientific discovery is unpredictable. At best, we have norms: methods and conditions that produce reliable information the usefulness of which may not be understood for some time. The NSF policy board set the following goals:

• Enable the U.S. to uphold a position of world leadership in all aspects of science and engineering;

• Promote the discovery, integration, dissemination, and employment of new knowledge in service to society; and

• Achieve excellence in U.S. science, mathematics, engineering and technology education.

"Dissemination" appears without emphasis. It is, of course, the key to all the other goals. Communica- tion is the essence of science. Information is the main ingredient of science research and its primary product. According to economists who specialize in information, such as the late Fritz Machlup, pro- ductivity comes from information that improves the quality of results, providing economies by minimiz- ing duplication and error. In practical terms, to obtain useful intelligence, scientists must winnow away a burgeoning chaff of misleading and extraneous mate- rial that includes unpublished data, meeting abstracts, informal communications, commentaries, myths, tra- ditions, advocacies, and a variety of theories as well as the formal literature of discovery. The NSF's plan, published in 1998, addresses dissemination only in terms of assessing core processes and to "engage in a dialog with other stakeholders to enhance global scientific communication.. . :'

Aiming to help science agencies comply with GPRA, the National Academy of Sciences recommended peer review--examination by qualified experts--as the most effective means of evaluating science programs. Its effectiveness depends on "the talent, objective judgment and experience of these experts." The Academy defined three types of peer review:

• Quality review, sometimes called "merit review," judges how well a work or proposal compares with other works in the field.

• Relevance review examines whether a research program focuses on an agency's mission and the appropriateness of the research and its potential value.

• International benchmarking aims to determine whether the research program is at the forefront of scientific and technical knowledge.

The Academy's omission of the oldest type of peer review is stunning. Critical post-publication evaluation is at the heart of letters, review articles, summaries, comments in primary research reports, meta-analyses of data, conference papers, handbooks, textbooks, research proposals, and informal communications. It has been an intrinsic part of science since the beginning of civilization, a stepping-stone of the search for truth and an essential connection of basic research to technology and practice. As new information and ideas circulate, scientists' judgments may change. A paradigm may be adjusted or overthrown. Accepted one day, understandings may dissolve under the scrutiny of a specialist or in the light of recent findings. Alternatively, a discovery may be enhanced by the addition of information that is new or newly applied. Nobel Prize winner Joshua Lederberg called the literature the root of scientific progress, "a tureen, a place for the digestion and assimilation of the variety of inputs whereby scientific claims go through a period of seasoning, modification, modulation. Even the truths look different five or 10 years later, regardless of explicit criticism." The 1963 President's Science Advisory Committee recognized the value of this fourth type of peer review. Its first recommendation described it this way: "We shall cope with the information explosion, in the long run, only if some scientists are prepared to commit themselves deeply to the job of sifting, reviewing, and synthesizing information; i.e., to handling information with sophistication and meaning, not merely mechanically. Such scientists must create new science, not just shuffle documents: their activities of reviewing, writing books, criticizing, and synthesizing are as much a part of science as is traditional research."

The comprehensive literature review of an important research topic should be vital to the goals of the GPRA and to the strategic plans of the National Science Board and other science agencies. It was well described as "distillation" by AT&T physicist Conyers Herring and elaborated upon by Johns Hopkins University psychologist William D. Garvey. Depending upon the topic, a comprehensive literature review can be a "big science" undertaking that requires a task force of specialists working full-time for a year or more. Such an exhaustive sweep of the literature produces highly credible conclusions. It can also produce breakthrough results sufficiently notable to rate front-page treatment by The Science Times. Compared to the brief surveys prepared to justify grant proposals, it must generate a more reliable foundation for new research and new practical applications. Reaching beyond, while including, specific programs in the context of GPRA, it might well serve to evaluate entire lines of research as well as program investments in terms of their cost-effectiveness, to suggest improvements in programs, and to reject outdated practices. It also promises to assess the performance of "earmarked" programs that initially avoided peer review.

An important aspect of the deliberately comprehensive review addresses fundamental biases of insularity in research. Insularity--ignoring inconvenient research--typically excludes foreign disciplines, foreign countries, foreign languages, indeed anything not immediately familiar or bonded by loyalty. It may even exclude information that does not support--or contradicts--authors' conclusions. Overdependence on bibliographic databases that exclude vital information is a well-documented systemic problem. Insularity is often preserved in the relatively brief forms of peer review described by the Academy.

To combat insularity, several medical journals adopted the Consolidated Standards of Reporting Trials [CONSORT]. One of their recommendations was that authors "state general interpretation of the data in light of the totality of the available evidence." Journal of the American Medical Association editors emphasized this commitment to quality by asking authors to use a checklist that includes this recommendation. Unfortunately, it is the research sponsor--for example, the National Institutes of Health--not the journal, that calls the tune. A study reported at the International Congress on Peer Review held at Prague in 1997 showed little evidence that authors complied or that journal editors were able to insist on it.

Poor preparation and the omission of relevant previous work may actually be devastating to productivity. Nobelist Joshua Lederberg commented on the cost of inadequate experimental design and sloppy execution: "The lost effort that is expended in straightening out muddy claims, or merely in plowing through their presentation in the literature, greatly exceeds what can be attributed to intentional fraud." The CONSORT recommendation might well apply to grant proposals, adapted to eliminate wasting money on duplicative studies and bad science. One need not be a rocket scientist or a certified accountant to appreciate that a risky investment benefits from "due diligence."

We expect scientists to be in command of the facts. When they are not, it is probably not their fault. The well-documented growth of science, its chaotic output, and the twigging of narrow specialties generate urgency for thoughtful assessments. To effectively evaluate all research in a single area, a variety of specialists and a concentrated effort may be required. Less than 10 years ago a Canadian task force employed a statistician, four epidemiologists, a neurosurgeon, an oto-rhino-laryngologist, an orthopedic surgeon, an engineer, and others to redefine "whiplash" and its management. Their first task was to collect and screen over 10,000 citations to potentially relevant research articles. Of the short list of 239 relevant articles, they rejected 74 percent on scientific grounds, finding that the reported research was poorly designed, with inappropriate samples and methods.

Conflicts of Interests

There is another striking omission by the Academy report: The GPRA mandates that each agency identify key factors, external to the agency and beyond its control, that could significantly affect the achievement of its goals. The Academy responded, recognizing the members of universities. It said nothing, however, about the libraries and other facilities used by university researchers. Libraries remain the major disseminators of formal scientific information, most of it produced by non-U.S. authors. The use of libraries increased over the last 30 years. The importance of libraries is underscored by an observation made by a 1994 GAO study of peer review. It noted, "Although most reviewers reported expertise in the general areas of the proposals they reviewed, many were not expert on closely related questions and could cite only a few, if any, references."

Poor libraries and short deadlines weaken referees' potential for objectivity, effectiveness and excellence. In spite of receiving federal grant "overhead" designated for libraries, universities have increased spending on libraries only half as much as sponsored spending on R&D since 1970. Focus for a moment on some of the profound impact on research of skewing information supply and demand--libraries and research--by administrative fiat for thirty years:

• Budget austerity forced major academic libraries to cancel thousands of subscriptions since 1970, driving their patrons to resort to interlibrary borrowing.

• It takes an average of 16 calendar days, according to the Association of Research Libraries, to obtain an interlibrary photocopy.

• The typical deadline for peer review is two weeks, according to the Council of Biology Editors. Within this time, a referee is expected to check unfamiliar sources cited by a proposal or a report. It is commonplace for citations in a paper to link a variety of disciplines, perhaps in unexpected ways. When cited sources are unfamiliar and not to be found in the library, the referee waits an average of 16 calendar days. When the copy arrives, it may cite additional unfamiliar sources, propelling the researcher back to the interlibrary loan desk.

• Referees usually take less than three hours to render a judgment. They probably do not spend much time procuring and reading interlibrary photocopies.

• In order to keep their prices "affordable," many information services narrowed their coverage. Only half the formal literature is covered by major indexes such as Medline, Biosis, and AGRICOLA. Indexes often exclude Third World journals.

• Publishers' backlogs increased. In mathematics, for instance, the minimum wait for publication of manuscripts received in final form increased 33 percent between 1994 and 1997. The maximum wait increased from thirty-five to fifty-seven months.

In sum, the chance of insularity infecting a researcher's proposal, an author's article, or a referee's report has increased through deliberate administrative failures to maintain library growth equal to the rising output of research.

Rather than acknowledge the imbalance of financial allocations that left libraries so far behind R&D, universities raised a huge red herring. They blamed the victims of the decimation of library budgets--researchers and publishers--in a campaign that included assaults on peer review, tenure, and copyright. A propaganda blitz initiated by the Association of Research Libraries in 1989 accused researchers of "excessive publication" and publishers of profiteering. In addition to countless references in the library literature, their report was echoed by Science, The Scientist, The New York Times, and Science and Engineering Indicators. Even the 60 Minutes TV program presented instructors whining that "publish or perish" requirements filled miles of library shelves with trivial research that "nobody reads." No mention was made of the fact that in 1986/87 colleges and universities spent $110 million less on their libraries than the year before. This unique reduction caught publishers unaware. Although it was revealed when government statistics were compiled and published by the Department of Education years later, it continued to go unnoticed for a decade. The unheralded library spending cut impacted publishers' sales, forcing them to raise their prices, and fired a warning shot at librarians, suggesting that their jobs were in jeopardy. The argument that the cut was essential is negated by the fact that the cut helped higher education institutions add $114 million to unspent revenues, according to the same statistical source.

General Accounting Office Blinders

In support of the Academy study, the General Accounting Office (GAO) consulted a dozen federal agencies to understand how they define peer review. They studied agency practices and inquired about other assurances of quality. They included National Institutes of Health, National Science Foundation, National Institute of Standards and Technology, National Oceanic and Atmospheric Administration, Department of Energy, and National Aeronautics and Space Administration among others. The GAO reported no government-wide definition of peer review. Agency officials generally concur, however, that peer review "includes an independent assessment of the technical, or scientific merit of research by peers who are scientists with knowledge and expertise equal to that of the researchers whose work they review."

Again, startling omissions, now by the GAO: the largest R&D spender, the Department of Defense, is simply not mentioned. The GAO skirts the subject of earmarking, noting only briefly and without explanation that the Office of Science and Technology Policy and the Office of Management and Budget favor projects that are peer-reviewed over those that are not. The latter category includes renewals of previously reviewed grants. Why not reevaluate a grant before committing more money? Finally, one would think that the GAO, by virtue of its auditing experience, would be more sensitive to potential conflicts of interest. Universities evaluate the research they propose and perform. In a prior report, the GAO noted that universities discourage referee participation even though they benefit from the system. Moreover, the institutions with the largest sponsored research programs participated least in agency reviews, leaving the bulk of the work to academics with poorer resources. Such troubling inconsistencies combined with the dual role of universities should raise this broad issue of conflict at the institutional level.

Although university administrators do not control specific details of proposal evaluations, they can sabotage their effectiveness by creating bottlenecks in formal communications. Building on the fundamental weakness of peer review--referees' lack of specific knowledge--universities cut information resources. Their sabotage also serves to weaken tenure and the power of associations, two thorns in the bureaucratic paw that are associated with peer review and publishing. Economist David J. Brown studied the widening gap between university spending on R&D and on libraries. He was convinced that it demonstrated how the objectives of science agencies, which pay for research, contrast with the goals of universities, which grudgingly pay for the dissemination of research results through their libraries. Science agencies claim to promote progress. University managers aim to maximize financial benefit. Their programs are badly out of balance. This is old news, of course. Forty years ago, a former president of Columbia University warned that the government contract had replaced intellectual curiosity. Vannevar Bush's plan for government sponsored research held that universities "are charged with the responsibility of conserving the knowledge accumulated by the past, imparting that knowledge to students, and contributing new knowledge of all kinds." Why do science agencies tolerate nonperformance of this responsibility? Perhaps conflicts are inherent in the conventional recruitment of agency managers--such as the NSF's Rita R. Colwell--from universities--such as the University of Maryland. Individual loyalties and concern about employment after public service create at minimum the appearance of a compelling motivation to avoid jeopardizing university financial goals.

Is it in the interest of the science bureaucracy to rein in the rate of discovery? Several reasons supporting such an agenda come to mind. First, higher education institutions demonstrate a disturbing preference for financial objectives over excellence in research, education and public service. "Containing" the costs of dissemination via libraries liberates cash for other purposes, including administrative expansion and increasing unspent revenues. Private research universities that once spent 6 cents of their dollar on libraries now spend less than 3 cents. Their profits averaged 25 percent of revenue in the fiscal year ending in 1999. The administrative share of spending in higher education rose 12 percent between 1970 and 1995 according to the U.S. Department of Education. Second, add the predilection of bureaucrats for process over results. Results in science are particularly difficult to judge, even with peer review. Third, one theory holds that major advances are more the products of "being in the right place at the right time" and the emergence of new generations than the products of systematic inquiry. Justified by such ideas, which imply that dissemination is expendable, one might believe that throttling down formal communication does no harm. Fourth, fostering mediocrity "suppresses" the rate at which systematic inquiries succeed in ending grant renewals.

The End of Science

Where major resources are involved, for instance in the "war on cancer," the hunt becomes more valued by the hunters than their quarry. The hunt provides employment; success means job hunting. The cure that ends research, as it did with smallpox and polio, is a bête noire of administrators preoccupied with cash flow, payrolls, and facilities expenses. With overly rapid progress, fiefdoms and employment might collapse. Fueling such anxieties is the ironic literature that claims that most of a finite number of possible discoveries have already been found. According to the conceit of these doomsayers and their talk of "final theories," each new finding hastens the end of science.

A deliberate suspension of research into science communications handicaps clarification of the present status. The avoidance of dissemination issues was identified as a "policy vacuum" by the Congressional Research Service nearly 25 years ago and revisited in 1989 hearings. Thanks to the lack of policy, an agency manager boldly sidestepped peer review, Congressional debate, and the fundamental government policy doctrine that addresses the demarcation of public and private activity. In 1991, Los Alamos National Laboratory (LANL) developed a database that soon circulated tens of thousands of prepublication drafts of articles to physicists and mathematicians. The unprecedented "welfare program" serves up unreviewed drafts of papers free to readers worldwide at the expense of the U.S. taxpayer. Many scientists now consider the free information an "entitlement," one that is particularly dear since universities have canceled so many subscriptions and publishers have increased their backlogs of unpublished articles. Celebrated by managers of U.S. universities as much as by foreigners in less fortunate circumstances, the program's major attraction appears to be the promise that it may eliminate journal subscriptions and thereby reduce library expenses.

Within five years it claimed to have supplanted traditional journals in some fields. It now receives over 2,500 new submissions a month. "Who needs peer review?" the LANL web manager told Science magazine. This puts the federal agency, defiantly spending $1 million or more over 10 years, and 14 mirror sites around the world at odds with the private sector. Without the support of a rational policy on scientific information or the benefit of peer review, it is quite radical. In fact it is an example of a far more aggressive and obvious "sharing" than Napster, which enables individuals to exchange music files. Physics publishers are reluctant to confront the issues of government interference and copyright infringement, however, out of respect for the power of the agency. The Department of Energy, which controls LANL, has R&D obligations running near $6 billion. It is "godfather" over authors' grants and other benefits to the major publishers in physics and mathematics, which are nonprofit associations. At the time of LANL's initiative the American Physical Society and other publishers had been studying the potential of electronic publishing for some time.

The situation would have turned out differently if LANL had given its preprint experiment the benefit of a full review before proceeding. In 1999, the National Institutes of Health (NIH) announced E-Biomed. The NIH proposal was copied largely from the Los Alamos program. It was met with glee by universities and foreigners. It quickly lost momentum with the objections of editors who described the potential for errors and quackery lurking in unevaluated "findings" and theories. Associations and other publishers objected to its economic interference with journal subscriptions. It also raised opposition from policy experts who recalled a previous proposal that the government create a new science information service that would duplicate and compete with the private sector. In spite of the urgency of the Cold War competition with the Soviets, it was quickly disposed of by the President's Science Advisory Committee chaired by W. O. Baker: "The case for a Government-operated, highly centralized type of center can be no better defended for scientific information services than it could be for automobile agencies, delicatessens, or barber shops." As a result of that report, President Eisenhower directed the government to assist the private sector in computerizing scientific information services and broadening their scope, rather than to usurp private initiative.

Condemned to Repeat

Seventeen years before GPRA, Congress attempted to override the intransigence of the science Balkans with the National Science and Technology Policy, Organization, and Priorities Act (NSTPOPA) of 1976. The first form of that law, introduced in 1971, aimed to find work for scientists. Along the way, the underemployment issue resolved itself. The bill then developed a focus on dissemination derived from nearly 20 years of post-Sputnik research into science communications. It also expressed Congress's consternation at Richard M. Nixon's elimination of the presidential science advisor post in 1973. The conference report noted, "Finally the law establishes a Federal policy that science and technology constitute a national resource--made up of people, facilities, and knowledge--in which there must be a continuing Federal investment adequate to national needs." It added, "But one lesson stands out.... This is the unmistakable fact that the potential of science has not been matched by its performance. The fault does not lie with the scientists, but rather with the policy-makers. We in Government have failed to provide adequate resources to support the magnificent talent of our scientists and engineers. And at the same time, we have failed to set the goals and guidelines to assure the application of that talent to the priority needs of the nation."

Government failed again. The science establishment created insurmountable problems with the implementation of this legislation:

• National Science Foundation (NSF): The law establishing the NSF sought "to foster the interchange of scientific and engineering information among scientists and engineers in the United States and foreign countries." The NSF was the subject of criticism by the Congressional Research Service and a special subcommittee on the National Science Foundation of the Senate Committee on Labor and Public Welfare in 1975. In an unexplained act of defiance, the NSF quietly folded its division of science information two years later. It stopped sponsorship of an important series of studies of science communications. It halted the collection of data and publication of the annual Statistical Indicators of Scientific and Technical Communication after only two issues. Today, the NSF generally excludes content dissemination from its studies and recommendations, preferring to concentrate on conduit technology.

• Office of Science and Technology Policy (OSTP): In an equally striking example, the OSTP's operations betray governing principles that specify "the development and maintenance of a solid base for science and technology in the United States, including strong participation of and cooperative relationships with ... the private sector ... elimination of needless barriers to scientific and technological innovation [and] ... effective management and dissemination of scientific and technological information." The law directs the director of the OSTP to "provide the President with periodic reviews of Federal statutes and administrative regulations of the various departments which affect research and development activities, both internally and in relation to the private sector, or which may interfere with desirable technological innovation." The OSTP's current official documents do not reflect this language. It resisted criticism leveled by the Office of Technology Assessment during 1989 Congressional hearings: "We do not yet have a strategic focus on how to get the maximum return on this investment [of $65 billion in R&D] in terms of the use and users of the information produced by our national R&D programs.... In the absence of the OSTP, the Office of Management and Budget has become the dominant force in the management and dissemination of government information, including scientific and technical information."

• President's Committee on Science and Technology (PCST): NSTPOPA calls for a committee to advise the president on complex issues of science and technology. Staffed with, among other specialists, an expert in information dissemination, the panel was intended by Congress to consider "the role to be played by the private sector in the dissemination of information." Under an executive order, the panel described by the law was quietly scrapped. We have instead a President's Committee of Advisors on Science and Technology [PCAST]. More important, PCAST is unimpeded by anyone distinguished by expertise in dissemination.

The work of PCAST is influenced by major corporations. It proceeds under what appears to be a permanent chairmanship controlled by Hewlett Packard Corporation--the late David Packard, then John A. Young. Its present and past membership is marked by connections to Fortune 500 names including Lockheed Martin Corporation, IBM, AT&T, Allied Signal, Ford Motor Company, and Glaxo-Wellcome Inc. Corporate interests occasionally overlap with those of universities. Technology firms mount their own task force reviews of the scientific literature, often ongoing and exhaustive. Exclusive information gives them a competitive edge. In contrast to the open sharing of academe, their findings are trade secrets. It is not surprising, therefore, that PCAST's observations of the problems of research universities published in 1992 and 1996 failed entirely to take any notice of the information explosion, the library crisis, and the policy vacuum so well documented by others.

In short, the aims of NSTPOPA to support an active policy on dissemination were boldly trashed by the executive branch--Democratic and Republican administrations alike, starting with Gerald Ford. This chronic delinquency of the Executive and its toleration of the systemic problems of dissemination have the tacit approval of Congress, the media, universities, and corporations. The major associations chartered specifically to promote scientific information appear to be otherwise occupied. The bitter complaints of academic senates and faculty are unheard. Faculty unions have no say in universities where faculty is presumed to have management powers. The taxpayer is unrepresented. The Congressional Research Service concluded 25 years ago that institutions of science and technology have too much invested in the status quo to permit real reform. They are too adept at illusion and distraction to let themselves be pinned down on questions of performance and results. It is instructive to recall that when Sputnik embarrassed the West long before anyone thought it was possible, the science establishment confessed at once to inadequate dissemination policy. The following period, through the 1960s, when spending on dissemination matched the growth of research, is remembered by many as the Golden Age of science. After an American set foot on the Moon, support for dissemination died. The potential for reviving it is faint. The Cold War is over.

Will GPRA improve scientific results? The evidence so far suggests that it will not. The public underwriters of research have avoided improving the quality of authorship and peer review in the face of considerable criticism. They ignore 40 years of authoritative condemnations from sources that include the Congressional Research Service, the Office of Technology Assessment, Indiana University deans Bernard M. Fry and Herbert S. White, Syracuse University professor Charles R. McClure, and many others. They ignore the requirements of the NSTPOP Act of 1976. They will probably manage to sidestep the reforms intended by GPRA of 1993 with sham documents that are ineffective and unreadable. Unfortunately for the public trust, it is the job of these science agencies to pursue productivity: to clarify issues, to point out erroneous assumptions, to recommend the most reliable methodologies, to identify information of practical use, and to emphasize the best opportunities for study. Perhaps the agencies have convinced themselves that admissions of weakness might erode public support of science. Perhaps they fear exposure of deliberate deception and distortion. Perhaps soiled linen has become the problem. Rather than correct the back-of-the-pants stain on its credibility, the science bureaucracy remains glued to its pew--in denial or praying for a miracle.

SUGGESTED FURTHER READINGS

Garvey, W. D. Communication: The Essence of Science. Oxford: Pergamon, 1979.

McClure, Charles R., and Peter Hernon. U.S. Scientific and Technical Information (STI) Policies: Views and Perspectives. Norwood, NJ: Ablex, 1989.

Peer Review in Scientific Publishing. Chicago: Council of Biology Editors, Inc., 1991.

Albert Henderson is the former editor of Publishing Research Quarterly. He has written extensively on publisher-library relations.
