
Mapping and sequencing information: the social context for the genomics revolution
Miguel García-Sancho

Centre for the History of Science, Imperial College, London, United Kingdom

Review Endeavour Vol.31 No.1

In 1983, after devoting some eight years of his life to the description of how a nematode worm develops from an embryo into an adult, molecular biologist John Sulston embarked on a remarkably different project: he decided to map the worm’s genome. Sulston’s impulsive desire to characterise this creature’s DNA from start to finish offers only a partial explanation for this transition. Instead, a close examination of the wider social context for this ‘moment’ in molecular biology gives a more rewarding explanation of Sulston’s intellectual leap. This reveals a world in which biotechnology gradually adapted to and integrated into an ‘information society’ increasingly dependent on the creation, distribution and manipulation of information. The application of computing to DNA during the first half of the 1980s was crucial for this integration, fostering the emergence of genomics and ultimately the Human Genome Project.

Information processing and biological technologies
The mid 1980s witnessed an explosion of interest in mapping and sequencing DNA (Box 1). Organisms like yeast, Escherichia coli and the nematode worm Caenorhabditis elegans all became the focus for this effort to locate genes to a particular part of a chromosome and determine their nucleotide structure [1]. The biographies of the scientists involved and several popular accounts often cite the emergence of new technologies as an explanation for this new direction in molecular biology, failing to address the social conditions in which those technologies arose [2,3]. In fact, it is helpful to take a look at how these new DNA mapping and sequencing tools were a product of both biotechnology and ‘information society’, two distinct social, technical and economic forms that converged throughout the 1980s.

The decade before, during the 1970s, the desire of individual professionals and laymen (rather than companies) to become computer customers had coincided with the development of the microprocessor and digital networks to miniaturise and interconnect the machines. This resulted in the personal computer – Altair in 1975, Apple II in 1977 and IBM’s PC in 1981 – and user-friendly software manufactured, in particular, by Xerox and Microsoft. With these key developments, information processing began to gain significant economic and cultural currency.

Corresponding author: García-Sancho, M. ([email protected]).
Available online 2 March 2007.
www.sciencedirect.com 0160-9327/$ – see front matter © 2007 Elsevier Ltd. All rights reserved.

To capture these developments, sociologist Manuel Castells coined the term ‘information society’, one in which ‘information generation, processing and transmission’ became ‘the fundamental sources of productivity and power’. One of the key features of this new order, claimed Castells, was ‘the networking logic of its basic structure’. The countries, markets and social groups in an information society tended to be electronically interconnected [4].

Also in the 1970s, biologists were looking for ways to explore and understand DNA in greater detail. They developed instruments to cut, clone and analyse genetic material, among them the physical mapping and sequencing techniques. At the end of the decade and during the 1980s, several companies – mainly university start-ups – began commercialising products based on these technologies and, in 1986, the US firm Applied Biosystems released the first automatic DNA sequencing machine onto the market.

Scholars have dubbed this landscape the ‘biotechnology era’, one characterised by the high expectations of scientists in the power of these new techniques but also by public concern over the ethical implications of developments like genetic modification and cloning [5].

The information society and biotechnology era both emerged in a similar social context. During the 1970s, the developed world experienced a growing economic crisis owing to the escalating price of oil and the increasing instability of the welfare state. Companies and governments sought new areas of investment and found them in the new information and biotechnology markets.

But even at the end of the decade, these two domains were still clearly distinct from one another. In 1979, a report by the Commission of the European Communities cited ‘the technology of information’ and ‘biotechnology’ as two of just three key fields ‘in which the potential for technological innovation and the consequent changes in the economic, social and political organisation of our societies’ would be ‘most significant’ in the next 20 years. Yet the text dealt with information society and biotechnology separately, treating them as genuinely independent phenomena not only in space but also in time. Whereas the information revolution was likely to concern ‘the next ten years’, ‘bio-industry’ would take place ‘the latest at the beginning of the twenty-first century’ [6].

doi:10.1016/j.endeavour.2007.01.006

Box 1. The two engines of the DNA race

The path taken by DNA researchers in the mid 1980s involved two types of activity: mapping and sequencing. Both aim to pin down the structure of this molecule, but they result in different kinds of description: mapping yields a set of DNA fragments; sequencing gives a set of nucleotides that comprise each fragment (Figure I).

The most common approach to a large genome like that of C. elegans would be to produce a ‘physical map’, an ordered list of overlapping fragments covering the genome from one end to the other. Each of these fragments could then be sequenced, giving an ordered string of the four nucleotide bases – adenine, cytosine, guanine and thymine.

In 1998, Celera Genomics came up with an alternative approach to the normal process of mapping and then sequencing. This so-called shotgun sequencing technique challenged the assumption that a physical map was needed before sequencing could commence and triggered a debate on which method produced the most reliable description of an organism’s genome [21].

Figure I. The difference between mapping and sequencing. (a) Mapping involves reducing the genome to an ordered set of unsequenced DNA fragments. (b) Sequencing yields the nucleotide structure of each fragment.
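In data terms, the contrast described in Box 1 can be sketched as follows. This is a schematic illustration with made-up clone names and short invented sequences, not real genome data:

```python
# Schematic contrast between the two descriptions (illustrative data only):
# a physical map orders fragments; sequencing resolves each fragment's bases.

# Mapping: an ordered list of overlapping, as-yet-unsequenced fragments,
# identified here by hypothetical clone names.
physical_map = ["cloneA", "cloneB", "cloneC"]

# Sequencing: the nucleotide string of each mapped fragment.
sequences = {
    "cloneA": "ACGTACGGT",
    "cloneB": "CGGTTAGCA",   # overlaps the end of cloneA ("CGGT")
    "cloneC": "AGCATTGCC",   # overlaps the end of cloneB ("AGCA")
}

# The genome is recovered by walking the map and merging the overlaps.
genome = sequences[physical_map[0]]
for clone in physical_map[1:]:
    seq = sequences[clone]
    overlap = next((k for k in range(min(len(genome), len(seq)), 0, -1)
                    if genome.endswith(seq[:k])), 0)
    genome += seq[overlap:]
print(genome)  # → "ACGTACGGTTAGCATTGCC"
```

The sketch also shows why the two activities complement each other: the map alone says nothing about nucleotides, and the sequences alone say nothing about order.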


Sulston’s research during this period was removed from both the information and the new biological technologies. His investigations between the mid-1970s and 1983 focused exclusively on the development of the worm C. elegans, including systematic microscopic work to trace the divisions and movements of every single cell from the embryo to the adult nematode [7] (Figure 1). Despite early attempts to introduce computers to his group of researchers [8], this did not occur until the mid-1980s, when the worm’s genome became the practice ground for new DNA technologies.

Bioinformatics: a crucial connection
The institution in which Sulston was based – the Laboratory of Molecular Biology in Cambridge, UK (LMB) – had been using computers since the 1950s, not for work into C. elegans or DNA, but to automate analysis of X-ray crystallographic data and reveal protein structure [9]. It was only in the 1970s, when another group at the LMB headed by Fred Sanger came up with ways to sequence DNA, that computers became associated with this molecule.

Sanger sought to create programs that would store and handle the DNA sequences generated by his group. For this purpose, he brought in Rodger Staden, a mathematical physicist working at the crystallography department of the LMB. The purpose of Staden’s first software programs – designed between 1977 and 1980 – was to order and store data from DNA gels, the sequence outputs derived from Sanger’s techniques. Researchers could then save the sequences on a magnetic disk as text files, allowing them to be edited, searched and compared at a later stage [10].

Figure 1. Pictures of C. elegans from which John Sulston deduced the identity and movements of cells from a nematode embryo to a nematode adult. Developmental Biology (1977).

In 1982, Staden introduced an algorithm to ease sequence assembly. Sanger’s methods entailed breaking the molecule into overlapping pieces, so to reconstruct them it was necessary to match the sequences at the edges of each fragment. Staden’s algorithm employed the same strategy as ‘hash-coding’, a technique lifted directly from word-processing software, which allowed the user to locate a sequence of common letters in a mass of text. So, for example, a search for pupp* would turn up both puppy and puppet. Clearly then, Staden saw the DNA sequence as a text, an analogy he made explicit in the following remark:

The sequence can be considered as a series of 7 character words that overlap each of their neighbours by 6 characters. Using an alphabet of 4 [the number of DNA nucleotides] it is possible to make 16,384 (4 to the power of 7) different words of length 7. If we have a table of 16,384 positions we can record whether or not each of the possible words of length 7 are present in the consensus [assembled sequence] (. . .). In order to compare a gel reading [DNA fragment] with the consensus we simply look in the table to see if any of the gel’s 7 character words are present [11].
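The lookup Staden describes can be sketched in a few lines of modern Python. This is a minimal illustration of the hash-coding idea, not his actual program, and the sequences are invented:

```python
# A minimal sketch of the hash-coding idea quoted above: every 7-letter
# "word" of the consensus is recorded in a table of 4**7 = 16,384 positions,
# and a gel reading is compared against that table.

BASES = "ACGT"
WORD = 7

def word_index(word):
    """Map a 7-character DNA word to a unique position in [0, 4**7)."""
    index = 0
    for base in word:
        index = index * 4 + BASES.index(base)
    return index

def build_table(consensus):
    """Record which 7-character words occur in the assembled consensus."""
    table = [False] * (4 ** WORD)
    for i in range(len(consensus) - WORD + 1):
        table[word_index(consensus[i:i + WORD])] = True
    return table

def matching_words(gel_reading, table):
    """Return the 7-character words of a gel reading found in the table."""
    return [gel_reading[i:i + WORD]
            for i in range(len(gel_reading) - WORD + 1)
            if table[word_index(gel_reading[i:i + WORD])]]

consensus = "ACGTACGTTGCA"   # hypothetical assembled sequence
table = build_table(consensus)
print(matching_words("TACGTTG", table))  # → ['TACGTTG']
```

The point of the table is speed: checking a word means one arithmetic index and one array access, rather than scanning the whole consensus, exactly the trick word-processing search tools exploited.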

Managing texts was a key problem for those involved in the emergent computer industry. By borrowing a tool from word-processing software, Staden began to break down the barriers between information society and biotechnology.

From DNA to data
The challenge Sanger and Staden had faced of rearranging DNA fragments into a coherent sequence was similar to that which confronted Sulston in his effort to map DNA fragments and entire genes to a particular place in the worm’s genome. It therefore made sense for Sulston to borrow and adapt Staden’s sequencing software to meet his own mapping needs. This he did from the outset. Hence, the use of these sequencing programs with their particular text-processing features made the reduction of DNA to information the main goal of the C. elegans genome-mapping project.

Indeed, in the first paper on the project, published in 1986, Sulston and his colleagues described the mapping method – also known as the ‘fingerprinting’ technique – as a procedure ‘for digital characterisation and comparison of DNA fragments’. They broke the DNA into fragments and treated each fragment as ‘data’, storing it ‘in a computer data base’. The computer detected overlaps between the fragments and used them to construct a physical map of the genome [12] (Figure 2). Evidently, mapping was little more than an exercise in handling and processing information.

Figure 2. The ‘fingerprinting’ mapping technique used at the beginning of the C. elegans project: (1) the DNA molecule is broken with enzymes; (2) the fragments are cut again and separated according to size; (3) the position of each sub-fragment is entered into a computer, which detects where sequences overlap; and (4) the computer constructs a physical map of the genome.

This informational shift had a huge impact on the way that researchers across the world worked together. Since the mid-1970s, the investigations on C. elegans had been simultaneously conducted at various institutions, mainly in the United States but coordinated by the LMB. The emergence of the mapping project resulted in the LMB producing and administering the computer database, first alone and from the late 1980s onwards in cooperation with other researchers at Washington University in Saint Louis headed by Robert Waterston. This increased the dominance of both institutions over other centres carrying out investigations on C. elegans.

From the mid-1980s until the publication of the complete C. elegans map and sequence in the late 90s, any laboratory working on a nematode gene had to send the corresponding DNA fragment to the LMB or Washington University to identify its location within the entire genome. Sulston and Waterston’s groups benefited from these incoming DNA fragments and, in return, sent out a physical map of the region that contained the new fragment. Sulston’s team referred to it as ‘genomic communication’ in its 1986 paper [13].

Thanks to its mastery of new DNA technologies, the LMB therefore achieved two of the key requirements for leadership in an information society. First, by setting computers to work on DNA it obtained control over information. Second, it exerted dominance – first individually and then with Washington University – over a network of interconnected laboratories. The generalisation of the Internet in the late 1980s improved the access that other researchers had to the C. elegans mapping information. But it did not erase the dominance of Sulston and Waterston’s groups.

Figure 3. One of the advertisements in the first issue of the journal Genomics (1987).
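The overlap detection at the heart of the fingerprinting procedure (steps 3 and 4 in Figure 2) can be illustrated with a toy sketch. This is not the software of Sulston and Coulson [12]; the clone names, band sizes and threshold below are all hypothetical:

```python
# A toy illustration of the fingerprinting logic: each clone is reduced to a
# "fingerprint" (the set of sizes of its restriction sub-fragments), and two
# clones are judged to overlap when their fingerprints share enough bands.

def overlap_score(fp_a, fp_b):
    """Fraction of the smaller fingerprint's bands shared with the other."""
    shared = len(fp_a & fp_b)
    return shared / min(len(fp_a), len(fp_b))

# Hypothetical band sizes (in base pairs) for three clones.
clones = {
    "cloneA": {310, 450, 720, 1100, 1540},
    "cloneB": {450, 720, 1100, 2050, 2600},   # shares 3 bands with cloneA
    "cloneC": {150, 980, 3300},               # unrelated clone
}

# Declare an overlap when at least half the bands are shared (made-up cutoff).
threshold = 0.5
pairs = [(a, b) for a in clones for b in clones
         if a < b and overlap_score(clones[a], clones[b]) >= threshold]
print(pairs)  # → [('cloneA', 'cloneB')]
```

Chaining such pairwise overlaps is what lets a computer order clones into the contigs of a physical map, which is why the 1986 paper could describe mapping as digital comparison of fragments.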

The emergence of genomics and the human genome
This cross-over between information society and biotechnology was not exclusive to Cambridge. Elsewhere, large-scale mapping and sequencing projects were under way, most notably the Human Genome Project (HGP). These initiatives led to the configuration of a new discipline – ‘genomics’ – engaged with the application of DNA mapping and sequencing to different organisms. The term was coined in 1986 by geneticist Thomas H. Roderick and his colleagues Victor McKusick and Frank Ruddle. One year later, McKusick and Ruddle went on to found the journal Genomics. In the first issue, they defined the discipline as resulting from ‘a marriage of molecular and cell biology with classical genetics’, being also ‘fostered by computational science’. Researchers in this emergent field should be ‘competent in constructing and interpreting’ various types of ‘maps’ and interested in ‘learning their biological significance’ for ‘development and disease’. In the journal’s editorial, McKusick and Ruddle made it abundantly clear they saw DNA as the data of this new discipline:

Mapping all expressed genes . . . regardless of whether their function is known, sequencing these genes together with their introns [non-coding regions], and sequencing out from these is seen by many as ‘the way to go’. The ultimate map, the sequence, is seen as a rosetta stone from which the complexities of gene expression in development can be translated and the genetic mechanisms of disease interpreted. For the newly developing discipline of mapping/sequencing (including analysis of the information) we have adopted the term ‘GENOMICS’ [14].

Genomics embodied the idea of reducing the genes to data that would yield scientific knowledge and power. Genetic information had acquired a scientific value and the new discipline was ready to exploit it. Indeed, two of the advertisements included in Genomics’ first issue – announcing a new bioinformatics company and a journal devoted to publishing DNA sequences – reflected how researchers then perceived the significance of DNA’s digital output (Figure 3).

There was another reason to get excited about genomics: it began to attract serious funding from public and private institutions alike. Indeed, as soon as genomics emerged as a formal discipline, so scientists in the US began to discuss the feasibility of the HGP and seek money for it [15]. Other countries were quick to follow. In Britain, in 1986, Sydney Brenner – the first person to propose C. elegans as a model organism – applied for funding from the UK’s Medical Research Council for a physical map of the human genome.

Brenner’s map would require scaling up the techniques used on the nematode by a factor of 50 – the difference in size between the C. elegans and human genomes. His proposal also called for a centralised laboratory with a group of dedicated technicians to perform the task. According to the grant application, ‘the most important product of the work’ would be the ‘information about the human genes’, which, Brenner claimed, would bring benefits ‘to medical research and practice’. The same mapping technologies might also be applied to ‘industrial micro-organisms and plants’ and the resulting ‘information could become proprietary’, he suggested [16].

The UK government approved Brenner’s application, giving rise to the British Human Genome Mapping Project, which began in 1989. This coincided with several public human genome mapping and sequencing enterprises in the United States, Germany and France, among other countries. The national initiatives progressively converged and gave rise to a concerted international effort during the 1990s, now referred to as the HGP. Celera Genomics, a company fronted by Craig Venter, launched a private enterprise in 1998, forcing the HGP to increase its already multi-million dollar budget. Both initiatives published human DNA sequences in 2001 [17].

Limitations of informational thinking
The emergent large-scale mapping and sequencing projects were not without their critics. In 1992, shortly after the launch of the HGP, philosophers of biology Alfred Tauber and Sahotra Sarkar weighed into the debate. Investing such vast resources to analyse whole genomes was unjustifiable, they argued, since there was no ‘practicable way to begin characterising biological function . . . from sequence information alone’ [18]. Such criticism questioned the informational understanding of DNA on which genomics was based. According to Tauber and Sarkar, the nucleotide text of this molecule, which the HGP sought to obtain, would be insufficient to deduce how genes worked.

As the HGP progressed, so this concern began to spread. Researchers increasingly revealed the complexities of gene regulation and protein folding, consequently eroding the belief that DNA sequence was the key to understanding disease and other phenomena. When the draft sequence of the human genome was published in 2001, it was clear that the HGP’s expectations of knowledge and health were more than a little optimistic. A new consensus with an emphasis on biological factors beyond sequence information began to emerge in the scientific community.

Interestingly, this shift in scientific opinion over the last ten years has been accompanied by criticism of information society. Historian of technology David Edgerton considers the term a ‘fashionable, and misleading, way of saying little more than service industries now account for very large proportions of GDP and employment’. And, he claims, these industries are not necessarily based on information or other immaterial entities. This is evident from ‘the sheer bulk of the things associated with them, the unprecedented weight of stuff in the shops, the piles of paper in any office, not to mention the proliferation of computers, fax machines, and Xeroxes’ [19]. Edgerton’s critique suggests that there is a material counterpart, which should be taken into account when analysing the power of information.

The challenges to so-called information society have coincided with new explanatory models of genetic action. Systems biology, a new way of thinking that has emerged since the late 90s, proposes that DNA is in continuous interaction with other cell components and external environmental inputs. This suggests that ‘emergent properties’ not deducible from isolated parts can only be understood by taking into account all the elements in a system [20]. Systems biology offers an alternative vision in which the DNA sequence does not offer a direct route to biological meaning, but is part of a more complex coordinated process.

The simultaneous emergence of systems biology and critique of information society suggests a new social and scientific paradigm for the current post-genomics era. Natural scientists, on the one hand, are beginning to think of cellular function in terms of complex interactions between biological molecules rather than just a product of the DNA sequence. Social scientists, on the other, consider that the material basis of society is as important as information (or any other abstraction) for explaining relationships between knowledge and power. Both tendencies, if converging, are likely to define a new research space in which the genetic text will at least be placed into its material context.

Acknowledgements
John Lagnado (UK Biochemical Society Archives); Frances Martin and Joan Green (Sanger Institute Library); Annette Faux and Michael Fuller (Laboratory of Molecular Biology of Cambridge Archives and Storage); Wellcome Trust Archives and Manuscript Collection; UK National Archives; Andrew Mendelsohn, David Edgerton, Andrew Warwick and Robert Iliffe (Centre for the History of Science, Imperial College, London); Soraya de Chadarevian (University of California Los Angeles); Fred Sanger, John Sulston, Sydney Brenner, Bart Barrell, Richard Durbin, Alan Coulson (Sanger Institute and Laboratory of Molecular Biology of Cambridge); Adam Bostanci and other researchers at the ESRC Centre for Genomics in Society (University of Exeter, UK); José Manuel Sánchez Ron, Javier Ordóñez, Antonio Sillero, Rafael Garesse and other researchers (Universidad Autónoma de Madrid).

This research was conducted while holding postgraduate fellowships given by Caja Madrid Foundation, Madrid City Hall and Residencia de Estudiantes (Spain), as well as a Hans Rausing Fellowship given by the Centre for the History of Science, Imperial College, London. Without them, it would not have been feasible.

References
1 Olson, M. et al. (1986) Random-clone strategy for genomic restriction mapping in yeast. Proc. Nat. Acad. Sci. 83, pp. 7826–7830; Kohara, Y. et al. (1987) The physical map of the whole E. coli chromosome: application of a new strategy for rapid analysis and sorting of a large genomic library. Cell 50, pp. 495–508; and Coulson, A. et al. (1986) Toward a physical map of the genome of the nematode Caenorhabditis elegans. Proc. Nat. Acad. Sci. 83, pp. 7821–7825

2 On the scientific side see Sulston, J. (2002) The Common Thread: A Story of Science, Politics, Ethics and the Human Genome, Bantam; and Brenner, S. (2001) My Life in Science, BioMed, especially chapter 8. On the popular science side see Cook-Deegan, R. (1994) The Gene Wars: Science, Politics and the Human Genome, W.W. Norton and Company, chapters 2–4; and Wills, C. (1991) Exons, Introns and Talking Genes: The Science behind the Human Genome Project, Basic Books

3 Two remarkable exceptions to the exclusively technological accounts referred to above are de Chadarevian, S. (2004) Mapping the worm’s genome: tools, networks, patronage. In From Molecular Genetics to Genomics: The Mapping Cultures of Twentieth Century Biology (Rheinberger, H.J. and Gaudilliere, J.P., eds), Routledge, pp. 95–110

4 Quotes from Castells, M. (1996) The Information Age: Economy, Society and Culture (Vol. 1), Blackwell, p. 21. On information society see also Parker, E.B. (1973) Implications of new information technology. Public Opin. Quart. 37 (4), pp. 590–600; and Quah, D.T. (1997) Increasingly weightless economies. Bank of England Quart. Bull. (Feb.), pp. 49–56. On the history of computing see Ceruzzi, P. (1998) A History of Modern Computing, MIT, chapters 6–9

5 Bud, R. (1993) The Uses of Life: A History of Biotechnology, Cambridge, chapters 7–9; Rabinow, P. (1996) Making PCR: A Story of Biotechnology, Chicago, chapters 1–2; and Kenney, M. (1986) Biotechnology: The University-Industrial Complex, Yale

6 Godet, M. and Ruyssen, O. (1979) The Old World and the New Technologies, Commission of the European Communities, chapter 5, quotes from pp. 115 and 125

7 Sulston, J. and Horvitz, R. (1977) Post-embryonic cell lineages of the nematode Caenorhabditis elegans. Dev. Biol. 56 (1), pp. 110–156; and Sulston, J. et al. (1983) The embryonic cell lineage of the nematode Caenorhabditis elegans. Dev. Biol. 100 (1), pp. 64–119

8 Sydney Brenner and John White, members of Sulston’s group, attempted without success to apply the computer to their research line, aimed at characterising every single neuron in the C. elegans nervous system and describing their connections. Sulston, op. cit. 2002, pp. 28–29; White, J. (1974) Computer Aided Reconstruction of the Nervous System of C. elegans, PhD dissertation, University of Cambridge

9 de Chadarevian, S. (2002) Designs for Life: Molecular Biology after World War II, Cambridge, chapter 4; Bennett, J.M. and Kendrew, J. (1952) The computation of Fourier syntheses with a digital electronic calculating machine. Acta Crystallogr. 5, pp. 109–116

10 Staden, R. (1977) Sequence data handling by computer. Nucl. Acids Res. 4 (11), pp. 4037–4051; id. (1979) A strategy of DNA sequencing employing computer programs. Nucl. Acids Res. 6 (7), pp. 2601–2610; id. (1980) A new computer method for the storage and manipulation of DNA gel reading data. Nucl. Acids Res. 8 (16), pp. 3673–3694

11 Staden, R. (1982) Automation of the computer handling of gel reading data produced by the shotgun method of DNA sequencing. Nucl. Acids Res. 10 (15), pp. 4731–4751. The text-processing algorithm had previously been introduced in sequencing by the French researchers Jean Pierre Dumas and Jacques Ninio: Dumas, J.P. and Ninio, J. (1982) Efficient algorithms for folding and comparing nucleic acid sequences. Nucl. Acids Res. 10 (1), pp. 197–206

12 Coulson, A., Sulston, J. et al., op. cit., quotes on pp. 7821 and 7825. See also Sulston, J., Coulson, A. et al. (1988) Software for genome mapping by fingerprinting techniques. Comput. Appl. Biol. Biosci. 4 (1), pp. 125–132. On the digitalisation of the DNA data see Staden, R. (1984) A computer program to enter DNA gel reading data into a computer. Nucl. Acids Res. 12 (1), pp. 499–503

13 Coulson, A. and Sulston, J., op. cit. For more details about the international exchanges among the worm community see Sulston, J., op. cit., and de Chadarevian, S., op. cit.

14 McKusick, V. and Ruddle, F. (1987) Editorial: a new discipline, a new name, a new journal. Genomics 1, pp. 1–2, quote p. 1. On the coinage of the term genomics see Kuska, B. (1998) Beer, Bethesda and biology: how ‘‘genomics’’ came into being. J. Nat. Cancer Inst. 90 (2), p. 93

15 Sinsheimer, R. (1989) The Santa Cruz Workshop, May 1985. Genomics 5 (4), pp. 954–956

16 Brenner, S. (1986) A physical map of the human genome, quote p. 1; id. (1987) The human genome, quote p. 5. Both held at the UK National Archives, folder no. FD 23/3441.


17 For more details about the transition of the human genome initiatives see Cook-Deegan, op. cit., especially Part Two. The public and private sequences are available in International Human Genome Sequencing Consortium (2001) Initial sequencing and analysis of the human genome. Nature 409, pp. 860–921; and Venter, C. et al. (2001) The sequence of the human genome. Science 291, pp. 1304–1351

18 Tauber, A. and Sarkar, S. (1992) The human genome project: has blind reductionism gone too far? Perspect. Biol. Med. 35 (2), pp. 220–235, quote p. 223

19 Edgerton, D. (2006) The Shock of the Old: Technology and Global History since 1900, Oxford, quote pp. 96–97. For other critiques of information society see Winston, B. (1998) Media Technology and Society: A History from the Telegraph to the Internet, Routledge; and Webster, F. (1997) Is this the Information Age? Towards a critique of Manuel Castells. City 8, pp. 71–84; reprinted in id. and Dimitriou, B., eds. (2001) Manuel Castells (Vol. III), Sage Publications, pp. 65–80

20 One of the main advocates of systems biology – and of the former informational understanding of DNA – is Leroy Hood, father of the Human Genome Project and now heading the Institute for Systems Biology in Seattle. See Hood, L. (2004) The networks of life. Schrödinger Lecture, delivered at Imperial College, London; and Hood, L. et al. (upcoming) Biological Information and the Emergence of Systems Biology, Roberts & Co. A group at the University of Pittsburgh is also currently researching a new definition of the concept of gene in the light of the increasing limitations of DNA sequences to account for its properties: http://www.pitt.edu/~kstotz/genes/genes.html (last accessed February 2007)

21 For more details about this debate see Bostanci, A. (2004) Sequencing human genomes. In From Molecular Genetics to Genomics: The Mapping Cultures of Twentieth Century Biology (Gaudilliere, J.P. and Rheinberger, H.J., eds), Routledge, pp. 158–179
